InstructLab

A new community-based approach to build truly open-source LLMs

This team maintains and develops model evaluation for checkpoints as well as post-training, along with support for additional evaluation frameworks and different deployment models, including RAG.

  • Principal Machine Learning Engineer

    Boston, Massachusetts
    InstructLab Model Evaluation Team

    This role involves evaluating the quality and performance of models produced by InstructLab. This includes evaluating models against community and industry benchmarks, as well as tracking improvements to the models over time. Engineers in this role will run experiments, tests, and large-scale distributed jobs in support of evaluation-related AI product features. They will lead a variety of coding projects in different programming languages (primarily Python), helping transition software components from research into product. Engineers in this role will also participate in and lead upstream communities, with a focus on model evaluation projects. They will also promote machine learning and data science technologies and ongoing machine learning projects to a variety of technical and non-technical stakeholders.

    Note: the “Apply Now” job descriptions are the official job postings.