MLOps Engineer
Business Stream: BRAINLY



NOTICE: ONLINE RECRUITMENT PROCESS

LOCATION: KRAKÓW OR REMOTELY FROM POLAND / BARCELONA OR REMOTELY FROM SPAIN

SALARY: up to 35 000 PLN gross/monthly

Brainly is the world’s largest online learning community, uniting students, parents, and teachers in solving their academic problems and exchanging knowledge. Every month we are proud to be home to 350 million users around the world. Brainly’s knowledge base consists of hundreds of millions of Q&A entries in more than 12 languages, covering a broad spectrum of educational subjects across grade levels.

Content Quality Systems team

Our vision is to transform Brainly’s knowledge base into a collection of high-quality educational content. The Content Quality Systems team is responsible for providing the machine learning capabilities to:

  • Learn new knowledge representations for text and media-rich content, capturing its validity and correctness
  • Enhance and automate the content management process
  • Ensure the quality and safety of Brainly’s knowledge base

Machine Learning Infra Team
Our vision is to give every machine learning practitioner at Brainly the tools and the know-how required to excel. We pair the unique challenges of the Brainly organization with expertise and standards for how machine learning should be developed and operated at scale.

We provide the infrastructure, best practices, and knowledge to enable other technology or product teams to develop novel product features and internal services using machine learning.

ABOUT THE ROLE

You will have the opportunity to turn machine learning artifacts into production systems, to help implement state-of-the-art MLOps practices, and to improve your skills in NLP, machine learning, large-scale data processing, and information retrieval.

The ideal candidate is an enthusiast of educational technologies with a background in software development and a blended skill set spanning cloud infrastructure, machine learning, and DataOps/DevOps.

More importantly, we are looking for an engineer with strong expertise in DataOps and DevOps who is willing to learn and master MLOps.

WHAT WILL YOU DO?

  • Turn machine learning artifacts into production systems, integrated with other product features or business processes.
  • Deploy and orchestrate data pipelines that transform raw data into features digestible by ML algorithms, and keep the feature store up to date.
  • Deploy and maintain the label management system and automate the integration of the data labeling processes.
  • Build sanity checks and dashboards for monitoring data quality, model drift, operational efficiency, and system performance.
  • Monitor models to identify potential biases and unfair behavior.
  • Work with ML Engineers to deploy and orchestrate robust pipelines for training, evaluation, and inference at scale.
  • Build tools for supporting experiments, development, and debugging of machine learning models.
  • Create and maintain the infrastructure required for both development and production environments using infrastructure as code.
  • Create automated workflows for building, testing, experiment tracking, versioning, and deployment using CI/CD tools.
  • Implement safe release and deployment models (e.g. canary releases, blue/green deployments, load autoscaling) to achieve resilience against component failures and traffic bursts.
  • Promote a DevOps culture and practices across the whole team.
  • Work with the ML Infra team, Solutions Architects, and the Automation Infra team to identify and architect infrastructure solutions that empower the team to move faster, more effectively, and with a higher level of automation.

WHAT WILL YOU NEED TO BE SUCCESSFUL IN THIS ROLE?

  • Experience
    • Required:
      • 2+ years of practical experience and expert knowledge of cloud architectures (AWS, GCP, or Azure), services, and administration best practices for stability and functionality (depending on seniority).
      • 2+ years of experience with microservices, REST or GraphQL APIs, load balancing, and production web-hosting networking (depending on seniority).
      • 2+ years of experience with CI/CD pipelines or other code automation techniques (depending on seniority).
    • Preferred:
      • 4+ years of experience with production environments (depending on seniority).
      • 1+ years of experience with machine learning production environments (depending on seniority).
      • Bachelor’s degree or above in computer science, software/computer/IT engineering or other STEM fields.
      • Working experience in a similar position in DevOps, Data Engineering, Back-end, or related fields.
      • Experience with modern cloud serverless services.
      • Experience with data storage and data processing technologies (e.g. relational/non-relational databases, warehouses, cloud storage solutions, different processing engines, Apache Spark).
      • Experience with orchestrating large volume ETL jobs or data streaming pipelines.
      • Experience with Golang.
  • Attributes
    • Required:
      • Motivation to learn fast and grow in the required areas to succeed in the job.
      • Passionate about automating workflows.
      • Culture of DevOps and high-quality software standards.
      • Ownership of problems/challenges from beginning-to-end.
      • Positive attitude and willingness to address challenges and complex problems.
      • Team player attitude and clear communications skills.
      • High level of self-organization.
  • Skills and systems
    • Required:
      • Terraform or other IaC frameworks (e.g. CloudFormation, SAM, Google Deployment Manager, serverless.com, or similar).
      • Bash and the Unix command-line toolkit (e.g. the AWS CLI or the boto3 SDK).
      • CI/CD tools (e.g. GitHub Actions, AWS CodePipeline, Travis, CircleCI, DVC/CML, or similar).
      • Logging, debugging, monitoring, and alerting tools (e.g. the Elastic Stack, Datadog, AWS CloudWatch, Sentry, AWS X-Ray, Thundra, New Relic, or similar).
      • Container technologies such as Docker and Kubernetes.
      • Working knowledge of Python, SQL, and Bash.
      • Fluent in English.
    • Preferred:
      • Familiar with agile development and lean principles.
      • Familiarity with at least some data and cloud infrastructure technologies, such as Spark, Databricks, Glue, EMR, AWS Batch, AWS Lambda, Postgres, key-value stores, Redshift, or Snowflake.
      • Familiarity with at least some deployment and orchestration technologies, such as AWS Step Functions, AWS SageMaker Pipelines, Seldon, Kubeflow, TensorFlow Extended, or Airflow.
      • Familiarity with at least some ML technologies, such as TensorFlow, PyTorch, Spark ML, scikit-learn, XGBoost, MLflow, Neptune.ai, or related frameworks.

ADDITIONAL DETAILS

Some of our benefits (they vary slightly depending on the location):

  • Flexible working hours
  • Personal development budget of $800 per year, plus an unlimited time-off policy for participation in conferences and workshops, and access to an online learning platform with courses from Udemy, Harvard ManageMentor, and many others
  • Fully paid private health care packages for you and your family (dental care included) provided by Luxmed
  • Fully paid life insurance provided by Warta
  • Multisport Plus card
  • Access to the Mental Health Helpline – virtual support from external psychologists, psychotherapists, and coaches
  • AskHenry services – a personal concierge service that helps you handle everyday errands (like IKEA shopping or a visit to the shoemaker)
  • Possibility to join one of our Employee Resource Groups and initiatives (Inclusion Council, Ladies at Brainly, Brainly Cares)
  • If needed, an additional budget for remote-work accessories

Apply now