***All applicants must be located outside of the United States.
We all speak different languages, and languages come in many forms. That language could be Mandarin or English, Python or Ruby. Languages offer unique views of the world, but can also create barriers in communication. At GLOBO, our collective, unique skills allow us to bridge the divide and enable communication, helping change the lives of some of the most vulnerable in our society. - SVP of Engineering
About the Role:
As an AI Engineer at GLOBO, you will sit at the exciting intersection of Software Engineering, Data Science, and Artificial Intelligence. You won’t just be tuning models in a vacuum; you will be building the intelligent engines that power the next generation of our communication platform.
In this role, you will leverage cutting-edge LLMs (specifically Claude via AWS Bedrock), traditional Machine Learning techniques, Serverless and Containerized Computing, and agentic frameworks to automate complex workflows and enhance human-to-human communication. You will work closely with our Engineering teams to operationalize AI—taking concepts from "cool demo" to scalable, production-grade features that help people communicate when it matters most.
If you like this idea, here’s more...
Reporting to our Director of Engineering, you will design and implement AI-driven solutions using a combination of homegrown systems and commercial tools like CrewAI and Agent Core. You will build robust data pipelines with Airflow and AWS Lambda to feed these models, ensuring our systems are not only smart but also reliable and performant. You will help us solve "gnarly" problems—like real-time translation context, quality assurance automation, and predictive resource allocation.
Key Responsibilities:
- Complex AI Platforms: Build and maintain our custom AI applications, which combine best-of-breed technologies and models to power our in-house AI systems.
- Design & Build Agents: Architect and deploy autonomous AI agents using frameworks like CrewAI or Agent Core to automate internal workflows and enhance user experiences.
- LLM Integration: Integrate LLMs (Claude/Bedrock) into our core platform, optimizing for latency, cost, and response quality.
- Data Pipeline Engineering: Build and maintain scalable data pipelines using Airflow and AWS Lambda to support RAG (Retrieval-Augmented Generation) and model fine-tuning.
- Bridge the Gap: Act as the connective tissue between our Ruby on Rails backend and our Python-based AI services.
- Performance Optimization: Monitor token usage, hallucinations, and model drift to ensure our clients have a safe, reliable experience.
- Model Benchmarking & Evaluation: Develop model evals to identify the best-performing model for a given use case, including its estimated cost of execution.
- Innovate: Prototype new features rapidly, exploring how emerging AI tech can revolutionize the language services industry.
- Testing, Optimization, and Scaling: Leverage Test-Driven Development (TDD) to engineer reliable, repeatable systems, while continuously optimizing performance to support high-scale architecture.
About You:
You are a builder at heart. You understand that an LLM is only as good as the engineering wrapped around it. You are comfortable moving between "hacking" a prototype and engineering a stable system. You are excited by the chaos of the rapidly evolving AI landscape but disciplined enough to apply sound engineering practices to tame it.
Our Software & Stack:
In short, we've built a high-availability system that enables real-time interpretation, and you will be introducing new AI capabilities into this ecosystem. Our stack includes:
- AI/ML: AWS Bedrock (Claude), CrewAI, LangChain, Agent Core, SageMaker
- Data Engineering: Airflow, AWS Lambda, Python
- Core Platform: Ruby on Rails, PostgreSQL, React, Redis, Elasticsearch
- Infrastructure: AWS (EC2, RDS, S3), RabbitMQ, ECS/EKS, CDK
Requirements:
- Bachelor’s Degree in Computer Science, Data Science, or related field (or equivalent experience).
- 4+ years of experience in software engineering, data engineering, machine learning, or AI development.
- Strong proficiency in Python (experience with Ruby is a huge plus).
- Hands-on experience building applications with LLMs (OpenAI, Anthropic, Bedrock).
- Experience with Agentic Frameworks (CrewAI, AutoGPT, LangGraph, etc.).
- Experience designing data workflows with Airflow or similar orchestration tools.
- Experience using Test-Driven Development (TDD) to build tested, secure, scalable applications.
- Ability to communicate complex technical AI concepts to non-technical stakeholders.
About GLOBO:
GLOBO is a B2B communication platform provider specializing in translation and interpretation technology, services, data, and insights. For the third year in a row, GLOBO has been ranked in the top 10 on the Philadelphia 100 list of fastest-growing privately held companies.
What’s it like to work here? We’re a close-knit team with big ideas and ambitions. We make the impossible happen, and make hard tasks easier. We don’t take ourselves too seriously, but we’re serious about our mission—helping people communicate when it matters most.