Embedded AI Developer/Data Scientist
Austin, TX Artificial Intelligence
Job Type
Full-time
Description

Company Overview


Ambiq's mission is to develop the lowest-power semiconductor solutions to enable intelligent devices everywhere and to drive a more energy-efficient, sustainable, and data-driven world. Ambiq has helped leading manufacturers worldwide develop products that last weeks on a single charge (rather than days), while delivering a maximum feature set in compact industrial designs. Ambiq's goal is to take Artificial Intelligence (AI) where it has never gone before in mobile and portable devices, using Ambiq's advanced ultra-low power system on chip (SoC) solutions. Ambiq has shipped more than 230 million units as of October 2023. For more information, visit www.ambiq.com.


Our innovative and fast-moving teams of research, development, production, marketing, sales, and operations are spread across several continents, including the US (Austin and San Jose), Taiwan (Hsinchu), China (Shenzhen and Shanghai), Japan (Tokyo), and Singapore. We value continued technology innovation, fanatical attention to customer needs, collaborative decision-making, and enthusiasm for energy efficiency. We embrace candidates who also share these same values. The successful candidate must be self-motivated, creative, and comfortable learning and driving exciting new technologies. We encourage and nurture an environment for growth and opportunities to work on complex, engaging, and challenging projects that will create a lasting impact. Join us on our quest for 100 billion devices. The endpoint intelligence revolution starts here.


This role can be based in San Diego, CA, or Austin, TX.


The expectation is that the candidate will be required to maintain a regular in-office presence five days per week.


Scope

At Ambiq, the Endpoint AI team enables state-of-the-art ML and DL model development across our hardware portfolio, using sophisticated model compression techniques to deploy previously impractical AI tasks to battery-powered environments. Our team of data scientists researches neural architectures best suited to our customers' needs, selects the models most amenable to deployment on our platform, and trains them, carefully tuning for memory, compute, and energy constraint tradeoffs. Finally, we publish our findings to our Model Zoo and socialize them via conferences, workshops, and publications.


Beyond a healthy obsession with computational efficiency, the successful candidate will be comfortable operating in a 'version zero' environment, marshaling internal, open-source, and third-party resources to solve our customers' problems quickly and elegantly.

Requirements
  • Identify, refine, and/or develop sophisticated ML and DL models for deployment in highly constrained environments.
  • Train models using SOTA compression techniques to fit in specific memory, compute, and power envelopes, making trade-offs between compression and accuracy.
  • Publish and maintain these models in a Model Zoo, including documentation and other assets needed by our customers to bootstrap their internal AI features.
  • Socialize these achievements via conferences, meetups, workshops, and publications.

Required Skills/Abilities

  • Experience with SOTA pruning, distillation, and quantization approaches for CNNs and RNNs
  • Experience with one or more of the following AI task domains: audio classification, speech, vision, and/or time series tasks, including domain-specific feature extraction related to those tasks
  • TensorFlow (TFLite, TFLite for Microcontrollers, and/or PyTorch are a plus)
  • Dataset creation and curation

Bonus Qualifications

  • Past “TinyML” involvement or experience
  • Experience developing and optimizing for TFLite for Microcontrollers
  • Experience with embedded C/C++ environments
  • Experience with compression of attention-based architectures
  • Experience with model-to-binary compilers (IREE, MicroTVM, etc.)
  • Experience with ONNX, TOSA, Jax, LLVM, and/or MLIR
  • Experience with optimizing for heterogeneous AI compute (e.g., CPU+NPU+DSP)

Education and Experience

  • A bachelor's degree in computer science or a related field is required, with at least 2 years of relevant experience. A master's degree or PhD in a related topic is highly desirable.
Salary Description
$95,000 - $165,000