Deepgram is a foundational AI company building state-of-the-art, production-ready AI models that streamline human-computer interaction and amplify productivity. By enabling seamless communication between humans and machines, we believe we can harness the untapped potential of AI and help pave the way for a more productive future. We passionately believe in the potential of audio data to transform lives, businesses, and interactions across the globe – which is why Deepgram is trusted by well-respected companies like NASA, Twilio, Auth0, and Spotify to push the boundaries of what is possible in voice technology!
At Deepgram, we spend every day tackling big, real-world challenges in voice. Our customers hire us to solve their hardest problems, taking real, complex audio and transforming it into novel insights. And to raise the bar, everything we build needs scale in its DNA. We aren’t content with simple horizontal scaling: we intend to replace entire data centers dedicated to speech analytics with a single rack of servers. These challenges demand creativity and innovative problem-solving every day.
As a Research Scientist at Deepgram, you’ll have the freedom to explore and uncover breakthroughs. You’ll also have a mandate to build — applying the latest advancements in deep learning to develop accurate and performant voice AI models. You will collaborate with product & engineering to help deploy these models in the most scalable speech API on the planet. We look forward to you bringing your whole self to work, sharing learnings from your latest experiments, and collaborating with us to advance the state of AI and voice technology.
Deepgram is currently looking for an experienced Research Scientist who has worked extensively on building models to solve hard problems in voice AI domains, including automatic speech recognition (ASR), text-to-speech (TTS), diarization and speaker identification, language detection, or code switching. Voice AI is a challenging problem space that involves dealing with raw audio waveforms generated by the human voice. The complexity of audio data poses unique infrastructure, engineering, and modeling challenges that are orders of magnitude more difficult than those of working with text. You should have extensive experience with the hard technical aspects of deep learning for audio, such as speech data curation and characterization, development of expressive and efficient neural network architectures for speech, distributed training at large scale, and optimization of speech models for inference at scale.
What You’ll Do:
- Stay up to date with the latest advances in deep learning with a particular eye towards their implications and applications within our products.
- Design and carry out experimental programs to build new voice AI models that solve critical problems for our customers.
- Drive large-scale training jobs successfully on distributed computing infrastructure.
- Optimize model architectures to make them as fast and memory-efficient as possible; deploy new models into production for use at massive scale.
- Document and present results and complex technical concepts clearly for internal and external audiences.
You’ll Love This Role If You:
- Are passionate about AI and excited about working on state-of-the-art speech research
- Enjoy building from the ground up and love to create new systems from scratch
- Are obsessed with building and shipping practical solutions to real-world problems
- Are data-driven and prefer to solve problems using iterative experimentation
- Have strong communication skills and are able to translate complex concepts into simple terms, depending on the target audience
It’s Important To Us That You Have:
- Prior industry experience in building deep learning models to solve audio problems, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms.
- Proven experience building models from a blank page and owning the entire deep learning stack, including data curation, characterization, and cleaning; architecture design and model building; distributed large-scale training; and model optimization for inference.
- Strong software engineering skills, with particular emphasis on developing clean, modular code in Python and working with PyTorch.
- Prior experience in designing and conducting experimental programs with the ability to rapidly iterate and change course as needed.
It Would Be Great if You Had:
- Deep understanding and experience working with state-of-the-art network architectures including transformers.
- Experience building generative audio models for speech or music synthesis.
- Understanding of different parallelism paradigms for efficient distributed training.
- Up-to-date knowledge of recent techniques and developments across multiple voice AI problem domains (ASR, TTS, diarization, etc.).
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding after closing our Series B funding round last year. If you’re looking to work on cutting-edge technology and make a significant impact in the AI industry, we’d love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
To apply, please visit the following URL: https://jobicy.com/jobs/74847-research-scientist-2