Aidan A. Singh
Hi, I’m Aidan — a software engineer with a focus on deep learning applications for language and audio.
I’m currently a Software Development Engineer at AWS Bedrock, a service for generative AI inference. I’ve contributed to many parts of the service, including LLM evaluation tools for comparing foundation models, systems for classifying safe AI use, and an internal data lake that grew to over a petabyte in six months. My recent contributions have specifically supported Anthropic Claude inference, notably used for Claude Code and Chat.
My earlier experience spans software in the music and audio industries, drawing on digital signal processing, data engineering, and machine learning. As an undergraduate researcher at NYU’s Music and Audio Research Lab, I developed algorithms to aggregate spatial audio data for machine learning. At Universal Music Group, I built a production data pipeline for ingesting streaming data from Meta, and analyzed Google Cloud Platform storage, proposing a cost-saving restructuring that was later implemented. At Cornell Tech’s startup studio, I created an AI-powered tool to help music supervisors find the best songs for sync licensing opportunities. At TulipAI, I supervised the creation of a cultural audio dataset and a literature review for fine-tuning Meta’s AudioCraft models.