About | Me |
---|---|
Hi, I'm Thelonious Cooper. I am an Electrical Engineer who graduated from MIT in Spring 2025, and I am starting an EECS PhD at UC Berkeley in Fall 2025. I seek to effect positive change in the world by leveraging my knowledge of mathematical modeling and hardware/software engineering to innovate across scales in the fields of embedded control, communication, and sensing. | |
Interests and Accolades |
---|
I want to center my work on tangible systems that deliver large-scale human benefit, with a focus on disadvantaged populations. My interests lie broadly at the intersection of applied mathematics and cybernetic systems: I aim to develop new methodologies for sensing and actuation under uncertainty, alongside secure digital hardware platforms tailored to such applications. My full-stack knowledge and experience (Math -> Physics -> Electronics -> Firmware -> Software) let me think big and connect problem-solving techniques across many domains. During my time at MIT I was named an MIT Climate Grand Challenges Undergraduate Research and Innovation Scholar for my work in the [SuperUROP](Thelonious Abraham Cooper - MIT SuperUROP – Advanced Undergraduate Research Opportunities Program) program. I have served on the Undergraduate Advisory Group for the [Schwarzman College of Computing](MIT Schwarzman College of Computing), interfacing directly with leading faculty in MIT EECS and the SCC to advocate for undergraduates, and as academic chair for Chocolate City, a living group of underrepresented men of color in STEM. I am an alumnus of the Earth Signals and Systems Group, where I studied mathematical methods for information-theoretic stochastic optimization. We applied techniques in this field such as Gaussian Process Regression and Markov-chain Monte Carlo to optimize climate models and forecast extremes for risk assessment, and I further apply these methods to embedded applications of nonlinear model predictive control. |
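As a toy illustration of the nonlinear model predictive control mentioned above, here is a minimal random-shooting MPC loop for a hypothetical pendulum. The dynamics, cost weights, and the `mpc_action` helper are all illustrative sketches, not code from my embedded work.

```python
import numpy as np

def step(x, u, dt=0.05):
    # hypothetical pendulum dynamics: x = [theta, omega], torque input u
    theta, omega = x
    domega = -9.81 * np.sin(theta) + u
    return np.array([theta + omega * dt, omega + domega * dt])

def mpc_action(x0, horizon=20, samples=256, rng=np.random.default_rng(0)):
    # random-shooting nonlinear MPC: sample candidate control sequences,
    # roll out the model, and apply the first action of the cheapest rollout
    best_cost, best_u0 = np.inf, 0.0
    for _ in range(samples):
        u_seq = rng.uniform(-2.0, 2.0, horizon)
        x, cost = x0.copy(), 0.0
        for u in u_seq:
            x = step(x, u)
            cost += x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2  # drive theta -> 0
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0
```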
Working with Professor Zaijun Chen to develop integrated photonic devices and algorithms for real-time data processing and sensor-fusion applications, and establishing frameworks and architectures for reliable and provably safe cybernetic systems.
CIVO is an industrial partnership center with the mission "To promote the development, use, and dissemination of innovative display, graphics, and optical technology for the healthy and diseased eye." Several high-tech companies in the AR/VR space (Apple, Meta, and Google, to name a few) are very interested in how the visual system interacts with their technology. During my time at UC Berkeley I worked with the Ocular Motor Control Lab and the Active Vision and Computational Neuroscience Lab to generate large-scale synthetic 3D scene data for computer vision applications. Building on the NVIDIA [Omniverse](Omniverse Platform for OpenUSD | NVIDIA) and IsaacLab frameworks, I wrote a library that integrates a simulacrum of the human visual system into a 3D rendering environment. Goals for this project include training CV models that can decide where to look based on visual cues and reason about their location in the environment purely from visual data.
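To give a flavour of what "integrating a simulacrum of the human visual system" can mean, here is a crude foveated-sampling sketch over a rendered frame. It is a standalone NumPy illustration under my own simplifying assumptions, not the library's actual Omniverse/IsaacLab integration.

```python
import numpy as np

def foveate(frame, gaze_xy, fovea_radius=64, periphery_factor=4):
    """Crude foveation: keep full resolution inside a disc around the gaze
    point and block-average the periphery, mimicking the drop in visual
    acuity with eccentricity. frame is an HxWx3 uint8 array."""
    h, w, _ = frame.shape
    # periphery: block-average, then nearest-neighbour upsample back
    hh = h // periphery_factor * periphery_factor
    ww = w // periphery_factor * periphery_factor
    blocks = frame[:hh, :ww].reshape(hh // periphery_factor, periphery_factor,
                                     ww // periphery_factor, periphery_factor, 3)
    coarse = blocks.mean(axis=(1, 3)).astype(np.uint8)
    out = frame.copy()
    out[:hh, :ww] = np.repeat(np.repeat(coarse, periphery_factor, 0),
                              periphery_factor, 1)
    # fovea: restore full-resolution pixels inside the disc around the gaze
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= fovea_radius ** 2
    out[mask] = frame[mask]
    return out
```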
Developed and implemented a novel methodology for evaluating the resilience of sophisticated drone autopilots: simulated communications failures within an RTOS/embedded-Linux flight stack and modeled autopilot recovery performance (a minimal sketch of the failure-injection idea appears below).
Presented at the PX4 Developer Summit in 2023; the recorded talk is available below.
PX4-DevSummit YouTube
Published a first-author paper detailing these methods at the 2024 IEEE International Conference on Unmanned Aerial Systems, where I served as reliability systems program co-chair.
IEEE Conference Publication | IEEE Xplore
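The core of the resilience methodology above is injecting link failures between components of the flight stack. The sketch below shows the general shape of such a failure-injection shim; the `LossyLink` class, its parameters, and the callback-based framing are hypothetical, not the harness used in the paper.

```python
import random
import time

class LossyLink:
    """Hypothetical failure-injection shim sitting between two components
    (e.g. autopilot <-> companion computer): drops or delays messages to
    emulate a degraded communications link."""

    def __init__(self, drop_prob=0.2, max_delay_s=0.5, seed=0):
        self.drop_prob = drop_prob
        self.max_delay_s = max_delay_s
        self.rng = random.Random(seed)

    def forward(self, msg, send):
        """Pass msg to the send() callback, unless it is dropped."""
        if self.rng.random() < self.drop_prob:
            return False                                      # message lost
        time.sleep(self.rng.uniform(0.0, self.max_delay_s))   # added latency
        send(msg)
        return True
```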
Created a novel PyTorch implementation of Informative Ensemble Kalman Learning, an ensemble-based Bayesian method related to Gaussian Process Regression and Markov-chain Monte Carlo, to optimize climate models and forecast extremes for risk assessment.
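For context, the heart of any ensemble Kalman method is a single gain-based ensemble update. Below is a minimal stochastic (perturbed-observation) ensemble Kalman update in PyTorch; it is a generic textbook sketch, not the Informative Ensemble Kalman Learning variant itself.

```python
import torch

def enkf_update(X, y, H, R):
    """One stochastic ensemble Kalman update.
    X: (n_ens, n_state) ensemble of states/parameters
    y: (n_obs,) observations
    H: callable mapping a state vector to predicted observations
    R: (n_obs, n_obs) observation-noise covariance."""
    n_ens = X.shape[0]
    Y = torch.stack([H(x) for x in X])       # predicted observations
    A = X - X.mean(0, keepdim=True)          # state anomalies
    B = Y - Y.mean(0, keepdim=True)          # observation anomalies
    P_xy = A.T @ B / (n_ens - 1)             # state-observation cross-covariance
    P_yy = B.T @ B / (n_ens - 1) + R         # innovation covariance
    K = P_xy @ torch.linalg.inv(P_yy)        # Kalman gain
    noise = torch.distributions.MultivariateNormal(
        torch.zeros(y.shape[0]), R).sample((n_ens,))
    return X + (y + noise - Y) @ K.T         # perturbed-observation update
```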
Worked under PhD candidate Mustafa Doga Dogan on research centered on spatial encoding schemes and computer vision. Implemented ArUco marker tracking on the infrared cameras of the Microsoft HoloLens for lightweight object tracking in AR, and bypassed an Android hardware restriction to access the infrared camera on a phone.
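For reference, ArUco detection itself is only a few lines with OpenCV's `cv2.aruco` module (API shown is for OpenCV 4.7+); the HoloLens and Android infrared capture pipelines were the interesting part of this work and are out of scope for this sketch.

```python
import cv2

# "frame.png" is a stand-in for a frame grabbed from the infrared camera
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detect 4x4 ArUco markers in the frame
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    print("detected marker ids:", ids.ravel().tolist())
```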
Worked in an embedded Linux environment to implement the BGP routing stack within the next-generation firmware for the Meraki MX routing security appliance. Integrated legacy C software into a modern C++ development environment, performed extensive integration testing in a proprietary Python-based test environment, and wrote unit tests in Google's C++ GTest framework. Gained experience in an Agile production environment with rigorous code review and achieved an outstanding performance evaluation.
Worked under Professor Nir Grossman and Dr. David Wang to facilitate communication and control, in C++, between an EEG system and a custom visual response device for neuroscience experiments.
MIT 18.354 Nonlinear Dynamics: Continuum Systems individual final project.
This paper is a playful exploration and derivation of Kalman filtering and computational fluid simulation.
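For readers unfamiliar with the filter the paper derives, a single predict/update cycle of the standard linear Kalman filter looks like the sketch below; the fluid-simulation coupling explored in the paper is not reproduced here.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (textbook form).
    x: state mean, P: state covariance, z: measurement,
    F: dynamics model, H: observation model, Q/R: process/measurement noise."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```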
MIT 6.205: Digital Systems Laboratory individual final project.
Bespoke is a system for synthesizing FPGA stream processors from 8-bit quantized neural network specifications.
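To give a sense of the input such a generator consumes, here is a symmetric per-tensor int8 quantization sketch; the function names and format are illustrative only and are not Bespoke's actual specification format.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: returns int8 weights plus a
    single float scale, the kind of (weights, scale) pair a stream-processor
    generator could map onto fixed-point FPGA arithmetic."""
    scale = np.abs(w).max() / 127.0 if w.size else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale
```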
A multithreaded Rust app that converts an audio stream into a MIDI stream to trigger external synthesizers.
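The core signal path of such a converter is pitch detection followed by a frequency-to-MIDI-note mapping (69 + 12*log2(f/440)). The Python sketch below illustrates that idea with a naive autocorrelation pitch estimate; the Rust app's actual algorithm and threading are not shown.

```python
import numpy as np

def pitch_to_midi(frame: np.ndarray, sample_rate: int = 44_100) -> int:
    """Estimate the fundamental frequency of a mono audio frame by
    autocorrelation, then convert it to the nearest MIDI note number."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = sample_rate // 1000                    # ~1 kHz upper pitch bound
    lag = lag_min + int(np.argmax(ac[lag_min:]))     # strongest periodicity
    f0 = sample_rate / lag
    return int(round(69 + 12 * np.log2(f0 / 440.0))) # A4 = 440 Hz = MIDI 69
```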
21M.080 individual final project. Uses ML to style-transfer vocal timbre via differentiable digital signal processing (DDSP) in TensorFlow.
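At the heart of DDSP is a differentiable harmonic synthesizer driven by per-frame f0 and harmonic amplitudes. The NumPy sketch below shows that additive-synthesis idea in miniature; it is illustrative only and does not use the TensorFlow DDSP library the project was built on.

```python
import numpy as np

def harmonic_synth(f0, amps, sample_rate=16_000, frame_rate=250):
    """Additive synthesis from frame-rate controls.
    f0: (frames,) fundamental frequency in Hz
    amps: (frames, n_harmonics) linear harmonic amplitudes."""
    frames, n_harm = amps.shape
    n = frames * (sample_rate // frame_rate)
    t_frame = np.linspace(0.0, 1.0, frames)
    t_audio = np.linspace(0.0, 1.0, n)
    f0_a = np.interp(t_audio, t_frame, f0)            # upsample f0 to audio rate
    phase = 2 * np.pi * np.cumsum(f0_a) / sample_rate
    audio = np.zeros(n)
    for k in range(1, n_harm + 1):                    # sum sinusoids at k * f0
        a_k = np.interp(t_audio, t_frame, amps[:, k - 1])
        audio += a_k * np.sin(k * phase)
    return audio / max(np.abs(audio).max(), 1e-9)     # normalize
```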
A WebGL fragment shader that renders the Mandelbrot and Julia sets.
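The shader's per-pixel logic is the classic escape-time iteration z <- z^2 + c. A vectorised Python equivalent is sketched below; the real project runs this per pixel in GLSL on the GPU.

```python
import numpy as np

def escape_time(z0, c, max_iter=100):
    """Count iterations of z <- z^2 + c before |z| exceeds 2.
    Mandelbrot: z0 = 0 everywhere, c = pixel coordinate.
    Julia:      z0 = pixel coordinate, c = a fixed constant."""
    shape = np.broadcast(z0, c).shape
    z = np.array(np.broadcast_to(z0, shape), dtype=complex)
    c = np.broadcast_to(c, shape)
    counts = np.zeros(shape, dtype=np.int32)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0              # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts += mask
    return counts

xs, ys = np.meshgrid(np.linspace(-2.0, 1.0, 600), np.linspace(-1.2, 1.2, 480))
grid = xs + 1j * ys
mandelbrot = escape_time(0.0, grid)
julia = escape_time(grid, -0.8 + 0.156j)
```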
This is a simple library for losslessly streaming compressed video in real time.
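A minimal version of the idea is lossless per-frame compression plus length-prefixed framing over a socket. The sketch below uses zlib and TCP as stand-ins, which may differ from the codec and transport the library actually uses.

```python
import socket
import struct
import zlib
import numpy as np

def send_frame(sock: socket.socket, frame: np.ndarray) -> None:
    """Compress one raw frame losslessly and send it length-prefixed so the
    receiver can find frame boundaries in the TCP byte stream."""
    payload = zlib.compress(frame.tobytes(), level=1)   # fast, still lossless
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_frame(sock: socket.socket, shape, dtype=np.uint8) -> np.ndarray:
    """Read one length-prefixed frame and reconstruct the original array."""
    (length,) = struct.unpack("!I", _read_exact(sock, 4))
    raw = zlib.decompress(_read_exact(sock, length))
    return np.frombuffer(raw, dtype=dtype).reshape(shape)

def _read_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf
```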