Thelonious Cooper | LinkedIn | Github | ESSG
About Me
Hi, I'm Thelonious Cooper. I am a senior undergraduate at MIT studying Electrical Engineering, with minors in Applied Mathematics and Music Technology. I seek to effect positive change in the world by leveraging my knowledge of mathematical modeling and hardware/software engineering to innovate across scales in the fields of embedded medical technology and climate/sustainability systems.
Accolades and Interests
During my time at MIT I have been awarded the title of MIT Climate Grand Challenges Undergraduate Research and Innovation Scholar for my work in the SuperUROP program. I have also served on the Undergraduate Advisory Group for the Schwarzman College of Computing, interfacing directly with leading faculty in MIT EECS and the SCC to advocate for undergraduates. I am also the standing academic chair for Chocolate City, a living group of underrepresented men of color in STEM. I am a member of the Earth Signals and Systems Group, where I study mathematical methods for information-theoretic stochastic optimization. We apply techniques such as Gaussian Process Regression and Markov-chain Monte Carlo to optimize climate models and forecast extremes for risk assessment. I further apply these methods to create embedded applications of nonlinear model-predictive control.
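As an illustration of this toolkit, here is a minimal Gaussian Process regression sketch in plain NumPy (a generic RBF-kernel posterior mean, not the group's actual models; the toy data and length scale are assumptions):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, length_scale=1.0):
    """GP regression posterior mean: K_* (K + noise*I)^-1 y."""
    k = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train, length_scale)
    return k_star @ np.linalg.solve(k, y_train)

# Toy example: interpolate sin(x) from three observations.
x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.sin(x_obs)
mean = gp_posterior_mean(x_obs, y_obs, np.array([1.0]))
```

With near-zero observation noise the posterior mean interpolates the training targets; in practice the length scale and noise would themselves be fit by maximizing the marginal likelihood.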
CIVO is an industrial partnership center with the mission "To promote the development, use, and dissemination of innovative display, graphics, and optical technology for the healthy and diseased eye." Several high-tech companies in the AR/VR space (Apple, Meta, and Google, to name a few) are very interested in how the visual system interacts with their technology. During my time at UC Berkeley I worked with the Ocular Motor Control Lab and the Active Vision and Computational Neuroscience Lab to generate large-scale synthetic 3D scene data for computer vision applications. Building on the NVIDIA Isaac Lab framework, I wrote a library that integrates a simulacrum of the human visual system into a 3D rendering environment. Goals for this project include training CV models to decide where to look based on visual cues and to reason about their location in the environment purely from visual data.
Developed and implemented a novel methodology for evaluating the resilience of sophisticated drone autopilots. Simulated communications failures within an RTOS/embedded-Linux flight stack and modeled autopilot recovery performance.
Presented at the PX4 Developer Summit in 2023; the recorded talk is available below.
PX4-DevSummit YouTube
Published a first-author paper detailing these methods at the 2024 IEEE International Conference on Unmanned Aircraft Systems.
IEEE Conference Publication | IEEE Xplore
Worked under PhD candidate Mustafa Doga Dogan on research centered on spatial encoding schemes and computer vision. Implemented ArUco marker tracking on the infrared cameras of the Microsoft HoloLens for lightweight object tracking in AR. Bypassed an Android hardware restriction to access the infrared camera on a phone.
Worked in an embedded Linux environment to implement the BGP routing stack within the next-generation firmware for the Meraki MX routing security appliance. Integrated legacy C software into a modern C++ development environment. Performed extensive integration testing in a proprietary Python-based test environment and wrote unit tests in Google's GoogleTest C++ framework. Gained experience in an Agile production environment with rigorous code review. Achieved an outstanding performance evaluation.
Worked under Professor Nir Grossman and Dr. David Wang to facilitate communication and control in C++ between an EEG system and a custom visual-response device for neuroscience experiments.
MIT 18.354 Nonlinear Dynamics: Continuum Systems individual final project.
This paper is a playful exploration and derivation of Kalman filtering and computational fluid simulation.
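The Kalman-filtering side can be sketched as the standard scalar filter (a textbook form, not the paper's derivation; the noise variances and measurements here are illustrative):

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter tracking a (nearly) constant state.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state model is identity, so only uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings of a true value near 1.0 settle toward it.
est = kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95])
```

The gain `k` shrinks as the state estimate becomes more certain, so later measurements nudge the estimate less than early ones.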
MIT 6.205: Digital Systems Laboratory individual final project.
Bespoke is a system for synthesizing FPGA stream processors from 8-bit quantized neural network specifications.
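A sketch of the 8-bit quantization such specifications build on (generic symmetric per-tensor quantization, not Bespoke's actual encoding):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [scale * v for v in q]

codes, scale = quantize_int8([0.5, -1.0, 0.25])
```

On an FPGA the int8 codes let multiply-accumulate stages use narrow fixed-point datapaths, with the single `scale` factor applied once at the output.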
A multithreaded Rust app that converts an audio stream into a MIDI stream to trigger external synthesizers.
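The core pitch-to-note mapping behind any audio-to-MIDI converter, sketched in Python (the app itself is in Rust; this shows only the standard formula, A4 = 440 Hz = MIDI note 69):

```python
import math

def freq_to_midi(freq_hz):
    """Map a detected pitch in Hz to the nearest MIDI note number
    (equal temperament: 12 semitones per octave, A4 = 440 Hz = 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(note):
    """Human-readable note name for a MIDI note number."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[note % 12]}{note // 12 - 1}"
```

The hard part in practice is the upstream pitch detection on the raw audio; once a frequency estimate exists, this mapping plus note-on/note-off timing yields the MIDI stream.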
Using ML to style-transfer vocal timbre via differentiable digital signal processing (DDSP) in TensorFlow.
A WebGL fragment shader that renders the Mandelbrot and Julia sets.
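The per-pixel escape-time iteration such a shader evaluates, sketched here in Python rather than GLSL:

```python
def escape_time(c, max_iter=100):
    """Iterations of z -> z^2 + c before |z| exceeds 2.
    Points that never escape are treated as inside the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter
```

In the shader, each fragment's coordinate supplies `c` (or, for Julia sets, the starting `z` with a fixed `c`), and the returned iteration count is mapped to a color.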
A simple library for losslessly streaming compressed video in real time.
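A sketch of the lossless round trip such a library must guarantee, using zlib as a stand-in codec (the library's actual codec is not described here; this only illustrates the compress/decompress identity on raw frame bytes):

```python
import zlib

def compress_frame(raw: bytes) -> bytes:
    """Losslessly compress one raw frame for transport."""
    return zlib.compress(raw, level=6)

def decompress_frame(packet: bytes) -> bytes:
    """Exactly recover the original frame bytes on the receiving side."""
    return zlib.decompress(packet)

frame = bytes(range(256)) * 16  # stand-in for raw pixel data
packet = compress_frame(frame)
```

"Lossless" here means the decompressed bytes are bit-identical to the input, unlike typical video codecs, which trade exactness for much higher compression ratios.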