Gabriel Kulp
Projects
Research Direction: Security for High-Stakes Deployments
As a team, we will decide which projects to pursue based on individual interests and skills. ISL's work falls into two lines of effort, and mentees are free to slot in wherever they fit, or even propose new projects.
In the research / component-level behavior workstream:
Side-channel attacks: Extracting LLM-produced tokens based on physical measurements during runtime
Out-of-band verification: Classifying GPU workload (training vs inference) based on physical measurements during runtime
Covert evals: Evaluating agent capability to covertly communicate (via specific physical means) with other agents
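These bullets name project areas rather than methods, but to give a flavor of the out-of-band verification project, here is a toy sketch of classifying a GPU power trace as training versus inference. Everything in it is invented for illustration: the synthetic traces, the duty-cycle feature, and the threshold are assumptions, not ISL's actual approach or real measurement data.

```python
# Toy illustration only: distinguish "training" from "inference" using a
# duty-cycle heuristic on a normalized (0..1) power trace. The heuristic,
# thresholds, and traces below are all invented for demonstration.

def duty_cycle(trace, threshold=0.5):
    """Fraction of samples whose power exceeds the threshold."""
    high = sum(1 for p in trace if p > threshold)
    return high / len(trace)

def classify(trace, cutoff=0.6):
    """Heuristic: training tends to keep the GPU saturated (high duty
    cycle), while interactive inference is burstier. Illustrative only."""
    return "training" if duty_cycle(trace) > cutoff else "inference"

# Synthetic traces: sustained near-full load vs. sparse bursts.
training_trace = [0.9] * 90 + [0.2] * 10
inference_trace = [0.1] * 70 + [0.95] * 30

print(classify(training_trace))   # training
print(classify(inference_trace))  # inference
```

In practice the interesting questions are exactly the ones this sketch dodges: which physical signals to measure, how to extract robust features, and how an adversarial operator might disguise one workload as the other.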
In the development / SL5 prototyping workstream:
Developing a program synthesis pipeline with air-gapped, open-source tooling to translate a formal specification into Verilog
Minimizing software dependencies in a memory-safe agent scaffold
Developing an interactive system state visualizer for demos
What we’re looking for in a Mentee
We will provide you with a lot of autonomy and plug-and-play access to a rare combination of tools and equipment—in exchange, we expect strong self-direction, intellectual ambition, and curiosity. We don't require experience in any particular field, and a mix of breadth and depth is often quite helpful. Some specific skill profiles we expect to be able to leverage: machine learning, electrical engineering, chip architecture, and formal methods.
What we’re like as Mentors
Gabriel stays hands-off but likes to follow the details of your work closely. He relies on you to raise blockers and key decisions during recurring meetings, and is available outside meetings for advice and debugging as needed.
Bio
Gabriel works on hands-on projects to build and test prototypes of secure compute infrastructure. He focuses on how to secure the most sensitive AI data centers against the most sophisticated current and future threats. Gabriel has also worked at RAND on hardware-enabled governance mechanisms (HEMs, at the intersection of GPU export control and hardware security) and on technical verification of agreements on the development and use of AI systems. He holds a master's degree in computer science.
