I’m Dylan Bourgeois. I build robots and their brains.

We live in a world full of promises around AI and robotics. Each advancement brings us closer to depending on autonomous agents. To become true companions, though, robots will need to understand and interact with the world the way we do, and they will need to be predictable, reliable, and safe while doing so. Whether we achieve this in the near term will determine whether robots stay on the sidelines or truly change our lives for the better.

We need robots to understand and interact with the world the way we do, and I have made it my career to study and build the brains that let them.

In 2016/17, I pioneered unsupervised methods to measure bias in news coverage, mapping the media landscape and quantifying the impact of acquisitions on news narratives. Then, at CERN, I leveraged early generative AI techniques, including attention mechanisms, to efficiently filter vast amounts of collision data in search of new physics.

In 2018/19, I focused on developing novel methods for source code understanding at Stanford, work that was foundational for the latest advances in code generation with LLMs. Model interpretability and explainability were at the core of this work, both from a technical perspective (proposing novel explanation methods for graph neural networks at NeurIPS 2019) and from a societal perspective (working with the legal community to draft standards and requirements, resulting in published work at the Privacy Law Scholars Conference in 2022; I currently serve in the pool of experts for the European Data Protection Board).

Robotics is the natural extension of these abstractions, demanding deeply vertical, system-level thinking. I was employee #3 at Robust.AI, where I architected several critical systems, from robot execution models to knowledge frameworks for common-sense reasoning. There, I realized that robotics could not yet scale to its full potential. This led me to co-found Claryo, to push novel, intelligent representations of the world.