Luke Taylor

Just another AI guy

Oxford, United Kingdom

About

I’ve been fascinated by AI since a young age. I spent most of my teen years building robots, then studied a bunch of math at university before doing a PhD in neuroscience and AI. One thing has become clear to me: superintelligence is on the way. AI is being developed at an accelerating rate, and I bet it will keep accelerating the more we scale the systems. However, unlike biological systems, AI hasn't been shaped by evolution's energy constraints, and I think energy is (or soon will be) a bottleneck to progress. If addressed, I think we will start to see some really funky things - this is what I'm currently pondering.
For fun and to calm the mind I like to {🏃 🏋️ 🏊 🏄 ⛵ , 🎮 , 🎸 - used to drop a ♫ [listen]}
My open source projects: 🔥 DevTorch 🎥 CreateGif 🧠 BrainBox
My blog: ✍️ https://webstorms.github.io

Work Experience

University of Oxford

Researcher

Feb 2024 - Apr 2024
Oxford, UK
Under a casual work contract to finalize and publish the remaining manuscripts from my PhD 📝.

Citadel Securities

Quantitative Researcher Intern

May 2019 - Aug 2019
London, UK
In this 4-month internship at citsec I worked on the low-latency team, on a top-secret intern project to investigate how to make more 💸 with the 💸-machine.

Amazon Web Services

Software Engineer Intern

Mar 2019 - Apr 2019
Cape Town, SA
In this short 2-month internship I automated certain internals at AWS relating to the EC2 APIs.

Education

University of Oxford

🧠 PhD Neuro ∩ AI

Oct 2020 - present
Oxford, UK
The focus of my PhD was on spiking neural networks. I developed a new accelerated technique for simulating/training these networks, and used them to build more realistic models of the visual system (a toy sketch of the kind of neuron model involved sits at the end of this entry). Resulting first-author publications:
  • Hierarchical temporal prediction captures motion processing along the visual pathway (co-first-author publication at eLife [view]).
  • Addressing the speed-accuracy simulation trade-off for adaptive spiking neurons (published at NeurIPS 2023 [view]).
First-author manuscripts under consideration/preparation:
  • Temporal prediction captures retinal spiking responses across animal species (under review at Nature Communications [view]). This work was presented at the neuroAI workshop at MILA in Montreal.
  • Temporal prediction captures key differences between spiking excitatory and inhibitory V1 neurons (under review at PNAS [view]).
I've also been working on a yet-to-be-open-sourced interpretability Python library for doing neuroscience on machine learning models (coming soon). This PhD was generously funded by the Clarendon Fund and Hertford College.
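For the uninitiated, here is a toy sketch of the kind of neuron model this work revolves around: a plain leaky integrate-and-fire (LIF) population stepped sequentially through time. To be clear, this is not the accelerated technique from the NeurIPS paper - just the vanilla loop that such methods aim to speed up, with made-up parameter values.

```python
import math
import torch

def simulate_lif(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """inputs: (timesteps, n_neurons) input currents -> (timesteps, n_neurons) binary spikes."""
    decay = math.exp(-dt / tau)            # membrane leak per timestep (illustrative value)
    v = torch.zeros(inputs.shape[1])       # membrane potentials
    spikes = []
    for i_t in inputs:                     # sequential in time: this loop is the bottleneck
        v = decay * v + i_t                # leaky integration of the input current
        s = (v >= v_th).float()            # spike wherever the threshold is crossed
        v = v * (1.0 - s)                  # reset the neurons that spiked
        spikes.append(s)
    return torch.stack(spikes)

spikes = simulate_lif(torch.rand(100, 8))  # 100 timesteps, 8 neurons
print(int(spikes.sum()), "spikes")
```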

University of Oxford

🧠 MSc Neuroscience

Oct 2019 - Jul 2020
Oxford, UK
This master's took me on a deep dive into neuroscience, covering topics from the cellular to the systems level - my favourite course was neurodevelopment. Wet lab work wasn't for me, so I ended up working on two computational projects:
  • Project 1: Modelling V1 responses to natural images using unsupervised predictive normative models. Here I benchmarked different models of V1 for predicting V1 responses to natural images [view].
  • Project 2: Training multi-layered spiking neural networks with spike timing using function-approximated LIF neurons. Here I observed that computing neural dynamics resembles a sequence translation task, and developed a new model using multi-head attention from Transformers to compute the dynamics [view] (see the sketch after this entry).
This master's was generously funded by the Clarendon Fund and Keble College.
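As promised, a toy sketch of the Project 2 idea: treat neural-dynamics prediction as a sequence-to-sequence problem and let multi-head attention map input spike trains to responses. This is a simplified stand-in rather than the actual thesis code - the layer sizes, names and the missing causal mask are all placeholders.

```python
import torch
import torch.nn as nn

class AttentionDynamics(nn.Module):
    """Toy attention-based model mapping spike trains to predicted neural responses."""
    def __init__(self, n_neurons=8, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)            # embed the spikes at each timestep
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, n_neurons)          # predicted response per neuron

    def forward(self, spikes):                                # spikes: (batch, time, n_neurons)
        x = self.embed(spikes)
        # Each timestep attends over the whole sequence (a real dynamics model
        # would likely need a causal mask so it only sees the past).
        x, _ = self.attn(x, x, x)
        return self.readout(x)

model = AttentionDynamics()
out = model(torch.rand(2, 100, 8))                            # 2 sequences, 100 timesteps, 8 neurons
print(out.shape)                                              # torch.Size([2, 100, 8])
```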

University of Cape Town

𝞹 BSc (Hons) Applied mathematics

Feb 2018 - Nov 2018
Cape Town, SA
I took courses in numerical spectral methods, theory of statistics, reinforcement learning, cryptography, artificial intelligence and differential geometry. For my final-year project I worked on machine learning:
  • Project: Deep Reinforcement Learning in Physical Environments containing Continuous Action Spaces using a Prior Model with Applications to Robotic Control. Here I trained a robot arm in simulation using RL and tested whether it could grab a can of beer [view]. This work was presented at the Deep Learning Indaba in Stellenbosch, South Africa.

University of Cape Town

𝞹 BSc Applied mathematics & computer science

Feb 2015 - Nov 2017
Cape Town, SA
Apart from taking various courses in maths and computer science, I also did some competitive programming (placing 5th in the ACM ICPC Regionals in 2015 and 2016) and enrolled in an extra-credit research course, where I got my first publication:
  • Improving deep learning with generic data augmentation (published at IEEE [view]).
For my final-year project I worked on machine learning:
  • Project: Optimal Control in Non-Stationary Stochastic Heterogeneous Multi-Agent Systems. Here I came up with a solution for deploying RL in non-stationary environments - all written in some good ol' Java [view]! This work was presented at the WEP conference at KAUST in Saudi Arabia.

Passion projects

Besides AI, I’ve dabbled in some other passion projects worthy of a mention:
  • 2023-2024: I’ve been doing some web development building https://scholarcrafter.com - a website to build your academic page (the one you are currently reading).
  • 2021-2022: I like games, and the most lucrative game I’ve played is arbing cryptos. I developed an automated arb system with my brother, trading inefficiencies between the spot and perpetual futures of cryptos during the 2021 crypto hype and totalling 30+ million dollars in monthly volume (not a lot compared to MMs, but very lucrative for two brothers making a buck). Unfortunately FTX went bust (🥲). A toy sketch of the core signal sits at the end of this list.
  • 2016-2017: I built a customizable IoT plug-and-play device to control home appliances from your phone. I teamed up with an ex-Intel engineer to prototype some PCBs. Ultimately this project fizzled out due to time commitments.
  • 2012-2014: I did a lot of game dev for Android, building a 2D game engine from scratch [code] and using it to implement a space-invader-like game [code]. I’d routinely answer (and ask) questions on Stack Overflow during this time, as evidenced by my top-4% reputation score [profile] (I used to be a very nerdy teen. I'm still very grateful to SO for sending me a thank-you T-shirt ❤️).
  • 2012: I built an English-to-C parser to program these so-called Mindstorms robots [pitch, demo]. I managed to score an invite from Google as a finalist at their science fair and an invite from TED to give a talk [watch] - although I still get nightmares of Peter Norvig (then Director of Research at Google) asking me about context-free grammars and me having no idea what those were.
    Teen-me also got an award from Papa-Internet Vint Cerf.
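As mentioned above, a toy sketch of the core signal behind the arb: flag when the perpetual future trades far enough away from spot to cover round-trip fees. The prices, symbols and flat fee model are made up, and the real work (execution, funding payments, inventory management) isn't shown.

```python
def basis_bps(spot_price: float, perp_price: float) -> float:
    """Perp premium over spot, in basis points."""
    return (perp_price - spot_price) / spot_price * 1e4

def arb_signal(spot_price: float, perp_price: float, fee_bps: float = 10.0):
    """Return which leg to buy/sell if the basis exceeds the (assumed) round-trip fees."""
    basis = basis_bps(spot_price, perp_price)
    if basis > fee_bps:
        return "sell perp / buy spot", basis
    if basis < -fee_bps:
        return "buy perp / sell spot", basis
    return "no trade", basis

print(arb_signal(spot_price=30_000.0, perp_price=30_045.0))  # ('sell perp / buy spot', 15.0)
```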