Let's reflect on the latest scientific research, tackle the issues facing
global communities, and use technology to drive meaningful reform.
I study Computer Science, Statistics, and Math at the University of Toronto.
Currently, I am an undergraduate researcher at the Vector Institute,
co-advised by Jimmy Ba and Shane Gu, working
on model-based reinforcement learning and robotics.
I'm also making neural networks bigger and more data-efficient as a
researcher at FOR.ai,
and developing state-of-the-art computer vision algorithms for Level 4
autonomous driving at aUToronto.
Most recently, I integrated cloud technology into remote
build toolchains at Google
as part of the Cloud Build Infrastructure team.
In the past, I helped build the next generation of carpooling platforms at
I seek biological explanations for complex mathematical phenomena and am
fascinated by intelligence.
Current research: deep reinforcement learning, graph neural
networks, transfer learning, robotics.
Previously, my research focused on developing machine learning and
computational biology pipelines
to extract epigenetic insights from high-dimensional data.
In the past, I was a biologist who engaged mostly with topics in
biotechnology and tissue engineering.
From exploring genetic editing and basic protein biology to now
developing more robust deep learning systems and network representations,
much of my research is about inferring the physical world, exploring the
anomalies of dynamical systems, and generating novel insights from data.
A computational genomics project that validated the utility of two novel
next-generation sequencing assays in the epigenetic
exploration of cancer transcriptomes,
using custom bioinformatics pipelines, unsupervised
machine learning, and mathematical analyses.
Machine Intelligence Student
Team research project as part of the 2019 ICLR Reproducibility Challenge.
Our chosen paper investigated a novel improvement to Equilibrium
Propagation and was later accepted.
This is how I've learned. I'm an inquisitive individual who always loves
learning. From data mining to open-source web development, entrepreneurship
to basic science, I am grateful to have
spent my time gaining and practising new skills to make a positive
impact on myself and others.
With recent headlines reporting multiple fatalities from vaping, we sought
a solution to mitigate nicotine addiction in Juul users. We retrofitted
the Juul with a Gaussian Process backend that analyzes usage
patterns and frequency and dynamically adjusts the nicotine output
accordingly. What differentiates our product is the gradual weaning
implementation, where the
percentage decrease is customized from user feedback.
We are currently in conversation with the Centre for Addiction and Mental
Health about putting this device to use.
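Under heavy simplification, the idea could be sketched as a Gaussian Process regression over daily puff counts whose prediction scales the nicotine output. Everything here — the kernel settings, the sample numbers, and the `weaning_rate` parameter — is an illustrative assumption, not the device's actual firmware:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0, variance=1.0):
    """Squared-exponential kernel between 1-D arrays of day indices."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(days, puffs, query_days, noise=1.0):
    """Posterior mean of a GP fit to daily puff counts."""
    K = rbf_kernel(days, days) + noise * np.eye(len(days))
    K_star = rbf_kernel(query_days, days)
    return K_star @ np.linalg.solve(K, puffs)

def nicotine_output(baseline_mg, predicted_puffs, typical_puffs, weaning_rate=0.05):
    """Lower the dose when predicted usage exceeds the user's baseline,
    on top of the user-tuned gradual weaning percentage."""
    overuse = max(predicted_puffs / typical_puffs, 1.0)
    return baseline_mg * (1.0 - weaning_rate) / overuse

# Hypothetical ten days of rising usage; predict day 10 and adjust the dose.
days = np.arange(10, dtype=float)
puffs = np.array([30, 32, 31, 35, 40, 38, 42, 45, 44, 48], dtype=float)
tomorrow = gp_predict(days, puffs, np.array([10.0]))[0]
dose = nicotine_output(5.0, tomorrow, typical_puffs=30.0)
```

Because predicted usage exceeds the baseline, the output dose comes out below the 5.0 mg starting point.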
DOC is a tool that gives doctors a real-time second opinion on the diagnosis
of medical conditions by analyzing the symptoms and
conversations they have with their patients.
Subsequently, DOC streamlines the recommendation of potential ailments
by suggesting further questions to ask in order
to generate an objective, accurate, and holistic diagnosis.
1st Place BCGxGoogle Global Engineering Week Hackathon 2019
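The question-recommendation step could be sketched, under a naive-Bayes assumption, as picking the unasked symptom whose answer is most uncertain under the current posterior. The condition/symptom table and every probability below are entirely hypothetical, not DOC's real knowledge base:

```python
# Hypothetical toy knowledge base: P(symptom | condition).
CONDITIONS = {
    "flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.1},
    "measles": {"fever": 0.8, "cough": 0.5, "rash": 0.9},
    "cold":    {"fever": 0.2, "cough": 0.7, "rash": 0.05},
}
SYMPTOMS = ["fever", "cough", "rash"]

def posterior(observed):
    """Naive-Bayes posterior over conditions given {symptom: bool}."""
    scores = {}
    for cond, probs in CONDITIONS.items():
        p = 1.0
        for sym, present in observed.items():
            p *= probs[sym] if present else (1.0 - probs[sym])
        scores[cond] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def next_question(observed):
    """Suggest the unasked symptom whose predicted answer is most
    uncertain (probability nearest 0.5) under the current posterior."""
    post = posterior(observed)
    unasked = [s for s in SYMPTOMS if s not in observed]
    def uncertainty(sym):
        p = sum(post[c] * CONDITIONS[c][sym] for c in post)
        return abs(p - 0.5)
    return min(unasked, key=uncertainty)

post = posterior({"fever": True})
q = next_question({"fever": True})  # asking about the rash best splits flu vs. measles
```

With only "fever" observed, flu and measles remain close in probability, so the rash question is the most informative one to ask next.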
The SocialBit measures an individual’s social interactions throughout
the day. A glasses-mounted camera detects faces
and cross-references them against a database of the user's friends from
social media platforms. The data is then
visualized in a network plot and chord diagram that showcases the length
and location of one’s interaction, defined as
the points in time in which the friend’s face is within view. The result
is a beautiful and insightful display of our
daily social interactions - at the microscale.
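That definition of an interaction — the points in time in which a friend's face is within view — can be sketched as merging detection timestamps into contiguous intervals. The `max_gap` threshold and the sample data are illustrative assumptions:

```python
def interaction_intervals(timestamps, max_gap=5.0):
    """Merge sorted detection timestamps (seconds) into contiguous
    interactions: a new interval starts once the friend's face has been
    out of view for more than max_gap seconds."""
    if not timestamps:
        return []
    intervals = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > max_gap:
            intervals.append((start, prev))
            start = t
        prev = t
    intervals.append((start, prev))
    return intervals

def total_interaction_time(detections, max_gap=5.0):
    """Sum interval lengths per friend from {name: [timestamps]}."""
    return {
        name: sum(end - start
                  for start, end in interaction_intervals(sorted(ts), max_gap))
        for name, ts in detections.items()
    }

# Alice's face is seen in two bursts (0-2s and 30-33s); Bob's in one (10-12s).
detections = {"alice": [0, 1, 2, 30, 31, 33], "bob": [10, 12]}
totals = total_interaction_time(detections)  # {"alice": 5, "bob": 2}
```

The per-friend totals and interval endpoints are exactly what the network plot and chord diagram would consume.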
Decuen (pronounced 'DQN') is a Deep Q-Learning Python library that
implements generic reinforcement learning algorithms for OpenAI Gym
environments. It currently features DQN and DDQN agents,
as well as an implementation of prioritized experience replay for the
CartPole environment. It also includes TensorBoard integration for live
training and validation visualization.
A long-term project for me to learn, grow, and explore new research in
the field of deep learning.
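As a sketch of the prioritized experience replay idea (the proportional variant from Schaul et al., 2015) — this is not Decuen's actual implementation, and all names and hyperparameters here are illustrative:

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay: transitions are
    sampled with probability proportional to priority**alpha, and priorities
    are refreshed from new TD errors after each learning step."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so they are
        # guaranteed to be sampled at least once.
        max_p = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[: len(self.buffer)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        # Importance-sampling weights correct the non-uniform sampling bias.
        weights = (len(self.buffer) * p[idx]) ** -beta
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps

# Placeholder transitions stand in for (state, action, reward, next_state) tuples.
buf = PrioritizedReplayBuffer(capacity=100)
for i in range(10):
    buf.add(("state", i))
idx, batch, w = buf.sample(4)
buf.update_priorities(idx, np.random.randn(4))
```

A production version would back the priorities with a sum-tree for O(log n) sampling; the linear scan above keeps the sketch short.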