Short Bio

I am a postdoctoral researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, all at the University of Oxford. I obtained my Ph.D. in the Machine Reading group at University College London under the supervision of Sebastian Riedel. During my Ph.D., I received a Google Ph.D. Fellowship in Natural Language Processing and a Microsoft Research Ph.D. Scholarship.

My research focuses on machine learning models that acquire reusable abstractions and generalize from few training examples by incorporating various forms of prior knowledge. My work lies at the intersection of deep learning, reinforcement learning, program induction, logic, and natural language processing.

I was fortunate to work as a Research Intern at Google DeepMind in Summer 2015 under the supervision of Edward Grefenstette. In 2012, I received my Diploma (equivalent to an M.Sc.) in Computer Science from Humboldt-Universität zu Berlin. Between 2010 and 2012, I worked as a student assistant, and in 2013 as a research assistant, in the Knowledge Management in Bioinformatics group of Ulf Leser.

I am a co-organizer of the Workshop on Neural Abstract Machines & Program Induction (NAMPI; 2nd edition at ICML 2018, 1st at NIPS 2016), the Workshop on Automated Knowledge Base Construction (AKBC; 6th at NIPS 2017, 5th at NAACL 2016), and the 7th International Workshop on Statistical Relational AI (StarAI) at UAI 2017, as well as a scientific advisor for the London deep learning startup Bloomsbury AI.

Upcoming...

14/07/2018 I am co-organizing the 2nd Workshop on Neural Abstract Machines & Program Induction (NAMPI) at ICML 2018 in Stockholm, Sweden. Consider submitting a paper!

News

11/05/2018 Our paper on DiCE: The Infinitely Differentiable Monte Carlo Estimator got accepted at ICML 2018 in Stockholm, Sweden!
30/04/2018 I published a blog post on Einstein Summation (einsum) in Deep Learning; a short einsum sketch follows this news list.
27/03/2018 Invited talk on Deep Learning with Explanations at the Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) research training group at TU Darmstadt, Germany.
02/03/2018 Invited talk on Deep Learning with Explanations at the Structured and Probabilistic Intelligent Knowledge Engineering (SPIKE) group at Imperial College London, UK.
15/02/2018 Preprint of our paper DiCE: The Infinitely Differentiable Monte-Carlo Estimator is online!
29/01/2018 Our paper on TreeQN and ATreeC: Differentiable Tree Planning for Deep Reinforcement Learning got accepted at ICLR 2018 in Vancouver, Canada!
12/01/2018 Invited speaker at the Alan Turing Institute workshop on Logic and Learning in London, UK.
08/12/2017 I co-organized the 6th Workshop on Automated Knowledge Base Construction (AKBC) at NIPS 2017.
01/11/2017 Preprint of our paper TreeQN and ATreeC: Differentiable Tree Planning for Deep Reinforcement Learning is online!
16/10/2017 I am now a Junior Research Fellow in Computer Science at Jesus College, University of Oxford!
12/10/2017 Invited speaker at the GPU Technology Conference (GTC Europe) in Munich, Germany.
01/10/2017 I am now a Stipendiary Lecturer in Computer Science at Hertford College, University of Oxford!
27/09/2017 I gave a lecture on Deep Learning for Natural Language Processing at the 2nd International Summer School on Data Science (SSDS). Slides available here.
04/09/2017 Our paper on End-to-end Differentiable Proving got accepted for oral presentation (1.2% acceptance rate) at NIPS 2017 in Long Beach, CA!
29/08/2017 Invited talk on End-to-end Differentiable Proving at Google Research in Mountain View, CA.
15/08/2017 I co-organized the 7th Workshop on Statistical Relational AI (StarAI) at UAI 2017 in Sydney, Australia.
26/07/2017 Invited talk on End-to-end Differentiable Proving at DeepMind.
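
As a small taste of the einsum blog post mentioned above (30/04/2018), here is a minimal sketch of Einstein summation in NumPy. The arrays and operations are illustrative examples of my own choosing, not code from the post:

import numpy as np

# Matrix multiplication: C[i,k] = sum_j A[i,j] * B[j,k]
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.einsum('ij,jk->ik', A, B)

# Batched outer product: out[b,i,j] = x[b,i] * y[b,j]
x = np.random.rand(2, 3)
y = np.random.rand(2, 5)
outer = np.einsum('bi,bj->bij', x, y)

# Trace: sum_i M[i,i]
M = np.random.rand(4, 4)
trace = np.einsum('ii->', M)

The same index-string notation carries over to deep learning frameworks such as TensorFlow (tf.einsum) and PyTorch (torch.einsum).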

News Archive

Selected Publications

End-to-end Differentiable Proving

NIPS 2017

Neural networks for end-to-end differentiable proving that learn vector representations of symbols and induce first-order logic rules. Oral presentation (1.2% acceptance rate).

Reasoning about Entailment with Neural Attention

ICLR 2016

Deep recurrent neural networks with a neural attention mechanism for natural language inference.

Programming with a Differentiable Forth Interpreter

ICML 2017

An end-to-end differentiable Forth interpreter for training neural networks from program input-output data.

TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning

ICLR 2018

Combining model-free and model-based reinforcement learning.

Adversarial Sets for Regularising Neural Link Predictors

UAI 2017

An adversarial training approach for regularizing neural link predictors with first-order logic rules.

Injecting Logical Background Knowledge into Vector Representations

NAACL 2015

Differentiable first-order logic rules for regularizing vector representations with background knowledge.

Contact

tim [dot] rocktaeschel [at] gmail [dot] com

Robert Hooke Building, Parks Road, Oxford OX1 3PR, United Kingdom