Short Bio

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, all at the University of Oxford.

Tim obtained his Ph.D. in the Machine Reading group at University College London under the supervision of Sebastian Riedel. He received a Google Ph.D. Fellowship in Natural Language Processing in 2017 and a Microsoft Research Ph.D. Scholarship in 2013. In Summer 2015, he worked as a Research Intern at Google DeepMind. In 2012, he obtained his Diploma (equivalent to an M.Sc.) in Computer Science from the Humboldt-Universität zu Berlin. Between 2010 and 2012, he worked as a Student Assistant and, in 2013, as a Research Assistant in the Knowledge Management in Bioinformatics group at Humboldt-Universität zu Berlin.

Tim's research focuses on sample-efficient and interpretable machine learning models that learn from world, domain, and commonsense knowledge in symbolic and textual form. His work is at the intersection of deep learning, reinforcement learning, natural language processing, program synthesis, and formal logic.

News


20/12/2019 Two papers accepted at ICLR 2020 in Addis Ababa, Ethiopia: RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments and RTFM: Generalising to New Environment Dynamics via Reading!
11/11/2019 Two papers accepted at AAAI 2020 in New York, USA: Differentiable Reasoning on Large Knowledge Bases and Natural Language, and Generating Interactive Worlds with Text!
29/08/2019 Lecture on Deep Learning for Natural Language Processing at the RANLP'19 Summer School on Deep Learning in NLP in Varna, Bulgaria.
13/08/2019 Two papers accepted at EMNLP 2019 in Hong Kong, China: Language Models as Knowledge Bases? and Learning to Speak and Act in a Fantasy Text Adventure Game!
21/06/2019 Invited talk on Learning with Explanations at the Institute for Language, Cognition and Computation (ILCC) Seminar at the University of Edinburgh, UK.
19/06/2019 Our paper HUNER: Improving Biomedical NER with Pretraining got accepted in Bioinformatics!
17/06/2019 Inaugural meeting of the UCL Natural Language Processing group!
28/05/2019 Our paper Neural Variational Inference For Estimating Knowledge Graph Embedding Uncertainty got accepted at the 14th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy) at IJCAI 2019 in Macao, China!
15/05/2019 Our paper NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language got accepted at ACL 2019 in Florence, Italy!
12/05/2019 Our paper A Survey of Reinforcement Learning Informed by Natural Language got accepted at IJCAI 2019 in Macao, China!
22/04/2019 Our paper A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs got accepted at ICML 2019 in Long Beach, CA!
20/03/2019 Invited talk on Learning with Explanations at the London Machine Learning Meetup.
07/03/2019 Preprint of our paper Learning to Speak and Act in a Fantasy Text Adventure Game is online!
21/12/2018 Our paper Stable Opponent Shaping in Differentiable Games got accepted at ICLR 2019 in New Orleans, LA!
15/11/2018 Invited talk on Learning with Explanations at the Language Technology Lab (LTT) Seminars at the University of Cambridge, UK.
01/11/2018 Invited speaker at the First Workshop on Fact Extraction and Verification (FEVER) at EMNLP 2018 in Brussels, Belgium.
05/09/2018 Our paper e-SNLI: Natural Language Inference with Natural Language Explanations got accepted at NeurIPS 2018 in Montreal, Canada!
20/08/2018 I am now a Research Scientist at Facebook AI Research London.
13/08/2018 I am now a Lecturer in the Department of Computer Science at University College London.

News Archive

Selected Publications

End-to-end Differentiable Proving

NIPS 2017

Neural networks for end-to-end differentiable proving that learn vector representations of symbols and induce first-order logic rules.

NIPS oral presentation (1.2% acceptance rate).

Reasoning about Entailment with Neural Attention

ICLR 2016

Deep recurrent neural networks with a neural attention mechanism for natural language inference.

Programming with a Differentiable Forth Interpreter

ICML 2017

An end-to-end differentiable interpreter to train neural networks from program input-output data.

TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning

ICLR 2018

Combining model-free and model-based reinforcement learning.

Adversarial Sets for Regularising Neural Link Predictors

UAI 2017

An adversarial model for regularizing neural networks by logical rules.

Injecting Logical Background Knowledge into Vector Representations

NAACL 2015

Differentiable logical rules for regularizing neural networks to incorporate background knowledge.


tim [dot] rocktaeschel [at] gmail [dot] com

Robert Hooke Building, Parks Road, Oxford OX1 3PR, United Kingdom