Short Bio

I am a final-year PhD candidate in Sebastian Riedel's Machine Reading group at University College London. My studies are generously supported by a Microsoft Research PhD Scholarship.

I am interested in representation learning for natural language processing, as well as in automated knowledge base construction and inference. My research sits at the intersection of deep learning and first-order logic reasoning, and also touches on natural language inference. I have worked on regularizing representations with first-order logic rules and have recently become interested in neural-symbolic approaches to theorem proving and neural program induction.

I was fortunate to work as a Research Intern at Google DeepMind in summer 2015 under the supervision of Edward Grefenstette. In the past, I contributed to Wolfe, a declarative, functional machine learning language and library.

In 2012 I received my Diploma (equivalent to an M.Sc.) in Computer Science from the Humboldt-Universität zu Berlin. Between 2010 and 2012 I worked as a student assistant, and in 2013 as a research assistant, in Ulf Leser's Knowledge Management in Bioinformatics group, where I developed models for named entity recognition of chemicals, mutations, proteins and diseases in biomedical publications.

I am a co-organizer of the 1st Workshop on Neural Abstract Machines & Program Induction (NAMPI) at NIPS 2016 and of the 5th Workshop on Automated Knowledge Base Construction (AKBC) at NAACL 2016, as well as a scientific advisor for the London deep learning startup Bloomsbury AI.

Upcoming...

26/03/2017 I have been invited to speak about deep learning and automated theorem proving at the 2nd Conference on Artificial Intelligence and Theorem Proving in Obergurgl, Austria.
20/03/2017 I have been invited to give a talk at the London Machine Learning Meetup.
10/12/2016 I am co-organizing the 1st Workshop on Neural Abstract Machines & Program Induction (NAMPI) at NIPS 2016 in Barcelona, Spain.

News

16/11/2016 Talk on "What Can Deep Learning Learn from Symbolic Inference?" at Imperial College London.
01/11/2016 Paper on emoji2vec: Learning Emoji Representations from their Description wins Best Paper Award at the EMNLP 2016 Workshop on Natural Language Processing for Social Media (SocialNLP)!
30/07/2016 Papers on Lifted Rule Injection for Relation Embeddings and Stance Detection with Bidirectional Conditional Encoding were accepted at EMNLP 2016!
17/06/2016 I co-organized the 5th Workshop on Automated Knowledge Base Construction (AKBC) at NAACL 2016 in San Diego, California.
10/06/2016 Talk at the University of Cambridge Natural Language and Information Processing Seminar Series.
18/04/2016 Application note on SETH was accepted at Bioinformatics.
13/04/2016 Guest lecture on deep learning for natural language processing at General Assembly's data science course in London.
04/04/2016 Papers on Learning Knowledge Base Inference with Neural Theorem Provers and Regularizing Relation Representations by First-order Implications were accepted at AKBC 2016!

...

Projects

Neural Theorem Provers

AKBC 2016

End-to-end differentiable counterparts of discrete theorem provers that learn representations of symbols and rules.
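
For illustration, here is a minimal numpy sketch of the core operation that makes such a prover differentiable: soft unification between symbol embeddings. The symbol names, dimensionality and the RBF-style kernel are illustrative choices, not necessarily the exact formulation from the paper.

```python
# Minimal sketch of soft unification between symbol embeddings, the operation
# that lets a Prolog-style prover become end-to-end differentiable.
# All embeddings and the kernel below are illustrative, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Learned embeddings for predicate/constant symbols (hypothetical values).
symbols = {s: rng.normal(size=dim) for s in ["grandpaOf", "fatherOf", "parentOf"]}

def soft_unify(a, b, prev_success=1.0):
    """Similarity in [0, 1] between two symbols, combined with the proof
    success so far by taking the minimum."""
    sim = np.exp(-np.linalg.norm(symbols[a] - symbols[b]))
    return min(prev_success, sim)

# Instead of failing when symbols differ, the prover continues with a soft
# success score that gradients can flow through.
print(soft_unify("grandpaOf", "grandpaOf"))  # ~1.0
print(soft_unify("grandpaOf", "parentOf"))   # in (0, 1)
```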

Reasoning about Entailment with Neural Attention

ICLR 2016

Long short-term memory recurrent neural networks with word-by-word attention for recognizing textual entailment.
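
Below is a minimal numpy sketch of the word-by-word attention step: at each hypothesis position, the model attends over the premise's LSTM output vectors and forms a weighted summary. Shapes and weight matrices are illustrative, and the full model conditions on additional terms (e.g. the previous attention read).

```python
# Minimal sketch of word-by-word attention over premise LSTM outputs.
# Weights and dimensions are illustrative, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)
d, Lp = 4, 6                      # hidden size, premise length
Y = rng.normal(size=(Lp, d))      # premise LSTM outputs (one row per word)
h_t = rng.normal(size=d)          # hypothesis LSTM state at step t
W_y = rng.normal(size=(d, d))
W_h = rng.normal(size=(d, d))
w = rng.normal(size=d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention scores over premise words, then a weighted read vector r_t.
M = np.tanh(Y @ W_y + h_t @ W_h)   # (Lp, d)
alpha = softmax(M @ w)             # (Lp,) attention weights over premise words
r_t = alpha @ Y                    # (d,) attended premise summary
print(alpha.round(3), r_t.round(3))
```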

Injecting Logical Background Knowledge into Embeddings

EMNLP 2016, AKBC 2016, NAACL 2015, SP14, StarAI 2014

Differentiable logical formulae for regularizing vector representations of relations and entity-pairs.
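
As an illustration, the sketch below turns an implication such as professorAt(X, Y) ⇒ employeeAt(X, Y) into a differentiable penalty on matrix-factorization-style scores. The hinge loss and all names are illustrative simplifications, not the exact soft-logic or lifted formulations used in the papers.

```python
# Minimal sketch of regularizing relation/entity-pair embeddings with a
# first-order implication. Embeddings and the loss form are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
rel = {"professorAt": rng.normal(size=dim), "employeeAt": rng.normal(size=dim)}
pair = {("smith", "ucl"): rng.normal(size=dim)}

def prob(relation, entity_pair):
    """Matrix-factorization-style score squashed to a probability."""
    return 1.0 / (1.0 + np.exp(-rel[relation] @ pair[entity_pair]))

def implication_loss(body, head, entity_pair):
    """Penalize whenever the body is believed more strongly than the head,
    i.e. push p(head) >= p(body) for the given entity pair."""
    return max(0.0, prob(body, entity_pair) - prob(head, entity_pair))

print(implication_loss("professorAt", "employeeAt", ("smith", "ucl")))
```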

Wolfe.ml: Declarative, Functional Machine Learning

StarAI 2014

Scala domain-specific language for probabilistic and differentiable programming in a declarative and functional way.

BioNLP

SemEval 2013, BioCreative 2013, Bioinformatics 2012

Probabilistic graphical models for biomolecular event extraction and named entity recognition of chemicals, mutations, proteins and diseases.
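
As an illustration of the sequence-labelling side, here is a minimal numpy sketch of Viterbi decoding for a linear-chain tagger over BIO labels; the emission and transition scores are toy values, not trained model weights.

```python
# Minimal sketch of Viterbi decoding for a linear-chain sequence labeller
# (e.g. BIO tagging of chemical mentions). Scores are toy values.
import numpy as np

tags = ["O", "B-CHEM", "I-CHEM"]
T = len(tags)
emissions = np.log(np.array([      # per-token tag scores for a 3-token sentence
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.2, 0.1, 0.7],
]))
transitions = np.log(np.array([    # transitions[i, j] = score of tag i -> tag j
    [0.6, 0.3, 0.1],
    [0.3, 0.2, 0.5],
    [0.4, 0.2, 0.4],
]))

def viterbi(emissions, transitions):
    n = emissions.shape[0]
    score = emissions[0].copy()
    back = np.zeros((n, T), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + transitions + emissions[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(emissions, transitions))  # e.g. ['O', 'B-CHEM', 'I-CHEM']
```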

Contact

tim [dot] rocktaeschel [at] gmail [dot] com

G.6 5th Floor, One Euston Square, 40 Melton Street, London NW1 2FD, United Kingdom