AIPHES Scientific Workshop 2017


(new!) Ask our panel of invited speakers your questions!



Thursday, 11.05.2017.

9.30 - 10.00


coffee / pretzels / fruits

10.00 - 10.30


   introduction to the program

   poster teasers

10.30 - 11.30

INVITED TALK: Simone Teufel

Proposition-based summarisation of general-knowledge texts

11.30 - 12.30

INVITED TALK: Annie Louis

Bayesian methods for NLP applications

12.30 - 13.30



13.30 - 15.00

poster session (coffee served at 14.30)

15.00 - 15.15


15.15 - 17.30

research roundtable

closing of the first day



Rouge Restaurant (Altstadt)

Friday, 12.05.2017.

9.30 - 10.00


coffee / pretzels / fruits

10.00 - 11.00

INVITED TALK: Bonnie Webber

Explicit adverbials and implicit coherence in discourse

11.00 - 12.00

INVITED TALK: Sebastian Riedel

Reading and reasoning with vector representations

12.00 - 13.00



13.00 - 14.30

poster session
14.30 - 14.45


from 14.45


Session chair: Michael Strube

Closing remarks


Invited talks

Simone Teufel

Proposition-based summarisation of general-knowledge texts

I will present recent work in Cambridge on a mainly symbolic summariser based on Kintsch and van Dijk's text understanding model from 1978. The summariser segments text into proposition-sized information units and manipulates these in a discourse tree. The processing model is based on a theory of memorising, forgetting and relating information units. In particular, I will explain how argument overlap, the main driving force behind the model, is calculated. We have also recently added a generation component to the model, and I will report on those experiments. Results are reported in terms of ROUGE scores.


Annie Louis

Bayesian methods for NLP applications

When people look for information, they seek relevant content. But they also like to find information which is of high quality, is suitably organized, and is summarized into an easy-to-digest format. In this talk, I will demonstrate some robust natural language processing models which tackle such information access problems with new insights and approaches.

In particular, I will focus on models which are sensitive to the "context" in which a document or conversation should be interpreted. For example, a user looking to troubleshoot his computer display has different needs from a user wanting to learn about different graphics cards. I will talk about how we have used Bayesian techniques for learning such models from small amounts of data with little dependence on costly annotations. I will describe two models: one for adding structure to online forum conversations, and the other for creating summaries of evolving topics. Current information access technology can be hugely impacted by such language processing models that 1) are able to find user intentions and perceptions expressed through language, and 2) can tailor models to tasks and domains without costly annotations.


Bonnie Webber

Explicit Connectives and Implicit Coherence in Discourse

Several years ago, in annotating the Penn Discourse TreeBank (PDTB-2) for discourse connectives and their arguments, I noticed that many discourse adverbials could appear either alone or with another discourse connective -- e.g. "so instead", "but instead", "because otherwise", "and otherwise", "and then", "but then", etc.

I focussed on "instead", asking (1) what licensed its use and (2) whether one could predict, for a given token of "instead" on its own, what other discourse connective (and hence, what other discourse relation) a reader would infer. The answers could help discourse parsing in two ways -- first, to identify the first argument of "instead" (taking its second argument to be the matrix clause in which it was embedded), and second, to identify what other discourse relation might also hold when it wasn't explicitly signalled by another connective.

The first question I addressed in [Webber, 2013], which I will briefly review. The second question I couldn't answer. But multi-judgment experiments that we have recently carried out on discourse adverbials, to determine which connectives (if any) they pair with, suggest what additional discourse relations can be concurrently conveyed, both in general and in a given context [Rohde et al, 2015; 2016; submitted]. They show that the answer to the second question may be more interesting than I could have thought.


Sebastian Riedel

Reading and reasoning with vector representations

In recent years, vector representations of knowledge have become popular in NLP and beyond. They have at least two core benefits: reasoning with (low-dimensional) vectors tends to lead to better generalisation, and usually scales very well. But they raise their own set of questions: What type of inferences do they support? How can they capture asymmetry? How can explicit background knowledge be injected into vector-based architectures? How can we provide "proofs" that justify predictions? In this talk, I sketch some initial answers to these questions based on work we have developed recently. In particular, I will illustrate how a vector space can simulate the workings of logic.


Poster sessions


Thursday, 13.30 - 15.00

  1. Andreas Hanselowski. Automated Claim Validation based on Evidence Aggregated in Large Collections of Text

  2. Avinesh PVS. Interactive Personalized Multi-Document Summarization

  3. Benjamin Heinzerling. Entity Linking via Multiply-Coherent Entity Representations

  4. Markus Zopf. Learning Information Importance for Text Summarization

  5. Maxime Peyrard. Evaluating and Learning Summarization Models


Friday, 13.00 - 14.30

  1. Teresa Botschen. Frame-to-Frame Relations in FrameNet

  2. Thomas Arnold. Ego Network Motifs of Semantic Frames

  3. Tobias Falke. Summarizing Document Collections as Concept Maps

  4. Todor Mihaylov. Enhancing Reading Comprehension With Common Knowledge

  5. Ana Marasović. SRL4ORL: Semantic Role Labelling for Opinion Role Labelling


Research roundtable

The roundtable session will be organised around 4 tables, one per invited speaker. Each table and invited speaker is assigned one central topic for discussion, chosen according to the interests of the AIPHES PhD students and the expertise of the invited speakers. The assigned topics are:


T1: Annie Louis - incorporating background knowledge in representation learning


T2: Sebastian Riedel - relation extraction


T3: Simone Teufel - summarization


T4: Bonnie Webber - text quality


Invited speakers will stay at their table during the session, while participants will rotate from table to table every half hour.



Panel discussion

We invite you to comment, discuss and ask questions during the workshop using Slido; registration is not necessary. Collected comments will be discussed with the invited speakers in the panel discussion.


The workshop will take place in Studio Villa Bosch, Schloß-Wolfsbrunnenweg 33, 69118 Heidelberg. For directions, use the Deutsche Bahn website (DB Fahrplanauskunft): Darmstadt Hbf -> Schlierbach Villa Bosch, Heidelberg.


See also directions to the Heidelberg Institute for Theoretical Studies (HITS), Schloß-Wolfsbrunnenweg 35, 69118 Heidelberg.
