Press "Enter" to skip to content

ACL 2013, Day 2

Today was the official conference opening. This year's ACL is clearly one of the biggest yet: almost 1000 papers were submitted, with an acceptance rate of 26%. During the conference there will also be presentations of journal papers from the new Transactions of the ACL.


The keynote was given by Prof. Dr. Rolf Harald Baayen, a pioneer in empirical linguistic research. He talked about studying language understanding by tracking readers' eye movements over English compounds – for example, what "handbag" or "worker" mean – from the perspective of teaching a computer program to understand such notions. The problem is that these words mostly do not have a direct meaning in the text.

During the first session I attended the talk Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms, given by Elijah Mayfield, David Adamson and Carolyn Penstein Rosé.

They talked about processing chats. Interestingly, they found that the best way to get at the important meaning, or the best extractions, is to remove everything before a sentence that ends with an exclamation mark. They also mentioned a general IE tool named LightSide.
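
A minimal sketch of that preprocessing heuristic, as I understood it (my own Python illustration, not the authors' code):

```python
import re

def keep_from_exclamation(chat_lines):
    """Drop everything before the first sentence that ends with '!'.

    Illustrative reconstruction of the heuristic mentioned in the talk;
    the authors' actual preprocessing may differ.
    """
    # Naive sentence split on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", " ".join(chat_lines))
    for i, sent in enumerate(sentences):
        if sent.endswith("!"):
            return " ".join(sentences[i:])
    return " ".join(sentences)  # no exclamation mark: keep everything

print(keep_from_exclamation([
    "hi everyone.", "thanks so much!", "that really helped me today."
]))
# -> "thanks so much! that really helped me today."
```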

The next lecture was Decentralized Entity-Level Modeling for Coreference Resolution by Greg Durrett, David Hall and Dan Klein.

They proposed a new architecture with classic entity-level features. Their approach is decentralized: each mention carries a cloud of semantic properties, which makes it possible to maintain the tractability of a pairwise system. Furthermore, they separate properties and mentions into two models and connect them via factors. The resulting model is non-convex, but they could still perform standard training and inference using belief propagation. They tested their system on the CoNLL 2011 shared task dataset in three different settings: the first used baseline features, the second standard entity features (gender, animacy, NE tags), and the third was enriched with semantic features. Their system gained about 1% accuracy over the baseline system in the first setting, but was worse than or equal to it in the other two.
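
As a refresher on the inference machinery, here is a minimal sum-product belief propagation example on a tiny tree-structured factor graph (my own toy sketch with made-up potentials; the paper's factor graph over mentions and properties is of course much richer):

```python
import numpy as np

# Toy factor graph: two binary variables x1, x2 with unary potentials
# and one pairwise factor. On a tree, sum-product BP is exact.
phi1 = np.array([0.7, 0.3])          # unary potential for x1
phi2 = np.array([0.4, 0.6])          # unary potential for x2
psi = np.array([[0.9, 0.1],          # pairwise potential psi(x1, x2)
                [0.2, 0.8]])

# Message from x1 through the pairwise factor to x2:
#   m(x2) = sum_{x1} phi1(x1) * psi(x1, x2)
m_1_to_2 = phi1 @ psi

# Belief at x2 = its unary potential times the incoming message.
belief2 = phi2 * m_1_to_2
belief2 /= belief2.sum()

# Brute-force check against the exact marginal.
joint = phi1[:, None] * psi * phi2[None, :]
exact = joint.sum(axis=0) / joint.sum()
print(belief2, exact)  # identical on this tree-structured graph
```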

During the “Student Lunch” I heard an interesting remark, originally made by an important person whom I would rather not name, relayed by a person I would also rather not name: “IR is grep” 🙂 The IR people were obviously insulted, but on a very basic level it is true :):):)

In the second session I attended A Computational Approach to Politeness with Application to Social Factors by Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec and Christopher Potts.

The first slide started with a picture of two dogs, one of them saying: “I only sniffed his ass to be polite”. They focused on detecting and measuring politeness in requests. They used data from Wikipedia (35k requests, 4.5k annotated) and StackExchange (373k requests, 6.5k annotated), had 5 annotators label the dataset, and opened it to the public. They also showed some interesting observations on how a sentence should be phrased to sound polite. The most interesting result they presented was how politeness changes for political candidates: before an election, the eventual winners are mostly more polite than the others; after the election, the winners' politeness drops and the “losers” become more polite.
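
For flavor, a rough sketch of how a simple classifier over their released annotations might look (bag-of-words only, with made-up example requests; the paper's features are linguistically far richer):

```python
# Toy bag-of-words politeness classifier; illustrative only, with
# invented requests standing in for the released annotated data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "Could you please take a look when you have a moment?",
    "Would you mind checking this edit? Thanks!",
    "Fix this now.",
    "Why did you revert my change? Undo it.",
]
labels = [1, 1, 0, 0]  # 1 = polite, 0 = impolite

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(requests, labels)

print(model.predict(["Please could you review my patch?"]))  # likely [1]
```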

The second talk I attended was Modeling Thesis Clarity in Student Essays by Isaac Persing and Vincent Ng.

After the coffee break I listened to the following talks:

Exploiting Topic-based Twitter Sentiment for Stock Prediction by Jianfeng Si, Arjun Mukherjee, Bing Liu, Qing Li, Huayi Li and Xiaotie Deng

They crawled Twitter for company hashtags and used the tweets' sentiment to predict whether a specific stock will rise or fall.
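
In spirit, the prediction step could look like this toy sketch (fabricated numbers and a plain logistic regression; their actual model is topic-based):

```python
# Toy illustration: predict next-day stock direction from daily
# aggregated tweet sentiment. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

daily_sentiment = np.array([0.3, -0.1, 0.5, -0.4, 0.2, 0.6, -0.3, 0.1])
next_day_up = np.array([1, 0, 1, 0, 1, 1, 0, 1])  # 1 = rose, 0 = fell

X = daily_sentiment.reshape(-1, 1)
clf = LogisticRegression().fit(X, next_day_up)
print(clf.predict([[0.4]]))  # today's sentiment -> predicted direction
```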

Learning Entity Representation for Entity Disambiguation by Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Houfeng Wang and Longkai Zhang

They link entities to an ontology by directly optimizing similarity, using a two-stage approach.
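
The disambiguation step boils down to picking the candidate entity most similar to the mention context; a toy sketch with made-up vectors (the paper's contribution is learning these representations, which is the hard part):

```python
# Toy entity disambiguation by embedding similarity: pick the ontology
# entry whose vector is closest to the mention context. Vectors and
# entity names here are invented for illustration.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

candidates = {
    "Apple_Inc": np.array([0.9, 0.1, 0.4]),
    "Apple_(fruit)": np.array([0.1, 0.8, 0.2]),
}
context_vec = np.array([0.8, 0.2, 0.5])  # mention context representation

best = max(candidates, key=lambda e: cosine(candidates[e], context_vec))
print(best)  # -> Apple_Inc
```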

Natural Language Models for Predicting Programming Comments by Dana Movshovitz-Attias and William Cohen

They proposed a model that suggests word completions while a developer writes source-code comments. All their data came from the Lucene library and from StackOverflow posts mentioning the word Java. Their results show that prediction improves with more data and with a bag-of-words approach. Besides the basic experiments, they also measured how good the prediction is partway through a software project's development.
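
A minimal sketch of such next-word suggestion using a bigram language model (my own toy example on invented comment lines, not their data or model):

```python
# Tiny bigram model for comment autocompletion; illustrative only.
from collections import Counter, defaultdict

comments = [
    "returns the number of documents in the index",
    "returns the number of terms in the field",
    "closes the index reader and releases resources",
]

bigrams = defaultdict(Counter)
for line in comments:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Suggest the k most frequent next words after prev_word."""
    return [w for w, _ in bigrams[prev_word].most_common(k)]

print(suggest("the"))  # -> ['number', 'index', 'field']
```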

Paraphrasing Adaptation for Web Search Ranking by Chenguang Wang, Nan Duan, Ming Zhou and Ming Zhang

They presented an approach to adapting paraphrasing to web search from three aspects: a search-oriented paraphrasing model, an NDCG-based parameter optimization algorithm, and an enhanced ranking model leveraging augmented features computed on paraphrases of the original queries. They also showed that search performance can be significantly improved, with gains of up to 3% in NDCG.
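
For reference, a minimal implementation of NDCG@k, the metric their parameter optimization targets (standard definition; this is my sketch, not their code):

```python
import math

def dcg(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum((2**rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    """DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance of results in ranked order (0 = bad, 2 = perfect).
print(round(ndcg([2, 0, 1, 2], k=4), 3))
```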

In the evening, two poster sessions were organized, lasting until 9 pm. There were really a lot of posters and demos. Especially interesting was ARGO (http://argo.nactem.ac.uk/) – IOBIE should also go this way. I also attach some photos:

[photos: IMAG0437, IMAG0435, IMAG0434]

