
ACL 2013, Day 2

Today was the official conference opening. This year's ACL is clearly one of the biggest conferences: almost 1000 papers were submitted, with an acceptance rate of 26%. During the conference there will also be presentations of journal papers from the new Transactions of the ACL.


The keynote was given by Prof. Dr. Rolf Harald Baayen, a pioneer in empirical linguistic research. He talked about understanding language by observing the focus of human eyes when reading English compounds. For example, what does a handbag mean, a worker, etc., from the perspective of teaching a computer program to understand their notions. Mostly these words do not have a direct meaning in the text, and this is a problem.

During the first session I attended the talk Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms, given by Elijah Mayfield, David Adamson and Carolyn Penstein Rosé.

They talked about processing chats. Interestingly, they found that the best way to get at the important meaning, or the best extractions, is to remove everything before a sentence that ends with an exclamation mark. They also mentioned a general IE tool named LightSide.
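Their preprocessing heuristic is concrete enough to sketch in a few lines. The snippet below is my own toy reading of it (the function name and details are mine, not theirs): keep only the text after the last sentence ending in an exclamation mark.

```python
def trim_before_exclamation(chat: str) -> str:
    """Drop everything up to and including the last '!'-terminated sentence.

    A toy sketch of the heuristic mentioned in the talk; my own reading,
    not their actual implementation.
    """
    idx = chat.rfind("!")
    if idx == -1:
        return chat            # no exclamation mark: keep the chat as-is
    return chat[idx + 1:].lstrip()

print(trim_before_exclamation("Hi! How are you feeling today?"))
# -> "How are you feeling today?"
```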

The next lecture was Decentralized Entity-Level Modeling for Coreference Resolution by Greg Durrett, David Hall and Dan Klein.

They proposed a new architecture with classic entity-level features. Their approach is decentralized: each mention carries a cloud of semantic properties, which makes it possible to maintain the tractability of a pairwise system. Furthermore, they separate properties and mentions into two separate models and connect them via factors. The resulting model is non-convex, but they could still perform standard training and inference using the belief propagation technique. They tested their system on the CoNLL 2011 shared task dataset with three different settings: the first used baseline features, the second standard entity features (i.e. gender, animacy, NE tags), and the third was enriched with semantic features. Their system gained about 1% accuracy over the baseline system in the first setting, but was worse or equal in the other two settings.

During the “Student Lunch” I heard an interesting remark that an important person made about a person I would also rather not mention: “IR is grep” 🙂 The IR people were obviously insulted, but on a very basic level, it is true :):):)

In the second session I attended A Computational Approach to Politeness with Application to Social Factors by Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec and Christopher Potts.

The first slide started with a picture of two dogs, one of them saying: “I only sniffed his ass to be polite”. The work focuses on detecting and measuring politeness. They use request data from Wikipedia – 35k requests (4.5k annotated) – and StackExchange – 373k requests (6.5k annotated). Five annotators annotated the dataset, which they released to the public. They also showed some interesting observations on how a sentence should be formed to sound polite. Lastly, the most interesting result they presented was how politeness changes for political candidates: before elections, the candidates who go on to win are mostly more polite than the others; after the elections, the politeness of the winners drops and the “losers” become more polite.
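As a rough illustration of the kind of surface signal such a politeness classifier picks up on, here is a toy lexicon-based scorer. The cue lists and scoring are entirely my own invention; their actual system is trained on the annotated requests with far richer linguistic features.

```python
def politeness_score(request: str) -> float:
    """Crude politeness score: polite cues minus impolite cues, per word.

    A made-up toy sketch inspired by the talk, not the authors' classifier.
    """
    polite_cues = {"please", "could", "would", "thanks", "thank", "sorry"}
    impolite_cues = {"now", "must", "immediately", "wrong"}
    words = [w.strip(",?!.") for w in request.lower().split()]
    score = sum((w in polite_cues) - (w in impolite_cues) for w in words)
    return score / max(len(words), 1)

print(politeness_score("Could you please fix this, thanks?"))  # positive
print(politeness_score("Fix this now, it is wrong!"))          # negative
```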

The second talk I attended was Modeling Thesis Clarity in Student Essays by Isaac Persing and Vincent Ng.

After the coffee break I listened to the following talks:

Exploiting Topic-based Twitter Sentiment for Stock Prediction by Jianfeng Si, Arjun Mukherjee, Bing Liu, Qing Li, Huayi Li and Xiaotie Deng

They crawled Twitter for company hashtags and predicted whether a specific stock would rise or fall.

Learning Entity Representation for Entity Disambiguation by Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Houfeng Wang and Longkai Zhang

They try to link entities to an ontology by directly optimizing similarity using a two-stage approach.

Natural Language Models for Predicting Programming Comments by Dana Movshovitz-Attias and William Cohen

They proposed a model that suggests word completions when writing source code comments. All their data came from the Lucene library and from StackOverflow posts that use the word Java. Their results show that prediction is better when using more data and a bag-of-words approach. Besides the basic experiments, they also measured how good the prediction is midway through a software project's development.
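As a rough sketch of what such a comment-completion model does, here is a toy bigram model over comment text. This is my own minimal illustration with made-up example comments; their models are proper n-gram language models trained on the Lucene and StackOverflow corpora.

```python
from collections import Counter, defaultdict

def train_bigrams(comments):
    """Count word bigrams over a list of code comments."""
    model = defaultdict(Counter)
    for comment in comments:
        words = comment.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def complete(model, prev_word):
    """Suggest the most frequent next word seen after prev_word."""
    candidates = model.get(prev_word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

comments = [
    "returns the index of the first match",
    "returns the number of documents",
    "returns the index writer for this directory",
]
model = train_bigrams(comments)
print(complete(model, "returns"))  # -> "the"
print(complete(model, "the"))      # -> "index"
```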

Paraphrasing Adaptation for Web Search Ranking by Chenguang Wang, Nan Duan, Ming Zhou and Ming Zhang

They presented an approach that adapts paraphrasing techniques to web search from three aspects: a search-oriented paraphrasing model, an NDCG-based parameter optimization algorithm, and an enhanced ranking model leveraging augmented features computed on paraphrases of the original queries. They also showed that search performance can be significantly improved, by up to 3% in NDCG gains.
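NDCG, the metric their parameter optimization targets, is standard and easy to sketch. Below is the common 2^rel gain formulation (not necessarily the exact variant used in the paper):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of relevance grades."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG of the ranking over DCG of the ideal ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; ranking relevant results lower hurts.
print(ndcg([3, 2, 1, 0]))  # -> 1.0
print(ndcg([0, 1, 2, 3]))  # -> less than 1.0
```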

In the evening, two poster sessions were organized, lasting until 9pm. There were really a lot of posters and demos. Especially interesting was ARGO (http://argo.nactem.ac.uk/) – IOBIE should also go this way.



ACL 2013, Day 1

The Association for Computational Linguistics (ACL) conference is one of the top-ranked conferences in the field of natural language processing. This year I am attending ACL 2013 (http://www.acl2013.org/site/) in Sofia, Bulgaria. Surprisingly, the main conference sponsor is Baidu. My presentation is scheduled for Friday, 9 August 2013 at 16:30 (GMT+2) within the BioNLP Workshop, Gene Regulation Network Shared Task, which Marinka Žitnik and I won.

Yesterday I flew via Vienna to Sofia International Airport and took a cab to 44 William Gladstone Street, where my hotel, Art'Otel, is; I am staying here until Saturday. The hotel is nothing special, but it is good enough (in my opinion not worth all four stars) and quite close to the conference venue – a 10-minute walk.

First impressions of Sofia: I thought Bulgaria was in very bad condition, but it is not. Cars are normal, people look European, streets are clean. I also like the calm of the city: there is no rush, it is not overcrowded with cars, and there are exactly enough people on the streets. The only thing one notices is that most of the buildings are older. For instance, the National Palace of Culture (NDK) is an enormous building, very nice, but it should be renovated to look more modern. The same goes for the park in front of it, etc. So to conclude, the only thing Sofia needs, in my opinion, is building renovation.

Today, Sunday, August 4th, was the tutorial day at the conference. There were four parallel tutorials in the morning and another four in the afternoon.

In the morning I attended the tutorial Variational Inference for Structured NLP Models by David Burkett and Dan Klein.

The tutorial was very informative and well presented. Its focus was how to efficiently implement inference over an already given factor graph with static structure. It started with an introduction to HMMs and then to different CRF types (linear, arbitrary, tree-like). First, we were introduced to inference using mean field, and then to its approximation when trying to learn two interdependent labeling tasks. We continued with the problem of joint parsing and alignment. Lastly, we talked about (“loopy”) belief propagation and using it for inference in dependency parsing.
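To make the belief propagation part concrete, here is a minimal sum-product pass on a toy three-variable chain, checked against brute-force enumeration. All the potentials are made-up numbers for illustration, unrelated to the tutorial's models.

```python
import itertools

# Toy chain x0 - x1 - x2 with binary variables, unary potentials phi
# and a shared pairwise potential psi (all values invented).
phi = [[1.0, 2.0], [1.5, 0.5], [2.0, 1.0]]   # phi[i][x_i]
psi = [[1.0, 0.5], [0.5, 1.0]]               # psi[x_i][x_{i+1}]

def marginal_bp(target):
    """Marginal of x_target via forward/backward sum-product messages."""
    fwd = [1.0, 1.0]                          # trivial message from the left
    for i in range(target):
        fwd = [sum(fwd[a] * phi[i][a] * psi[a][b] for a in range(2))
               for b in range(2)]
    bwd = [1.0, 1.0]                          # trivial message from the right
    for i in range(2, target, -1):
        bwd = [sum(bwd[b] * phi[i][b] * psi[a][b] for b in range(2))
               for a in range(2)]
    belief = [fwd[x] * phi[target][x] * bwd[x] for x in range(2)]
    z = sum(belief)
    return [b / z for b in belief]

def marginal_brute(target):
    """Same marginal by summing over all assignments (sanity check)."""
    totals = [0.0, 0.0]
    for x in itertools.product(range(2), repeat=3):
        p = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
        p *= psi[x[0]][x[1]] * psi[x[1]][x[2]]
        totals[x[target]] += p
    z = sum(totals)
    return [t / z for t in totals]

print(marginal_bp(1))     # -> [0.75, 0.25]
print(marginal_brute(1))  # matches the BP result
```

On a tree-structured graph like this chain, sum-product is exact; on graphs with cycles the same message updates are iterated (“loopy” BP) and only approximate the marginals.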

During the lunch break I went to Boom – this appears to be the best place to eat a burger in Sofia. I got this inside info from my friend Didka (as I will get many other tips during my stay in Bulgaria :)).

In the afternoon I attended the tutorial Robust Automated Natural Language Processing with Multiword Expressions and Collocations by Valia Kordoni and Markus Egg.

The talk was about identifying multiword expressions, for example “take the clothes off”, which means the same as “undress”. I saw no technical information about algorithms or approaches, just a plain history of research in this field, so after the coffee break I went to another tutorial session, even though I had not applied for it.

I moved to Exploiting Social Media for Natural Language Processing: Bridging the Gap between Language-centric and Real-world Applications by Simone Paolo Ponzetto and Andrea Zielinski.

This tutorial was a bit more interesting, but stayed at a very general level. Friends later told me that the first part was better, as more technical details were given. The second part was a review of work on entity and event extraction from Twitter, along with presentations of some practical systems. For example, the talk focused on the extraction of person names, e.g. “Steve Jobs”, and events, e.g. “DEATH”. Two interesting systems were about earthquake reporting and location-based disease information aggregation.

In the evening there was a welcome reception at Sky Plaza – on top of the NDK. We got Bulgarian food, drinks and some live music. After a few hours of mingling I went back to the hotel, and here I am writing this post …