Tag Archives: natural language processing

Posts related to natural language processing

What You Need To Know about Text Analytics

You have enough to worry about. You know your industry inside and out. You know your products and services and how they compare with the competition’s strengths and weaknesses. In business, you have to be an expert in a range of topics. What you don’t need to worry about is the ins and outs of every technology, algorithm and software program.

This is especially true of an inherently complex technology such as natural language processing. Do you really have time to understand morphological segmentation? Text analytics should just be another tool in your toolbox for achieving your business ends. The only thing you need to know is which of your problems can be solved by natural language processing. Anaphoric referencing? Don’t worry about it. We have it covered, along with anything else you might need from language technology.

Text Analytics

What do you need to know about text analytics?

Text analytics goes by many names: natural language processing (NLP), text analysis, text mining, computational linguistics. There are shades of difference between these terms, but let the boffins work that out. What you need to know is that they describe a family of algorithms and technologies that process raw text written in a human language (often referred to as a natural language) to produce enriched text. That enrichment can mean a number of things:

  • Categorization. The categorization of the text according to themes, categories or a taxonomy.
  • Topic Extraction. The identification of the key named entities and concepts being talked about in the text, such as people, places, organizations and brands.
  • Sentiment Analysis. The analysis of whether the text is talking about those concepts in a positive or negative light.
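To make the three enrichment types concrete, here is a deliberately naive sketch in Python. All the lexicons, heuristics and category names are invented for illustration; a real text analytics service relies on far richer taxonomies, entity models and sentiment resources.

```python
import re

# Hypothetical lexicons, invented for this sketch.
CATEGORY_KEYWORDS = {"Finance": {"bank", "loan", "shares"},
                     "Travel": {"flight", "hotel", "airline"}}
POSITIVE = {"great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "hate"}

def enrich(text):
    words = re.findall(r"[A-Za-z]+", text)
    lower = {w.lower() for w in words}
    # Categorization: match the text against per-category keyword sets.
    categories = [c for c, kws in CATEGORY_KEYWORDS.items() if lower & kws]
    # Topic extraction: a crude heuristic -- capitalized, non-initial words.
    entities = [w for i, w in enumerate(words) if i > 0 and w[0].isupper()]
    # Sentiment analysis: count positive vs. negative cue words.
    score = len(lower & POSITIVE) - len(lower & NEGATIVE)
    sentiment = ("positive" if score > 0
                 else "negative" if score < 0 else "neutral")
    return {"categories": categories, "entities": entities,
            "sentiment": sentiment}

print(enrich("I love the new flight deals from Acme Airlines"))
```

The point is not the (very weak) heuristics but the shape of the output: one call in, three kinds of enrichment out.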

Continue reading

Is Cognitive Computing too Cool to Be True?

According to IBM, “Cognitive Computing systems learn and interact naturally with people to extend what either humans or machines could do on their own. They help human experts make better decisions by penetrating the complexity of Big Data.” Dharmendra Modha, Manager of Cognitive Computing at IBM Research, describes cognitive computing as an algorithm capable of solving a vast array of problems.

With this definition in mind, it seems that such an algorithm needs a way to interact with humans in order to learn and to think as they do. Fine words! But this is the same well-known goal as Artificial Intelligence (AI), a more common name that almost everybody has heard. Why change it? Well, when a company is investing at least $1 billion in something, it must sound cool and fancy enough to draw people’s attention, and AI is quite old-fashioned. Nevertheless, machines still cannot think, and I believe that will remain true for some time.

How does Cognitive Computing work? According to the definition above, enabling human-machine interaction requires integrating some kind of voice and image processing. I am not an expert in image processing, but voice recognition systems, dialogue management models and Natural Language Processing techniques have been studied for a while. Even Question Answering methods (i.e., the ability of a software system to return the exact answer to a question, instead of the set of documents a traditional search engine would) have been studied in depth. We ourselves have been doing research on this topic since 2007 (and still are), which resulted in the development of virtual assistants, a combination of dialogue management and question answering techniques. Do you remember Ikea’s assistant, Anna? In spite of the fame she gained at the time, she is no longer in service. Perhaps that kind of interaction through a website was not effective enough for users. On the other hand, virtual assistants like Siri, backed by an enormous company such as Apple, are gaining attention. There are other virtual assistants for environments other than iOS, but they are far less known, perhaps because the companies behind them are much smaller than Apple.

Several aspects of the thinking capabilities required by such an algorithm have to do with Machine Learning. There are many well-known algorithms that can generate models from a set of examples, or even from raw data (in the case of unsupervised processes). This enables a machine to learn how to classify things or to group items together, like a baby piling up coloured geometric blocks. By combining Machine Learning and NLP models, it is possible for a machine to understand a text. This process is what we call Structuring Unstructured Data (much less fancy than Cognitive Computing): that is, making your information actionable. We have been working on this for several years, but now it is called cognitive computing.
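The idea of “generating a model from a set of examples” can be sketched in a few lines. This is a toy bag-of-words classifier on invented data, not any particular production algorithm: it learns word counts per label from labeled sentences, then assigns a new text to the label whose words it has seen most.

```python
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Learns word counts per label."""
    model = defaultdict(Counter)
    for text, label in examples:
        model[label].update(tokenize(text))
    return model

def classify(model, text):
    # Score each label by how often it has seen the text's words in training.
    scores = {label: sum(counts[w] for w in tokenize(text))
              for label, counts in model.items()}
    return max(scores, key=scores.get)

# Toy training set (invented sentences).
model = train([
    ("the match ended with a late goal", "sports"),
    ("the striker scored twice", "sports"),
    ("the market rallied after the earnings report", "finance"),
    ("shares fell on weak earnings", "finance"),
])

print(classify(model, "another goal in the match"))
```

With four sentences this is a parlor trick, but the structure is the point: examples go in, a model comes out, and the model turns unstructured text into an actionable label.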

So, as you might imagine, Cognitive Computing techniques are not different from the ones we have already developed; many researchers and companies have been combining them for years. And, if you think about it, does it really matter whether a machine thinks or not? The real added value of this technology is helping humans do their jobs with all the relevant information at hand, at the right moment, so they can make thoughtful and reasonable decisions. This is our goal at MeaningCloud.

Textalytics sponsors the Sentiment Analysis Symposium

Next March 5-6, New York will host a new edition of the Sentiment Analysis Symposium. This is the seventh event in a series organized by industry expert Seth Grimes since 2010 in San Francisco and New York.

This is a unique conference in several respects. First, it is designed specifically to serve the community of professionals interested in Human Analytics and its business applications. Second, its audience is a mix of experts, strategists, practitioners, researchers, and solution providers, which makes a perfect breeding ground for discussion and the exchange of points of view. Third, it is designed by just one person (not by a committee), a guarantee of consistency. Being an expert in the consultancy business, Seth Grimes achieves an excellent balance of presentations, ranging from technology to business applications. I attended the New York 2012 edition, where I gave a lightning talk, and I can say the experience was really enriching.

Sentiment Analysis Symposium 2014

Do not be misled by the title, and do not interpret “Sentiment Analysis” in a narrow sense. The conference is about discovering business value in opinions, emotions, and attitudes expressed in social media, news, and enterprise feedback. Moreover, the scope is not limited to text sources: speech and images are part of the equation too.

Continue reading

Recognizing entities in a text: not as easy as you might think!

Entities recognition: the engineering problem

As in every engineering endeavor, when you face the problem of automatically identifying the entities (proper names: people, places, organizations, etc.) mentioned in a particular text, you should look for the right balance between quality (in terms of precision and recall) and cost, from the perspective of your goals. You may be tempted to compile a simple list of such entities and apply straightforward pattern matching techniques to identify a predefined set of entities appearing “literally” in a piece of news, a tweet or a (transcribed) phone call. If this solution is enough for your purposes (you can achieve high precision at the cost of low recall), it is clear that quality was not among your priorities. However… what if you could add a bit of excellence to your solution, without technological burden, for free? If this proposition interests you, skip the following detailed technological discussion and go directly to the final section.
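The “simple list plus literal matching” approach described above fits in a few lines of Python. The gazetteer entries here are made up for illustration; the trade-off, though, is exactly the one in the text.

```python
import re

# A tiny hand-compiled gazetteer (entries invented for this sketch).
GAZETTEER = {
    "Barcelona": "PLACE",
    "United Nations": "ORGANIZATION",
    "Marie Curie": "PERSON",
}

def find_entities(text):
    """Return (name, type, offset) for every literal gazetteer match."""
    found = []
    for name, etype in GAZETTEER.items():
        for m in re.finditer(re.escape(name), text):
            found.append((name, etype, m.start()))
    return sorted(found, key=lambda t: t[2])

print(find_entities("Marie Curie spoke at the United Nations in Barcelona"))
```

This gives high precision but low recall: “the UN”, a misspelling, or any entity missing from the list goes unnoticed, and an ambiguous string always gets the same label regardless of context.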

Continue reading

Semantic Analysis and Big Data to understand Social TV

We recently participated in the Big Data Spain conference with a talk entitled “Real time semantic search engine for social TV streams”. The talk describes our ongoing experiments in social media analytics, combining our most recent work on applying semantic analysis to social networks with handling real-time streams of data.

Social TV, which exploded with the use of social networks while watching TV programs, is a growing and exciting phenomenon. Twitter reported that more than a third of its primetime firehose discusses TV (at least in the UK), while Facebook claimed five times more comments behind its private wall. Recently, Facebook also started to offer hashtags and the Keywords Insight API to selected partners as a means of providing aggregated statistics on Social TV conversations inside the wall.

As more users have turned to social networks to comment with friends and other viewers, broadcasters have looked for ways to be part of the conversation. They use official hashtags, let actors and anchors tweet live, and have even started to offer companion apps with social sharing functionality.

While the concept of socializing around TV is not new, the possibility of measuring and distilling the information around these interactions opens up brand new possibilities for users, broadcasters and brands alike. User interest has already fueled Social TV, as it fulfills the need to start conversations with friends, other viewers and the aired program itself. Chatter around TV programs may help to recommend other programs or to serve contextually relevant information about actors, characters or whatever else appears on TV. Moreover, better ways to access and organize public conversations will draw new viewers into a TV program and engage current ones.
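The simplest form of “measuring the chatter” is counting program hashtags in a stream of tweets. The hashtags and tweets below are invented, and a real pipeline would add language detection, program disambiguation and time windows on top of this.

```python
import re
from collections import Counter

def hashtag_counts(tweets):
    """Count hashtag occurrences (case-insensitively) across a tweet stream."""
    counts = Counter()
    for tweet in tweets:
        counts.update(h.lower() for h in re.findall(r"#(\w+)", tweet))
    return counts

# A toy stream of tweets about two fictional programs.
stream = [
    "Incredible finale! #ShowA",
    "Who saw that coming? #showa #spoilers",
    "Slow episode tonight #ShowB",
]

print(hashtag_counts(stream))
```

Aggregates like these, broken down per minute of airtime, are what let a broadcaster see which scenes drove the conversation.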

Continue reading