Category Archives: Customization

This category covers MeaningCloud’s customization engine.

Recorded webinar: Solve the most wicked text categorization problems

Thank you all for your interest in our webinar “A new tool for solving wicked text categorization problems”, delivered last June 19th. In it, we explained how to use our Deep Categorization customization tool to tackle text classification scenarios where traditional machine learning technologies fall short.

During the session we covered these items:

  • Developing categorization models in the real world
  • Categorization based on pure machine learning
  • Deep Categorization API. Pre-defined models and vertical packs
  • The new Deep Categorization Customization Tool. Semantic rule language
  • Case Study: development of a categorization model
  • Deep Categorization – Text Classification. When to use one or the other
  • Agile model development process. Combination with machine learning

IMPORTANT: this article is a tutorial based on the demonstration we delivered, and it includes the data to analyze and the results of the analysis.

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. You can find the recording here.)
Continue reading


Tutorial: create your own deep categorization model

As you probably know by now if you follow us, we’ve recently released our new customization console for deep categorization models.

Deep Categorization models are the resource used by our Deep Categorization API. This API combines the morphosyntactic and semantic information obtained from our core engines (which include sentiment analysis as well as resource customization) with a flexible rule language that’s both powerful and easy to understand. This enables accurate categorization in scenarios where a high level of linguistic precision is key to obtaining good results.
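To give an idea of how a model is consumed once it exists: the Deep Categorization API takes your license key, the text, and a model name. This is only a sketch that builds the request body without sending it; the endpoint version and the model identifier below are assumptions, so check the API documentation for the current values.

```python
import urllib.parse

# Endpoint version and model name are assumptions -- check the current
# Deep Categorization API documentation for the exact values.
DEEP_CAT_URL = "https://api.meaningcloud.com/deepcategorization-1.0"

def build_request_body(api_key, text, model="IntentionAnalysis_en"):
    """URL-encoded form body for a Deep Categorization call."""
    return urllib.parse.urlencode({
        "key": api_key,   # your MeaningCloud license key
        "txt": text,      # the text to categorize
        "model": model,   # a predefined model or one built in the console
    })

body = build_request_body("YOUR_KEY", "I want to cancel my account")
# POST `body` to DEEP_CAT_URL as application/x-www-form-urlencoded
# to get the list of detected categories back.
```

The same call shape works for a model you create yourself in the console; only the `model` value changes.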

In this tutorial, we are going to show you how to create your own model using the customization console: we will define a model that suits our needs and see how to express the criteria we want through the available rule language.

The scenario we have selected is a very common one: support ticket categorization. We have extracted (anonymized) tickets from our own support ticketing system, and we are going to create a model to automatically categorize them. As we have done in other tutorials, we are going to use our Excel add-in to quickly analyze the texts. You can download the spreadsheet here if you want to follow along. If you don’t use Microsoft Excel, you can use the Google Sheets add-on.

The spreadsheet contains two sheets with two different data sets: the first one with 30 entries, the second one with 20. For each entry, we have included an ID, the subject and the description of the ticket, and a manual tag with the category it should be assigned to. We’ve also added an extra column that concatenates the subject and the description, as we will use both fields combined in the analysis.
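The concatenation step is trivial, but making it explicit helps when you prepare your own data outside the spreadsheet. A minimal sketch (the field names are illustrative, mirroring the spreadsheet columns):

```python
def combine_fields(ticket):
    """Concatenate subject and description, as in the spreadsheet's
    extra column, so both fields feed a single analysis."""
    return f"{ticket['subject']}. {ticket['description']}".strip()

# Illustrative ticket; the real data comes from the downloadable spreadsheet.
ticket = {
    "id": 1,
    "subject": "Excel add-in fails to install",
    "description": "The installer stops at 90% on Windows 10.",
    "category": "Add-ins",  # manual tag used to evaluate the model
}
combined = combine_fields(ticket)
```

Feeding subject and description together gives the analysis more context than either field alone.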

To get started, register at MeaningCloud (if you haven’t already), then download and install the Excel add-in on your computer. Here you can read a detailed step-by-step guide to the process. Let’s get started! Continue reading


New Release: Deep Categorization Customization Console

One of the APIs that has seen the most movement lately in our updates is the Deep Categorization API, which, as many of you already know, provides an easier, more flexible, and more precise way to categorize texts. Most of this movement has come in the form of newly supported models, such as Intention Analysis, as well as many under-the-hood improvements.

We are happy to announce that we have finally released the Deep Categorization customization console on our website.

This console allows you to create accurate models for scenarios where you need a very high level of linguistic precision to distinguish between the categories you want to detect.

MeaningCloud release

Continue reading


Easy Text Analytics using MeaningCloud’s Zapier integration

We at MeaningCloud love Zapier. It lets us build workflows connecting email, Slack, etc. We wanted to contribute our bit to its ecosystem, so we created MeaningCloud’s Zapier integration. Thanks to it, we can perform Text Analytics in any Zapier workflow easily.

Many organizations use workflows to automate tasks. Chat rooms and bots are a common way of triggering events. For instance, Slash commands in Slack or Hubot respond to well-formed commands with strict patterns that avoid ambiguity, which is desirable in some circumstances.

Zapier logo

Where these approaches do not fit especially well is precisely one of the most exciting aspects of using Text Analytics in automation: reacting to the outside world. A company can analyze all the communications received from clients, measure reputation, detect weaknesses, or even analyze employee satisfaction. All that information can be injected into an automated process that reacts accordingly.
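For instance, a code step in a workflow could score each incoming message and route it by polarity. This is only a hedged sketch: the endpoint follows Sentiment Analysis 2.1, the parameter names and routing logic are illustrative, and with the MeaningCloud Zapier integration itself you would not need to write any code at all.

```python
import urllib.parse

# Endpoint and parameter names are assumptions based on Sentiment
# Analysis 2.1 -- verify against the current API documentation.
SENTIMENT_URL = "https://api.meaningcloud.com/sentiment-2.1"

def sentiment_request_body(api_key, text, lang="en"):
    """Form body for a Sentiment Analysis call (not sent here)."""
    return urllib.parse.urlencode({"key": api_key, "txt": text, "lang": lang})

def route_by_polarity(score_tag):
    """Pick the next workflow step from the response's score_tag
    (one of P+, P, NEU, N, N+, NONE)."""
    return "escalate" if score_tag in ("N", "N+") else "archive"
```

A workflow would POST the body to the API, read `score_tag` from the JSON response, and branch on the routing decision.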

In this article, we will learn how to integrate MeaningCloud in any Zapier workflow. Continue reading


Text Classification in Excel: build your own model

Customized Text Classification for Excel

In the previous tutorial we published about Text Classification and MeaningCloud’s Excel add-in, we showed you step by step how to carry out an automatic text classification using an example spreadsheet.

In this tutorial, we are going a bit further: instead of just using one of the predefined classification models we provide, we are going to create our own model using the model customization console in order to classify according to whichever categories we want.

We are going to work with the same example as before: London restaurant reviews extracted from Yelp. We will reuse some data from the previous tutorial, but this one requires more texts, so we’ve added some. You can download the spreadsheet here if you want to follow along.

If you followed the previous tutorial, you might remember that we tried to use the IAB model (a predefined model for contextual advertising) to classify the different restaurant reviews and find out what type of restaurants they were. We had limited success: we did obtain a restaurant type for some of them, but for the rest we just got a general category, “Food & Drink”, which didn’t tell us anything new.

This is where our customization tools come in. Our classification model customization console allows you to create a model with the categories you want and lets you define exactly the criteria used in the classification.
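Once a user model exists, it is used exactly like a predefined one: the Text Classification API takes a model name, so you pass your model’s identifier instead of, say, IAB_en. A sketch of the request body (the endpoint version and the “RestaurantTypes” name are assumptions; the console shows your model’s real identifier):

```python
import urllib.parse

# Endpoint version is an assumption -- check the Text Classification docs.
CLASS_URL = "https://api.meaningcloud.com/class-1.1"

def classification_request_body(api_key, text, model):
    """Form body for a Text Classification call; `model` can be a
    predefined model (e.g. "IAB_en") or your own model's identifier."""
    return urllib.parse.urlencode({"key": api_key, "txt": text, "model": model})

review = "Great sushi near the station, friendly staff"
predefined = classification_request_body("YOUR_KEY", review, "IAB_en")
custom = classification_request_body("YOUR_KEY", review, "RestaurantTypes")
```

Swapping one parameter is all it takes to move from the generic taxonomy to your own categories.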

So how do we create this user model?
Continue reading


Learn to develop custom text classifiers (recorded webinar)

Last October 5th we presented our webinar “Learn to develop custom text classifiers with MeaningCloud”. Thank you all for your attendance.

We began by presenting how to do text classification with MeaningCloud and why it is necessary to develop models adapted to each specific application scenario. The bulk of the presentation consisted of using a practical case (the analysis of restaurant reviews) to show how these models can be developed with our product.

Continue reading


Learn to develop custom text classifiers with MeaningCloud (webinar)

Learn in this webinar how to use MeaningCloud’s tools to create classification models completely adapted to your scenario

Users frequently ask us through our support line how to perform text classification according to application-specific taxonomies. For example, someone who needs to analyze a bank’s contact center calls and open survey responses might be interested in classifying those messages according to the institution’s different types of products and services (deposits, loans, mortgages, etc.) or the type of interaction (request for information, contracting, complaint, etc.).

Custom classification

Continue reading


Sentiment Analysis in Excel: optimizing for your domain

In previous tutorials about Sentiment Analysis and our Excel add-in, we showed you step by step how to carry out a sentiment analysis with an example spreadsheet. In the first tutorial we focused on how to do the analysis, and then we took a look at the global polarity we obtained. In the second tutorial, we showed you how to customize the aspect-based sentiment analysis to detect exactly what you want in a text through the use of user dictionaries.

In this tutorial we are going to show you how to adapt the sentiment analysis to your own subdomain using our brand-new sentiment model customization functionality.

We are going to continue using the same example as in the previous tutorials, as well as refer to some of the concepts explained there, so we recommend checking them out beforehand, especially if you are new to our Excel add-in. You can download the Excel spreadsheet with the data we are going to use here.

The data we have been working with are restaurant reviews extracted from Yelp; more specifically, reviews of Japanese restaurants in London.

In the last tutorial, we saw that some of the results we obtained could be improved. The issue in those cases was that certain expressions do not have the same polarity when we are talking about food or a restaurant as when we use them in a general context. A clear example is the verb ‘share’: it is generally considered something positive, but in restaurant reviews it is mostly mentioned when people order food to share, which has little to do with the sentiment expressed in the review.

This is where the sentiment model customization functionality helps us: it allows us to add our own criteria to the sentiment analysis.
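Conceptually, a custom model lets a domain-specific entry take precedence over the general polarity dictionary. A toy illustration of that precedence follows; this is NOT the console’s actual rule format, and the dictionary entries are invented, but it captures the ‘share’ example above (tags follow the P+/P/NEU/N/N+/NONE convention):

```python
# Toy illustration of domain overrides beating the general polarity
# dictionary -- NOT the console's actual rule format; entries invented.
GENERAL_POLARITY = {"share": "P", "terrible": "N+"}
RESTAURANT_OVERRIDES = {"share": "NONE"}  # 'share' carries no sentiment here

def lemma_polarity(lemma):
    """Domain entry wins; otherwise fall back to the general dictionary."""
    return RESTAURANT_OVERRIDES.get(lemma, GENERAL_POLARITY.get(lemma, "NONE"))
```

In the real console you express this kind of criterion through the model editor rather than in code, but the precedence idea is the same.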

Let’s see how to do this!
Continue reading


Sentiment Analysis in Excel: customizing aspect-based analyses

In the previous tutorial we published about Sentiment Analysis and MeaningCloud’s Excel add-in, we showed you step by step how to do a sentiment analysis using an example spreadsheet. Then we showed you a possible analysis you could obtain with its global polarity results.

In this tutorial we are going a bit further: instead of analyzing the global polarity obtained for different texts, we are going to focus on analyzing the different aspects that appear in them, and on how to use our dictionary customization console to improve the results and easily extract the exact information you are interested in.

We are going to work with the same example as before: reviews of Japanese restaurants in London extracted from Yelp. If you don’t have it already from the previous tutorial, you can download the spreadsheet with the data here.

If you followed the previous tutorial, you will remember that when you run the sentiment analysis without changing its default settings, two new sheets appear: Global Sentiment Analysis and Topics Sentiment Analysis. Topics Sentiment Analysis shows the concepts and entities detected in each text and the sentiment analysis associated with each of them.
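If you call the Sentiment Analysis API directly instead of through the add-in, the same per-topic results arrive as lists in the JSON response. A sketch of flattening them into a topic-to-polarity map; the field names follow the Sentiment Analysis 2.1 response shape as we understand it, so verify them against the API reference:

```python
# Field names (sentimented_entity_list, sentimented_concept_list, form,
# score_tag) are assumptions from the Sentiment Analysis 2.1 response
# shape -- verify against the current API reference.
def topics_polarity(response):
    """Map each detected entity/concept to its polarity tag."""
    topics = (response.get("sentimented_entity_list", []) +
              response.get("sentimented_concept_list", []))
    return {t["form"]: t["score_tag"] for t in topics}

# Hand-built sample shaped like a response; not real API output.
sample = {
    "sentimented_entity_list": [{"form": "London", "score_tag": "NEU"}],
    "sentimented_concept_list": [{"form": "ramen", "score_tag": "P+"}],
}
```

This is essentially what the add-in renders for you in the Topics Sentiment Analysis sheet.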

But what can we do when these are not the aspects of the text we are interested in analyzing? This is where our customization tools come in. Our dictionary customization console allows you to create a dictionary with any of the concepts or entities you want to detect in your analysis, down to the type you want associated with them.

So how do we create this user dictionary?
Continue reading


A sentiment analysis entirely tailored to your needs with our new customization tool

Adaptation to the domain is what makes the difference between a good sentiment analysis and an exceptional one. Until now, adapting MeaningCloud’s sentiment analysis to your domain meant either using personal dictionaries (to create new entities and concepts that the Sentiment Analysis API employed in its aspect-based analysis) or asking our Professional Services Department to develop a tailor-made sentiment model.

Sentiment Models button

With the release of Sentiment Analysis 2.1, we incorporated a new customization tool designed to facilitate the creation of personal sentiment models. This tool fully employs our Natural Language Processing technology to let you work autonomously and develop, without programming, powerful sentiment analysis engines tailored to your needs.

Other sentiment analysis customization tools available on the market mostly let you define “bags of words” with either positive or negative polarity. Our tools go far beyond that and enable you to:

  • Define the role of a word as a polarity vector (container, negator, modifier), allowing you to use lemmas to easily incorporate all the possible variants of each word
  • Specify particular cases of a word’s polarity, depending on the context in which it appears or its morphosyntactic function in each case
  • Define multiword expressions as priority elements in the evaluation of polarity
  • Manage how these custom polarity models complement or replace the general dictionaries of every language.

Continue reading