Category Archives: Customization

This category describes MeaningCloud’s customization engine.

Text Classification 2.0: Migration Guide

We’ve recently published a new version of our Text Classification API, which comes hand in hand with a new version of the Classification Models Customization console.

In both these new versions, the main focus is on user models. We know how important it is to easily define the exact criteria you need, so the new classification API supports a new type of resource: the one generated by the Classification Models Customization console 2.0.

In this post, we will talk about how to migrate to these new versions if you are currently using the old ones. Text Classification 1.1 and Classification Models 1.0 will be retired on 15/Sep/2020. Continue reading


New Release: Text Classification 2.0

We’re happy to announce we have just published a new version of our Text Classification API, which comes hand in hand with a new version of the Classification Models Customization console.

In both these new versions, the main focus is on user-defined models. We know how important it is to easily define the exact criteria you need, so the new classification API supports a new type of resource, the one generated with the Classification Models Customization console 2.0.
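As a rough illustration, this is what a call to the new API with one of those user-defined models could look like from Python. This is a minimal sketch, not the definitive integration: "MyUserModel_en" is a placeholder for whatever model you create in the console, and the response fields should be checked against the Text Classification 2.0 documentation.

```python
# Minimal sketch: classify a text with a user-defined model.
# "MyUserModel_en" is a placeholder for a model created in the
# Classification Models Customization console 2.0.
import requests

API_URL = "https://api.meaningcloud.com/class-2.0"

payload = {
    "key": "YOUR_API_KEY",      # your MeaningCloud license key
    "txt": "The new branch opens next Monday with extended opening hours.",
    "model": "MyUserModel_en",  # placeholder user-defined model name
}

response = requests.post(API_URL, data=payload)
response.raise_for_status()

# Each returned category carries (among other fields) a code, a label,
# and the relevance assigned to it.
for category in response.json().get("category_list", []):
    print(category.get("code"), category.get("label"), category.get("relevance"))
```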

With these new versions, we’ve aimed to:

  • Make criteria definition easier: more user-friendly operators to improve overall rule readability, and new operators to provide more flexibility.
  • Remove dependencies between categories in a model that made their maintenance and evolution cumbersome.
  • Give the user more control over where the relevance assigned to the categories comes from.

Let’s look at what’s new in a little more detail. Continue reading


Recorded webinar: Solve the most wicked text categorization problems

Thank you all for your interest in our webinar “A new tool for solving wicked text categorization problems”, which we delivered last June 19th. In it, we explained how to use our Deep Categorization customization tool to cope with text classification scenarios where traditional machine learning technologies present limitations.

During the session we covered these items:

  • Developing categorization models in the real world
  • Categorization based on pure machine learning
  • Deep Categorization API. Pre-defined models and vertical packs
  • The new Deep Categorization Customization Tool. Semantic rule language
  • Case Study: development of a categorization model
  • Deep Categorization – Text Classification. When to use one or the other
  • Agile model development process. Combination with machine learning

IMPORTANT: this article is a tutorial based on the demonstration we delivered, and it includes the data to analyze and the results of the analysis.

Interested? Here you have the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. You can find the recording here.)
Continue reading


Tutorial: create your own deep categorization model

As you probably know by now if you follow us, we’ve recently released our new customization console for deep categorization models.

Deep Categorization models are the resource we use in our Deep Categorization API. This API combines the morphosyntactic and semantic information we obtain from our core engines (which include sentiment analysis as well as resource customization) with a flexible rule language that’s both powerful and easy to understand. This enables us to carry out accurate categorization in scenarios where reaching a high level of linguistic precision is key to obtaining good results.
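As a quick illustration of what calling the Deep Categorization API with one of these models looks like, here is a minimal Python sketch. The model name "TicketCategories_en" is hypothetical; replace it with one of the predefined models or with a model you have built in the console, and check the API documentation for the exact response layout.

```python
# Minimal sketch: categorize a text with the Deep Categorization API.
# "TicketCategories_en" is a hypothetical user-defined model name.
import requests

API_URL = "https://api.meaningcloud.com/deepcategorization-1.0"

payload = {
    "key": "YOUR_API_KEY",           # your MeaningCloud license key
    "txt": "I can't log in to the console; it keeps timing out since this morning.",
    "model": "TicketCategories_en",  # hypothetical deep categorization model
}

response = requests.post(API_URL, data=payload)
response.raise_for_status()

for category in response.json().get("category_list", []):
    print(category.get("code"), category.get("label"), category.get("relevance"))
```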

In this tutorial, we are going to show you how to create your own model using the customization console: we will define a model that suits our needs and see how to express the criteria we want through the available rule language.

The scenario we have selected is a very common one: support ticket categorization. We have extracted (anonymized) tickets from our own support ticketing system, and we are going to create a model to automatically categorize them. As we have done in other tutorials, we are going to use our Excel add-in to quickly analyze our texts. You can download the spreadsheet here if you want to follow along. If you don’t use Microsoft Excel, you can use the Google Sheets add-on.

The spreadsheet contains two sheets with two different data sets, the first one with 30 entries, the second one with 20. For each data set, we have included an ID, the subject and the description of the ticket, and then a manual tagging of the category it should be categorized into. We’ve also added an additional column that concatenates the subject and the description, as we will use both fields combined in the analysis.
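The tutorial itself relies on the Excel add-in, but if you prefer to prepare the data in Python, a sketch like the one below reproduces that combined column. The file name and column headers ("tickets.xlsx", "Subject", "Description") are assumptions; adapt them to the spreadsheet you downloaded.

```python
# Small sketch: rebuild the "subject + description" column outside Excel.
# File name and column names are assumed; adjust to the actual spreadsheet.
import pandas as pd

sheets = pd.read_excel("tickets.xlsx", sheet_name=None)  # every sheet as a DataFrame

for name, df in sheets.items():
    # Concatenate the two fields used together in the analysis
    df["Subject + Description"] = (
        df["Subject"].fillna("") + ". " + df["Description"].fillna("")
    )
    print(f"{name}: {len(df)} tickets")
```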

Before starting, you need to register at MeaningCloud (if you haven’t already), and download and install the Excel add-in on your computer. Here you can read a detailed step-by-step guide to the process. Let’s get started! Continue reading


New Release: Deep Categorization Customization Console

One of the APIs that has seen the most “movement” in our recent updates is the Deep Categorization API, which, as many of you already know, provides an easier, more flexible, and more precise way to categorize texts. Most of this movement has come in the form of new supported models, such as Intention Analysis, as well as many under-the-hood improvements.

We are happy to announce that we have finally released the Deep Categorization customization console on our website.

This console will allow you to create accurate models for those scenarios where you need a very high level of linguistic precision to differentiate between the different categories you want to detect.


Continue reading


Easy Text Analytics using MeaningCloud’s Zapier integration

We at MeaningCloud love Zapier. It lets us build workflows connecting email, Slack, and other tools. We wanted to contribute our bit to its ecosystem, so we created MeaningCloud’s Zapier integration. Thanks to it, we can easily perform Text Analytics in any Zapier workflow.

Many organizations use workflows to automate tasks, and chat rooms and bots are a common way of triggering events. For instance, Slash commands in Slack or Hubot respond to well-formed commands with strict patterns that avoid ambiguity, which is desirable in some circumstances.


What these approaches do not handle especially well is, precisely, one of the most exciting aspects of using Text Analytics in automation: it can react to the outside world. A company can analyze all the communications it receives from clients, measure reputation, detect weaknesses, or even analyze employee satisfaction. All that information can be injected into an automated process that reacts accordingly.

In this article, we will learn how to integrate MeaningCloud in any Zapier workflow. Continue reading


Text Classification in Excel: build your own model


In the previous tutorial we published about Text Classification and MeaningCloud’s Excel add-in, we showed you step by step how to carry out an automatic text classification using an example spreadsheet.

In this tutorial, we are going a bit further: instead of just using one of the predefined classification models we provide, we are going to create our own model using the model customization console in order to classify according to whichever categories we want.

We are going to work with the same example as before: London restaurant reviews extracted from Yelp. We will use some data from the previous tutorial, but for this one we need more texts, so we’ve added some. You can download the spreadsheet here if you want to follow along.

If you followed the previous tutorial, you might remember that we tried to use the IAB model (a predefined model for contextual advertising) to classify the different restaurant reviews and find out what type of restaurants they were. We had limited success: we did obtain a restaurant type for some of them, but for the rest we just got a general category, “Food & Drink”, which didn’t tell us anything new.

This is where our customization tools come in. Our classification models customization console allows you to create a model with the categories you want and lets you define exactly the criteria to use in the classification.
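Once the user model is published in the console, classifying a review with it is just a matter of pointing the Text Classification API at its name. Here is a minimal sketch; "RestaurantTypes_en" is a hypothetical model name standing in for whatever you call yours, and the exact endpoint and fields should be checked against the current API documentation.

```python
# Minimal sketch: classify a restaurant review with a user model.
# "RestaurantTypes_en" is a hypothetical model name created in the console.
import requests

API_URL = "https://api.meaningcloud.com/class-2.0"

payload = {
    "key": "YOUR_API_KEY",
    "txt": "Amazing ramen and friendly staff, the best Japanese food in Soho.",
    "model": "RestaurantTypes_en",  # hypothetical user-defined model
}

response = requests.post(API_URL, data=payload)
response.raise_for_status()

for category in response.json().get("category_list", []):
    print(category.get("label"), category.get("relevance"))
```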

So how do we create this user model?
Continue reading


Learn to develop custom text classifiers (recorded webinar)

Last October 5th we presented our webinar “Learn to develop custom text classifiers with MeaningCloud”. Thank you all for your attendance.

We began by presenting how to do text classification with MeaningCloud and why it is necessary to develop models adapted to each specific application scenario. The bulk of the presentation consisted of using a practical case (the analysis of restaurant reviews) to show how these models can be developed with our product.

Continue reading


Learn to develop custom text classifiers with MeaningCloud (webinar)

Learn in this webinar how to use MeaningCloud’s tools to create classification models completely adapted to your scenario

Users frequently ask us through our support line how to perform text classification according to application-specific taxonomies. For example, somebody needing to analyze a bank’s contact center calls and open survey responses might be interested in classifying such messages according to the institution’s different types of products and services (deposits, loans, mortgages, etc.) or the type of interaction (request for information, contracting, complaint, etc.).


Continue reading


Sentiment Analysis in Excel: optimizing for your domain

In previous tutorials about Sentiment Analysis and our Excel add-in, we showed you step by step how to carry out a sentiment analysis with an example spreadsheet. In the first tutorial we focused on how to do the analysis and then took a look at the global polarity we obtained. In the second tutorial, we showed you how to customize the aspect-based sentiment analysis to detect exactly what you want in a text through the use of user dictionaries.

In this tutorial, we are going to show you how to adapt the sentiment analysis to your own subdomain using our brand-new sentiment model customization functionality.

We are going to continue using the same example as in the previous tutorials, as well as refer to some of the concepts we explained there, so we recommend checking them out beforehand, especially if you are new to our Excel add-in. Here you can download the Excel spreadsheet with the data we are going to use.

The data we have been working with are restaurant reviews extracted from Yelp, more specifically reviews of Japanese restaurants in London.

In the last tutorial, we saw that some of the results we obtained could be improved. The issue in these cases was that certain expressions do not have the same polarity when we are talking about food or a restaurant as when we are using them in a general context. A clear example of this is the verb ‘share’: it is generally considered something positive, but in restaurant reviews it’s mostly mentioned when people order food to share, which has little to do with the sentiment expressed in the review.

This is where the sentiment model customization functionality helps us: it allows us to add our own criteria to the sentiment analysis.
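For readers calling the API directly rather than through the Excel add-in, a request with a customized sentiment model might look roughly like the sketch below. "Restaurants_en" is a hypothetical model name, and the assumption that a user model is selected through the same model parameter as the predefined ones should be verified against the Sentiment Analysis API documentation.

```python
# Minimal sketch: sentiment analysis with a customized model.
# "Restaurants_en" is a hypothetical custom sentiment model; how user models
# are referenced should be checked in the Sentiment Analysis API docs.
import requests

API_URL = "https://api.meaningcloud.com/sentiment-2.1"

payload = {
    "key": "YOUR_API_KEY",
    "lang": "en",
    "txt": "Great sushi, perfect to share with friends.",
    "model": "Restaurants_en",  # hypothetical custom sentiment model (assumption)
}

response = requests.post(API_URL, data=payload)
response.raise_for_status()

# Global polarity of the text (e.g. P+, P, NEU, N, N+ or NONE)
print("Global polarity:", response.json().get("score_tag"))
```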

Let’s see how to do this!
Continue reading