Category Archives: APIs

Posts about MeaningCloud’s APIs.

What is the Voice of the Employee (VoE)?

Voice of the Employee. Silhouettes with bubbles representing dialog

Finding committed employees is a top priority for both public and private organizations. It is therefore essential to listen to the Voice of the Employee: systematically collecting, managing, and acting on employee feedback on a variety of valuable topics.

The relationship between the Voice of the Employee (VoE) and engagement is very similar to the one between the Voice of the Customer (VoC) and customer experience: just as VoC provides information to improve the customer experience, VoE promotes employees’ engagement with the company and their work. See: Voice of the Employee, Voice of Customer and NPS

Voice of the Employee collects the needs, wishes, hopes, and preferences of the employees of a given company. VoE considers specific needs, such as salaries, career, health, and retirement, as well as implicit requirements to satisfy the employee and gain the respect of colleagues and managers.
Continue reading


Recorded webinar: When to use the different Text Analytics tools?

Last February 9th we presented our webinar “Classification, topic extraction, clustering… When to use the different Text Analytics tools?”. Thank you all for your interest.

During the session we covered the following agenda:

  • An introduction to Text Analytics.
  • Which application scenarios can benefit most from Text Analytics? Conversation analysis, 360° vision, intelligent content, knowledge management, e-discovery, regulatory compliance… Benefits and challenges.
  • What are the different Text Analytics functions useful for? Information extraction, categorization, clustering, sentiment analysis, morphosyntactic analysis… Description, demonstration and applications.
  • What features should a Text Analytics tool have? Is it all a question of precision? How to enhance quality?
  • A look at MeaningCloud’s roadmap.

IMPORTANT: The data analyzed during the webinar can be found in this tutorial.

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. The recording is available here.)
Continue reading


Text Analytics & MeaningCloud 101

One of the questions we get most often at our helpdesk is how to apply the text analytics functionalities that MeaningCloud provides to specific scenarios.

Users know they want to incorporate text analytics into their processes but are not sure how to translate their business requirements into something they can integrate into their pipeline.

If you add the fact that each provider has a different name for the products they offer to carry out specific text analytics tasks, it becomes difficult not just to get started, but even to know exactly what you need for your scenario.


In this post, we are going to explain what our different products are used for, the NLP (Natural Language Processing) tasks they are tied to, the added value they provide, and the requirements they fulfill.

[This post was last updated in October 2018 to include our new functionalities.]
Continue reading


Classification, topic extraction, clustering… When to use the different Text Analytics tools? (webinar)

How to leverage Text Analytics technology for your business

The most valuable information for organizations is hidden in unstructured texts (documents, contact center interactions, social conversations, etc.). Text Analytics helps us structure such data and turn it into useful information. But which Text Analytics tools are the most appropriate for each case? When should I use information extraction, categorization, or clustering? Which applications can benefit most from Text Analytics? What are the challenges?

Register for this MeaningCloud webinar on Wednesday, February 8th at 9:00 PDT and discover answers to these and other questions through practical examples.

UPDATE: this webinar has already taken place. See the recording here.

(This webinar was also held in Spanish; see the recording here.)


Automatic IAB tagging enables semantic ad targeting

Our Text Classification API supports IAB’s standard contextual taxonomy, enabling content to be tagged in compliance with this model at high volume and speed, and facilitating participation in the new online advertising ecosystem. The result: ads served in the most appropriate context, with higher performance and brand protection for advertisers.
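As a minimal sketch, tagging a text against the IAB taxonomy is a single HTTP call to the Text Classification API. The endpoint version (`class-1.1`) and model name (`IAB_en`) below are taken from MeaningCloud’s documentation at the time of writing and may differ for your account, so check the current API reference:

```python
# Sketch of IAB tagging via MeaningCloud's Text Classification API.
# Endpoint version and model identifier are assumptions; verify them
# against the current API documentation.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.meaningcloud.com/class-1.1"

def build_request(text, api_key, model="IAB_en"):
    """Form-encode the request body for a classification call."""
    return urllib.parse.urlencode({
        "key": api_key,    # your MeaningCloud license key
        "model": model,    # IAB contextual taxonomy, English
        "txt": text,
    }).encode()

def iab_tag(text, api_key):
    """POST the text and return the list of assigned IAB categories."""
    with urllib.request.urlopen(API_URL, data=build_request(text, api_key)) as r:
        return json.load(r).get("category_list", [])
```

The same pattern works for batch tagging: loop over your content items and collect each `category_list`.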

What is IAB’s contextual classification and what is it good for?

The IAB QAG contextual taxonomy was initially developed by the Interactive Advertising Bureau (IAB) as the center of its Quality Assurance Guidelines program, whose aim was to promote the advertised brands’ safety, assuring advertisers that their ads would not appear in a context of inappropriate content. The QAG program provided certification opportunities for all kinds of agents in the digital advertising value chain, from ad networks and exchanges to publishers, supply-side platforms (SSPs), demand-side platforms (DSPs), and agency trading desks (ATDs).

The Quality Assurance Guidelines serve as a self-regulation framework to guarantee advertisers that their brands are safe, to enhance advertisers’ control over the placement and context of their ads, and to offer transparency to the marketplace by standardizing the information flowing among agents. All this is achieved by providing a clear, common language that describes the characteristics of the advertising inventory and of transactions across the advertising value chain.

Essentially, the contextual taxonomy serves to tag content. It is made up of two standard tiers: Tier 1 specifies the general category of the content, and Tier 2 a set of subcategories nested under that main category. A third tier (or more) can be defined by each organization. The following pictures represent those standard tiers.
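To make the two-tier nesting concrete, it can be modeled as a simple mapping from each Tier 1 category to its Tier 2 subcategories. The category names below are examples drawn from the public IAB QAG taxonomy; the selection is illustrative, not exhaustive:

```python
# Illustrative sketch of the two standard IAB QAG tiers: each Tier 1
# category groups a set of Tier 2 subcategories. Only a few example
# categories are shown.
IAB_TIERS = {
    "Arts & Entertainment": ["Books & Literature", "Music", "Television"],
    "Automotive": ["Auto Parts", "Convertible", "Motorcycles"],
    "Travel": ["Adventure Travel", "Air Travel", "Hotels"],
}

def tier1_for(subcategory):
    """Return the Tier 1 parent of a Tier 2 subcategory, or None."""
    for parent, children in IAB_TIERS.items():
        if subcategory in children:
            return parent
    return None
```

A Tier 3 (and beyond) would simply extend this nesting one level deeper, with organization-defined labels.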
Continue reading


A sentiment analysis entirely tailored to your needs with our new customization tool

Adaptation to the domain is what makes the difference between a good sentiment analysis and an exceptional one. Until now, adapting MeaningCloud’s sentiment analysis to your domain meant either using personal dictionaries (to create new entities and concepts that the Sentiment Analysis API employed in its aspect-based analysis) or asking our Professional Services Department to develop a tailor-made sentiment model.

With the release of Sentiment Analysis 2.1, we have incorporated a new customization tool designed to facilitate the creation of personal sentiment models. This tool makes full use of our Natural Language Processing technology so that you can autonomously develop, without programming, powerful sentiment analysis engines tailored to your needs.

Other sentiment analysis customization tools available on the market mostly let you define “bags of words” with either positive or negative polarity. Our tools go much further, enabling you to:

  • Define the role of a word as a polarity vector (container, negator, modifier), using lemmas to easily cover all the possible variants of each word
  • Specify particular cases of a word’s polarity, depending on the context in which it appears or on its morphosyntactic function
  • Define multiword expressions as priority elements in the evaluation of polarity
  • Manage how these custom polarity models complement or replace the general dictionaries of each language.

Continue reading


Sentiment Analysis 2.1: Migration guide

We have released a new version of our sentiment analysis API, Sentiment Analysis. In Sentiment Analysis 2.1:

  • We’ve changed how the sentiment model is sent in order to enable the use of custom sentiment models across all the APIs that support sentiment analysis.
  • Support for analyzing documents and URLs has been added.
  • A configurable interface language has been added to improve multilingual analyses.

As you can see, this is a minor version upgrade, so the migration process will be fast and painless. In this post, we explain what you need to know to migrate your applications from Sentiment Analysis 2.0 to Sentiment Analysis 2.1.
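As a minimal sketch of the new request shape, the sentiment model is now sent as its own parameter, separate from the language, so a custom model can be reused across APIs. The endpoint version and parameter names (`key`, `lang`, `txt`, `model`) follow MeaningCloud’s documentation at the time of writing; verify them against the current API reference:

```python
# Sketch of a Sentiment Analysis 2.1 request over plain HTTP.
# Parameter names are assumptions based on the documentation at the
# time of writing.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.meaningcloud.com/sentiment-2.1"

def build_request(text, api_key, lang="en", model="general"):
    """Form-encode the request body. The sentiment model (`model`) is
    sent separately from the language (`lang`), which is what lets
    custom models be shared across sentiment-capable APIs."""
    return urllib.parse.urlencode({
        "key": api_key,
        "lang": lang,
        "model": model,   # "general" or the name of your custom model
        "txt": text,
    }).encode()

def polarity(text, api_key):
    """POST the text and return the global polarity tag of the response."""
    with urllib.request.urlopen(API_URL, data=build_request(text, api_key)) as r:
        return json.load(r).get("score_tag")
```

Swapping `model="general"` for the name of a custom sentiment model is the only change needed to use your own model.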
Continue reading


New release of MeaningCloud

We have just published a new release of MeaningCloud that affects the Topics Extraction, Lemmatization, POS and Parsing, and Text Classification APIs. Although there are several new features in terms of functionality and parameters, the most important aspect of this release lies under the hood: a refactoring of the way concept-type topics are handled internally, much more in line with the use of other semantic resources. This lays the foundations for better performance and new features related to the extraction of this type of information. Stay tuned for great improvements in this area in future releases.

The other two main thrusts of this release are the enrichment of the morphosyntactic analysis with information extraction and sentiment analysis elements (enabling new, richer types of analyses that combine the text’s structure with topics and polarity) and a new predefined classification model.

Here are some details about the developments in the different APIs:

Continue reading


Lemmatization, PoS, Parsing 2.0: Migration guide

We have released a new version of our core linguistic analyzer: Lemmatization, PoS and Parsing. In Lemmatization, PoS and Parsing 2.0:

  • More analysis possibilities have been included to allow you to combine a complete morphosyntactic analysis with other types of analysis such as Sentiment Analysis and Topics Extraction.
  • Configuration options have been changed to provide more flexibility in the analyses and to make the options available more understandable.
  • We’ve refactored our code to:
    • Improve the quality of concept/keyword extraction.
    • Make the use and traceability of user dictionaries easier and more flexible.
    • Offer a more complete integrated analysis for complex scenarios where the standard output is not enough.
  • A new type of topic, quantity expressions, has been added to cover a specific type of information that was hard to obtain with previous versions.
  • Some fields in the output have been modified, either to give them more appropriate names or to make them easier to use and understand.
  • Some use modes have been retired because the information they provided was redundant with what a morphosyntactic analysis already gives.

All these improvements mean the migration process is not as fast as it would be for a minor version. Here is what you need to know to migrate your applications from Lemmatization, PoS and Parsing 1.2 to Lemmatization, PoS and Parsing 2.0.
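For orientation, the basic request shape stays the same across versions. The sketch below is hypothetical: the endpoint name (`parser-2.0`) and the parameters (`key`, `txt`, `lang`) follow the naming pattern of MeaningCloud’s other APIs, and the options that enable the combined sentiment/topics analysis are documented in the official API reference:

```python
# Hypothetical sketch of a Lemmatization, PoS and Parsing 2.0 request.
# Endpoint name and parameter names are assumptions based on the
# product's versioning and the pattern of MeaningCloud's other APIs.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.meaningcloud.com/parser-2.0"

def build_request(text, api_key, lang="en"):
    """Form-encode the basic body for a morphosyntactic analysis call."""
    return urllib.parse.urlencode({
        "key": api_key,
        "lang": lang,
        "txt": text,
    }).encode()

def parse(text, api_key):
    """POST the text and return the full JSON analysis of the response."""
    with urllib.request.urlopen(API_URL, data=build_request(text, api_key)) as r:
        return json.load(r)
```

If you are migrating from 1.2, the checklist above (renamed output fields, retired use modes, the new quantity-expression topics) is what to review against the response this call returns.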
Continue reading