MeaningCloud achieves ISO/IEC 27001 certification

In MeaningCloud, we know how important it is to manage and ensure information security, even more so for a platform that processes all kinds of texts (including texts with sensitive information) to help you extract insightful information from them. For this reason, at the end of last year we made it a priority to confirm and improve our good practices by obtaining ISO 27001 certification, which we achieved on our first attempt in February, after an extensive audit process carried out by RINA.

For those unfamiliar with it, ISO/IEC 27001 is an information security standard that specifies the requirements for a management system intended to bring information security under explicit management control.

Organizations that meet the requirements may be certified by an accredited certification body following successful completion of an audit. The standard is published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) under a joint subcommittee.


The certification obtained applies to MeaningCloud in both its SaaS and on-premises versions, and covers all of its stages: development, maintenance, and deployment.


MeaningCloud adheres to the Privacy Shield Framework


At MeaningCloud, privacy is a major concern. That is why we have adhered to the Privacy Shield Framework: to guarantee our EU and Swiss customers full compliance with European data privacy regulations, as established by the EU General Data Protection Regulation (GDPR).

What is the EU-US Privacy Shield?

The EU–US Privacy Shield is a framework for regulating transatlantic exchanges of personal data for commercial purposes between the European Union and the United States. One of its objectives is to enable US companies to more easily receive personal data from EU entities under EU privacy laws meant to protect European Union citizens. The EU–US Privacy Shield is a replacement for the International Safe Harbor Privacy Principles, which were declared invalid by the European Court of Justice in October 2015.

Continue reading


New Release: Financial Industry Vertical Pack

Some text analytics scenarios need more than general-purpose resources to get the results you need. If you are familiar with MeaningCloud, you’ll know that resource customization is one of our main features and greatest advantages. The parametrization available in the different analyses we offer enables you to adapt our tools to exactly the type of analysis you want. You can do this in two ways: by using any of our predefined resources or by creating your own with our customization consoles.

Along these lines, we are happy to announce that we have released a new vertical pack for the financial industry. This pack allows you to analyze your financial content and interpret it according to a standard vocabulary: FIBO, the Financial Industry Business Ontology.
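
For illustration, here is a minimal sketch of how a call to the pack might look over plain HTTP, assuming it is exposed through the Deep Categorization API. Only the endpoint and the key/model/txt parameters come from the public API documentation; the model name “Finance_en” is a hypothetical placeholder, so check the pack’s documentation for the real identifier:

    # A minimal sketch, assuming the financial pack is exposed as a Deep
    # Categorization model. "Finance_en" is a hypothetical model name.
    import requests

    API_URL = "https://api.meaningcloud.com/deep-categorization-1.0"

    def categorize_financial_text(text, api_key, model="Finance_en"):
        payload = {
            "key": api_key,  # your MeaningCloud license key
            "model": model,  # hypothetical identifier for the financial model
            "txt": text,
        }
        response = requests.post(API_URL, data=payload)
        response.raise_for_status()
        # category_list holds the categories detected in the text
        return response.json().get("category_list", [])

    for category in categorize_financial_text(
        "The central bank raised interest rates by 25 basis points.",
        api_key="YOUR_API_KEY",
    ):
        print(category["label"])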


Continue reading


How Artificial Intelligence makes RPA smarter: two use cases


Artificial Intelligence and RPA

Many organizations could gain huge operational efficiencies by combining Artificial Intelligence and RPA (Robotic Process Automation).

In a previous post (The leading role of Natural Language Processing in Robotic Process Automation) we introduced the subject of NLP in RPA. In this post, we look at two use cases where Natural Language Processing (also known as Text Analytics), integrated with RPA/BPM software suites, is mature enough to solve typical insight extraction problems conveniently and cost-effectively.

Continue reading


Liberty Shared: how an NGO uses Text Analytics

[EDITOR’S NOTE: This is a guest post by Xinyi Duan, Director of Technology and Data Research at Liberty Shared.]

Liberty Shared is committed to ensuring that the experiences of vulnerable and exploited workers around the world are represented in our markets, legal systems, and information infrastructures. To do this, we have to take on the daunting task of wrangling some of the messiest data there is: data that has previously gone un-mined and unstructured.

MeaningCloud has enabled us to quickly and effectively deploy NLP techniques to tackle these problems, and it works just as easily for team members who already use statistical NLP models as for those without that technical background. It is also powerful enough to grow with our programs: as we learn more about the problem, it is easy to update the models to reflect our learnings.

Continue reading


We The Humans: Artificial Intelligence for social good

MeaningCloud partners with the think tank “We the Humans”, sponsoring the challenge “Artificial Intelligence for social good”.

The mission of “We the Humans” consists of:

  • Encouraging the social debate about the correct use and development of Artificial Intelligence.
  • Bringing these concerns to the public agenda.
  • Supporting organizations in the development and adoption of ethical AI.


Continue reading


Invoking the MeaningCloud Sentiment Analysis API from Minsait’s Onesait Platform

Minsait’s Onesait Platform is an IoT & Big Data platform designed to facilitate and accelerate the construction of new systems and digital solutions, and thus drive business transformation and disruption.

Minsait has published a post about the procedure to invoke an external API from the integrated flow engine of the Onesait Platform (formerly known as Sofia2).


The post, titled “How to invoke an external REST API from the Sofia2 Flow Engine?” (in Spanish), uses the integration of the MeaningCloud Sentiment Analysis API as its example.

The article illustrates one of the strengths of MeaningCloud: how easy it is to integrate its APIs into any system or process.
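
As a reference outside the flow engine, this is roughly what the underlying request looks like: a plain HTTP POST, which is why any tool that can issue one (including the Onesait flow engine) can consume the API. The endpoint and the key/lang/txt parameters are those of the public Sentiment Analysis API:

    # A minimal sketch of calling the Sentiment Analysis API (v2.1) directly.
    import requests

    response = requests.post(
        "https://api.meaningcloud.com/sentiment-2.1",
        data={
            "key": "YOUR_API_KEY",  # MeaningCloud license key
            "lang": "es",           # language of the text to analyze
            "txt": "El servicio ha sido excelente, repetiremos sin duda.",
        },
    )
    response.raise_for_status()
    result = response.json()
    # score_tag ranges from N+ (very negative) to P+ (very positive)
    print(result["score_tag"], result.get("confidence"))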


Recorded webinar: Solve the most wicked text categorization problems

Thank you all for your interest in our webinar “A new tool for solving wicked text categorization problems”, which we delivered on June 19th. In it, we explained how to use our Deep Categorization customization tool to cope with text classification scenarios where traditional machine learning technologies present limitations.

During the session we covered these items:

  • Developing categorization models in the real world
  • Categorization based on pure machine learning
  • Deep Categorization API. Pre-defined models and vertical packs
  • The new Deep Categorization Customization Tool. Semantic rule language
  • Case Study: development of a categorization model
  • Deep Categorization vs. Text Classification: when to use one or the other
  • Agile model development process. Combination with machine learning

IMPORTANT: this article is a tutorial based on the demonstration we delivered, and it includes the data to analyze and the results of the analysis.

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. You can find the recording here.)
Continue reading


Tutorial: create your own deep categorization model

As you probably know by now if you follow us, we’ve recently released our new customization console for deep categorization models.

Deep Categorization models are the resources used by our Deep Categorization API. This API combines the morphosyntactic and semantic information obtained from our core engines (which support sentiment analysis as well as resource customization) with a flexible rule language that’s both powerful and easy to understand. This enables accurate categorization in scenarios where reaching a high level of linguistic precision is key to obtaining good results.
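
To make this concrete, here is a minimal sketch of a direct call to the Deep Categorization API. We use “VoC-Generic_en” as an illustrative predefined model name; once you build your own model with the console, you would reference it by the name you gave it:

    # A minimal sketch of calling the Deep Categorization API over HTTP.
    import requests

    response = requests.post(
        "https://api.meaningcloud.com/deep-categorization-1.0",
        data={
            "key": "YOUR_API_KEY",      # MeaningCloud license key
            "model": "VoC-Generic_en",  # illustrative predefined model name
            "txt": "The app keeps crashing every time I try to log in.",
        },
    )
    response.raise_for_status()
    for category in response.json().get("category_list", []):
        print(category["label"])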

In this tutorial, we are going to show you how to create your own model using the customization console: we will define a model that suits our needs and see how to express the criteria we want through the available rule language.

The scenario we have selected is a very common one: support ticket categorization. We have extracted (anonymized) tickets from our own support ticketing system, and we are going to create a model to automatically categorize them. As we have done in other tutorials, we are going to use our Excel add-in to quickly analyze the texts. You can download the spreadsheet here if you want to follow along. If you don’t use Microsoft Excel, you can use the Google Sheets add-on instead.

The spreadsheet contains two sheets with two different data sets, the first one with 30 entries, the second one with 20. For each data set, we have included an ID, the subject and the description of the ticket, and then a manual tagging of the category it should be categorized into. We’ve also added an additional column that concatenates the subject and the description, as we will use both fields combined in the analysis.
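
If you prefer a script to the add-in, the preprocessing step looks like this in Python with pandas. The file and column names below are assumptions based on the description above, so adjust them to match the actual spreadsheet:

    # A sketch of loading the first data set and building the combined
    # subject + description column used in the analysis. File and column
    # names are assumptions; adapt them to the downloaded spreadsheet.
    import pandas as pd

    tickets = pd.read_excel("support_tickets.xlsx", sheet_name=0)  # first sheet, 30 entries
    tickets["full_text"] = (
        tickets["Subject"].fillna("") + ". " + tickets["Description"].fillna("")
    )
    print(tickets[["ID", "full_text", "Category"]].head())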

Before starting, you need to register at MeaningCloud (if you haven’t already) and download and install the Excel add-in on your computer. Here you can read a detailed step-by-step guide to the process. Let’s get started! Continue reading