Category Archives: Deep Semantic Analytics

IAB Taxonomy Level 3 now available in our Deep Categorization API

IAB - Interactive Advertising Bureau

Digital marketing is rapidly becoming a fundamental pillar of practically every business plan. Methods keep being refined, and the connection between brand and user is expected to become ever more precise: a related advertisement is no longer sufficient; it must now appear at the right time and in the right place. This is where categorization proves to be an exceedingly useful tool.

That is why, at MeaningCloud, we have improved our English IAB categorization model, which is integrated in our Deep Categorization API, by:

  • Adding a third level of content taxonomy to the hierarchy of categories (IAB Taxonomy Level 3).
  • Improving the precision of pre-existing categories.
  • Including the unique identifiers defined by IAB itself for each of the categories.
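To picture how the updated model would be used, here is a minimal sketch of preparing a call to the Deep Categorization API. The endpoint URL, the parameter names (`key`, `txt`, `model`), and the model name `IAB_en` are assumptions based on typical MeaningCloud API conventions; check the official API documentation for the exact values.

```python
# Sketch of building a Deep Categorization request with an IAB model.
# Endpoint URL, parameter names, and the model name "IAB_en" are assumptions.
import urllib.parse

API_URL = "https://api.meaningcloud.com/deepcategorization-1.0"  # assumed endpoint

def build_request(api_key: str, text: str, model: str = "IAB_en") -> dict:
    """Build the form payload for a Deep Categorization call."""
    return {
        "key": api_key,   # your MeaningCloud license key
        "txt": text,      # the content to categorize
        "model": model,   # e.g. the English IAB model (assumed name)
    }

def encode(payload: dict) -> str:
    """URL-encode the payload for use as an HTTP POST body."""
    return urllib.parse.urlencode(payload)

payload = build_request("YOUR_API_KEY", "Tips for planning a family road trip")
body = encode(payload)
```

Sending `body` in a POST to `API_URL` would return the matched categories, including, with this update, their third-level labels and IAB identifiers.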



Recorded webinar: Deep Text Analytics for More Scalable and Valuable Market Intelligence

Thank you for your interest in our webinar “Use Deep Text Analytics to Achieve More Scalable and Valuable Market Intelligence” held on June 23rd.

We explained how deep text analytics automatically extracts detailed Market Intelligence information and enables applications that help you identify business opportunities and capture value from your market much more effectively.

In the session we covered these items:

  • Introduction to Market Intelligence
    • Benefits and limitations
  • Applying deep text analytics
    • Integrating multiple sources
    • Discovering business opportunities
    • Understanding our customers in depth
    • Analyzing the environment
    • Detecting signs of growth

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. The recording is available here.)



Communication during the Coronavirus (I): Thematic analysis in Spanish digital news media

The obvious priority during this pandemic is to cure the sick, to prevent new cases from surfacing, and to ensure economic and social measures are in place to help the people and businesses most affected overcome the current situation. Even so, in the near future, the coronavirus-related content generated by the media and by social network users will undoubtedly become an object of research for numerous disciplines, such as sociology, philology, linguistics, audio-visual communication, and political science, to name a few.

At MeaningCloud we want to do our bit in this area, by applying our experience and our Text Analytics solutions to analyze the enormous volume of information in natural language, in Spanish and in other languages, in Spain and in other countries, given that, unfortunately, this is a global crisis.

This first article in the series focuses on the thematic analysis of the content generated in Spanish by digital media outlets in Spain over the last month, how it has evolved over that period, and the editorial positioning of the main Spanish media outlets.

These other articles (only available in Spanish at the moment) analyze conversation topics on Twitter in Spain (both from the hashtags and general-topics perspective and by applying a specific thematic categorization) and offer a linguistic analysis of the presidential speeches related to this crisis.



NLP technologies: state of the art, trends and challenges

This post presents MeaningCloud’s vision on the state of Natural Language Processing technology by the end of 2019, based on our work with customers and research projects.

NLP technology has practically achieved human-level quality (in some tasks even surpassing it), mainly thanks to advances in machine learning and deep learning techniques, which make it possible to exploit large sets of training data to build language models, but also due to improvements in core text processing engines and the availability of semantic knowledge bases.



Recorded webinar: Solve the most wicked text categorization problems

Thank you all for your interest in our webinar “A new tool for solving wicked text categorization problems”, delivered on June 19th, where we explained how to use our Deep Categorization customization tool to cope with text classification scenarios where traditional machine learning technologies show their limitations.

During the session we covered these items:

  • Developing categorization models in the real world
  • Categorization based on pure machine learning
  • Deep Categorization API. Pre-defined models and vertical packs
  • The new Deep Categorization Customization Tool. Semantic rule language
  • Case Study: development of a categorization model
  • Deep Categorization vs. Text Classification: when to use each
  • Agile model development process. Combination with machine learning

IMPORTANT: this article is a tutorial based on the demonstration that we delivered, and it includes the data to analyze and the results of the analysis.

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. The recording is available here.)


MeaningCloud participates in the first Global Legal Hackathon

Global Legal Hackathon

The first phase of the first Global Legal Hackathon (GLH) was held February 23-25, 2018. David Fisher, organizer of the event and founder of the legal technology company Integra Ledger, estimates that the GLH will have a great impact, and the numbers back him up: global participation nearly matched that of an earlier event organized by NASA, and it has been considered the largest hackathon organized to date. For 54 hours, more than 40 cities across six continents participated simultaneously. The teams were made up of engineers, jurists, lawyers, and business professionals who all worked toward a common goal: to lay the foundations for legal projects that can improve legal work or access to legal information through an app, program, or software.


Our experience on Adverse Drug Reactions (ADR) identification at TAC2017

MeaningCloud and the LaBDA research group were present at the TAC 2017 conference, held on November 13th-14th at NIST headquarters in Washington. In the Text Analysis Conferences, research groups from all over the world are invited to develop software systems that tackle text analytics problems. This year, one task was devoted to the automatic identification of adverse drug reactions (ADRs) appearing in drug labels, including features defining the ADR, such as its severity or whether it is characteristic of a drug class rather than of a single drug. A specific subtask consisted of linking the identified ADRs with their corresponding MedDRA codes and lexical terms. More than 10 research teams took part, all of them applying some kind of deep learning approach to the problem. The results show that it is possible to reach 85% accuracy when identifying ADRs.

We were delighted to present our text analytics-based system for ADR identification in drug labels, which combines natural language processing and machine learning algorithms. The system was built as a joint effort between MeaningCloud and the LaBDA research group at the Universidad Carlos III de Madrid. Identifying ADRs is a basic task for pharmacovigilance, which is why the Food and Drug Administration (FDA) is involved in the funding and definition of the ADR identification tasks within the Text Analysis Conferences. We learned a lot during those days (e.g., that a BiLSTM deep neural network is the best choice for this purpose) and shared pleasant moments with our colleagues in Washington. We hope to attend next year’s edition, which will focus on the extraction of drug-drug interactions (DDI), another interesting task aimed at detecting situations where the use of a combination of drugs may lead to an adverse effect.
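The MedDRA-linking subtask mentioned above can be pictured as a normalized lexicon lookup: once an ADR mention is identified in a drug label, it is mapped to a MedDRA term and code. The sketch below is a toy illustration only; the lexicon entries and codes are hypothetical stand-ins, not the actual TAC systems or the real MedDRA dictionary.

```python
# Toy illustration of the MedDRA-linking subtask: map an ADR mention found in a
# drug label to a code via a normalized lexicon lookup.
# The terms and codes below are hypothetical stand-ins for the real MedDRA data.
TOY_MEDDRA = {
    "nausea": "10028813",
    "headache": "10019211",
    "dizziness": "10013573",
}

def normalize(mention):
    """Lowercase and strip the mention so trivial surface variants still match."""
    return mention.strip().lower()

def link_adr(mention):
    """Return the lexicon code for an ADR mention, or None if out of lexicon."""
    return TOY_MEDDRA.get(normalize(mention))
```

A real system would of course add fuzzy and synonym matching (or a learned ranker) on top of this exact-match baseline, since label text rarely matches dictionary terms verbatim.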

We now have a dedicated business exclusively focused on the health and pharmaceutical sectors

Konplik.Health begins operations with the health-related assets from MeaningCloud, including its leading natural language processing, deep semantic analysis, AI platform, and adaptations specific to the life sciences.


Recorded webinar: Why You Need Deep Semantic Analytics

Last July 13th we delivered our webinar “Why You Need Deep Semantic Analytics”, where we explained how to achieve a deep, automatic understanding of complex documents. Thank you all for your interest.

During the session we covered these items:

  • Automatic understanding of unstructured documents.
  • What is Deep Semantic Analytics? Comparison with conventional text analytics.
  • Where it can be applied.
  • Case study: due diligence process.
  • Ideal features of a Deep Semantic Analytics solution.
  • MeaningCloud Roadmap in Deep Semantic Analytics.

IMPORTANT: you can find a more narrative explanation of some of the items we covered, including the due diligence case study, in this article.

Interested? Here are the presentation and the recording of the webinar.

(We also presented this webinar in Spanish. The recording is available here.)


Deep Semantic Analytics: A Case Study

Scenarios that can benefit from unstructured content analysis are becoming more and more frequent: from industry or company news to processing contracts or medical records. However, as we know, this content does not lend itself to automatic analysis.

Text analytics has come to meet this need, providing powerful tools that allow us to discover topics, mentions, polarity, etc. in free-form text. This ability has made it possible to achieve an initial level of automatic understanding and analysis of unstructured documents, which has empowered a generation of context-sensitive semantic applications in areas such as Voice of the Customer analysis or knowledge management.



Why you need Deep Semantic Analytics (webinar)

Achieve a deep, automated understanding of complex documents

Conventional Text Analytics enables a first level of automatic understanding of unstructured content through its ability to extract mentions of entities and concepts, assign general categories, or identify the polarity of the opinions and facts that appear in the text. However, these isolated information elements do not reflect the wealth of information these documents provide, and they impose limitations when it comes to finding, relating, or analyzing documents automatically.

Deep Semantic Analytics represents a step beyond conventional text analytics by providing features such as snippet-level granular categorization, detection of complex patterns, and extraction of semantic relationships between information elements in the document.
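To make "snippet-level granular categorization" concrete, here is a deliberately tiny sketch: a document is split into snippets, and each snippet (rather than the document as a whole) receives its own categories. The keyword rules and category names are hypothetical stand-ins for a real semantic rule language such as the one used in Deep Categorization.

```python
import re

# Hypothetical keyword rules standing in for a real semantic rule language.
RULES = {
    "Finance": {"revenue", "profit", "loan"},
    "Legal": {"contract", "clause", "liability"},
}

def split_snippets(text):
    """Naive snippet segmentation: split on sentence-ending punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def categorize_snippet(snippet):
    """Assign every category whose keywords appear in the snippet."""
    words = set(re.findall(r"[a-z]+", snippet.lower()))
    return sorted(cat for cat, kws in RULES.items() if words & kws)

doc = "The contract limits liability. Quarterly revenue grew."
labels = [(s, categorize_snippet(s)) for s in split_snippets(doc)]
# Each snippet gets its own categories: the first matches "Legal",
# the second matches "Finance".
```

The point of the sketch is the granularity: a document-level categorizer would merge both topics into one label set, whereas snippet-level categorization keeps the legal clause and the financial statement separately addressable for downstream pattern detection and relation extraction.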
