Tag Archives: semantic technology

Posts related to semantic technology

Recorded webinar: Why You Need Deep Semantic Analytics

On July 13th we delivered our webinar “Why You Need Deep Semantic Analytics”, in which we explained how to achieve a deep, automatic understanding of complex documents. Thank you all for your interest.

During the session we covered these items:

  • Automatic understanding of unstructured documents.
  • What is Deep Semantic Analytics? Comparison with conventional text analytics.
  • Where it can be applied.
  • Case study: due diligence process.
  • Ideal features of a Deep Semantic Analytics solution.
  • MeaningCloud Roadmap in Deep Semantic Analytics.

IMPORTANT: you can find a more detailed, article-form explanation of some of the topics we covered, including the due diligence case study, in this article.

Interested? Here are the presentation and the recording of the webinar.

(We also delivered this webinar in Spanish. You can find the recording here.)
Continue reading


Deep Semantic Analytics: A Case Study

Scenarios that can benefit from unstructured content analysis are becoming more and more frequent: from industry or company news to processing contracts or medical records. However, as we know, this content does not lend itself to automatic analysis.

Text analytics has come to meet this need, providing powerful tools that allow us to discover topics, mentions, polarity, etc. in free-form text. This ability has made it possible to achieve an initial level of automatic understanding and analysis of unstructured documents, which has empowered a generation of context-sensitive semantic applications in areas such as Voice of the Customer analysis or knowledge management.

Continue reading


Why you need Deep Semantic Analytics (webinar)

Achieve a deep, automated understanding of complex documents

Conventional Text Analytics enables a first level of automatic understanding of unstructured content through its ability to extract mentions of entities and concepts, assign general categories, or identify the polarity of the opinions and facts that appear in the text. However, these isolated information elements do not reflect the wealth of information these documents provide, and they impose limitations when it comes to finding, relating or analyzing documents automatically.

Deep Semantic Analytics represents a step beyond conventional text analytics by providing features such as snippet-level granular categorization, detection of complex patterns, and extraction of semantic relationships between information elements in the document.

Continue reading


Automatic IAB tagging enables semantic ad targeting

Our Text Classification API supports IAB’s standard contextual taxonomy, enabling content tagging in compliance with this model at high volume and speed, and making it easier to participate in the new online advertising ecosystem. The result: ads are served in the most appropriate context, with higher performance and better brand protection for advertisers.
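As a rough illustration of how this tagging could be automated, here is a minimal Python sketch that sends a piece of content to the Text Classification API with an IAB model. The endpoint version, parameter names and the “IAB_en” model identifier are assumptions for the sake of the example; check the API documentation for the exact values for your account.

```python
# Sketch: tagging content against the IAB taxonomy with the Text Classification API.
# Endpoint, parameter names and model id are assumptions, not confirmed values.
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"                      # hypothetical placeholder
ENDPOINT = "https://api.meaningcloud.com/class-1.1"    # assumed endpoint version

text = "The new electric SUV was unveiled at the Geneva Motor Show."

response = requests.post(ENDPOINT, data={
    "key": API_KEY,
    "txt": text,
    "model": "IAB_en",   # assumed identifier for the English IAB model
})
response.raise_for_status()

# Each returned category would correspond to an IAB tier-1/tier-2 label.
for category in response.json().get("category_list", []):
    print(category.get("code"), category.get("label"))
```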

What is IAB’s contextual classification, and what is it good for?

The IAB QAG contextual taxonomy was initially developed by the Interactive Advertising Bureau (IAB) as the centerpiece of its Quality Assurance Guidelines program, whose aim was to promote brand safety by assuring advertisers that their ads would not appear next to inappropriate content. The QAG program provided certification opportunities for all kinds of agents in the digital advertising value chain, from ad networks and exchanges to publishers, supply-side platforms (SSPs), demand-side platforms (DSPs), and agency trading desks (ATDs).

The Quality Assurance Guidelines serve as a self-regulation framework that guarantees advertisers that their brands are safe, enhances their control over the placement and context of their ads, and offers transparency to the marketplace by standardizing the information flowing among agents. All of this is achieved by providing a clear, common language that describes the characteristics of the advertising inventory and of the transactions across the advertising value chain.

Essentially, the contextual taxonomy serves to tag content. It is made up of two standard tiers – Tier 1, which specifies the general category of the content, and Tier 2, a set of subcategories nested under that main category – plus a third tier (or more) that each organization can define for itself. The following pictures represent those standard tiers.
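As a complement to those pictures, here is a tiny illustrative fragment of the tier structure expressed as a Python data structure. The codes and labels are taken from the public IAB QAG taxonomy, but check the current version for the authoritative list.

```python
# Illustrative fragment of the two standard tiers: a tier-1 (general) category
# with a couple of its nested tier-2 subcategories. Verify codes and labels
# against the current IAB taxonomy release.
iab_tiers = {
    "IAB2": {                       # Tier 1: general content category
        "label": "Automotive",
        "tier2": {                  # Tier 2: subcategories nested under it
            "IAB2-1": "Auto Parts",
            "IAB2-3": "Buying/Selling Cars",
        },
    },
}

# A third tier (and beyond) could be defined by each organization, for example
# by nesting its own, more specific labels under "IAB2-1".
```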
Continue reading


Semantic Publishing: a Case Study for the Media Industry

Semantic Publishing at Unidad Editorial: a Client Case Study in the Media Industry 

Last year, the Spanish media group Unidad Editorial deployed a new CMS, developed in-house, for its integrated newsroom. Unidad Editorial is a subsidiary of the Italian RCS MediaGroup; it publishes some of the newspapers and magazines with the highest circulation in Spain, and also owns nationwide radio stations and a DTT license comprising four TV channels.

El Mundo newsroom

When a journalist adds a piece of news to the system, its content has to be tagged; this is one of the first steps in a workflow that ends with the delivery of the item in different formats, through different channels (print, web, tablet and mobile apps) and under different mastheads. After evaluating several providers’ solutions in the preceding months, the company decided that semantic tagging would be done with Daedalus’ text analytics technology. Semantic publishing included, in this case, the identification (with disambiguation) of named entities (people, places, organizations, etc.), time and money expressions, and concepts; classification according to the IPTC scheme (an international standard for the media industry, with around 1,400 classes organized in three levels); sentiment analysis; and so on.
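To give a feel for what this kind of tagging involves, the sketch below combines entity/concept extraction with IPTC classification using MeaningCloud’s current public APIs, which package the same Daedalus technology. The endpoints, parameters and the “IPTC_es” model identifier are assumptions for illustration, not the exact configuration deployed at Unidad Editorial.

```python
# Hypothetical sketch of semantic tagging for a news item: extract entities,
# concepts, time and money expressions, then classify under the IPTC scheme.
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"   # hypothetical placeholder
news_text = "El Real Madrid presentó ayer a su nuevo entrenador en el Bernabéu."

# 1) Topics Extraction: entities (e), concepts (c), time (t) and money (m) expressions
topics = requests.post("https://api.meaningcloud.com/topics-2.0", data={
    "key": API_KEY, "txt": news_text, "lang": "es", "tt": "ectm",
}).json()

# 2) Text Classification against the IPTC taxonomy (assumed Spanish model id)
iptc = requests.post("https://api.meaningcloud.com/class-1.1", data={
    "key": API_KEY, "txt": news_text, "model": "IPTC_es",
}).json()

tags = {
    "entities": [e.get("form") for e in topics.get("entity_list", [])],
    "concepts": [c.get("form") for c in topics.get("concept_list", [])],
    "iptc_categories": [c.get("label") for c in iptc.get("category_list", [])],
}
print(tags)
```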

Continue reading


Our new Semantic Publishing API is now available in MeaningCloud

This API allows you to produce and publish more valuable content, more quickly and at a lower cost

UPDATE: this API has been discontinued. Instead, use our Solution for Semantic Publishing, which features APIs such as Topics Extraction, Text Classification and Automatic Summarization.

At MeaningCloud we keep developing our roadmap and offering new vertical APIs, optimized for different industries and applications. We are pleased to announce that our Semantic Publishing solutions include a new API, designed especially for media, publishers and content providers in general.

It is a logical step for us: at MeaningCloud we have been collaborating for years with the most significant enterprises in these industries (PRISA, Unidad Editorial, Vocento, RTVE, lainformacion.com, etc.), and this is one of the markets where we are detecting the most demand and where our solutions are gaining the most traction.

The Semantic Publishing API incorporates the know-how we have developed working with these large companies and packages it in the form of semantic resources, processing pipelines and specific configurations for the most common applications and scenarios in this industry: archive management, content generation, customization of information products, etc.
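For a concrete example of one of the building blocks mentioned in the update above, here is a minimal sketch that produces an automatic summary of an article, the kind of step typically chained into an archive-management or content-curation pipeline. The endpoint and parameter names (“summarization-1.0”, “sentences”) are assumptions to verify against the current documentation.

```python
# Hypothetical sketch: automatic summarization of an article, e.g. to build
# abstracts for archive management. Endpoint and parameter names are assumptions.
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"   # hypothetical placeholder
article = "Full text of the article to be summarized goes here..."

summary = requests.post("https://api.meaningcloud.com/summarization-1.0", data={
    "key": API_KEY,
    "txt": article,
    "sentences": 3,   # assumed parameter: number of sentences in the summary
}).json()

print(summary.get("summary", ""))
```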

Continue reading


Recognizing entities in a text: not as easy as you might think!

Entity recognition: the engineering problem

As in every engineering endeavor, when you face the problem of automating the identification of entities (proper names: people, places, organizations, etc.) mentioned in a particular text, you should look for the right balance between quality (in terms of precision and recall) and cost, from the perspective of your goals. You may be tempted to compile a simple list of such entities and apply straightforward pattern matching techniques to identify a predefined set of entities appearing “literally” in a particular piece of news, in a tweet or in a (transcribed) phone call. If this solution is enough for your purposes (you can achieve high precision at the cost of low recall), then quality was clearly not among your priorities. However… what if you could add a bit of excellence to your solution, with no technological burden, for… free? If you are interested in this proposition, skip the following detailed technological discussion and go directly to the final section by clicking here.
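To make the trade-off concrete, here is a deliberately naive Python sketch of the “simple list plus pattern matching” approach described above; the gazetteer entries are of course hypothetical. It only finds literal, exact mentions, which is precisely why its recall is low: aliases, inflections, misspellings and previously unseen entities are all missed.

```python
# Naive gazetteer-based entity recognition: match a predefined list of entities
# literally against the text. High precision on exact mentions, low recall.
import re

# Hypothetical gazetteer of entities we care about
GAZETTEER = {
    "Barack Obama": "PERSON",
    "New York": "PLACE",
    "United Nations": "ORGANIZATION",
}

# One compiled pattern per entity; \b keeps matches on word boundaries,
# and longer names are tried first.
PATTERNS = [
    (re.compile(r"\b" + re.escape(name) + r"\b"), name, etype)
    for name, etype in sorted(GAZETTEER.items(), key=lambda kv: -len(kv[0]))
]

def recognize(text):
    """Return (surface form, type, offset) for every literal match."""
    hits = []
    for pattern, name, etype in PATTERNS:
        for match in pattern.finditer(text):
            hits.append((name, etype, match.start()))
    return sorted(hits, key=lambda h: h[2])

print(recognize("Barack Obama spoke at the United Nations in New York."))
```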

Continue reading