Tag Archives: semantic analysis

Posts related to semantic analysis

TASS 2018: Fostering Research on Semantic Analysis in Spanish

In 2018, MeaningCloud and the University of Jaén once again organized TASS, the Workshop on Semantic Analysis in Spanish, held at SEPLN (the International Conference of the Spanish Society for Natural Language Processing).

TASS logo

Over the years, research has extended to other tasks related to processing the semantics of text, with the aim of further improving natural language understanding systems. Apart from sentiment analysis, other tasks attracting the interest of the research community include stance classification, negation handling, rumor identification, fake news identification, open information extraction, argumentation mining, classification of semantic relations, and question answering for non-factoid questions, to name a few.

TASS 2018 was the 7th event of the series and was held in conjunction with the 34th International Conference of the Spanish Society for Natural Language Processing, in Seville (Spain), on September 18th, 2018. Four research tasks were proposed, and MeaningCloud sponsored this edition with prizes for the best systems in each task. A comprehensive description paper is (to be) published in the journal Procesamiento del Lenguaje Natural, vol. 62: TASS 2018: The Strength of Deep Learning in Language Understanding Tasks.

Continue reading


MeaningCloud participates in the first Global Legal Hackathon

Global Legal Hackathon

The first phase of the first Global Legal Hackathon (GLH) was held February 23-25, 2018. David Fisher, organizer of the event and founder of the legal technology company Integra Ledger, estimates that the GLH will have a great impact, and the numbers back him up: global participation in the GLH nearly matched that of an earlier event organized by NASA, and it has been considered the largest hackathon organized to date. For 54 hours, more than 40 cities across six continents participated simultaneously. The teams were made up of engineers, jurists, lawyers, and business people, all working toward a common goal: to lay the foundations for legal projects that improve legal work or access to legal information through an app, program, or piece of software. Continue reading


Recorded webinar: Why You Need Deep Semantic Analytics

On July 13th we delivered our webinar “Why You Need Deep Semantic Analytics”, in which we explained how to achieve a deep, automatic understanding of complex documents. Thank you all for your interest.

During the session we covered these items:

  • Automatic understanding of unstructured documents.
  • What is Deep Semantic Analytics? Comparison with conventional text analytics.
  • Where it can be applied.
  • Case study: due diligence process.
  • Ideal features of a Deep Semantic Analytics solution.
  • MeaningCloud Roadmap in Deep Semantic Analytics.

IMPORTANT: you can find a more detailed, narrative explanation of some of the items we covered, including the due diligence case study, in this article.

Interested? Here are the presentation and the recording of the webinar.

(We also delivered this webinar in Spanish. You can find the recording here.)
Continue reading


Deep Semantic Analytics: A Case Study

Scenarios that can benefit from unstructured content analysis are becoming more and more frequent: from industry or company news to contracts or medical records. However, as we know, this kind of content does not easily lend itself to automatic analysis.

Text analytics has come to meet this need, providing powerful tools that allow us to discover topics, mentions, polarity, etc. in free-form text. This ability has made it possible to achieve an initial level of automatic understanding and analysis of unstructured documents, which has empowered a generation of context-sensitive semantic applications in areas such as Voice of the Customer analysis or knowledge management.

Continue reading


Why you need Deep Semantic Analytics (webinar)

Achieve a deep, automated understanding of complex documents

Conventional text analytics enables a first level of automatic understanding of unstructured content through its ability to extract mentions of entities and concepts, assign general categories, or identify the polarity of the opinions and facts that appear in a text. However, these isolated information elements do not reflect the wealth of information these documents provide, and they impose limitations when it comes to finding, relating, or analyzing them automatically.

Deep Semantic Analytics represents a step beyond conventional text analytics by providing features such as snippet-level granular categorization, detection of complex patterns, and extraction of semantic relationships between information elements in the document.
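To make the contrast concrete, here is a purely illustrative sketch: the structures and field names below are hypothetical examples of the two levels of output, not actual MeaningCloud API responses.

```python
# Illustrative only: hypothetical structures contrasting the two levels of
# output discussed above; they are not real MeaningCloud API responses.

# Conventional text analytics: a flat bag of isolated information elements.
conventional_output = {
    "entities": ["Acme Corp", "John Smith", "New York"],
    "concepts": ["merger", "share price"],
    "categories": ["Business"],   # document-level category
    "polarity": "P",              # overall sentiment
}

# Deep semantic analytics: granular, snippet-level categories plus explicit
# relationships between the elements found in each snippet.
deep_output = {
    "snippets": [
        {
            "text": "Acme Corp agreed to acquire Beta Ltd for $2B.",
            "categories": ["M&A > Acquisition agreement"],
            "relations": [
                {"subject": "Acme Corp", "predicate": "acquires",
                 "object": "Beta Ltd", "attributes": {"amount": "$2B"}},
            ],
        },
    ],
}

# The relational structure is what makes results easy to query afterwards,
# e.g. "find every acquisition above $1B involving Acme Corp".
```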

Continue reading


Automatic IAB tagging enables semantic ad targeting

Our Text Classification API supports IAB’s standard contextual taxonomy, enabling content tagging in compliance with this model, in large volumes and at great speed, and easing participation in the new online advertising ecosystem. The result: ads are served in the most appropriate context, with higher performance and better brand protection for advertisers.

What is IAB’s contextual classification and what is it good for?

The IAB QAG contextual taxonomy was originally developed by the Interactive Advertising Bureau (IAB) as the core of its Quality Assurance Guidelines (QAG) program, whose aim was to promote brand safety by assuring advertisers that their ads would not appear alongside inappropriate content. The QAG program provided certification opportunities for all kinds of agents in the digital advertising value chain, from ad networks and exchanges to publishers, supply-side platforms (SSPs), demand-side platforms (DSPs), and agency trading desks (ATDs).

The Quality Assurance Guidelines serve as a self-regulation framework that guarantees advertisers that their brands are safe, enhances advertisers’ control over the placement and context of their ads, and offers transparency to the marketplace by standardizing the information that flows among agents. All of this is achieved by providing a clear, common language to describe the characteristics of the advertising inventory and the transactions across the advertising value chain.

Essentially, the contextual taxonomy serves to tag content. It is made up of two standard tiers – Tier 1 and Tier 2, which specify, respectively, the general category of the content and a set of subcategories nested under that main category – and a third tier (or more) that each organization can define for itself. The following pictures represent those standard tiers.
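As a complement to those figures, here is a minimal sketch of how a piece of content could be tagged against the IAB taxonomy with the Text Classification API. The endpoint version (“class-2.0”), the model name (“IAB_en”) and the response fields shown are assumptions for illustration; check the current API documentation for the exact values.

```python
# Minimal sketch of tagging content against the IAB contextual taxonomy with
# MeaningCloud's Text Classification API.
# NOTE: the endpoint version ("class-2.0"), the model name ("IAB_en") and the
# response fields are assumptions; check the current API documentation.
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"  # placeholder

text = ("The team signed the striker on a four-year deal ahead of the "
        "Champions League group stage.")

response = requests.post(
    "https://api.meaningcloud.com/class-2.0",
    data={"key": API_KEY, "txt": text, "model": "IAB_en"},
)
response.raise_for_status()

# Each returned category carries a code and a label; Tier 1 and Tier 2
# correspond to the general category and its nested subcategory.
for cat in response.json().get("category_list", []):
    print(cat.get("code"), "-", cat.get("label"))
```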
Continue reading


Semantic Publishing: a Case Study for the Media Industry

Semantic Publishing at Unidad Editorial: a Client Case Study in the Media Industry 

Last year, the Spanish media group Unidad Editorial deployed a new CMS, developed in-house, for its integrated newsroom. Unidad Editorial is a subsidiary of the Italian RCS MediaGroup and publishes some of the highest-circulation newspapers and magazines in Spain, in addition to owning nationwide radio stations and a digital terrestrial television (DTT) license comprising four TV channels.

Newsroom El Mundo

When a journalist adds a news item to the system, its content has to be tagged; this is one of the first steps in a workflow that ends with the delivery of the item in different formats, through different channels (print, web, tablet and mobile apps) and for different mastheads. After evaluating different providers’ solutions in the preceding months, the company decided that semantic tagging would be done with Daedalus’ text analytics technology. In this case, semantic publishing included the identification (with disambiguation) of named entities (people, places, organizations, etc.), time and money expressions, and concepts; classification according to the IPTC scheme (an international standard for the media industry, with around 1,400 classes organized in three levels); sentiment analysis; and more.
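As an illustration of this kind of tagging (not the actual Unidad Editorial integration), the sketch below combines entity and concept extraction with IPTC classification using MeaningCloud’s public APIs. The endpoint versions, the “tt” topic-type codes and the “IPTC_es” model name are assumptions and may differ from the versions used in production.

```python
# Illustrative sketch of semantic tagging for a Spanish news item using
# MeaningCloud's public APIs (not the actual Unidad Editorial integration).
# Endpoint versions, the "tt" topic-type codes and the "IPTC_es" model name
# are assumptions for illustration.
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"  # placeholder
news_item = "El Real Madrid ganó 2-1 al Barcelona en el Santiago Bernabéu."

# 1) Entities, concepts, time and money expressions (with disambiguation).
topics = requests.post(
    "https://api.meaningcloud.com/topics-2.0",
    data={"key": API_KEY, "txt": news_item, "lang": "es", "tt": "ectm"},
).json()

# 2) Classification against the IPTC taxonomy.
iptc = requests.post(
    "https://api.meaningcloud.com/class-2.0",
    data={"key": API_KEY, "txt": news_item, "model": "IPTC_es"},
).json()

tags = {
    "entities": [e["form"] for e in topics.get("entity_list", [])],
    "concepts": [c["form"] for c in topics.get("concept_list", [])],
    "iptc": [c["label"] for c in iptc.get("category_list", [])],
}
print(tags)  # metadata that would be attached to the item in the CMS workflow
```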

Continue reading


Our new Semantic Publishing API is now available in MeaningCloud

This API allows you to produce and publish more valuable content, faster and at lower cost

UPDATE: this API has been discontinued. Instead, use our Solution for Semantic Publishing, which features APIs such as Topics Extraction, Text Classification and Automatic Summarization.

At MeaningCloud we keep developing our roadmap and offering new vertical APIs optimized for different industries and applications. We are pleased to announce that our Semantic Publishing solutions now include a new API, designed especially for media companies, publishers and content providers in general.

It is a logical step for us: at MeaningCloud we have been collaborating for years with the most significant companies in these industries (PRISA, Unidad Editorial, Vocento, RTVE, lainformacion.com, etc.), and this is one of the markets where we are detecting the most demand and where our solutions are gaining the most traction.

The Semantic Publishing API incorporates the know-how we have developed working with these large companies and packages it in the form of semantic resources, processing pipelines and specific configurations for the most common applications and scenarios in this industry: archive management, content generation, customization of information products, etc.

Continue reading


Semantic Analysis and Big Data to understand Social TV

We recently participated in the Big Data Spain conference with a talk entitled “Real time semantic search engine for social TV streams”. The talk describes our ongoing experiments in social media analytics and combines our most recent work on applying semantic analysis to social networks with our work on handling real-time data streams.

Social TV, which exploded as people began using social networks while watching TV programs, is a growing and exciting phenomenon. Twitter reported that during primetime more than a third of its firehose is discussing TV (at least in the UK), while Facebook claimed five times more comments behind its private wall. Facebook also recently started offering hashtags and the Keywords Insight API to selected partners as a means to provide aggregated statistics on Social TV conversations inside its own walls.

As more users have turned to social networks to comment with friends and other viewers, broadcasters have looked for ways to be part of the conversation. They use official hashtags, let actors and anchors tweet live, and have even started to offer companion apps with social sharing functionality.

While the concept of socializing around TV is not new, the possibility of measuring and distilling the information around these interactions opens up brand new possibilities for users, broadcasters and brands alike. User interest has already fueled Social TV, as it fulfills the need to start conversations with friends, other viewers and the aired program. Chatter around TV programs may help recommend other programs or serve contextually relevant information about actors, characters or whatever else appears on TV. Moreover, better ways to access and organize public conversations will draw new users to a TV program and engage current ones.
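As a toy sketch of the idea behind the talk, the snippet below tags each incoming tweet with sentiment and aggregates the results per TV program. The tweet stream here is a stand-in list, and the sentiment endpoint (“sentiment-2.1”) and response field (“score_tag”) are assumptions; a real deployment would consume the Twitter streaming API and process tweets continuously.

```python
# Toy sketch: tag each incoming tweet with semantic information and aggregate
# it per TV program in (near) real time.
# The tweet "stream" is a placeholder list, and the sentiment endpoint and
# response fields are assumptions for illustration.
from collections import Counter, defaultdict
import requests

API_KEY = "YOUR_MEANINGCLOUD_KEY"  # placeholder

incoming_tweets = [  # stand-in for a real-time stream
    {"text": "Great episode tonight! #MasterChef", "hashtags": ["MasterChef"]},
    {"text": "That elimination was so unfair #MasterChef", "hashtags": ["MasterChef"]},
]

polarity_per_program = defaultdict(Counter)

for tweet in incoming_tweets:
    resp = requests.post(
        "https://api.meaningcloud.com/sentiment-2.1",
        data={"key": API_KEY, "txt": tweet["text"], "lang": "en",
              "model": "general"},
    ).json()
    # Count the polarity tag (e.g. "P", "N", "NEU") for each program hashtag.
    for tag in tweet["hashtags"]:
        polarity_per_program[tag][resp.get("score_tag", "NONE")] += 1

print(dict(polarity_per_program))  # live audience sentiment per program
```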

Continue reading