Category Archives: Language Technology

Posts about language technology.

Text analytics explained: MeaningCloud in French

With the rise of Natural Language Processing technologies, Text Analytics is on everyone’s lips. However, most services in this field are provided in English, and depending on the language you are interested in, it can be difficult to find the functionality you are looking for.

No worries. French, for instance, is not only spoken on all five continents by almost 300 million people, but is also the first or second language of communication in many international organizations [1]. No wonder it is part of our Standard Languages Pack!

Hello in many languages

Whether the concept of “Text Analytics” still sounds rather hazy to you or you are looking for something more specifically language-related, this post is for you. We keep language diversity in mind, and we want to show you all the functionality we provide in French.

Continue reading


Easy Text Analytics using MeaningCloud’s Zapier integration

We at MeaningCloud love Zapier. It lets us build workflows connecting email, Slack, and more. We wanted to contribute our bit to its ecosystem, so we created MeaningCloud’s Zapier integration. Thanks to it, you can easily perform Text Analytics in any Zapier workflow.

Many organizations use workflows to automate tasks. Chat rooms and bots are a common way of triggering events. For instance, Slash commands in Slack or Hubot respond to well-formed commands with strict patterns to avoid ambiguity, which is desirable under some circumstances.


Where these approaches do not fit especially well is, precisely, one of the most exciting aspects of using Text Analytics in automation: it can react to the outside world. A company can analyze all communications received from clients, measure reputation, detect weaknesses, or even analyze employee satisfaction. All that information can be injected into an automated process, which can then respond accordingly.

In this article, we will learn how to integrate MeaningCloud in any Zapier workflow. Continue reading
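As a taste of what such an integration does under the hood, here is a minimal sketch of the kind of call a Zapier “Code” step could make to MeaningCloud’s Sentiment Analysis API. The endpoint and parameter names reflect the public API documentation at the time of writing, but verify them (and your API key handling) before relying on this:

```python
# Minimal sketch of calling MeaningCloud's Sentiment Analysis API from a
# Zapier Code step. Endpoint and parameter names are taken from the public
# docs; double-check them before use.
import json
from urllib import request, parse

API_URL = "https://api.meaningcloud.com/sentiment-2.1"

def build_sentiment_request(api_key, text, lang="en"):
    """Return the URL-encoded form body for a sentiment analysis call."""
    params = {"key": api_key, "txt": text, "lang": lang}
    return parse.urlencode(params).encode("utf-8")

def analyze_sentiment(api_key, text, lang="en"):
    """POST the text and return the parsed JSON response (needs network)."""
    body = build_sentiment_request(api_key, text, lang)
    with request.urlopen(request.Request(API_URL, data=body)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# In a Zapier Code step, `input_data` would carry the trigger's fields, e.g.:
# output = {"score_tag": analyze_sentiment(key, input_data["text"])["score_tag"]}
```

The Zapier integration itself requires no code at all; this sketch only illustrates the request the integration makes on your behalf.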


MeaningCloud sponsors the award for Author Profiling Research at PAN also in 2018

Author Profiling and Text Forensics Research

Since 2009, the PAN Lab has organized shared tasks on digital text forensics in general, and on author profiling in particular. The PAN Lab is part of CLEF, the European Conference and Labs of the Evaluation Forum for Information Retrieval. CLEF consists of an independent peer-reviewed conference on a broad range of issues in the field of multilingual and multimodal information access evaluation, and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems. CLEF 2018 will be hosted by the University of Avignon, France, on 10-14 September 2018.

MeaningCloud has been sponsoring the award to the best performing team in the author profiling task at CLEF since 2015.

Author profiling is the task of inferring the traits of a document’s author.
In 2017, the task focused on gender and language variety identification in Twitter, addressing four languages and several of their varieties: English (Australia, Canada, Great Britain, Ireland, New Zealand, United States), Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela), Portuguese (Brazil, Portugal), and Arabic (Egypt, Gulf, Levantine, Maghrebi).

Paolo Rosso delivers the 2017 PAN Author Profiling Prize to the team of the University of Groningen

Twenty-two teams from all over the world participated in 2017, and the best results were obtained by Angelo Basile, Gareth Dwyer, Maria Medvedeva, Josine Rawee, Hessel Haagsma, and Malvina Nissim of the University of Groningen, The Netherlands.

This year the task goes multimodal: not only the textual information in tweets but also the images shared in them will be used as information sources to infer gender. Three languages will be addressed: English, Spanish, and Arabic [http://pan.webis.de/clef18/pan18-web/author-profiling.html].

Paolo Rosso
Universitat Politècnica de València, Spain
Co-organizer of the author profiling task at PAN

References

Rangel F., Rosso P., Potthast M., Stein B. (2017). Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In: Cappellato L., Ferro N., Goeuriot L., Mandl T. (Eds.) CLEF 2017 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings, CEUR-WS.org, vol. 1866. [http://ceur-ws.org/Vol-1866/invited_paper_11.pdf]

Potthast M., Rangel F., Tschuggnall M., Stamatatos E., Rosso P., Stein B. (2017). Overview of PAN’17: Author Identification, Author Profiling, and Author Obfuscation. In: 8th Int. Conf. of CLEF on Experimental IR Meets Multilinguality, Multimodality, and Visualization, CLEF 2017, Springer-Verlag, LNCS(10456), pp. 275–290. [http://www.uni-weimar.de/medien/webis/publications/papers/stein_2017k.pdf]


MeaningCloud participates in the first Global Legal Hackathon


The first phase of the first Global Legal Hackathon (GLH) was held February 23-25, 2018. David Fisher, organizer of the event and founder of the legal technology company Integra Ledger, expects the GLH to have a great impact, and he is not exaggerating: global participation in the GLH nearly matched that of an earlier event organized by NASA, and it has been considered the largest hackathon organized to date. For 54 hours, teams in more than 40 cities across six continents participated simultaneously. The teams were made up of engineers, jurists, lawyers, and business people who all worked toward a common goal: to lay the foundations for legal projects that can improve legal work or access to legal information through an app, program, or piece of software. Continue reading


The Text Proofreading API moves to Stilus

Ever since the very beginning of MeaningCloud, we have offered a Text Proofreading API in Spanish which allows you to standardize and ensure the quality of your content through spelling, grammar, and style proofreading.


On the 2nd of April, we will permanently move this API and its functionality to Stilus, an application where we take full advantage of the functionality provided by the API and show everything you can do with it.

To those of you who currently use it, the migration process can be done in three easy steps:

  1. Register at Stilus.
  2. Contact us at support, telling us your volume requirements and which Stilus user will use the API. We will inform you of the conditions and explain how to subscribe to the API.
  3. Once you have subscribed, you will only have to change the API endpoint and the key parameter value in your integration, and you will be all set to keep using the Text Proofreading API.
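Step 3 above can be sketched as a one-line configuration change. The endpoint URLs below are placeholders invented for illustration, not the actual endpoints; use the ones we give you when you subscribe:

```python
# Hypothetical sketch of step 3: only the endpoint and the key value change,
# the rest of the integration stays the same. Both URLs are placeholders.
OLD_ENDPOINT = "https://api.meaningcloud.com/proofreading-1.0"  # placeholder
NEW_ENDPOINT = "https://api.mystilus.example/proofreading-1.0"  # placeholder

def migrate_config(config, new_key):
    """Return a copy of the integration config pointing at Stilus."""
    updated = dict(config)
    updated["endpoint"] = NEW_ENDPOINT  # swap the API endpoint
    updated["key"] = new_key            # swap the key parameter value
    return updated
```

Everything else in your request (parameters, response handling) stays as it was.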

If you’d rather use the text proofreading functionality directly online or from Word, check out all the ways in which you can use Stilus!


Applying text analytics to financial compliance

In one of our previous posts we talked about financial compliance, FinTech, and their relation to Text Analytics. We also showed the need for normalized facts when mining text in search of suspected financial crimes, and proposed the SVO (subject, verb, object) form to represent them.

Financial crime

Thus, we defined a clause as the string within a sentence capable of conveying an autonomous fact. Finally, we explained how to integrate with the Lemmatization, PoS and Parsing API in order to get a fully syntactically and semantically enriched JSON-formatted tree for the input text, from which we will extract SVO clauses.

In this post, we continue with the extraction process, looking in detail at how to extract those clauses from the response returned by the Parsing API.
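To give an idea of the shape of that extraction, here is a simplified sketch of pulling (subject, verb, object) triples out of a syntactic tree. The node layout used below (`type`, `function`, `form`, `children` keys) is a toy stand-in invented for this example, not the actual schema of the Parsing API response:

```python
# Toy sketch of SVO clause extraction from a nested syntactic tree.
# The node schema here is invented; the real Parsing API response differs.

def extract_svo(clause):
    """Return an (S, V, O) triple from a clause node, with None for gaps."""
    roles = {}
    for child in clause.get("children", []):
        func = child.get("function")
        if func in ("subject", "verb", "object") and func not in roles:
            roles[func] = child.get("form")
    return (roles.get("subject"), roles.get("verb"), roles.get("object"))

def extract_all_svo(tree):
    """Walk the tree depth-first and collect one triple per clause node."""
    triples = []
    stack = [tree]
    while stack:
        node = stack.pop()
        if node.get("type") == "clause":
            triples.append(extract_svo(node))
        stack.extend(reversed(node.get("children", [])))
    return triples

sentence = {
    "type": "clause",
    "children": [
        {"function": "subject", "form": "The broker"},
        {"function": "verb", "form": "transferred"},
        {"function": "object", "form": "the funds"},
    ],
}
```

The real response carries much richer morphological and semantic information per node, but the traversal pattern is the same.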

Continue reading


How to build a Financial Compliance model ready for FinTech

What is Financial Compliance and what is FinTech?

Financial crime

Financial crime has increasingly become a concern for governments throughout the world. The emergence of vast regulatory environments has furthered the degree of compliance expected even from non-governmental organizations that conduct financial transactions with consumers, including credit card companies, banks, credit unions, payday loan companies, and mortgage companies.

Technology has helped financial services address the increased burden of compliance in innovative ways which have also yielded other benefits, including improved decision-making, better risk management, and an enhanced user experience for the consumer or investor.

The rapid development and employment of AI (Artificial Intelligence) techniques within this specific domain have the potential to transform the financial services industry.

FinTech (Financial Technology) solutions have recently arisen as new applications, processes, products, or business models in the financial services industry, composed of one or more complementary financial services and provided as an end-to-end process via the Internet. You can find additional interesting information in this article.

Continue reading


Voice of the Employee Dashboard

Voice of the Employee (VoE) gathers the needs, wishes, hopes, and preferences of all employees within an organization. The VoE takes into account both explicit needs, such as salaries, career, health, and retirement, and tacit needs, such as job satisfaction and the respect of co-workers and supervisors. This post follows the line of Voice of the Customer in Excel: creating a dashboard. This time, we are creating a dashboard for the Voice of the Employee.

Text-based data sources are a key factor for any organization that wants to understand the “whys”.
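To make this concrete, here is a toy illustration of the kind of aggregation that sits behind such a dashboard: counting sentiment labels per topic across analyzed employee comments. The records below are invented sample data, not real survey output, and the `P`/`N` labels are just illustrative polarity tags:

```python
# Toy aggregation behind a VoE dashboard: sentiment label counts per topic.
# The records are invented sample data for illustration only.
from collections import Counter

records = [
    {"topic": "salary", "sentiment": "N"},
    {"topic": "career", "sentiment": "P"},
    {"topic": "salary", "sentiment": "N"},
    {"topic": "co-workers", "sentiment": "P"},
]

def sentiment_by_topic(rows):
    """Return {topic: Counter of sentiment labels}, ready for a pivot chart."""
    table = {}
    for row in rows:
        table.setdefault(row["topic"], Counter())[row["sentiment"]] += 1
    return table
```

In the Excel dashboard, the equivalent step is a pivot table over the analyzed comments.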

Continue reading


Our experience on Adverse Drug Reactions (ADR) identification at TAC2017

MeaningCloud and the LaBDA research group were present at the TAC 2017 conference, held on November 13th-14th at NIST headquarters in Washington. In the Text Analysis Conferences, research groups from all over the world are invited to develop software systems that tackle text analytics-related problems. This year, one task was devoted to the automatic identification of adverse drug reactions (ADRs) appearing in drug labels, including features defining the ADR, such as its severity or whether it is characteristic of a drug class rather than of a single drug. A specific subtask consisted of linking the identified ADRs to their corresponding MedDRA codes and lexical terms. More than 10 research teams took part, all of them applying some kind of deep learning approach to the problem. Results show that it is possible to reach 85% accuracy when identifying ADRs.

We were delighted to present our text analytics-based system for ADRs identification on drug labels, which combines natural language processing and machine learning algorithms. The system has been built as a joint effort between MeaningCloud and LaBDA research group at the Universidad Carlos III de Madrid. Identifying ADRs is a basic task for pharmacovigilance, and that is the reason why the Federal Drug Administration (FDA) is involved in the funding and definition of the ADRs identification tasks in the framework of the Text Analysis Conferences. We have learned a lot these days (e.g., a BiLSTM deep neural network is the best choice for the purpose), and shared pleasant moments with our colleagues at Washington. We hope to be able to attend next year’s edition, which will focus on the extraction of drug-drug interactions (DDI), another interesting task aimed at detecting situations where the use of a combination of drugs may lead to an adverse effect.