

Requests are made using GET or POST data submissions to the API entry point. POST is recommended, since it avoids the maximum parameter length limit associated with the GET method.
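As a minimal sketch of the recommended POST submission, the following builds (but does not send) a request using only the Python standard library. The endpoint URL and the access key here are placeholders, not real values; the parameter names match the table below.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder endpoint -- substitute the actual API entry point
# (or your own server address for an on-premises installation).
API_URL = "https://api.example.com/parser"

def build_request(key, text, lang="en", of="json"):
    """Build a POST request body; POST avoids the GET length limit."""
    params = {"key": key, "txt": text, "lang": lang, "of": of}
    data = urlencode(params).encode("utf-8")  # form-encoded POST body
    return Request(API_URL, data=data, method="POST")

req = build_request("YOUR_API_KEY", "The cat sat on the mat.")
print(req.get_method())  # POST
```

Sending the request is then a matter of passing it to `urllib.request.urlopen` (or any HTTP client of your choice).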



This is the endpoint to access the API.

Service                           Method   URL
Lemmatization, PoS and Parsing    POST     (see the Console)

If you are working with an on-premises installation, you will need to substitute the API entry point with your own server address.


These are the supported parameters.

key
  The access key is required for making requests to any of our web services. You can get a valid access key for free just by creating an account at MeaningCloud.
  Required.

of
  Output format.
  Values:
    xml
    json
  Optional. Default: of=json

lang
  The language in which the text must be analyzed.
  Values:
    en: English
    es: Spanish
    it: Italian
    fr: French
    pt: Portuguese
    ca: Catalan

ilang
  The language in which the returned values will appear (in the cases where they are known). Check the response section to see which fields are affected.
  Values:
    en: English
    es: Spanish
    it: Italian
    fr: French
    pt: Portuguese
    ca: Catalan
  Optional. Default: same as lang

verbose
  Verbose mode. When enabled, it shows additional information about the morphosyntactic tagsets and sentiment analysis: an in-depth description of the tagsets and the changes applied to the basic polarity of the different polarity terms detected.
  Values:
    y: enabled
    n: disabled
  Optional. Default: verbose=n

txt
  Input text to analyze. UTF-8 encoded text (plain text, HTML or XML).
  Optional. Default: txt=""

txtf
  Text format. Specifies whether the text in the txt parameter uses markup language that needs to be interpreted (known HTML tags and HTML code will be interpreted, and unknown tags will be ignored).
  Values:
    plain
  Optional. Default: txtf=plain

url
  URL with the content to analyze. Currently only non-authenticated HTTP and FTP are supported. The content types supported for URL contents can be found here.
  Optional. Default: url=""

doc
  Input file with the content to analyze. The supported formats for file contents can be found here.
  Optional. Default: doc=""

uw
  Deal with unknown words. This feature adds a stage to the topic extraction in which the engine, much like a spellchecker, tries to find a suitable analysis for the unknown words resulting from the initial analysis. It is especially useful for reducing the impact typos have on text analyses.
  Values:
    y: enabled
    n: disabled
  Optional. Default: uw=n

rt
  Deal with relaxed typography. This parameter indicates how reliable the text to analyze is (as far as spelling, typography, etc. are concerned), and influences how strictly the engine takes these factors into account during topic extraction.
  Values:
    y: enabled
    u: enabled only for user dictionary
    n: disabled
  Optional. Default: rt=n

dm
  Type of disambiguation applied. It is cumulative; that is, the semantic disambiguation mode also includes morphosyntactic disambiguation.
  Values:
    n: no disambiguation
    m: morphosyntactic disambiguation
    s: semantic disambiguation
  Optional. Default: dm=s

sdg
  Semantic disambiguation grouping. This parameter only applies when semantic disambiguation is activated (dm=s). See disambiguation grouping for a more in-depth explanation.
  Values:
    n: none
    g: global intersection
    t: intersection by type
    l: intersection by type - smallest location
  Optional. Default: sdg=l

cont
  Disambiguation context. Context prioritization for entity semantic disambiguation. See context disambiguation for a more in-depth explanation.
  Optional. Default: cont=""

ud
  User dictionary. Allows you to include user-defined entities and concepts in the analysis. It provides a mechanism to adapt the process to focus on specific domains or on terms relevant to a user's interests: to increase the precision in any of the domains already covered by our ontology, to include a new one, or just to add a new semantic meaning to known terms. Several dictionaries can be combined by separating them with |.
  Values: the names of your user dictionaries
  Optional. Default: ud=""

tt
  The list of topic types to extract, specified as a string with the letters assigned to each of the topic types to extract.
  Values:
    e: named entities
    c: concepts
    t: time expressions
    m: money expressions
    n: quantity expressions [beta]
    o: other expressions
    q: quotations
    r: relations
    a: all
  Optional. Default: tt=""

st
  Show subtopics. This parameter indicates whether subtopics are to be shown. See subtopics for a more in-depth explanation.
  Values:
    y: enabled
    n: disabled
  Optional. Default: st=n

timeref
  Sets a specific time reference used to resolve the actual value of all the relative time expressions detected in the text. It only applies when time expressions are enabled in tt.
  Values: YYYY-MM-DD hh:mm:ss GMT±HH:MM
  Optional. Default: the current time at the moment the request is made.

sm
  Sentiment model. If sent empty, sentiment analysis info will not be included in the response.
  Values:
    general: general model, with automatic language detection
  Optional. Default: sm=""

egp
  Expand global polarity. This mode allows you to choose between two different algorithms for the polarity detection of entities and concepts. Enabling the parameter gives less weight to the syntactic relationships, so it's recommended for short texts with unreliable typography. It only applies when sm!="".
  Values:
    y: enabled
    n: disabled
  Optional. Default: egp=n
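A few of the parameters above are composed values rather than fixed flags. As an illustrative sketch (the dictionary names are hypothetical), the following shows a tt string combining two topic-type letters, user dictionaries joined with |, and a timeref value in the YYYY-MM-DD hh:mm:ss GMT±HH:MM format:

```python
from datetime import datetime

# Illustrative parameter values; the names match the table above,
# and "mydict1"/"mydict2" are hypothetical user dictionary names.
params = {
    "tt": "ec",                              # named entities + concepts
    "ud": "|".join(["mydict1", "mydict2"]),  # several dictionaries combined
    # timeref: YYYY-MM-DD hh:mm:ss GMT±HH:MM
    "timeref": datetime(2024, 1, 15, 9, 30, 0).strftime("%Y-%m-%d %H:%M:%S")
               + " GMT+00:00",
}
print(params["timeref"])  # 2024-01-15 09:30:00 GMT+00:00
```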


The fields txt, doc and url are mutually exclusive: a content parameter is required (at least one of them must be non-empty), and when more than one of them has a value assigned, only one will be processed. The precedence order is txt, then url, then doc.
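The precedence rule for the content parameters can be sketched as a small helper that picks the single field that would be processed:

```python
def select_content(txt="", url="", doc=""):
    """Return the (name, value) pair that will be processed,
    following the precedence txt > url > doc."""
    for name, value in (("txt", txt), ("url", url), ("doc", doc)):
        if value:
            return name, value
    raise ValueError("a content parameter (txt, url or doc) is required")

# url wins over doc because txt is empty:
print(select_content(url="http://example.com", doc="report.pdf"))
# ('url', 'http://example.com')
```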