Once you have created a model and its categories, with their training texts, term lists, or both, you can update the binary files used by the classification system by compiling the model. This is done through the Build model action: click the Build button in the sidebar of any of the model views. Building the model ensures that you are using its most recent version for your classification tasks, and that all the rules, stopwords, and other settings are working properly. Afterwards, you can evaluate how the model is working and polish the results if necessary.
MeaningCloud provides a testing tool for this purpose. You can access it by clicking the Test button either in the models dashboard, in the sidebar of the different model views or in the build page. You will enter the Text Classification API Test Console with your license key and your model already selected.
When you click the Test button on a model, the verbose parameter will be automatically selected in the Test Console. This will include in the results a term_list field with the terms used to classify the text into each category.
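The same verbose test can also be run directly against the API. The sketch below shows how such a request might be assembled; the endpoint URL, the model name, and the parameter names (key, model, txt, verbose) are assumptions based on MeaningCloud's public Text Classification API and may differ for your account or API version.

```python
# Hypothetical endpoint and parameter names; check your API documentation.
API_URL = "https://api.meaningcloud.com/class-2.0"

def build_request(key, model, text, verbose=True):
    """Build the parameter dict for a classification request.

    Enabling verbose asks the API to return the term_list field,
    which is what the Test Console selects automatically.
    """
    return {
        "key": key,        # your license key
        "model": model,    # the model to classify with
        "txt": text,       # the text to classify
        "verbose": "y" if verbose else "n",
    }

params = build_request("YOUR_LICENSE_KEY", "MyMoviesModel_en", "Bryan Mills returns...")
# A live call would then POST these parameters, e.g. with the requests library:
#   import requests
#   response = requests.post(API_URL, data=params)
#   print(response.json())
```

This only prepares the request; sending it requires a valid license key.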
This is very helpful when tuning your models.
In the image above, we are testing our example model with a synopsis from IMDB. The chosen movie is 'Taken 3'. These are the results:
The results are shown as the API returns them, either in JSON or XML. In the list of categories, the ones that apply to the text are shown with these fields:
The field that will help us the most is Term list, as it shows which terms are being used in the classification. This lets us see how the text has been tokenized and which words have been taken into account for each category.
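To inspect this programmatically, the sketch below walks a verbose JSON response and maps each returned category to the terms used. The sample data is invented for illustration, and the exact shape of the response (field names such as category_list and term_list, and whether term_list is a comma-separated string) may differ in your API version.

```python
import json

# Invented sample: a trimmed verbose response, for illustration only.
sample_response = """
{
  "category_list": [
    {
      "code": "ActionMovies",
      "label": "Action movies",
      "relevance": "100",
      "term_list": "kidnap, revenge, chase"
    }
  ]
}
"""

def terms_by_category(raw_json):
    """Map each returned category code to the list of terms used to classify."""
    data = json.loads(raw_json)
    return {
        cat["code"]: [t.strip() for t in cat["term_list"].split(",")]
        for cat in data.get("category_list", [])
    }

print(terms_by_category(sample_response))
# {'ActionMovies': ['kidnap', 'revenge', 'chase']}
```

Grouping the terms this way makes it easy to spot unexpected tokens that are pulling a text toward the wrong category.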
If the model has rules in its term lists, the Term list field will show terms from all of them; in other words, terms that diminish the relevance of a category will also be shown.
If you want to go into detail about model testing and tuning, refer to the fine tuning section.