Ontology-Based Topic Detection

FREEMIUM
By Proxem
Updated 3 months ago
Category: Data
Popularity Score: 2.9/10
Latency: 4473ms
Success Rate: 100%

Ontology-Based Topic Detection API Documentation

A text analysis service that finds out what any text is about by extracting the most relevant Wikipedia categories through a patented NLP technology.

POST Get categories
POST Get corpus categories

POST Get categories

Returns the top themes associated with the given text.

Header Parameters
X-RapidAPI-Host (STRING, REQUIRED)
X-RapidAPI-Key (STRING, REQUIRED)
Accept (STRING, OPTIONAL): The expected type of the response

Required Parameters
Document (REQUIRED): The document to analyze

Optional Parameters
nbtopcat (NUMBER, OPTIONAL): The maximum number of categories to return (max 50)
cleanup (BOOLEAN, OPTIONAL): Try to remove the less useful categories (defaults to true)
srclang (STRING, OPTIONAL): Set the language of the given document (prevents auto-detection)
edges (BOOLEAN, OPTIONAL): Set to true to receive parent/child relations between categories
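As a rough illustration (not part of the original documentation), and assuming the remaining optional parameters are passed as query-string parameters in the same way as nbtopcat in the snippet below, the URL could be built like this; the values used here (20, "en", true) are arbitrary examples, and "en" is only an assumed language code:

var base = "https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories";
var url = base
  + "?nbtopcat=20"   // return at most 20 categories (hard limit is 50)
  + "&cleanup=true"  // drop the less useful categories (the documented default)
  + "&srclang=en"    // assumed language code; skips auto-detection
  + "&edges=true";   // also return parent/child relations between categories
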
Request Snippet
unirest.post("https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories?nbtopcat=20")
.header("X-RapidAPI-Host", "proxem-thematization.p.rapidapi.com")
.header("X-RapidAPI-Key", "SIGN-UP-FOR-KEY")
.header("Accept", "application/json")
.header("Content-Type", "text/plain")
.send("At Proxem, our clients ask us to extract information from e-mails, social medias, press articles, and basically any type of text you can imagine. In the standard case, the text to process is written in various languages. To establish systems that support a wide scale of languages and formats is one of the mission of our Research team.Another goal of ours is to develop cross-lingual algorithms, that is algorithms which take as input texts in different languages and output an information computed on all those texts. For example on a task called sentiment analysis, which consists in detecting the \"polarity\" of a document (\"is this document rather positive or negative?\"), we want to implement a unique algorithm that would take as input sentences in English, Chinese, Spanish, etc and would compute a score. There are multiple reasons for us to aim at this. One is for simplicity sake. Indeed, we do not want to implement as many algorithms as languages we may have to handle. Another reason for that choice is that we want to leverage the important amount of available data for some languages to improve the accuracy on languages where data is rare.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});
Sample Response

Install SDK for NodeJS

Installing

To use unirest for Node.js, install the npm module:

$ npm install unirest

After installing the npm package, you can start making requests like so:

var unirest = require('unirest');

Creating Request

unirest.post("https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories?nbtopcat=20")
.header("X-RapidAPI-Host", "proxem-thematization.p.rapidapi.com")
.header("X-RapidAPI-Key", "SIGN-UP-FOR-KEY")
.header("Accept", "application/json")
.header("Content-Type", "text/plain")
.send("At Proxem, our clients ask us to extract information from e-mails, social medias, press articles, and basically any type of text you can imagine. In the standard case, the text to process is written in various languages. To establish systems that support a wide scale of languages and formats is one of the mission of our Research team.Another goal of ours is to develop cross-lingual algorithms, that is algorithms which take as input texts in different languages and output an information computed on all those texts. For example on a task called sentiment analysis, which consists in detecting the \"polarity\" of a document (\"is this document rather positive or negative?\"), we want to implement a unique algorithm that would take as input sentences in English, Chinese, Spanish, etc and would compute a score. There are multiple reasons for us to aim at this. One is for simplicity sake. Indeed, we do not want to implement as many algorithms as languages we may have to handle. Another reason for that choice is that we want to leverage the important amount of available data for some languages to improve the accuracy on languages where data is rare.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});
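
The callback above only logs the raw result. As a rough sketch, assuming nothing beyond the status, error and body fields already used above, a slightly more defensive callback could be defined separately and passed to .end(...) in place of the inline function:

// Minimal sketch of a defensive callback for .end(...); it relies only on the
// status, error and body fields already used in the snippet above.
function handleResult(result) {
  if (result.error || result.status !== 200) {
    // The request failed: network error, missing/invalid X-RapidAPI-Key, quota, etc.
    console.error("Request failed:", result.status, result.error);
    return;
  }
  // result.body holds the detected categories. Its exact shape is not shown on
  // this page, so it is only logged here rather than parsed further.
  console.log(result.body);
}

Checking result.status before touching result.body avoids treating error responses (for example, one caused by a missing X-RapidAPI-Key) as analysis output.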