Dictanova Developer Portal

Empower your data with semantic analysis. Integrate customer feedback analysis features into your products with the Dictanova API.

Get Started

Import data and get analysis results

This tutorial will guide you through setting up a simple dataset and importing data.

Let's say you want to analyze customer reviews for each of your stores. Each review is associated with an NPS score, and the reviews are in English.

Request your token

To follow this guide, you'll need to request an access token and authenticate your requests. Please refer to Authentication and security.

Replace ACCESS_TOKEN with your access token in all samples below.
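
If you work from a shell, one option is to export the token once and reference it in every sample (a minimal sketch; the ACCESS_TOKEN variable name is only a convention):

export ACCESS_TOKEN="paste-your-access-token-here"

# The authorization header of each curl sample then becomes:
#   -H "Authorization: Bearer $ACCESS_TOKEN"
# (double quotes are required so the shell expands the variable)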

Create a dataset

Use the Create dataset endpoint to declare the dataset you will use to import your data. Note that we declare two metadata fields corresponding to an NPS score and a store name.

curl -X POST \
  /management/datasets \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "Store reviews",
  "lang": "en",
  "type": "SATISFACTION_SURVEY_RESPONSE",
  "industry": "RETAIL_B2C",
  "metadataDefinitions": [
    {
      "code": "nps",
      "type": "NPS",
      "mandatory": true
    },
    {
      "code": "store",
      "type": "STRING",
      "mandatory": true
    }
  ]
}'

Dictanova replies with the newly created dataset, including its unique ID (5ab38a1a0a4617000165562a).

{
    "name": "Store reviews",
    "lang": "en",
    "type": "SATISFACTION_SURVEY_RESPONSE",
    "industry": "RETAIL_B2C",
    "metadataDefinitions": [
        {
            "code": "nps",
            "type": "NPS",
            "bounds": {
                "upper": 10,
                "lower": 0
            },
            "mandatory": true,
            "personalData": false,
            "allowedValues": []
        },
        {
            "code": "store",
            "type": "STRING",
            "mandatory": true,
            "personalData": false,
            "maxLength": 200,
            "allowedValues": []
        }
    ],
    "id": "5ab38a1a0a4617000165562a",
    "state": "ACTIVE",
    "createdAt": "2018-03-22T10:48:58.417Z",
    "updatedAt": "2018-03-22T10:48:58.421Z",
    "monthlyImportedDocuments": []
}
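
If you script these calls, you can capture the dataset ID directly from the response, for example with jq (a minimal sketch assuming jq is installed and the request body shown above is saved in dataset.json; the API base URL is omitted as in the samples):

DATASET_ID=$(curl -s -X POST \
  /management/datasets \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H 'Content-Type: application/json' \
  -d @dataset.json \
  | jq -r '.id')

echo "Created dataset $DATASET_ID"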

Import your data

With your dataset ID, you can now create an import containing the documents to be analyzed. Use the
Create import endpoint and add the dataset ID to the URL.

For readability, this sample sends only two documents, but you can send as many as needed.

curl -X POST \
  /management/datasets/5ab38a1a0a4617000165562a/imports \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
  "documents": [
    {
      "lang": "en",
      "type": "SATISFACTION_SURVEY_RESPONSE",
      "content": "Very nice store with very large choice of products and nice staff.",
      "externalId": "my-external-id-1",
      "metadata": [
        {
          "code": "nps",
          "value": 9
        },
        {
          "code": "store",
          "value": "London"
        }
      ]
    },
    {
      "lang": "en",
      "type": "SATISFACTION_SURVEY_RESPONSE",
      "content": "A bit expensive for the quality of the products. But staff was ok.",
      "externalId": "my-external-id-2",
      "metadata": [
        {
          "code": "nps",
          "value": 6
        },
        {
          "code": "store",
          "value": "Paris"
        }
      ]
    }
  ]
}'

Dictanova sends back the import's unique identifier (5ab38bbf0a4617000165562e).

{
    "id": "5ab38bbf0a4617000165562e",
    "createdAt": "2018-03-22T10:55:59.142Z",
    "updatedAt": "2018-03-22T10:55:59.143Z",
    "state": "STARTED",
    "totalDocuments": 2,
    "processedDocuments": 0,
    "errors": []
}
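
The same pattern works for the import: capture its identifier from the response so you can reuse it in the status calls below (again a sketch assuming jq, with the documents payload above saved in documents.json):

IMPORT_ID=$(curl -s -X POST \
  /management/datasets/$DATASET_ID/imports \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H 'Content-Type: application/json' \
  -d @documents.json \
  | jq -r '.id')

echo "Created import $IMPORT_ID"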

Use this identifier to get the status of your import (read Asynchronous processing for more information).

curl -X GET \
  /management/datasets/5ab38a1a0a4617000165562a/imports/5ab38bbf0a4617000165562e \
  -H 'Authorization: Bearer ACCESS_TOKEN'

The import is now finished, as indicated by the state field with the value COMPLETED in the reply.

{
    "id": "5ab38bbf0a4617000165562e",
    "createdAt": "2018-03-22T10:56:00.078Z",
    "updatedAt": "2018-03-22T10:55:59.143Z",
    "endedAt": "2018-03-22T10:56:03.844Z",
    "state": "COMPLETED",
    "totalDocuments": 2,
    "processedDocuments": 2,
    "errors": []
}
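
Processing is asynchronous, so for larger imports you may want a small polling loop that waits for the state to reach COMPLETED (a sketch assuming jq and the DATASET_ID and IMPORT_ID variables captured above; the 5-second interval is arbitrary):

while true; do
  STATE=$(curl -s -X GET \
    /management/datasets/$DATASET_ID/imports/$IMPORT_ID \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    | jq -r '.state')
  echo "Import state: $STATE"
  if [ "$STATE" = "COMPLETED" ]; then
    break
  fi
  sleep 5
done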

Read the analysis result

Access all documents

You can use a specific endpoint to access all documents analyzed during a given import.

curl -X GET \
  /management/datasets/5ab38a1a0a4617000165562a/imports/5ab38bbf0a4617000165562e/documents \
  -H 'Authorization: Bearer ACCESS_TOKEN'

The response is a paginated list of documents. See below for the full structure of a document object.

Access a single document

Let's now check the analysis result for the first document sent in the Create import request. The external ID provided was "my-external-id-1".

curl -X GET \
  /management/datasets/5ab38a1a0a4617000165562a/documents/my-external-id-1 \
  -H 'Authorization: Bearer ACCESS_TOKEN'

Dictanova sends back the document with its opinion enrichment fields.

{
    "externalId": "my-external-id-1",
    "lang": "en",
    "type": "SATISFACTION_SURVEY_RESPONSE",
    "content": "Very nice store with very large choice of products and nice staff.",
    "metadata": [
        {
            "code": "nps",
            "value": 9
        },
        {
            "code": "store",
            "value": "London"
        }
    ],
    "id": "5ab38bc0e2812d44dbef5a6a",
    "createdAt": "2018-03-22T10:56:00.806Z",
    "updatedAt": "2018-03-22T10:56:02.317Z",
    "state": "ANALYZED",
    "enrichments": [
        {
            "term": "store__NOUN__0",
            "opinion": "POSITIVE",
            "offset": {
                "begin": 10,
                "end": 15
            }
        },
        {
            "term": "very_large_choice__NOUN__0",
            "opinion": "NEUTRAL",
            "offset": {
                "begin": 21,
                "end": 38
            }
        },
        {
            "term": "large_choice_of_product__NOUN__0",
            "opinion": "NEUTRAL",
            "offset": {
                "begin": 26,
                "end": 50
            }
        },
        {
            "term": "staff__NOUN__0",
            "opinion": "POSITIVE",
            "offset": {
                "begin": 60,
                "end": 65
            }
        }
    ],
    "lastOpinionEnrichment": "2018-03-22T10:56:02.317Z"
}
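
Each offset is a character range into the content field, so an opinion can be mapped back to the exact span of the original text. A quick sketch using bash substring expansion on the sample above (the end index appears to be exclusive, so the span length is end minus begin):

CONTENT="Very nice store with very large choice of products and nice staff."
# "store" enrichment: begin=10, end=15 -> length 5
echo "${CONTENT:10:5}"   # prints: store
# "staff" enrichment: begin=60, end=65 -> length 5
echo "${CONTENT:60:5}"   # prints: staff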

Receive the analysis result

You can receive analysis results directly in your system by building a simple endpoint and registering it as a webhook on the ENDED_IMPORT event. Read the Create webhook documentation to get started.
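
The exact endpoint and payload are described in the Create webhook documentation. As a purely illustrative sketch (the /management/webhooks path and the body fields below are hypothetical; only the ENDED_IMPORT event name comes from this guide), the registration might look like:

curl -X POST \
  /management/webhooks \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{
  "url": "https://example.com/dictanova/import-ended",
  "event": "ENDED_IMPORT"
}'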

Once you have received the webhook notification, fetch the analysis results with a single call.

curl -X GET \
  /management/datasets/5ab38a1a0a4617000165562a/imports/5ab38bbf0a4617000165562e/documents \
  -H 'Authorization: Bearer ACCESS_TOKEN'
