Version: 0.7.1

Ingest API

In this tutorial, we will describe how to send data to Quickwit using the ingest API.

You will need a local Quickwit instance up and running to follow this tutorial.

To start it, run ./quickwit run in a terminal.

Create an index

First, let's create a schemaless index.

# Create the index config file.
cat << EOF > stackoverflow-schemaless-config.yaml
version: 0.7
index_id: stackoverflow-schemaless
mode: dynamic
commit_timeout_secs: 30
EOF
# Use the CLI to create the index...
./quickwit index create --index-config stackoverflow-schemaless-config.yaml
# Or with cURL.
curl -XPOST -H 'Content-Type: application/yaml' 'http://localhost:7280/api/v1/indexes' --data-binary @stackoverflow-schemaless-config.yaml
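To confirm the index was created, you can probe the index metadata endpoint and check the HTTP status code: 200 means the index exists, 404 means it does not. The helper below is a small sketch (the function name is ours; the endpoint is the same indexes API used above):

```shell
# Hypothetical helper: print the HTTP status of the index metadata endpoint.
# 200 => the index exists; 404 => it was not created (or was deleted).
index_exists() {
  curl -s -o /dev/null -w '%{http_code}' \
    "http://localhost:7280/api/v1/indexes/$1"
}
```

For example, `index_exists stackoverflow-schemaless` prints 200 once the create command has succeeded.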

Ingest data

Let's first download a sample of the StackOverflow dataset.

# Download the first 10,000 StackOverflow posts.
curl -O https://quickwit-datasets-public.s3.amazonaws.com/stackoverflow.posts.transformed-10000.json

You can ingest data with either the CLI or cURL. The CLI is more convenient for ingesting several GBs of data because Quickwit may return 429 (Too Many Requests) responses when the ingest queue is full, and the CLI automatically retries ingestion in that case.
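If you ingest with raw cURL instead, you need to handle 429 responses yourself. The sketch below illustrates the retry-on-429 idea; it is not the CLI's actual implementation, and the function name and backoff schedule are ours:

```shell
# Illustrative sketch: retry an ingest request while Quickwit answers 429
# (ingest queue full), backing off a little longer after each attempt.
ingest_with_retry() {
  local file=$1 max_attempts=5 attempt=1 status
  while [ "$attempt" -le "$max_attempts" ]; do
    status=$(curl -s -o /dev/null -w '%{http_code}' -XPOST \
      -H 'Content-Type: application/json' \
      'http://localhost:7280/api/v1/stackoverflow-schemaless/ingest' \
      --data-binary "@$file")
    if [ "$status" != "429" ]; then
      echo "$status"          # final HTTP status (200 on success)
      return 0
    fi
    sleep "$attempt"          # simple linear backoff before retrying
    attempt=$((attempt + 1))
  done
  echo "queue still full after $max_attempts attempts" >&2
  return 1
}
```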

# Ingest the first 10,000 StackOverflow posts with the CLI...
./quickwit index ingest --index stackoverflow-schemaless --input-path stackoverflow.posts.transformed-10000.json --force

# OR with cURL.
curl -XPOST -H 'Content-Type: application/json' 'http://localhost:7280/api/v1/stackoverflow-schemaless/ingest?commit=force' --data-binary @stackoverflow.posts.transformed-10000.json

Execute search queries

You can now search the index.

curl 'http://localhost:7280/api/v1/stackoverflow-schemaless/search?query=body:python'
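When you only want the number of matching documents, you can ask for zero hits back with `max_hits=0` and read the `num_hits` field of the response. A minimal sketch (the helper name and the sed extraction are ours; `jq` would be cleaner if you have it installed):

```shell
# Hypothetical helper: print only the hit count for a query.
# max_hits=0 asks Quickwit to skip returning documents entirely.
hit_count() {
  curl -s "http://localhost:7280/api/v1/stackoverflow-schemaless/search?query=$1&max_hits=0" |
    sed -n 's/.*"num_hits":\([0-9]*\).*/\1/p'
}
```

For example, `hit_count 'body:python'` prints how many posts mention python in their body.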

Tear down resources (optional)

curl -XDELETE 'http://localhost:7280/api/v1/indexes/stackoverflow-schemaless'

This concludes the tutorial. You can now move on to the next tutorial to learn how to ingest data from Kafka.