Convert NDJSON (Newline Delimited JSON / JSONL) to a JSON array, or a JSON array back to NDJSON. Free, private, runs entirely in your browser.
NDJSON (Newline Delimited JSON), also called JSONL (JSON Lines), is a text format where each line is a complete, valid JSON object. Unlike a regular JSON array, there is no wrapping [ ] and no commas between records — just one JSON object per line.
{"id": 1, "name": "Alice", "role": "admin"}
{"id": 2, "name": "Bob", "role": "user"}
{"id": 3, "name": "Carol", "role": "user"}
[
{"id": 1, "name": "Alice", "role": "admin"},
{"id": 2, "name": "Bob", "role": "user"},
{"id": 3, "name": "Carol", "role": "user"}
]
| Use Case | Best Format | Reason |
|---|---|---|
| API responses | JSON array | Standard, easy to parse |
| Log files | NDJSON | Append one line at a time |
| BigQuery / data exports | NDJSON | Required format for bulk load |
| Elasticsearch bulk API | NDJSON | Required by the API spec |
| ML training datasets | NDJSON / JSONL | Stream line by line, memory efficient |
| Config files | JSON | Easier to read and edit |
| OpenAI fine-tuning | JSONL | Required format for training files |
To go the other direction (JSON array to NDJSON), select "JSON Array → NDJSON" mode and paste your JSON array.
NDJSON files use the .ndjson or .jsonl extension. Both are identical in format. Google BigQuery documentation uses .ndjson, while the JSON Lines specification uses .jsonl. Most tools accept both extensions.
Read the file line by line and parse each line as JSON, skipping blank lines:

```python
import json

with open('data.ndjson') as f:
    records = [json.loads(line) for line in f if line.strip()]
```

```javascript
const fs = require('fs');

const records = fs.readFileSync('data.ndjson', 'utf8')
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));
```
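For files too large to hold in memory, the list comprehension above can be swapped for a generator that yields one record at a time, so memory use stays constant regardless of file size. A minimal sketch (`iter_ndjson` is an illustrative name, not a standard function):

```python
import json

def iter_ndjson(path):
    """Yield one parsed record per non-empty line (constant memory)."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)
```

Iterate with `for record in iter_ndjson('data.ndjson'):` to process records one at a time instead of loading the whole file.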
BigQuery processes data in parallel across many workers. NDJSON allows each worker to read a chunk of lines independently without needing to parse the entire file structure first. A JSON array requires knowing where the array starts and ends, which prevents efficient parallel processing of large files.
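A toy illustration of that property, using only the standard library: any contiguous slice of lines is parseable on its own, so each worker can take a slice without coordinating with the others.

```python
import json

# 100 records of line-delimited JSON
ndjson = '\n'.join(json.dumps({"id": i}) for i in range(100))
lines = ndjson.split('\n')

# Split into 4 chunks; each chunk is independently parseable,
# which is what lets real systems hand chunks to separate workers.
chunks = [lines[i:i + 25] for i in range(0, len(lines), 25)]
records = [json.loads(line) for chunk in chunks for line in chunk]
assert len(records) == 100 and records[42] == {"id": 42}
```

With a JSON array, no slice of the file is valid JSON on its own, so the whole document must be parsed before any record can be handed out.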
Free, instant, 100% private. No account needed.
NDJSON (Newline Delimited JSON), also known as JSONL (JSON Lines), is a plain-text data format where each line contains exactly one complete, self-contained JSON value — typically a JSON object. The file has no wrapping structure: no outer array brackets, no commas between records, and no shared root element. Every line is independently parseable.
This design makes NDJSON the preferred format for streaming, logging, and large dataset processing because:

- Records can be streamed and processed one line at a time, with constant memory per record.
- New records are appended by simply writing another line.
- A corrupted or truncated line affects only that one record, not the whole file.
- Workers can split the file by lines and process chunks in parallel.
NDJSON is defined at ndjson.org and uses the file extension .ndjson. The equivalent JSON Lines specification at jsonlines.org uses the extension .jsonl. The two are completely interchangeable: the format is identical, only the name and extension differ.
Converting NDJSON to a JSON array is straightforward: read each non-empty line, parse it as a JSON object, and wrap all the objects inside a [ ] array. The resulting JSON array is the standard format accepted by most REST APIs, JavaScript code, and JSON processing tools.
NDJSON input (each line is a separate JSON object):

```
{"id": 1, "event": "login", "user": "alice", "ts": "2024-03-15T09:00:00Z"}
{"id": 2, "event": "view", "user": "alice", "ts": "2024-03-15T09:01:32Z"}
{"id": 3, "event": "purchase", "user": "bob", "ts": "2024-03-15T09:04:11Z"}
{"id": 4, "event": "logout", "user": "alice", "ts": "2024-03-15T09:07:55Z"}
```

JSON array output (standard JSON, wraps all objects in `[ ]`):

```json
[
  {"id": 1, "event": "login", "user": "alice", "ts": "2024-03-15T09:00:00Z"},
  {"id": 2, "event": "view", "user": "alice", "ts": "2024-03-15T09:01:32Z"},
  {"id": 3, "event": "purchase", "user": "bob", "ts": "2024-03-15T09:04:11Z"},
  {"id": 4, "event": "logout", "user": "alice", "ts": "2024-03-15T09:07:55Z"}
]
```
```javascript
// JavaScript / Node.js
const ndjsonToArray = (ndjson) =>
  ndjson.split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));
```

```python
# Python
import json

def ndjson_to_array(text):
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```

```bash
# Bash (using jq): -s "slurps" all inputs into a single array
jq -s '.' input.ndjson > output.json
```

```bash
# Command line (Node.js one-liner)
node -e "const d=require('fs').readFileSync('data.ndjson','utf8');
console.log(JSON.stringify(d.split('\n').filter(l=>l.trim()).map(l=>JSON.parse(l)),null,2));"
```
```javascript
// JavaScript
const arrayToNdjson = (arr) => arr.map(obj => JSON.stringify(obj)).join('\n');
```

```python
# Python
import json

def array_to_ndjson(arr):
    return '\n'.join(json.dumps(obj) for obj in arr)
```

```bash
# Bash (using jq): -c prints one compact object per line
jq -c '.[]' input.json > output.ndjson
```
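Conversion in either direction is lossless, which a quick round-trip check (plain standard-library Python, mirroring the snippets above) confirms:

```python
import json

records = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

# array -> NDJSON -> array
ndjson = '\n'.join(json.dumps(obj) for obj in records)
back = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
assert back == records  # nothing lost either way
```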
Choosing between NDJSON and a JSON array depends on how you plan to read, write, and process the data. Here is a detailed comparison of both formats:
| Feature | NDJSON / JSONL | JSON Array |
|---|---|---|
| Structure | One JSON object per line, no wrapper | All objects inside [ ] with commas |
| File size | Nearly identical (newlines take the place of commas) | Nearly identical (adds only brackets and commas) |
| Streaming | Process one line at a time; O(1) memory per record | Must parse entire document before reading any record |
| Appending records | Trivial: append a new line to the file | Must parse and rewrite to insert the closing ] |
| Partial failure recovery | Only the incomplete line is lost | A missing ] renders the entire file unparseable |
| Readability | Each record is compact; easy to scan with grep/head | Pretty-printed JSON is easier to read in a text editor |
| API compatibility | Required by BigQuery, Elasticsearch, OpenAI fine-tuning | Standard format for REST API responses |
| Parallel processing | Split by lines; workers process independent chunks | Requires index-based splitting after parsing |
| File extensions | .ndjson or .jsonl | .json |
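The appending row is worth seeing concretely. With NDJSON, adding a record is a single file append; with a JSON array, the whole file must be parsed and rewritten to move the closing `]`. A minimal sketch (`append_record` is an illustrative helper, not a library function):

```python
import json

def append_record(path, obj):
    # One write, no parsing: existing records are never touched
    with open(path, 'a') as f:
        f.write(json.dumps(obj) + '\n')
```

By contrast, safely appending to a JSON array file requires `json.load`, `list.append`, and then `json.dump` of the entire document.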