How to convert CSV to NDJSON for BigQuery / Snowflake
- Step 1: Export the source table as CSV — Run your SELECT or platform export to pull the records into a UTF-8 CSV.
- Step 2: Convert to NDJSON — Drop the CSV here and pick 'NDJSON / JSON Lines' as the output format.
- Step 3: Upload to your storage bucket — Move the .ndjson file to GCS, S3, or a Snowflake stage using gsutil, aws s3 cp, or PUT.
- Step 4: Run the load job — Use bq load --source_format=NEWLINE_DELIMITED_JSON, or COPY INTO ... FILE_FORMAT = (TYPE = 'JSON').
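If you'd rather script the conversion than use the in-browser tool, steps 1–2 can be sketched locally in a few lines of Python (assuming a UTF-8 CSV with a header row; all values come through as strings unless you cast them):

```python
import csv
import json

def csv_to_ndjson(csv_path: str, ndjson_path: str) -> int:
    """Convert a header-row CSV to NDJSON: one JSON object per line."""
    count = 0
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(ndjson_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            # ensure_ascii=False keeps non-ASCII text readable in the output
            dst.write(json.dumps(row, ensure_ascii=False) + "\n")
            count += 1
    return count
```

From there the output file goes to your bucket or stage (step 3) and into the load job (step 4) exactly as described above.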
Frequently asked questions
What is the difference between NDJSON and a JSON array?
NDJSON is one JSON object per line with no surrounding brackets and no commas between records. Most warehouse loaders require this format because it streams cleanly without parsing the whole file.
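The contrast is easy to see by serializing the same two records both ways (a small illustrative sketch):

```python
import json

records = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

# JSON array: one document with brackets and commas; a loader must
# parse the whole thing before it can see a single record.
as_array = json.dumps(records)

# NDJSON: one object per line, no brackets, no commas between records;
# a loader can stream it line by line.
as_ndjson = "\n".join(json.dumps(r) for r in records)

print(as_array)
print(as_ndjson)
```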
Will type inference break STRING columns in my schema?
If your warehouse schema declares the column STRING, disable type inference so numeric-looking IDs stay as quoted strings and you avoid type-mismatch errors at load time.
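When generating the NDJSON yourself, the safe pattern is to emit such columns as JSON strings explicitly, then load with an explicit schema rather than autodetection. A minimal sketch (the column names here are hypothetical examples):

```python
import json

# Columns the warehouse schema declares as STRING, even though
# their values can look numeric.
STRING_COLUMNS = {"user_id", "zip_code"}

def coerce_strings(row: dict) -> dict:
    """Force declared STRING columns to JSON strings so '00501' stays '00501'."""
    return {k: (str(v) if k in STRING_COLUMNS else v) for k, v in row.items()}

row = {"user_id": 7, "zip_code": "00501", "amount": 12.5}
print(json.dumps(coerce_strings(row)))
# {"user_id": "7", "zip_code": "00501", "amount": 12.5}
```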
Can I use this for partitioned table loads?
Yes. NDJSON output works with date- and integer-partitioned tables; the partition column simply needs to appear in every record.
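A quick pre-load sanity check for that requirement might scan the file for records missing the partition column (illustrative sketch; `event_date` is a hypothetical partition column):

```python
import json

def missing_partition_lines(ndjson_text: str, partition_col: str) -> list:
    """Return 1-based line numbers of records lacking the partition column."""
    bad = []
    for i, line in enumerate(ndjson_text.splitlines(), start=1):
        if partition_col not in json.loads(line):
            bad.append(i)
    return bad

sample = '{"event_date": "2024-05-01", "id": 1}\n{"id": 2}'
print(missing_partition_lines(sample, "event_date"))  # [2]
```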
Privacy first
Conversion runs locally in your browser. No file is uploaded — only metadata counters are saved for signed-in dashboard stats.