The Type Parameter

Change the output format of your export endpoint with the type parameter.

The type parameter lets you change the output format of your export endpoint on the fly. Append it to your URL:

https://api.csvgetter.com/abc123?type=json_records

If you don't specify a type, the endpoint uses its default format (CSV or JSON, depending on how you configured it).


Format Comparison

| type Value       | Format          | MIME Type       | Best For                                           |
| ---------------- | --------------- | --------------- | -------------------------------------------------- |
| (none/default)   | CSV             | text/csv        | Spreadsheets, data import, universal compatibility |
| json_records     | JSON (Records)  | text/json       | APIs, JavaScript apps, most JSON use cases         |
| json_split       | JSON (Split)    | text/json       | Data science tools, columnar processing            |
| json_index       | JSON (Index)    | text/json       | Row-keyed lookups                                  |
| json_columns     | JSON (Columns)  | text/json       | Column-oriented analysis                           |
| json_values      | JSON (Values)   | text/json       | Lightweight data transfer (no headers in body)     |
| json_table       | JSON (Table)    | text/json       | Schema-aware consumers, typed data                 |
| xml              | XML             | application/xml | Enterprise integrations, SOAP services             |
| html_table       | HTML Table      | text/html       | Embedding in web pages, simple viewing             |
| dynamic_table    | Dynamic Table   | text/html       | Interactive viewing with search and pagination     |
| excel_web_query  | Excel Web Query | text/html       | Excel "Data from Web" feature                      |


Sample Output for Each Format

The examples below use sample data with three columns: name, email, and age.

CSV (Default)

When to use: Universal format. Works with Excel, Google Sheets, pandas, any tool that reads CSV.
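A sketch of the default CSV response. The rows below are hypothetical; a real response mirrors your source data:

```csv
name,email,age
Alice,alice@example.com,30
Bob,bob@example.com,25
```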


json_records

When to use: The most common JSON format. Each row is an object. Ideal for JavaScript fetch(), REST APIs, and most programming use cases.
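A minimal sketch of consuming a json_records response with Python's standard library. The response body shown is hypothetical (made-up rows); a real response mirrors your source data:

```python
import json

# Hypothetical json_records response body: an array with one object per row.
# The rows are illustrative only.
body = """
[
  {"name": "Alice", "email": "alice@example.com", "age": 30},
  {"name": "Bob", "email": "bob@example.com", "age": 25}
]
"""

rows = json.loads(body)                 # list of dicts, one per row
names = [row["name"] for row in rows]
print(names)                            # ['Alice', 'Bob']
```

The same array shape is what a browser `fetch()` call would receive as its parsed JSON.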


json_split

When to use: Separates column names from data. Useful when you need to process headers independently, or for data science tools like pandas (pd.read_json(url, orient='split')).
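An illustrative shape for the split format, with hypothetical rows. The keys shown follow the common "split" orient convention (the one pandas' orient='split' expects); treat the exact key set as an assumption:

```json
{
  "columns": ["name", "email", "age"],
  "index": [0, 1],
  "data": [
    ["Alice", "alice@example.com", 30],
    ["Bob", "bob@example.com", 25]
  ]
}
```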


json_index

When to use: When you need to look up rows by their index. Each row is keyed by its row number.
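An illustrative shape for the index format, with hypothetical rows. Each top-level key is a row index (shown here as strings, following JSON object-key rules):

```json
{
  "0": {"name": "Alice", "email": "alice@example.com", "age": 30},
  "1": {"name": "Bob", "email": "bob@example.com", "age": 25}
}
```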


json_columns

When to use: When you want to access all values for a specific column. Good for column-by-column processing.
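An illustrative shape for the columns format, with hypothetical rows. Each top-level key is a column name, mapping row indices to values:

```json
{
  "name": {"0": "Alice", "1": "Bob"},
  "email": {"0": "alice@example.com", "1": "bob@example.com"},
  "age": {"0": 30, "1": 25}
}
```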


json_values

When to use: Smallest JSON payload — no column names, no indices. Use when you already know the column order and want minimal data transfer.
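An illustrative shape for the values format, with hypothetical rows. Just an array of row arrays, in the source column order:

```json
[
  ["Alice", "alice@example.com", 30],
  ["Bob", "bob@example.com", 25]
]
```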


json_table

When to use: Includes a schema with data types. Good for consumers that need to know column types (integer vs string) without guessing. CSV Getter automatically detects numeric columns.
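An illustrative shape for the table format, with hypothetical rows. The structure shown follows the common "table" orient convention (a schema object plus a data array); the exact schema fields are an assumption:

```json
{
  "schema": {
    "fields": [
      {"name": "name", "type": "string"},
      {"name": "email", "type": "string"},
      {"name": "age", "type": "integer"}
    ]
  },
  "data": [
    {"name": "Alice", "email": "alice@example.com", "age": 30},
    {"name": "Bob", "email": "bob@example.com", "age": 25}
  ]
}
```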


xml

When to use: Enterprise integrations, SOAP services, or any system that requires XML input.

Note: Column names with special characters (spaces, punctuation) are automatically converted to underscores in XML output.
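An illustrative sketch with hypothetical rows. The element names used for the root and row wrappers here are assumptions, not confirmed names from the API:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<data>
  <row>
    <name>Alice</name>
    <email>alice@example.com</email>
    <age>30</age>
  </row>
  <row>
    <name>Bob</name>
    <email>bob@example.com</email>
    <age>25</age>
  </row>
</data>
```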


html_table

Renders a static HTML page with the data in a <table> element.

When to use: Quick data viewing in a browser, embedding in web pages via iframe, or simple reporting.
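A simplified sketch of the table markup, with hypothetical rows; the page's surrounding HTML is omitted:

```html
<table>
  <thead>
    <tr><th>name</th><th>email</th><th>age</th></tr>
  </thead>
  <tbody>
    <tr><td>Alice</td><td>alice@example.com</td><td>30</td></tr>
    <tr><td>Bob</td><td>bob@example.com</td><td>25</td></tr>
  </tbody>
</table>
```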


dynamic_table

Renders an interactive HTML page with:

  • Searchable columns

  • Sortable headers

  • Pagination

When to use: Sharing data with non-technical users who want to browse and search interactively.


excel_web_query

Renders an HTML table optimized for Excel's "Data from Web" import feature.

When to use: Specifically for pulling data into Excel using Data > From Web. The HTML is structured so Excel recognizes the table correctly.


Using nest_json with JSON Formats

The nest_json parameter wraps any JSON output in a named top-level key. Without nest_json, a json_records response is a bare array; with nest_json=results, the same array is wrapped in a {"results": ...} object.

This works with all json_* formats and is useful when your consuming application expects a specific JSON structure.
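For example, assuming json_records output and a hypothetical single row:

```
Without nest_json:
[
  {"name": "Alice", "email": "alice@example.com", "age": 30}
]

With nest_json=results:
{
  "results": [
    {"name": "Alice", "email": "alice@example.com", "age": 30}
  ]
}
```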


Choosing the Right Format

| Use Case                        | Recommended Format       |
| ------------------------------- | ------------------------ |
| Import into Excel               | CSV or excel_web_query   |
| Import into Google Sheets       | CSV (via IMPORTDATA)     |
| JavaScript / web app            | json_records             |
| Python / pandas                 | CSV or json_split        |
| Enterprise / legacy system      | xml                      |
| Share with non-technical users  | dynamic_table            |
| Embed in a web page             | html_table               |
| Minimal payload size            | json_values              |
| Need schema/type info           | json_table               |
| API that expects nested JSON    | json_records + nest_json |
