Resources API Reference¶
Resources provide access to different parts of the Honeycomb API. All resources have both async and sync variants of each method.
Datasets¶
DatasetsResource ¶
Bases: BaseResource
Resource for managing Honeycomb datasets.
Datasets are containers for your telemetry data in Honeycomb.
Example (async):

>>> async with HoneycombClient(api_key="...") as client:
...     datasets = await client.datasets.list()
...     dataset = await client.datasets.get(slug="my-dataset")

Example (sync):

>>> with HoneycombClient(api_key="...", sync=True) as client:
...     datasets = client.datasets.list()
list_async async ¶

List all datasets (async).
get_async async ¶

Get a specific dataset (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Dataset object. |
create_async async ¶

Create a new dataset (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | DatasetCreate | Dataset configuration. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Created Dataset object. |
update_async async ¶

Update an existing dataset (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |
| dataset | DatasetCreate \| DatasetUpdate | Updated dataset configuration. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Updated Dataset object. |
delete_async async ¶

Delete a dataset (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |
list ¶

List all datasets.
get ¶

Get a specific dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Dataset object. |
create ¶

Create a new dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | DatasetCreate | Dataset configuration. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Created Dataset object. |
update ¶

Update an existing dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |
| dataset | DatasetCreate \| DatasetUpdate | Updated dataset configuration. | required |

Returns:

| Type | Description |
|---|---|
| Dataset | Updated Dataset object. |
delete ¶

Delete a dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slug | str | Dataset slug. | required |
Triggers¶
TriggersResource ¶
Bases: BaseResource
Resource for managing Honeycomb triggers.
Triggers allow you to define alert conditions on your data and receive notifications when those conditions are met.
Example (async):

>>> async with HoneycombClient(api_key="...") as client:
...     triggers = await client.triggers.list(dataset="my-dataset")
...     trigger = await client.triggers.get(dataset="my-dataset", trigger_id="abc123")

Example (sync):

>>> with HoneycombClient(api_key="...", sync=True) as client:
...     triggers = client.triggers.list(dataset="my-dataset")
list_async async ¶

List all triggers in a dataset (async).
get_async async ¶

Get a specific trigger (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Trigger object. |
create_async async ¶

Create a new trigger (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger | TriggerCreate | Trigger configuration. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Created Trigger object. |
update_async async ¶

Update an existing trigger (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |
| trigger | TriggerCreate | Updated trigger configuration. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Updated Trigger object. |
delete_async async ¶

Delete a trigger (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |
list ¶

List all triggers in a dataset.
get ¶

Get a specific trigger.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Trigger object. |
create ¶

Create a new trigger.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger | TriggerCreate | Trigger configuration. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Created Trigger object. |
update ¶

Update an existing trigger.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |
| trigger | TriggerCreate | Updated trigger configuration. | required |

Returns:

| Type | Description |
|---|---|
| Trigger | Updated Trigger object. |
delete ¶

Delete a trigger.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| trigger_id | str | Trigger ID. | required |
SLOs¶
SLOsResource ¶
Bases: BaseResource
Resource for managing Honeycomb SLOs (Service Level Objectives).
SLOs allow you to define and track service level objectives based on your data.
Example (async):

>>> async with HoneycombClient(api_key="...") as client:
...     slos = await client.slos.list(dataset="my-dataset")
...     slo = await client.slos.get(dataset="my-dataset", slo_id="abc123")

Example (sync):

>>> with HoneycombClient(api_key="...", sync=True) as client:
...     slos = client.slos.list(dataset="my-dataset")
list_async async ¶

List all SLOs in a dataset (async).
get_async async ¶

Get a specific SLO (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| slo_id | str | SLO ID. | required |

Returns:

| Type | Description |
|---|---|
| SLO | SLO object. |
create_async async ¶

Create a new SLO (async).
update_async async ¶

Update an existing SLO (async).
delete_async async ¶

Delete an SLO (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| slo_id | str | SLO ID. | required |
list ¶

List all SLOs in a dataset.
get ¶

Get a specific SLO.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| slo_id | str | SLO ID. | required |

Returns:

| Type | Description |
|---|---|
| SLO | SLO object. |
create ¶

Create a new SLO.
update ¶

Update an existing SLO.
delete ¶

Delete an SLO.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | Dataset slug. | required |
| slo_id | str | SLO ID. | required |
Boards¶
BoardsResource ¶
Bases: BaseResource
Resource for managing Honeycomb boards.
Boards are dashboards that display visualizations of your data.
Example (async):

>>> async with HoneycombClient(api_key="...") as client:
...     boards = await client.boards.list()
...     board = await client.boards.get(board_id="abc123")

Example (sync):

>>> with HoneycombClient(api_key="...", sync=True) as client:
...     boards = client.boards.list()
list_async async ¶

List all boards (async).
get_async async ¶

Get a specific board (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |

Returns:

| Type | Description |
|---|---|
| Board | Board object. |
create_async async ¶

Create a new board (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board | BoardCreate | Board configuration. | required |

Returns:

| Type | Description |
|---|---|
| Board | Created Board object. |
update_async async ¶

Update an existing board (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |
| board | BoardCreate | Updated board configuration. | required |

Returns:

| Type | Description |
|---|---|
| Board | Updated Board object. |
delete_async async ¶

Delete a board (async).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |
list ¶

List all boards.
get ¶

Get a specific board.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |

Returns:

| Type | Description |
|---|---|
| Board | Board object. |
create ¶

Create a new board.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board | BoardCreate | Board configuration. | required |

Returns:

| Type | Description |
|---|---|
| Board | Created Board object. |
update ¶

Update an existing board.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |
| board | BoardCreate | Updated board configuration. | required |

Returns:

| Type | Description |
|---|---|
| Board | Updated Board object. |
delete ¶

Delete a board.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| board_id | str | Board ID. | required |
Queries¶
QueriesResource ¶
Bases: BaseResource
Resource for managing Honeycomb queries.
Queries define how to analyze your data. They can be used with triggers, SLOs, or run directly to get results.
Example (async):

>>> async with HoneycombClient(api_key="...") as client:
...     query = await client.queries.create_async(
...         QueryBuilder()
...         .dataset("my-dataset")
...         .last_1_hour()
...         .count()
...     )
...     query_obj = await client.queries.get_async(
...         dataset="my-dataset",
...         query_id=query.id
...     )

Example (sync):

>>> with HoneycombClient(api_key="...", sync=True) as client:
...     query = client.queries.create(
...         QueryBuilder().dataset("my-dataset").count()
...     )
create_async async ¶
Create a new query (async).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| spec | QuerySpec \| QueryBuilder | Query specification (QueryBuilder or QuerySpec). | required |
| dataset | str \| None | Dataset slug. Required for QuerySpec, extracted from QueryBuilder. | None |

Returns:

| Type | Description |
|---|---|
| Query | Created Query object. |

Raises:

| Type | Description |
|---|---|
| HoneycombValidationError | If the query spec is invalid. |
| HoneycombNotFoundError | If the dataset doesn't exist. |
| ValueError | If dataset parameter is misused. |
Example (QueryBuilder - recommended):

>>> query = await client.queries.create_async(
...     QueryBuilder()
...     .dataset("my-dataset")
...     .last_1_hour()
...     .count()
... )

Example (QuerySpec - advanced):

>>> query = await client.queries.create_async(
...     QuerySpec(time_range=3600, calculations=[{"op": "COUNT"}]),
...     dataset="my-dataset"
... )
get_async async ¶
Get a specific query (async).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Query ID. | required |

Returns:

| Type | Description |
|---|---|
| Query | Query object. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query doesn't exist. |
create ¶
Create a new query.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| spec | QuerySpec \| QueryBuilder | Query specification (QueryBuilder or QuerySpec). | required |
| dataset | str \| None | Dataset slug. Required for QuerySpec, extracted from QueryBuilder. | None |

Returns:

| Type | Description |
|---|---|
| Query | Created Query object. |

Raises:

| Type | Description |
|---|---|
| HoneycombValidationError | If the query spec is invalid. |
| HoneycombNotFoundError | If the dataset doesn't exist. |
| ValueError | If dataset parameter is misused. |
get ¶
Get a specific query.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Query ID. | required |

Returns:

| Type | Description |
|---|---|
| Query | Query object. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query doesn't exist. |
Query Results¶
QueryResultsResource ¶
Bases: BaseResource
Resource for running queries and getting results.
Query results represent the execution of a query against a dataset. You must first create a saved query, then run it to get results.
Note
Query Results API requires Enterprise plan.
Example (async - run saved query):

>>> async with HoneycombClient(api_key="...") as client:
...     # First create a saved query
...     query = await client.queries.create_async(
...         dataset="my-dataset",
...         spec=QuerySpec(time_range=3600, calculations=[...])
...     )
...     # Then run it and poll for results
...     result = await client.query_results.run_async(
...         dataset="my-dataset",
...         query_id=query.id,
...         poll_interval=1.0,
...         timeout=60.0,
...     )
...     print(f"Found {len(result.data.rows)} rows")

Example (manual polling):

>>> async with HoneycombClient(api_key="...") as client:
...     # Start execution of a saved query
...     query_result_id = await client.query_results.create_async(
...         dataset="my-dataset",
...         query_id=query.id
...     )
...     # Poll for completion
...     result = await client.query_results.get_async(
...         dataset="my-dataset",
...         query_result_id=query_result_id
...     )
create_async async ¶
create_async(
dataset: str,
query_id: str,
disable_series: bool = True,
limit: int | None = None,
) -> str
Create a query result (start query execution) (async).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Saved query ID (from queries.create_async). | required |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |

Returns:

| Type | Description |
|---|---|
| str | Query result ID for polling. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query doesn't exist. |
| HoneycombValidationError | If the query spec is invalid. |
Note
Query Results API requires Enterprise plan.
get_async async ¶
Get query result status/data (async).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_result_id | str | Query result ID. | required |

Returns:

| Type | Description |
|---|---|
| QueryResult | QueryResult with data if query is complete. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query result doesn't exist. |
run_async async ¶
run_async(
dataset: str,
query_id: str,
disable_series: bool = True,
limit: int | None = None,
poll_interval: float = 1.0,
timeout: float = 60.0,
) -> QueryResult
Run a saved query and poll for results (async).
Convenience method that creates a query result and polls until complete.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Saved query ID (from queries.create_async). | required |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |
| poll_interval | float | Seconds between poll attempts (default: 1.0). | 1.0 |
| timeout | float | Maximum seconds to wait for results (default: 60.0). | 60.0 |

Returns:

| Type | Description |
|---|---|
| QueryResult | QueryResult with completed data (up to 10K rows if disable_series=True). |

Raises:

| Type | Description |
|---|---|
| HoneycombTimeoutError | If query doesn't complete within timeout. |
| HoneycombNotFoundError | If the query doesn't exist. |
Note
For > 10K results, use run_all_async() with sort-based pagination. Query Results API requires Enterprise plan.
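The create-then-poll flow that run_async wraps can be sketched generically. This is a stub of the pattern, not the library's implementation: the real method issues create_async and get_async calls against the API, whereas here `create` and `get` are caller-supplied coroutines and the completion flag is assumed to be a `complete` field:

```python
import asyncio
import time


async def poll_until_complete(create, get, poll_interval: float = 1.0,
                              timeout: float = 60.0):
    """Start an execution via `create`, then poll `get` until the result
    reports completion or the timeout elapses (raising TimeoutError)."""
    result_id = await create()
    deadline = time.monotonic() + timeout
    while True:
        result = await get(result_id)
        if result.get("complete"):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"query {result_id} did not complete in {timeout}s")
        await asyncio.sleep(poll_interval)
```

Checking completion before checking the deadline means a result that finishes exactly at the timeout is still returned rather than discarded.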
create_and_run_async async ¶
create_and_run_async(
spec: QuerySpec | QueryBuilder,
*,
dataset: str | None = None,
disable_series: bool = True,
limit: int | None = None,
poll_interval: float = 1.0,
timeout: float = 60.0
) -> tuple[Query, QueryResult]
Create a saved query and run it in one call (async).
Convenience method that:

1. Creates a permanent saved query
2. Executes it and polls for results
3. Returns both the saved query and results
This is useful when you want to save a query for future use AND get immediate results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| spec | QuerySpec \| QueryBuilder | Query specification (QueryBuilder or QuerySpec). | required |
| dataset | str \| None | Dataset slug. Required for QuerySpec, extracted from QueryBuilder. | None |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |
| poll_interval | float | Seconds between poll attempts (default: 1.0). | 1.0 |
| timeout | float | Maximum seconds to wait for results (default: 60.0). | 60.0 |

Returns:

| Type | Description |
|---|---|
| tuple[Query, QueryResult] | Tuple of (Query, QueryResult) - the saved query and execution results. |

Raises:

| Type | Description |
|---|---|
| HoneycombTimeoutError | If query doesn't complete within timeout. |
| HoneycombValidationError | If the query spec is invalid. |
| ValueError | If dataset parameter is misused. |
Example (QueryBuilder - recommended):

>>> query, result = await client.query_results.create_and_run_async(
...     QueryBuilder()
...     .dataset("my-dataset")
...     .last_1_hour()
...     .count(),
... )

Example (QuerySpec - advanced):

>>> query, result = await client.query_results.create_and_run_async(
...     QuerySpec(time_range=3600, calculations=[{"op": "COUNT"}]),
...     dataset="my-dataset"
... )
create ¶
create(
dataset: str,
query_id: str,
disable_series: bool = True,
limit: int | None = None,
) -> str
Create a query result (start query execution).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Saved query ID (from queries.create). | required |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |

Returns:

| Type | Description |
|---|---|
| str | Query result ID for polling. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query doesn't exist. |
| HoneycombValidationError | If the query spec is invalid. |
Note
Query Results API requires Enterprise plan.
get ¶
Get query result status/data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_result_id | str | Query result ID. | required |

Returns:

| Type | Description |
|---|---|
| QueryResult | QueryResult with data if query is complete. |

Raises:

| Type | Description |
|---|---|
| HoneycombNotFoundError | If the query result doesn't exist. |
run ¶
run(
dataset: str,
query_id: str,
disable_series: bool = True,
limit: int | None = None,
poll_interval: float = 1.0,
timeout: float = 60.0,
) -> QueryResult
Run a saved query and poll for results.
Convenience method that creates a query result and polls until complete.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | str | The dataset slug. | required |
| query_id | str | Saved query ID (from queries.create). | required |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |
| poll_interval | float | Seconds between poll attempts (default: 1.0). | 1.0 |
| timeout | float | Maximum seconds to wait for results (default: 60.0). | 60.0 |

Returns:

| Type | Description |
|---|---|
| QueryResult | QueryResult with completed data (up to 10K rows if disable_series=True). |

Raises:

| Type | Description |
|---|---|
| HoneycombTimeoutError | If query doesn't complete within timeout. |
| HoneycombNotFoundError | If the query doesn't exist. |
Note
Query Results API requires Enterprise plan.
create_and_run ¶
create_and_run(
spec: QuerySpec | QueryBuilder,
*,
dataset: str | None = None,
disable_series: bool = True,
limit: int | None = None,
poll_interval: float = 1.0,
timeout: float = 60.0
) -> tuple[Query, QueryResult]
Create a saved query and run it in one call.
Convenience method that:

1. Creates a permanent saved query
2. Executes it and polls for results
3. Returns both the saved query and results
This is useful when you want to save a query for future use AND get immediate results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| spec | QuerySpec \| QueryBuilder | Query specification (QueryBuilder or QuerySpec). | required |
| dataset | str \| None | Dataset slug. Required for QuerySpec, extracted from QueryBuilder. | None |
| disable_series | bool | If True, disable timeseries data and allow up to 10K results (default: True for better performance). | True |
| limit | int \| None | Override result limit (max 10,000 when disable_series=True, 1,000 otherwise). Defaults to 10,000 when disable_series=True, 1,000 when False. | None |
| poll_interval | float | Seconds between poll attempts (default: 1.0). | 1.0 |
| timeout | float | Maximum seconds to wait for results (default: 60.0). | 60.0 |

Returns:

| Type | Description |
|---|---|
| tuple[Query, QueryResult] | Tuple of (Query, QueryResult) - the saved query and execution results. |

Raises:

| Type | Description |
|---|---|
| HoneycombTimeoutError | If query doesn't complete within timeout. |
| HoneycombValidationError | If the query spec is invalid. |
| ValueError | If dataset parameter is misused. |
Example (QueryBuilder - recommended):

>>> query, result = client.query_results.create_and_run(
...     QueryBuilder()
...     .dataset("my-dataset")
...     .last_1_hour()
...     .count(),
... )

Example (QuerySpec - advanced):

>>> query, result = client.query_results.create_and_run(
...     QuerySpec(time_range=3600, calculations=[{"op": "COUNT"}]),
...     dataset="my-dataset"
... )
Base Resource¶
BaseResource ¶
Base class for API resource clients.
Provides common functionality for making API requests and parsing responses into Pydantic models.