The Pear API uses cursor-based pagination for all list endpoints. This approach is efficient for large datasets and provides consistent results even when data is being added or updated.
## How it works
Every paginated response includes these fields:
| Field | Type | Description |
|---|---|---|
| `data` | array | The current page of results. |
| `next_cursor` | string or null | An opaque cursor to pass for the next page. `null` when there are no more pages. |
| `has_more` | boolean | `true` if more results exist beyond the current page. |
| `total_count` | integer | Total number of results matching your query (across all pages). |
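As a sketch, the response envelope can be modeled with a `TypedDict` whose field names come from the table above (the event fields inside `data` are illustrative, not a full schema):

```python
from typing import Any, Optional, TypedDict

class PaginatedResponse(TypedDict):
    data: list[dict[str, Any]]  # current page of results
    next_cursor: Optional[str]  # opaque cursor, or None on the last page
    has_more: bool              # whether more pages exist
    total_count: int            # total results across all pages

# A first-page response fits this shape:
page: PaginatedResponse = {
    "data": [{"id": "pear_001", "title": "Super Bowl LIX 2025"}],
    "next_cursor": "eyJpZCI6InBlYXJfMDUwIn0=",
    "has_more": True,
    "total_count": 2847,
}
```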
## Basic usage

### First request

Make your initial request without a cursor:
```bash
curl "http://localhost:8000/api/events?category=Sports&limit=50" \
  -H "Authorization: Bearer mk_live_..."
```
```json
{
  "data": [
    { "id": "pear_001", "title": "Super Bowl LIX 2025", "...": "..." },
    { "id": "pear_002", "title": "NBA Finals 2025", "...": "..." }
  ],
  "next_cursor": "eyJpZCI6InBlYXJfMDUwIn0=",
  "has_more": true,
  "total_count": 2847
}
```
### Next page

Pass the `next_cursor` value from the previous response:
```bash
curl "http://localhost:8000/api/events?category=Sports&limit=50&cursor=eyJpZCI6InBlYXJfMDUwIn0=" \
  -H "Authorization: Bearer mk_live_..."
```
### Last page

When you reach the end, `has_more` is `false` and `next_cursor` is `null`:
```json
{
  "data": [
    { "id": "pear_2845", "title": "...", "...": "..." }
  ],
  "next_cursor": null,
  "has_more": false,
  "total_count": 2847
}
```
## Paginating through all results

Here is how to iterate through an entire dataset:
```python
import requests

headers = {"Authorization": "Bearer mk_live_..."}
all_events = []
cursor = None

while True:
    params = {"category": "Crypto", "limit": 200}
    if cursor:
        params["cursor"] = cursor

    response = requests.get(
        "http://localhost:8000/api/events",
        params=params,
        headers=headers,
    )
    response.raise_for_status()  # fail fast on HTTP errors
    page = response.json()

    all_events.extend(page["data"])
    print(f"Fetched {len(all_events)} of {page['total_count']} events")

    if not page["has_more"]:
        break
    cursor = page["next_cursor"]

print(f"Done. Total events: {len(all_events)}")
```
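If you would rather stream results than accumulate everything in memory, the same loop works as a generator. In this sketch the HTTP call is injected as `fetch_page` (a hypothetical callable returning the parsed JSON for one page), which also lets you exercise the paging logic without a live server:

```python
from typing import Any, Callable, Iterator

def iter_events(
    fetch_page: Callable[[dict[str, Any]], dict[str, Any]],
    filters: dict[str, Any],
    limit: int = 200,
) -> Iterator[dict[str, Any]]:
    """Yield results one at a time, following next_cursor until has_more is false."""
    cursor = None
    while True:
        params = {**filters, "limit": limit}
        if cursor:
            params["cursor"] = cursor
        page = fetch_page(params)
        yield from page["data"]
        if not page["has_more"]:
            return
        cursor = page["next_cursor"]
```

In production, `fetch_page` would wrap `requests.get(...).json()` exactly as in the loop above; the paging logic itself stays unchanged.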
## Page size

Use the `limit` parameter to control page size. The maximum is 200 items per page.
| Smaller pages (10-50) | Larger pages (100-200) |
|---|---|
| Faster individual responses | Fewer total requests |
| Good for UI pagination | Good for data export |
| Lower memory per request | More efficient for batch processing |
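The trade-off is easy to quantify: a full pass over the dataset takes `ceil(total_count / limit)` requests. For the 2,847-event example above:

```python
import math

total_count = 2847
for limit in (50, 200):
    requests_needed = math.ceil(total_count / limit)
    print(f"limit={limit}: {requests_needed} requests")
# limit=50 takes 57 requests; limit=200 takes 15.
```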
The following endpoints support cursor-based pagination:
| Endpoint | Default limit |
|---|---|
| `GET /api/events` | 50 |
| `GET /api/markets` | 50 |
| `GET /api/comparisons` | 50 |
## Reducing result sets with filters
Instead of paginating through everything, use filters to narrow results:
```bash
# Instead of paginating through 79,239 markets...
curl "http://localhost:8000/api/markets?limit=200" \
  -H "Authorization: Bearer mk_live_..."

# ...filter to just what you need
curl "http://localhost:8000/api/markets?venue=polymarket&status=active&min_volume=50000&limit=200" \
  -H "Authorization: Bearer mk_live_..."
```
Applying filters before pagination is always more efficient than fetching all results and filtering client-side.
## Important notes
- **Cursors are opaque:** Do not parse, modify, or construct cursor values. They are encoded strings that the server uses to identify the next page.
- **Cursors are stateless:** Each cursor encodes a position in the dataset. You can use a cursor multiple times to re-fetch the same page.
- **Filters must be consistent:** When paginating, pass the same filter parameters with each request. Changing filters mid-pagination will produce unexpected results.
- **Cursors may expire:** While cursors are generally long-lived, very old cursors may become invalid if the underlying data changes significantly.