API Recipe: Extract All Console Logs from All Web Pages
Written by Luiza Gircoveanu
Updated over 6 months ago

Overview

Given an Audit Run, you can extract all the console logs collected on each page.

Note: More information about each web page is available from the ObservePoint API, but this recipe only covers collected console logs on each page. Using this recipe as a starting point, you can expand your integration to include other information.

Step 1: Get your Audit ID and Run ID

You’ll need an Audit ID and Run ID to get started. There are multiple ways to do this, depending on your needs. The ObservePoint API is flexible, so you can use the approach that works best for you.

Here are two ways to get your Audit ID and Run ID:

  • From a webhook: If you have a webhook configured, the webhook payload will include Audit ID and Run ID. See the “Webhooks” section above for setting up webhooks.

  • Manually (good for one-time testing): You can find the Audit ID in the ObservePoint application under "Data Sources". Click on the Audit you want, then note the Audit ID and Run ID in the address bar.

Tip: A common scenario is to download and store Audit Run IDs in a database you control. Later, when your code runs, it can first query your database for the most recently processed run ID, and then query the ObservePoint API to find the next run ID that hasn’t yet been downloaded.
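To make the tip above concrete, here is a minimal sketch that tracks processed run IDs in a local SQLite file. The table name, schema, and file path are assumptions for illustration only; they are not part of the ObservePoint API.

```python
import sqlite3

def _ensure_table(conn):
    """Create the tracking table if it doesn't exist yet (schema is an assumption)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS processed_runs ("
        "run_id INTEGER PRIMARY KEY, processed_at TEXT)"
    )

def last_processed_run_id(db_path="runs.db"):
    """Return the most recently processed run ID, or None if nothing is stored."""
    conn = sqlite3.connect(db_path)
    _ensure_table(conn)
    row = conn.execute(
        "SELECT run_id FROM processed_runs ORDER BY run_id DESC LIMIT 1"
    ).fetchone()
    conn.close()
    return row[0] if row else None

def mark_run_processed(run_id, db_path="runs.db"):
    """Record a run ID as processed so it is skipped on the next invocation."""
    conn = sqlite3.connect(db_path)
    _ensure_table(conn)
    conn.execute(
        "INSERT OR IGNORE INTO processed_runs VALUES (?, datetime('now'))",
        (run_id,),
    )
    conn.commit()
    conn.close()
```

Your integration would call last_processed_run_id() at startup, ask the ObservePoint API for any newer runs, and call mark_run_processed() after each successful download.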

Step 2: Download all the pages from the Audit Run

Make an authenticated POST request to this URL with an empty request payload:

In your code, start with page=0, and make multiple requests, incrementing the page number for each request, until you have downloaded all the web pages for this run (see the “Pagination” section above for more instructions).

Each response will look like the following. Note that this example Audit Run scanned 544 web pages, and we requested a page size of 100, so there are 6 total pages to request.

{
  "metadata": {
    "pagination": {
      "totalCount": 544,
      "totalPageCount": 6,
      "pageSize": 100,
      "currentPageSize": 100,
      "currentPageNumber": 0
    }
  },
  "pages": [
    {
      "pageId": "77ebb089815a3b8d82813ebaf6320730",
      "dataCollectionUuid": "77ebb089815a3b8d82813ebaf6320730",
      "pageUrl": "http://example.com/",
      "pageTitle": "Example Home Page",
      "pageLoadTime": 665,
      "pageStatusCode": 200,
      "initialPageStatusCode": 200,
      "finalPageStatusCode": 200,
      "redirectCount": 0,
      "size": 34363
    },
    {
      "pageId": "139cf2a71e4ecb1967b7a5b47770e66a",
      "dataCollectionUuid": "139cf2a71e4ecb1967b7a5b47770e66a",
      "pageUrl": "http://example.com/path",
      "pageTitle": "Example Web Page",
      "pageLoadTime": 1621,
      "pageStatusCode": 200,
      "initialPageStatusCode": 200,
      "finalPageStatusCode": 200,
      "redirectCount": 0,
      "size": 956436
    },
    ...
  ]
}
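The pagination loop described above can be sketched as follows, using only the standard library. The endpoint URL is a placeholder (substitute the URL from this step), and the page/size query-parameter names and the Authorization header format are assumptions, not confirmed by this article. The pagination logic is separated from the HTTP call so it works for both this step and Step 3.

```python
import json
from urllib import request

def paginate(fetch_page, result_key):
    """Collect records from every page of a paginated ObservePoint response.

    fetch_page(page_number) must return the parsed JSON body for one page;
    result_key is "pages" for the Step 2 endpoint.
    """
    records, page_number = [], 0
    while True:
        body = fetch_page(page_number)
        records.extend(body[result_key])
        if page_number + 1 >= body["metadata"]["pagination"]["totalPageCount"]:
            return records
        page_number += 1

def fetch_audit_run_page(url, api_key, page_number, page_size=100):
    """POST an empty payload for one page of results.

    The query-parameter names and the Authorization header format are
    assumptions; url stands for the Step 2 endpoint URL.
    """
    req = request.Request(
        f"{url}?page={page_number}&size={page_size}",
        data=b"{}",  # empty request payload, per Step 2
        headers={"Authorization": f"api_key {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Usage sketch:
# all_pages = paginate(lambda n: fetch_audit_run_page(URL, API_KEY, n), "pages")
```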

In the next step, you will use the pageId field from each of the web page records you fetched above.

Step 3: For each web page, fetch its console logs

From Step 2, you have a list of page IDs (example: 139cf2a71e4ecb1967b7a5b47770e66a). The next step is to query the API for the console logs that ObservePoint captured on each page.

For each page, make an authenticated POST request to this URL:

Unlike Step 2, this request requires a payload. For an extraction use case, use the following:

{"includedLevels":["error","warn","debug","info","other"],"search":""}

This request is also paginated (see “Pagination” above), so you’ll need to iterate through all pages if there are more than 50 unique console logs for any given page.

Each response will look like this. Note that the web page in this example produced 3 console log entries, and we requested size=50, so there is only one page of results to download.

{
  "metadata": {
    "pagination": {
      "totalCount": 3,
      "totalPageCount": 1,
      "pageSize": 50,
      "currentPageSize": 3,
      "currentPageNumber": 0
    }
  },
  "consoleLogs": [
    {
      "timestamp": 1674772618281,
      "message": "Failed to load resource: the server responded with a status of 404 (Not Found)",
      "level": "error",
      "source": "https://example.org/content/par_0/image.img.jpg",
      "count": 4
    },
    {
      "timestamp": 1674772618838,
      "message": "Failed to load resource: the server responded with a status of 404 ()",
      "level": "error",
      "source": "https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXX",
      "count": 1,
      "relatedTag": {
        "tagId": 18,
        "tagName": "Google Tag Manager",
        "tagCategoryId": 6,
        "tagCategoryName": "Tag Management"
      }
    },
    ...
  ]
}
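Putting this step together, here is a sketch of downloading every console log for one page. The HTTP call is injected as a function so the pagination logic stays testable; how post_page builds the request (URL, headers) follows whatever you used in Step 2 and is not specified here.

```python
# Payload from Step 3 of the recipe: request all console log levels.
LEVELS_PAYLOAD = {
    "includedLevels": ["error", "warn", "debug", "info", "other"],
    "search": "",
}

def fetch_console_logs(page_id, post_page, page_size=50):
    """Download every console log entry for one web page.

    post_page(page_id, page_number, page_size, payload) must perform the
    authenticated POST described in Step 3 and return the parsed JSON body.
    """
    logs, page_number = [], 0
    while True:
        body = post_page(page_id, page_number, page_size, LEVELS_PAYLOAD)
        logs.extend(body["consoleLogs"])
        # Stop once the last page of results has been consumed.
        if page_number + 1 >= body["metadata"]["pagination"]["totalPageCount"]:
            return logs
        page_number += 1
```

You would call this once per pageId collected in Step 2.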

This API endpoint returns these fields in its payload:

  • timestamp: UTC epoch value (in milliseconds) of the time the console call was made

  • message: Console log message

  • level: Console log type (error, warn, debug, info, or other)

  • source: Resource that made the console call

  • count: Number of times the console log was called on the page (if called multiple times)

When the source is a known technology within the ObservePoint Tag Database, the response also includes a relatedTag object with these fields:

  • tagId: The identifier for that tag/technology

  • tagName: The name for that tag/technology

  • tagCategoryId: The category identifier for that tag/technology

  • tagCategoryName: The category name for that tag/technology

Conclusion

With all the web pages and console logs downloaded for this Audit Run, you can store them in a database and report/visualize them as you like.
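As one minimal example of the reporting step, the downloaded records can be flattened into a CSV file. The column set here is an illustrative choice, not a required schema; each row is assumed to be a dict combining a page's URL with one of its console log entries.

```python
import csv

# One reasonable set of report columns (an assumption, not an API contract).
COLUMNS = ["pageUrl", "timestamp", "level", "message", "source", "count"]

def write_console_log_report(rows, path):
    """Write one CSV row per (page, console log) pair.

    rows is an iterable of dicts keyed by COLUMNS; any extra keys
    (e.g. relatedTag) are ignored.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```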
