API Recipe: Update Audit Starting URLs
Written by Luiza Gircoveanu

Overview

When conducting an Audit, you may want to specify URLs for various reasons, such as:

  • Including known pages from a sitemap

  • Adding landing pages from external sources not detected by ObservePoint's web crawler

  • Validating newly published content by other teams

By specifying the URLs, you can bypass ObservePoint's web crawler and ensure that only the pages you want to be scanned are included in the Audit.

Implementation

Step 1: Get your Audit ID

Find your Audit ID manually from the ObservePoint application or use a webhook or API query to get the list of Audits in your account:

  • From a webhook: If you have a webhook configured, the webhook payload will include the Audit ID and Run ID. See the Webhooks help document for instructions on setting up webhooks.

  • Manually (good for one-time testing): You can find the Audit ID in the ObservePoint application under "Data Sources". Click on the Audit you want, then note the Audit ID and Run ID shown in the address bar.

Step 2: Return your Audit object

Make an authenticated GET request to this URL to get the full current configuration of your Audit, including the starting URLs:
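As a sketch only: the exact endpoint URL and authorization header scheme below are assumptions (substitute the endpoint and auth scheme from your ObservePoint API documentation), but the shape of the request is a plain authenticated GET:

```python
import json
import urllib.request

# Hypothetical base URL -- replace with the endpoint from your
# ObservePoint API documentation.
API_BASE = "https://api.observepoint.com/v2"

def get_audit(audit_id: int, api_key: str) -> dict:
    """Fetch the full current configuration of an Audit, including startingUrls."""
    req = urllib.request.Request(
        f"{API_BASE}/web-audits/{audit_id}",  # path is an assumption
        headers={"Authorization": f"api_key {api_key}"},  # auth scheme is an assumption
        method="GET",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned JSON object is the Audit configuration you will modify in Step 3 and send back in Step 4.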

Step 3: Replace the starting URLs and limit

Replace the startingUrls attribute with an array of valid URL strings and the limit attribute with the length of the startingUrls array.
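This step is a local edit of the Audit object retrieved in Step 2. The attribute names startingUrls and limit come from the article; the helper function itself is illustrative:

```python
def set_starting_urls(audit: dict, urls: list[str]) -> dict:
    """Replace startingUrls and keep limit in sync with the list length."""
    audit = dict(audit)  # shallow copy so the original object is untouched
    audit["startingUrls"] = list(urls)
    audit["limit"] = len(urls)
    return audit

# Example: an Audit that previously crawled from one seed URL
audit = {"id": 123, "startingUrls": ["https://example.com"], "limit": 1}
updated = set_starting_urls(
    audit,
    ["https://example.com/landing-a", "https://example.com/landing-b"],
)
# updated["limit"] is now 2, matching the new startingUrls array
```

Keeping limit equal to the array length ensures the Audit scans exactly the pages you listed and nothing more.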

Step 4: Send updated Audit configuration to ObservePoint

Make a PUT call with the Audit object created above to the following endpoint:

Note: All URLs are validated on the ObservePoint backend; any invalid URLs will cause a 422 response that specifies which URLs are not valid.
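A minimal sketch of the PUT call, under the same assumptions as above (the endpoint path and authorization scheme are placeholders, not confirmed by this article):

```python
import json
import urllib.request

API_BASE = "https://api.observepoint.com/v2"  # hypothetical base URL

def update_audit(audit: dict, api_key: str) -> dict:
    """PUT the modified Audit object back to ObservePoint.

    Invalid startingUrls will surface as an HTTP 422 error from urlopen().
    """
    req = urllib.request.Request(
        f"{API_BASE}/web-audits/{audit['id']}",  # path is an assumption
        data=json.dumps(audit).encode("utf-8"),
        headers={
            "Authorization": f"api_key {api_key}",  # auth scheme is an assumption
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Send the entire Audit object returned in Step 2 (with the modified startingUrls and limit), not just the changed fields.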

(Optional) Step 5: Start Audit

Make an authenticated POST request to this URL with an empty payload to start the Audit:
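Continuing the sketch, the run-trigger path below is a guess for illustration only; the key detail from the article is that the POST carries an empty payload:

```python
import urllib.request

API_BASE = "https://api.observepoint.com/v2"  # hypothetical base URL

def start_audit(audit_id: int, api_key: str) -> None:
    """Start an Audit run by POSTing an empty body."""
    req = urllib.request.Request(
        f"{API_BASE}/web-audits/{audit_id}/runs",  # run-trigger path is an assumption
        data=b"",  # empty payload, per the article
        headers={"Authorization": f"api_key {api_key}"},  # auth scheme is an assumption
        method="POST",
    )
    urllib.request.urlopen(req).close()
```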

Conclusion

With these steps, you can push a known list of URLs into an ObservePoint Audit to scan only those URLs and bypass the web crawler.
