This service allows you to extract specific elements from a web page based on CSS selectors and optionally inject custom JavaScript. In addition to the selected elements, their parent elements are included (but not their siblings).
It loads the page with Chromium and returns it after JavaScript rendering, so it is more likely to be correct than basic scrapers.
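As a rough illustration of the "parents but not siblings" rule, keeping matched elements plus their ancestor chain could be sketched in browser JavaScript like this (a hypothetical sketch, not the service's actual implementation):

```js
// Hypothetical sketch: keep matched elements, their descendants, and their
// ancestors, and drop everything else (i.e. unmatched sibling branches).
function extractWithAncestors(selector) {
  const keep = new Set();
  document.querySelectorAll(selector).forEach((el) => {
    el.querySelectorAll('*').forEach((child) => keep.add(child)); // contents
    for (let node = el; node; node = node.parentElement) keep.add(node); // ancestors
  });
  document.querySelectorAll('*').forEach((el) => {
    if (!keep.has(el)) el.remove();
  });
  return document.documentElement.outerHTML;
}
```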
It also sets the base tag to a special *.proxy.snip.info endpoint to work around CORS errors and intercept relative URLs in the rendered page, which also handles tricky cases like relative URLs in CSS.
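A minimal sketch of that base-tag rewrite, assuming a hypothetical mapping from the upstream domain to a *.proxy.snip.info host (the real naming scheme is not documented here):

```js
// Hypothetical sketch of the base-tag rewrite; the mapping from the upstream
// domain to a *.proxy.snip.info host is an assumption, not documented behavior.
function injectProxyBase(upstreamUrl) {
  const proxyHost =
    new URL(upstreamUrl).hostname.replace(/\./g, '-') + '.proxy.snip.info';
  const base = document.createElement('base');
  base.href = `https://${proxyHost}/`;
  // With a <base> in place, relative URLs in the rendered page resolve
  // against the proxy endpoint instead of the page's own origin.
  document.head.prepend(base);
}
```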
Finally, it patches window.fetch to send relative requests to the upstream domain, and has some other neat tricks like loading window.location.hash and window.location.search from the upstream URL to trigger behavior in PWAs.
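The fetch patch could look roughly like the following sketch, where UPSTREAM_ORIGIN stands in for the original page's origin (the actual injected code is not shown by the service):

```js
// Sketch only: rewrite relative fetch() calls so they hit the upstream
// origin instead of the proxy. UPSTREAM_ORIGIN is an assumed placeholder.
const UPSTREAM_ORIGIN = 'https://example.com';
const originalFetch = window.fetch.bind(window);

window.fetch = (input, init) => {
  // Normalize string, URL, and Request inputs to a URL string
  // (a Request's method and headers are ignored here for brevity).
  const url = input instanceof Request ? input.url : String(input);
  // Relative URLs resolve against the upstream origin; absolute URLs pass through.
  const resolved = new URL(url, UPSTREAM_ORIGIN).toString();
  return originalFetch(resolved, init);
};
```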
url: The URL of the web page to extract content from
selector (repeating): CSS selector(s) to identify the elements you want to extract
js (optional, repeating): Custom JavaScript function(s) to run after all other scripts on the page

GET /api/v1/snippet?url=http://example.com&selector=.header&js=document.querySelector('.footer').remove()
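For example, a client could build the query string with repeated selector parameters like this (the https://snip.info host below is an assumption inferred from the proxy domain; substitute the actual service host):

```js
// Sketch of calling the endpoint; "selector" may be repeated for multiple elements.
const params = new URLSearchParams([
  ['url', 'http://example.com'],
  ['selector', '.header'],
  ['selector', '.main-content'], // repeated selector parameter
  ['js', "document.querySelector('.footer').remove()"],
]);

fetch(`https://snip.info/api/v1/snippet?${params}`) // host is an assumption
  .then((res) => res.text()) // the extracted, JS-rendered markup
  .then((html) => console.log(html));
```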