ProxyCrawl

All-In-One data crawling and scraping platform for business developers.
πŸ“š Documentation & Examples

Everything you need to integrate with ProxyCrawl

πŸš€ Quick Start Examples

ProxyCrawl JavaScript Example
// ProxyCrawl API Example -- replace the URL with your API
// endpoint and token from your dashboard
const response = await fetch('https://proxycrawl.com', {
    method: 'GET'
});

const data = await response.json();
console.log(data);

ProxyCrawl API Documentation

Are you tired of being blocked by websites that use CAPTCHAs or IP blocking to limit your access? ProxyCrawl is here to help! Our API lets you crawl websites and access data without being detected.

Getting Started

First, sign up for an API key at https://proxycrawl.com. Once you have an API key, you can start using our API services.
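Most requests boil down to a single URL that carries your token, the target URL, and any options. As a rough sketch (the endpoint and parameter names here are assumptions; confirm the exact format in your dashboard), such a request URL can be built like this:

```javascript
// Build a ProxyCrawl-style request URL. The endpoint and parameter
// names are illustrative -- check your dashboard for the real ones.
function buildRequestUrl(token, targetUrl, params = {}) {
  // URLSearchParams handles percent-encoding of the target URL
  const query = new URLSearchParams({ token, url: targetUrl, ...params });
  return `https://api.proxycrawl.com/?${query.toString()}`;
}

console.log(buildRequestUrl('YOUR_TOKEN', 'https://www.example.com', { country: 'us' }));
```

Keeping URL construction in one helper makes it easy to add or change options without touching every call site.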

Example code

Here is example code for using our API from JavaScript:

// Import the ProxyCrawl client and initialize it with your API token
const ProxycrawlAPI = require('proxycrawl');

const api = new ProxycrawlAPI({ token: 'YOUR_TOKEN' });

// Fetch the target URL through ProxyCrawl, routed through a US proxy
// and sent with a custom desktop user agent
api.get({
  url: 'https://www.example.com',
  country: 'us',
  user_agent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}).then(function(response) {
  console.log(response.body);
}).catch(function(error) {
  console.log(error);
});

In the example above, we import the ProxyCrawl module and initialize it with our API token. We then request the specified URL with a custom user agent and country, and log the response body to the console.
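Crawling requests occasionally fail for transient reasons (rate limits, upstream timeouts), so it is common to wrap calls like the one above in a small retry helper. This is a generic sketch, not part of the ProxyCrawl client itself; requestFn stands in for any function returning a Promise, such as () => api.get({...}):

```javascript
// Retry an async request function up to `retries` times,
// pausing `delayMs` milliseconds between attempts.
async function withRetries(requestFn, retries = 3, delayMs = 500) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await requestFn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  // All attempts failed: surface the last error to the caller
  throw lastError;
}
```

Usage would look like: const response = await withRetries(() => api.get({ url: 'https://www.example.com' }));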

API Services

ProxyCrawl provides several API services, each with its own endpoint and parameters.

Scraper API

The Scraper API allows you to extract structured data from supported websites using prebuilt scrapers.

Example code

api.get({
  url: 'https://www.example.com',
  scraper: 'amazon',
  country: 'us',
  use_js: 'true',
  body: 'body'
}).then(function(response) {
  console.log(response.body);
}).catch(function(error) {
  console.log(error);
});

In this example, we use the Scraper API to extract data from Amazon's website in the United States, with the amazon scraper applied to the page and JavaScript rendering enabled via the use_js parameter.
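Scraper endpoints typically return the extracted data as JSON in the response body. Since a blocked or malformed page can yield a non-JSON body, a defensive parse keeps one bad response from crashing a batch job (the price field below is purely illustrative):

```javascript
// Safely parse a scraper response body as JSON.
// Returns null instead of throwing when the body is not valid JSON.
function parseScraperBody(body) {
  try {
    return JSON.parse(body);
  } catch (error) {
    return null;
  }
}
```

You would then check for null before reading fields, e.g. const data = parseScraperBody(response.body); if (data) console.log(data);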

Crawler API

The Crawler API allows you to crawl websites and extract data from multiple pages.

Example code

api.get({
  url: 'https://www.example.com',
  crawler: 'links',
  country: 'us',
  limit: 3,
  next_url_regex: '(/page/\\d+)',
  headers: {
    'Accept-Language': 'en-US,en;q=0.9',
    'Referer': 'https://www.google.com/'
  }
}).then(function(response) {
  console.log(response.body);
}).catch(function(error) {
  console.log(error);
});

In this example, we use the Crawler API to extract links from the specified URL, limit the crawl to three pages, and use the specified regex for the next page URL. We also include custom headers in the request.
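Because a bad next_url_regex silently stops pagination, it is worth sanity-checking the pattern locally before handing it to the crawler. This uses the same pattern shown in the example above:

```javascript
// The next-page pattern from the Crawler API example above
const nextUrlRegex = /(\/page\/\d+)/;

// Report whether the crawler's next-page regex would follow a link
function matchesNextPage(url) {
  return nextUrlRegex.test(url);
}

console.log(matchesNextPage('https://www.example.com/page/2')); // true
console.log(matchesNextPage('https://www.example.com/about'));  // false
```

Running a handful of real links from the target site through this check is a quick way to catch an escaping mistake in the pattern.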

Proxy API

The Proxy API allows you to retrieve a list of proxies to use for scraping.

Example code

api.get({
  url: 'https://api.proxycrawl.com/proxy',
  headers: {
    'Proxy-Crawl-Token': 'YOUR_TOKEN'
  }
}).then(function(response) {
  console.log(response.body);
}).catch(function(error) {
  console.log(error);
});

In this example, we use the Proxy API to retrieve a list of proxies using our API token.
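Once you have a proxy list, a common pattern is to rotate through it so requests are spread across addresses. The host:port strings below are placeholders, and the exact shape of the Proxy API's response is not shown here, so treat this as a generic sketch:

```javascript
// Create a round-robin rotator over a list of proxy addresses.
// Each call to the returned function yields the next proxy,
// wrapping around at the end of the list.
function makeProxyRotator(proxies) {
  let index = 0;
  return function nextProxy() {
    const proxy = proxies[index % proxies.length];
    index++;
    return proxy;
  };
}

const next = makeProxyRotator(['10.0.0.1:8080', '10.0.0.2:8080']);
console.log(next()); // 10.0.0.1:8080
console.log(next()); // 10.0.0.2:8080
console.log(next()); // 10.0.0.1:8080
```

Round-robin is the simplest strategy; for heavier workloads you might instead weight proxies by observed success rate.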

Conclusion

The ProxyCrawl API offers a powerful solution for bypassing IP blocking and CAPTCHA challenges. Using the API from JavaScript is straightforward, as the examples above show. We hope this documentation helps you get started with ProxyCrawl for your web scraping needs.
