Why Use Proxies in Node.js?
Node.js is one of the most popular runtimes for building web scrapers, API integrators, and automation tools. But if you are sending hundreds or thousands of requests from a single IP address, you will quickly run into rate limits, CAPTCHAs, and outright IP bans. Proxies in Node.js solve this by routing your requests through different IP addresses, making each one appear to come from a unique user.
Whether you are scraping product prices, monitoring search engine rankings, or collecting public data at scale, proxies are essential infrastructure. In this guide, we will cover everything you need to integrate residential proxies into your Node.js projects using the ProxyHat Node SDK, Axios, Puppeteer, and Playwright.
If you are still evaluating which proxy type fits your use case, check out our comparison of residential vs datacenter vs mobile proxies.
Installation and Setup
Installing the ProxyHat SDK
The fastest way to get started is with the official Node SDK. It handles authentication, rotation, and connection pooling out of the box:
npm install proxyhat
You will also want the HTTP clients you plan to use:
npm install axios puppeteer playwright
Authentication
All ProxyHat proxy connections authenticate via your API credentials. You can find your username and password in the ProxyHat dashboard. The SDK accepts them as constructor options or environment variables:
// Option 1: Pass credentials directly
const ProxyHat = require('proxyhat');
const client = new ProxyHat({
  username: 'your_username',
  password: 'your_password',
});

// Option 2: Use environment variables
// Set PROXYHAT_USERNAME and PROXYHAT_PASSWORD in your .env
const client = new ProxyHat();
Simple GET Request with the SDK
The SDK provides a high-level fetch method that handles proxy rotation automatically:
const ProxyHat = require('proxyhat');
const client = new ProxyHat();

async function main() {
  const response = await client.fetch('https://httpbin.org/ip', {
    country: 'us',
  });
  console.log('Status:', response.status);
  console.log('Body:', await response.text());
}

main().catch(console.error);
Each call automatically selects a different residential IP. No manual proxy URL formatting required.
Using Proxies with Axios
Axios is the most popular HTTP client in the Node.js ecosystem. To route Axios requests through ProxyHat, you can either use the SDK's proxy URL or configure Axios directly.
Method 1: SDK Proxy Agent
const ProxyHat = require('proxyhat');
const axios = require('axios');

const client = new ProxyHat();
const agent = client.createAgent({ country: 'us' });

async function scrapeWithAxios() {
  const response = await axios.get('https://httpbin.org/ip', {
    httpAgent: agent,
    httpsAgent: agent,
    timeout: 30000,
  });
  console.log('IP:', response.data.origin);
}

scrapeWithAxios();
Method 2: Direct Proxy URL
If you prefer manual configuration, use the standard proxy URL format with the https-proxy-agent package:
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');

const proxyUrl = 'http://USERNAME:PASSWORD@gate.proxyhat.com:8080';
const agent = new HttpsProxyAgent(proxyUrl);

async function scrapeWithAxios() {
  const response = await axios.get('https://httpbin.org/ip', {
    httpsAgent: agent,
    timeout: 30000,
  });
  console.log('IP:', response.data.origin);
}

scrapeWithAxios();
Axios Instance with Default Proxy
For repeated use, create a pre-configured Axios instance:
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');

const agent = new HttpsProxyAgent('http://USERNAME:PASSWORD@gate.proxyhat.com:8080');

const proxyClient = axios.create({
  httpsAgent: agent,
  timeout: 30000,
  headers: {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
  },
});

// All requests now go through the proxy
const res = await proxyClient.get('https://example.com');
Using Proxies with Puppeteer
Puppeteer launches a headless Chrome browser, which is ideal for scraping JavaScript-heavy websites. Proxies are configured at the browser launch level.
const puppeteer = require('puppeteer');

async function scrapeWithPuppeteer() {
  const browser = await puppeteer.launch({
    args: ['--proxy-server=gate.proxyhat.com:8080'],
    headless: 'new',
  });
  const page = await browser.newPage();

  // Authenticate with the proxy
  await page.authenticate({
    username: 'USERNAME',
    password: 'PASSWORD',
  });

  await page.goto('https://httpbin.org/ip', {
    waitUntil: 'networkidle2',
    timeout: 60000,
  });

  const content = await page.evaluate(() => document.body.innerText);
  console.log('IP:', content);

  await browser.close();
}

scrapeWithPuppeteer();
Puppeteer with Geo-Targeting
To target a specific country, include the country code in your username. ProxyHat uses the format USERNAME-country-XX:
await page.authenticate({
  username: 'USERNAME-country-de', // Route through Germany
  password: 'PASSWORD',
});
Browse all available proxy locations on our locations page.
Using Proxies with Playwright
Playwright supports Chromium, Firefox, and WebKit. Proxy configuration is built into the launch options, making it even simpler:
const { chromium } = require('playwright');

async function scrapeWithPlaywright() {
  const browser = await chromium.launch({
    proxy: {
      server: 'http://gate.proxyhat.com:8080',
      username: 'USERNAME',
      password: 'PASSWORD',
    },
  });
  const context = await browser.newContext();
  const page = await context.newPage();

  await page.goto('https://httpbin.org/ip');
  const body = await page.textContent('body');
  console.log('IP:', body);

  await browser.close();
}

scrapeWithPlaywright();
Playwright with Per-Context Proxies
Playwright allows different proxies per browser context, which is useful for multi-account or multi-region scraping:
const browser = await chromium.launch();

// US context
const usContext = await browser.newContext({
  proxy: {
    server: 'http://gate.proxyhat.com:8080',
    username: 'USERNAME-country-us',
    password: 'PASSWORD',
  },
});

// UK context
const ukContext = await browser.newContext({
  proxy: {
    server: 'http://gate.proxyhat.com:8080',
    username: 'USERNAME-country-gb',
    password: 'PASSWORD',
  },
});

const usPage = await usContext.newPage();
const ukPage = await ukContext.newPage();

await Promise.all([
  usPage.goto('https://www.google.com/search?q=proxy+service'),
  ukPage.goto('https://www.google.co.uk/search?q=proxy+service'),
]);
Rotating vs Sticky Sessions
ProxyHat supports two session modes that serve different scraping needs:
| Feature | Rotating | Sticky |
|---|---|---|
| IP per request | New IP each time | Same IP for session duration |
| Best for | Large-scale scraping | Multi-step workflows, login sessions |
| Session duration | N/A | Up to 30 minutes |
| Username format | USERNAME | USERNAME-session-XXXX |
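The two modes differ only in the username string. A small helper (a sketch that simply mirrors the formats in the table above; it is not part of the SDK) makes the choice explicit in code:

```javascript
// Build a ProxyHat username for the desired session mode.
// No session suffix = rotating; a -session-XXXX suffix = sticky.
function buildProxyUsername(base, { country, sessionId } = {}) {
  let username = base;
  if (country) username += `-country-${country}`;
  if (sessionId) username += `-session-${sessionId}`;
  return username;
}

console.log(buildProxyUsername('USERNAME'));                          // USERNAME
console.log(buildProxyUsername('USERNAME', { sessionId: 'abc123' })); // USERNAME-session-abc123
console.log(buildProxyUsername('USERNAME', { country: 'us', sessionId: 'abc123' }));
```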
Sticky Session Example
const { HttpsProxyAgent } = require('https-proxy-agent');
const axios = require('axios');

// Generate a random session ID
const sessionId = Math.random().toString(36).slice(2, 10);
const agent = new HttpsProxyAgent(
  `http://USERNAME-session-${sessionId}:PASSWORD@gate.proxyhat.com:8080`
);

// All requests with this agent use the same IP
const client = axios.create({ httpsAgent: agent, timeout: 30000 });
const res1 = await client.get('https://httpbin.org/ip');
const res2 = await client.get('https://httpbin.org/ip');
console.log(res1.data.origin === res2.data.origin); // true
Geo-Targeted Requests
Geo-targeting is critical for SERP tracking, localized price monitoring, and regional content verification. ProxyHat supports country-level and city-level targeting through your username string:
// Country targeting
const countryAgent = new HttpsProxyAgent(
  'http://USERNAME-country-jp:PASSWORD@gate.proxyhat.com:8080'
);

// City targeting
const cityAgent = new HttpsProxyAgent(
  'http://USERNAME-country-us-city-newyork:PASSWORD@gate.proxyhat.com:8080'
);
Check our full list of available locations for supported countries and cities.
Error Handling and Retries
Network errors are inevitable when working with proxies at scale. A robust retry strategy is essential for production scrapers:
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');

async function fetchWithRetry(url, options = {}) {
  const maxRetries = options.maxRetries || 3;
  const baseDelay = options.baseDelay || 1000;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const agent = new HttpsProxyAgent(
        'http://USERNAME:PASSWORD@gate.proxyhat.com:8080'
      );
      const response = await axios.get(url, {
        httpsAgent: agent,
        timeout: options.timeout || 30000,
      });
      return response; // axios rejects non-2xx statuses by default
    } catch (error) {
      console.warn(`Attempt ${attempt}/${maxRetries} failed: ${error.message}`);
      if (attempt === maxRetries) throw error;
      // Exponential backoff: 1s, 2s, 4s...
      const delay = baseDelay * Math.pow(2, attempt - 1);
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

// Usage
const response = await fetchWithRetry('https://example.com', {
  maxRetries: 3,
  timeout: 20000,
});
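One refinement worth adding: not every failure deserves a retry. A 404 will never succeed on a second attempt, while 429 (rate limited) and 5xx responses usually will. A small predicate like this (a sketch; tune the status list for your targets) can gate the retry loop:

```javascript
// Decide whether a failed axios request is worth retrying.
function isRetryable(error) {
  // No response at all: network error, timeout, or proxy failure - retry.
  if (!error.response) return true;
  const status = error.response.status;
  // Rate limiting and server-side errors are usually transient.
  return status === 429 || (status >= 500 && status < 600);
}

console.log(isRetryable({ response: { status: 429 } })); // true
console.log(isRetryable({ response: { status: 404 } })); // false
console.log(isRetryable({}));                            // true (network error)
```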
Concurrent Scraping with Concurrency Control
Sending all requests at once with Promise.all can overwhelm both your machine and the target server. Use a concurrency limiter to control the number of parallel connections:
async function asyncPool(concurrency, items, iteratorFn) {
  const results = [];
  const executing = new Set();

  for (const [index, item] of items.entries()) {
    const promise = Promise.resolve().then(() => iteratorFn(item, index));
    results.push(promise);
    executing.add(promise);

    const cleanup = () => executing.delete(promise);
    promise.then(cleanup, cleanup);

    if (executing.size >= concurrency) {
      await Promise.race(executing);
    }
  }
  return Promise.all(results);
}

// Scrape 100 URLs, 10 at a time
const urls = Array.from({ length: 100 }, (_, i) =>
  `https://example.com/page/${i + 1}`
);
const results = await asyncPool(10, urls, async (url) => {
  return fetchWithRetry(url, { maxRetries: 2, timeout: 15000 });
});
For large-scale web scraping projects, keeping concurrency between 5 and 20 offers a good balance between speed and reliability. Read our guide on the best proxies for web scraping in 2026 for more architecture advice.
Production Tips
Connection Management
- Reuse Axios instances and proxy agents instead of creating new ones per request. This avoids socket exhaustion.
- Set keepAlive: true on your HTTP agent for persistent connections.
- Close Puppeteer/Playwright browsers promptly after use to free memory.
Memory and Timeouts
- For Puppeteer, use page.setRequestInterception(true) to block images, CSS, and fonts when you only need HTML content. This reduces bandwidth and memory usage significantly.
- Always set explicit timeouts on every request. A missing timeout is the number one cause of hanging scrapers.
- Monitor Node.js memory with process.memoryUsage() in long-running scrapers.
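A minimal snapshot helper for that last point, using only the standard process API:

```javascript
// Snapshot of process memory in megabytes (standard library only).
function memorySnapshotMB() {
  const usage = process.memoryUsage();
  const toMB = (bytes) => Math.round(bytes / 1024 / 1024);
  return {
    rss: toMB(usage.rss),             // total resident memory
    heapUsed: toMB(usage.heapUsed),   // live JS objects
    heapTotal: toMB(usage.heapTotal), // allocated heap
  };
}

// In a long-running scraper, log this on an interval:
// setInterval(() => console.log(memorySnapshotMB()), 60_000);
console.log(memorySnapshotMB());
```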
Request Interception for Faster Scraping
const page = await browser.newPage();
await page.setRequestInterception(true);

page.on('request', (req) => {
  const blocked = ['image', 'stylesheet', 'font', 'media'];
  if (blocked.includes(req.resourceType())) {
    req.abort();
  } else {
    req.continue();
  }
});

await page.goto('https://example.com');
Environment Variables
Never hardcode credentials. Use a .env file and load it with dotenv:
require('dotenv').config();
const proxyUrl = `http://${process.env.PROXY_USER}:${process.env.PROXY_PASS}@gate.proxyhat.com:8080`;
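A small guard worth pairing with dotenv (the PROXY_USER/PROXY_PASS names follow the snippet above; the helper itself is a sketch) fails fast when credentials are missing, rather than sending requests with "undefined" baked into the proxy URL:

```javascript
// Throw at startup if any required environment variable is absent.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
}

// Demo with an inline object; in real code, omit the second argument
// so process.env is used.
const [user, pass] = requireEnv(['PROXY_USER', 'PROXY_PASS'], {
  PROXY_USER: 'demo_user',
  PROXY_PASS: 'demo_pass',
});
console.log(`http://${user}:${pass}@gate.proxyhat.com:8080`);
```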
Ready to start? Check our pricing plans to find the right amount of residential proxy bandwidth for your project.
Key Takeaways
- The ProxyHat Node SDK is the fastest way to add proxy support. Install with npm install proxyhat and start fetching in three lines of code.
- Axios works with proxies via https-proxy-agent or the SDK's built-in agent. Create a reusable Axios instance for efficiency.
- Puppeteer accepts proxies as a launch argument. Call page.authenticate() to pass credentials.
- Playwright has native proxy support in its launch and context options, including per-context proxies for multi-region scraping.
- Use sticky sessions for multi-step workflows and rotating IPs for large-scale data collection.
- Always implement retry logic with exponential backoff and concurrency control in production scrapers.
- Block unnecessary resources in headless browsers to reduce bandwidth and improve speed.