Why Use Proxies in Go?
Go has become the language of choice for high-performance networking tools, web scrapers, and API clients. Its lightweight goroutines, built-in concurrency primitives, and battle-tested net/http standard library make it ideal for proxy-powered applications that need to handle thousands of concurrent requests.
Whether you are building a web scraping pipeline, monitoring SERP rankings, or collecting competitive pricing data, routing your Go HTTP clients through proxies lets you rotate IP addresses, bypass geo-restrictions, and avoid rate limits at scale.
In this guide, you will learn how to configure proxies in Go using both the standard library and the ProxyHat Go SDK. Every code snippet is copy-paste-ready so you can start scraping within minutes.
Installation
ProxyHat Go SDK
The fastest way to get started is with the official SDK. It handles authentication, rotation, geo-targeting, and retries out of the box.
go get github.com/ProxyHatCom/go-sdk@latest
Standard library only
If you prefer zero dependencies, Go's net/http and net/url packages are all you need. No extra installation required.
Authentication and Basic Setup
ProxyHat uses username-password authentication over the proxy endpoint. You will find your credentials in the ProxyHat dashboard. A typical proxy URL looks like this:
http://USERNAME:PASSWORD@gate.proxyhat.com:8080
Keep credentials out of source code. Use environment variables or a .env file:
export PROXYHAT_USER="your_username"
export PROXYHAT_PASS="your_password"
Simple GET Request with a Proxy
Here is the minimal approach using only the standard library:
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
)

func main() {
	proxyURL, err := url.Parse(fmt.Sprintf(
		"http://%s:%s@gate.proxyhat.com:8080",
		os.Getenv("PROXYHAT_USER"),
		os.Getenv("PROXYHAT_PASS"),
	))
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyURL(proxyURL),
		},
	}

	resp, err := client.Get("https://httpbin.org/ip")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
Run it and you will see a residential IP address instead of your own. Every request is routed through ProxyHat's residential proxy pool.
Using Different Proxy Types
ProxyHat supports three proxy types, each suited to different workloads. You select the type via the proxy port or a username flag:
| Type | Port | Best For | Avg Latency |
|---|---|---|---|
| Residential | 8000 | Web scraping, ad verification | ~800 ms |
| Datacenter | 8010 | High-speed bulk requests | ~200 ms |
| Mobile | 8020 | Social media, app testing | ~1200 ms |
For a deeper comparison of when to use each type, see our guide on residential vs datacenter vs mobile proxies.
// Switch proxy type by changing the port (see the table above)
residentialProxy := "http://user:pass@gate.proxyhat.com:8000"
datacenterProxy := "http://user:pass@gate.proxyhat.com:8010"
mobileProxy := "http://user:pass@gate.proxyhat.com:8020"
Manual Approach: Go net/http with Proxy Configuration
For full control, configure the http.Transport directly. This lets you tune connection pooling, TLS settings, and timeouts:
package main

import (
	"crypto/tls"
	"net/http"
	"net/url"
	"time"
)

func newProxyClient(proxyAddr string) (*http.Client, error) {
	proxyURL, err := url.Parse(proxyAddr)
	if err != nil {
		return nil, err
	}

	transport := &http.Transport{
		Proxy:               http.ProxyURL(proxyURL),
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
		TLSClientConfig:     &tls.Config{MinVersion: tls.VersionTLS12},
	}

	client := &http.Client{
		Transport: transport,
		Timeout:   30 * time.Second,
	}
	return client, nil
}
Recommended Approach: ProxyHat Go SDK
The ProxyHat Go SDK wraps all the boilerplate into a clean API. It manages connection pooling, automatic retries, session handling, and geo-targeting for you.
package main

import (
	"context"
	"fmt"
	"log"

	proxyhat "github.com/ProxyHatCom/go-sdk"
)

func main() {
	client, err := proxyhat.NewClient(proxyhat.Config{
		Username:  "your_username",
		Password:  "your_password",
		ProxyType: proxyhat.Residential,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	resp, err := client.Get(context.Background(), "https://httpbin.org/ip")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Status:", resp.StatusCode)
	fmt.Println("Body:", string(resp.Body))
}
The SDK returns a structured response, handles decompression, and retries transient failures automatically. Check the API documentation for the full method reference.
Rotating vs Sticky Sessions
ProxyHat supports two session modes:
- Rotating (default) — every request gets a new IP. Ideal for large-scale web scraping.
- Sticky — the same IP is held for a configurable duration (up to 30 minutes). Useful for multi-step flows like login sequences or paginated crawls.
Rotating sessions (SDK)
client, _ := proxyhat.NewClient(proxyhat.Config{
	Username:  "your_username",
	Password:  "your_password",
	ProxyType: proxyhat.Residential,
	// Rotating is the default — no extra config needed
})

// Each call uses a different IP
for i := 0; i < 5; i++ {
	resp, _ := client.Get(context.Background(), "https://httpbin.org/ip")
	fmt.Printf("Request %d: %s\n", i+1, string(resp.Body))
}
Sticky sessions (SDK)
session, _ := client.NewSession(proxyhat.SessionConfig{
	Duration: 10 * time.Minute,
})

// All requests through this session use the same IP
resp1, _ := session.Get(context.Background(), "https://example.com/login")
resp2, _ := session.Post(context.Background(), "https://example.com/dashboard", payload)
Sticky sessions (manual)
// Append session ID to the username
// Format: USERNAME-session-SESSIONID
proxyURL := "http://user-session-abc123:pass@gate.proxyhat.com:8080"
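For the manual route, a small helper keeps the username-flag format in one place. A sketch assuming the `USERNAME-session-SESSIONID` convention shown above; the helper names and the random-hex ID scheme are our own (any unique string works as a session ID):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// stickyProxyURL composes a sticky-session proxy URL using the
// USERNAME-session-SESSIONID convention.
func stickyProxyURL(user, pass, sessionID string) string {
	return fmt.Sprintf("http://%s-session-%s:%s@gate.proxyhat.com:8080",
		user, sessionID, pass)
}

// newSessionID returns a random 8-byte hex string, one convenient
// way to generate a unique session identifier.
func newSessionID() string {
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func main() {
	id := newSessionID()
	// Reuse the same ID across requests to keep the same exit IP;
	// generate a new ID to force a fresh one.
	fmt.Println(stickyProxyURL("user", "pass", id))
}
```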
Geo-Targeted Requests
Need IPs from a specific country? ProxyHat supports 190+ locations. Pass the country code via the SDK or as a username flag:
// SDK approach
client, _ := proxyhat.NewClient(proxyhat.Config{
	Username:  "your_username",
	Password:  "your_password",
	ProxyType: proxyhat.Residential,
	Country:   "US", // ISO 3166-1 alpha-2
	State:     "CA", // optional: state/region
	City:      "LA", // optional: city
})

resp, _ := client.Get(context.Background(), "https://httpbin.org/ip")
fmt.Println(string(resp.Body)) // US-based IP
// Manual approach — append country to username
// Format: USERNAME-country-US
proxyURL := "http://user-country-US:pass@gate.proxyhat.com:8080"
Geo-targeting is essential for localized SERP tracking, regional pricing checks, and content availability testing.
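The manual username flag can be wrapped the same way as the sticky-session one. A sketch assuming the `USERNAME-country-CC` format shown above (the `geoProxyURL` helper is our own naming):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// geoProxyURL composes a geo-targeted proxy URL using the
// USERNAME-country-CC convention.
func geoProxyURL(user, pass, country string) string {
	return fmt.Sprintf("http://%s-country-%s:%s@gate.proxyhat.com:8080",
		user, country, pass)
}

func main() {
	proxyURL, err := url.Parse(geoProxyURL("user", "pass", "US"))
	if err != nil {
		panic(err)
	}
	// Plug the geo-targeted proxy into a standard client.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}
	_ = client // client.Get(...) would now exit through a US IP
	fmt.Println(proxyURL.Host)
}
```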
Error Handling and Retries
Network requests through proxies can fail for transient reasons: connection resets, timeouts, or temporary blocks. Robust error handling is critical for production scrapers.
SDK built-in retries
client, _ := proxyhat.NewClient(proxyhat.Config{
	Username:   "your_username",
	Password:   "your_password",
	ProxyType:  proxyhat.Residential,
	MaxRetries: 3,
	RetryDelay: 2 * time.Second,
})
Manual retry with exponential backoff
package main

import (
	"fmt"
	"math"
	"net/http"
	"time"
)

func fetchWithRetry(client *http.Client, url string, maxRetries int) (*http.Response, error) {
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("HTTP %d", resp.StatusCode)
			resp.Body.Close()
		}
		// Back off exponentially, but don't sleep after the final attempt.
		if attempt < maxRetries {
			backoff := time.Duration(math.Pow(2, float64(attempt))) * time.Second
			time.Sleep(backoff)
		}
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", maxRetries+1, lastErr)
}
Concurrent Scraping with Goroutines
Go's concurrency model is its superpower. With goroutines and channels, you can scrape hundreds of URLs concurrently while keeping memory usage minimal.
package main

import (
	"context"
	"fmt"
	"sync"

	proxyhat "github.com/ProxyHatCom/go-sdk"
)

type Result struct {
	URL        string
	StatusCode int
	Body       string
	Err        error
}

func scrape(ctx context.Context, client *proxyhat.Client, urls []string, concurrency int) []Result {
	results := make([]Result, len(urls))
	sem := make(chan struct{}, concurrency) // semaphore
	var wg sync.WaitGroup

	for i, u := range urls {
		wg.Add(1)
		go func(idx int, target string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire
			defer func() { <-sem }() // release

			resp, err := client.Get(ctx, target)
			if err != nil {
				results[idx] = Result{URL: target, Err: err}
				return
			}
			results[idx] = Result{
				URL:        target,
				StatusCode: resp.StatusCode,
				Body:       string(resp.Body),
			}
		}(i, u)
	}
	wg.Wait()
	return results
}

func main() {
	client, _ := proxyhat.NewClient(proxyhat.Config{
		Username:  "your_username",
		Password:  "your_password",
		ProxyType: proxyhat.Residential,
	})
	defer client.Close()

	urls := []string{
		"https://example.com/page/1",
		"https://example.com/page/2",
		"https://example.com/page/3",
		// ... hundreds more
	}

	results := scrape(context.Background(), client, urls, 20)
	for _, r := range results {
		if r.Err != nil {
			fmt.Printf("FAIL %s: %v\n", r.URL, r.Err)
		} else {
			fmt.Printf("OK %s: %d bytes\n", r.URL, len(r.Body))
		}
	}
}
Rate Limiting with a Semaphore
The scraper above already uses a semaphore channel to cap concurrency. For finer-grained rate limiting (e.g., N requests per second), use golang.org/x/time/rate:
package main

import (
	"context"
	"fmt"
	"log"

	proxyhat "github.com/ProxyHatCom/go-sdk"
	"golang.org/x/time/rate"
)

func main() {
	client, _ := proxyhat.NewClient(proxyhat.Config{
		Username:  "your_username",
		Password:  "your_password",
		ProxyType: proxyhat.Residential,
	})
	defer client.Close()

	// Allow 10 requests per second, burst of 20
	limiter := rate.NewLimiter(10, 20)

	urls := []string{"https://example.com/1", "https://example.com/2"}
	for _, u := range urls {
		if err := limiter.Wait(context.Background()); err != nil {
			log.Fatal(err)
		}
		resp, err := client.Get(context.Background(), u)
		if err != nil {
			fmt.Printf("Error: %v\n", err)
			continue
		}
		fmt.Printf("%s — %d\n", u, resp.StatusCode)
	}
}
Production Tips
Connection pooling
Go's http.Transport maintains a pool of idle connections by default. For proxy workloads, tune these settings:
transport := &http.Transport{
	Proxy:                 http.ProxyURL(proxyURL),
	MaxIdleConns:          200,
	MaxIdleConnsPerHost:   50,
	MaxConnsPerHost:       100,
	IdleConnTimeout:       90 * time.Second,
	ResponseHeaderTimeout: 15 * time.Second,
}
Timeouts
Always set timeouts. A scraper without timeouts will eventually hang on a stalled connection:
client := &http.Client{
	Transport: transport,
	Timeout:   30 * time.Second, // total request timeout
}

// Or use context for per-request control
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

req, _ := http.NewRequestWithContext(ctx, "GET", targetURL, nil)
resp, err := client.Do(req)
Graceful shutdown
In long-running scrapers, listen for OS signals to shut down cleanly:
package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ctx, stop := signal.NotifyContext(
		context.Background(),
		os.Interrupt, syscall.SIGTERM,
	)
	defer stop()

	// Pass ctx to your scraping functions.
	// When Ctrl+C is pressed, ctx is cancelled
	// and in-flight requests wind down gracefully.
	runScraper(ctx)
}
Logging and observability
Wrap your HTTP transport to log request timing and status codes. This helps identify slow targets and proxy errors in production:
type loggingTransport struct {
	inner http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := t.inner.RoundTrip(req)
	elapsed := time.Since(start)
	if err != nil {
		log.Printf("ERR %s %s (%v) err=%v", req.Method, req.URL, elapsed, err)
	} else {
		log.Printf("OK %s %s (%v) status=%d", req.Method, req.URL, elapsed, resp.StatusCode)
	}
	return resp, err
}
Key Takeaways
- Go's goroutines + proxies = massive concurrency. You can run thousands of proxy-routed requests with minimal memory overhead.
- The ProxyHat Go SDK handles authentication, retries, sessions, and geo-targeting with a clean API. Install it to skip the boilerplate.
- Use rotating IPs for scraping at scale and sticky sessions for multi-step workflows like login flows.
- Always set timeouts: on the http.Client as a whole, and via context.WithTimeout for per-request control.
- Rate-limit responsibly with golang.org/x/time/rate and cap concurrency with semaphore channels.
- Geo-target your requests by passing a country code to access 190+ locations worldwide.
- Check our guide on the best proxies for web scraping to choose the right plan for your workload.
Frequently Asked Questions
How do I configure a proxy in Go's net/http client?
Set the Proxy field on http.Transport to http.ProxyURL(parsedURL) where parsedURL is your proxy address parsed with url.Parse(). Then pass the transport to http.Client. The standard library handles CONNECT tunneling for HTTPS targets automatically.
Does the ProxyHat Go SDK support HTTPS targets?
Yes. The SDK uses HTTP CONNECT tunneling under the hood, so all HTTPS traffic is encrypted end-to-end between your client and the target server. The proxy only sees the destination hostname.
How many concurrent requests can I make through Go proxies?
Go's goroutines are extremely lightweight (each starts with only a couple of kilobytes of stack), so you can run tens of thousands concurrently. The practical limit is your ProxyHat plan's concurrent connection allowance and the target server's capacity. Use a semaphore channel to cap concurrency at a safe level.
What is the difference between rotating and sticky proxy sessions?
Rotating sessions assign a new IP address to every request, which is ideal for broad scraping. Sticky sessions keep the same IP for a set duration (up to 30 minutes), making them suitable for multi-step flows where the target expects a consistent visitor, such as login sequences or checkout pages.
How do I handle proxy errors and retries in Go?
The ProxyHat Go SDK provides built-in retry logic with configurable MaxRetries and RetryDelay. If using the standard library, implement exponential backoff by wrapping your request in a loop that doubles the delay after each failed attempt. Always check for both network errors and HTTP 5xx status codes.