APIs change. Auth schemes evolve. Rate limits tighten. Entire services disappear.
If you build integrations long enough, you realise the “hard part” isn’t making the first request — it’s making something that survives:
- new HTTP clients and frameworks
- auth edge cases
- flaky networks and CI environments
- product growth and increased traffic
- vendor shutdowns and breaking changes
Below are five integration patterns that keep paying dividends, regardless of the API you’re talking to.
1) Separate transport from business logic
The fastest way to create a brittle integration is to mix:
- HTTP calls and retries
- request/response parsing
- domain rules (“what does this data mean for my product?”)
- error handling and logging
…all in the same place.
What to do instead
Create a thin transport layer responsible for HTTP mechanics:
- base URLs, headers, timeouts, retries, backoff
- serialization (JSON), status code handling
Keep your integration/domain logic separate:
- “get user profile”, “sync contacts”, “submit payment”
- mapping API data into your own internal model
Why it matters
HTTP clients come and go. Your business logic shouldn’t care whether you use Guzzle, cURL, fetch, Axios, or whatever comes next.
Rule of thumb: depend on an interface, not a specific HTTP client.
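As a concrete sketch, here’s what that boundary can look like in TypeScript. The `Transport` interface and the `FetchTransport` and `UserService` classes are illustrative names, not any particular library’s API:

```ts
// Transport layer: owns HTTP mechanics. Business logic depends on this
// interface, never on a concrete HTTP client.
interface Transport {
  request<T>(method: string, path: string, body?: unknown): Promise<T>;
}

// One possible implementation, using the built-in fetch API.
class FetchTransport implements Transport {
  constructor(
    private baseUrl: string,
    private defaultHeaders: Record<string, string> = {},
    private timeoutMs = 10_000,
  ) {}

  async request<T>(method: string, path: string, body?: unknown): Promise<T> {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), this.timeoutMs);
    try {
      const res = await fetch(this.baseUrl + path, {
        method,
        headers: { 'Content-Type': 'application/json', ...this.defaultHeaders },
        body: body === undefined ? undefined : JSON.stringify(body),
        signal: controller.signal,
      });
      if (!res.ok) throw new Error(`HTTP ${res.status} on ${method} ${path}`);
      return (await res.json()) as T;
    } finally {
      clearTimeout(timer);
    }
  }
}

// Domain layer: knows what a "user profile" means, not how HTTP works.
interface UserProfile {
  id: string;
  email: string;
}

class UserService {
  constructor(private transport: Transport) {}

  getUserProfile(id: string): Promise<UserProfile> {
    return this.transport.request<UserProfile>('GET', `/users/${id}`);
  }
}
```

Swapping fetch for Axios (or whatever comes next) now means writing one new `Transport` implementation; `UserService` never changes.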
2) Treat authentication as its own subsystem
Authentication looks simple until it isn’t. OAuth variants, refresh flows, token expiry, revoked credentials, clock skew, and provider-specific error formats will eventually leak into your codebase — unless you stop them.
What to do instead
Create a dedicated auth layer that:
- acquires tokens
- refreshes tokens
- signs requests (if needed)
- standardises auth failures into your own error types
Your application code should be able to say:
“Make an authenticated call to the provider”
…without caring whether the provider uses OAuth 1.0a, OAuth 2.0, API keys, JWTs, or something custom.
Rule of thumb: the rest of your system should not know how auth works — only whether it succeeded.
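Here’s a minimal sketch of such a layer in TypeScript, assuming an OAuth 2.0 client-credentials flow; the `TokenProvider` and `AuthError` names are hypothetical:

```ts
// Standardised auth failure: callers never parse provider-specific errors.
class AuthError extends Error {}

// The rest of the system only ever asks: "give me a valid token".
interface TokenProvider {
  getToken(): Promise<string>;
}

class OAuth2TokenProvider implements TokenProvider {
  private token: string | null = null;
  private expiresAt = 0; // epoch milliseconds

  constructor(
    private tokenUrl: string,
    private clientId: string,
    private clientSecret: string,
  ) {}

  async getToken(): Promise<string> {
    // Refresh slightly early to absorb clock skew between us and the provider.
    const skewMs = 30_000;
    if (this.token && Date.now() < this.expiresAt - skewMs) {
      return this.token;
    }

    const res = await fetch(this.tokenUrl, {
      method: 'POST',
      body: new URLSearchParams({
        grant_type: 'client_credentials',
        client_id: this.clientId,
        client_secret: this.clientSecret,
      }),
    });
    if (!res.ok) {
      throw new AuthError(`token request failed: HTTP ${res.status}`);
    }

    const data = (await res.json()) as { access_token: string; expires_in: number };
    this.token = data.access_token;
    this.expiresAt = Date.now() + data.expires_in * 1000;
    return this.token;
  }
}
```

Switching providers or auth schemes means writing a new `TokenProvider`; nothing else in the codebase moves.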
3) Build for offline testing: mock first, not last
Live APIs are a hostile environment for tests:
- rate limits
- network instability
- changing datasets
- expiring credentials
- provider outages
If your test suite needs the real API, it will be flaky. If it’s flaky, it will be ignored. And when tests are ignored, integrations break silently.
What to do instead
Design your integration so it can run fully offline:
- Record real responses once (or use fixtures from API docs)
- Replay those responses in unit/integration tests
- Simulate hard-to-trigger errors:
  - 429 rate limits
  - 401 expired tokens
  - 500s and timeouts
  - malformed payloads
Bonus tip: include fixtures for both “happy path” and “ugly path.” Most failures happen in the ugly path.
Rule of thumb: if it can’t be tested offline, it can’t be relied on.
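Building on the `Transport` interface from pattern 1 and `AuthError` from pattern 2, here’s one way to replay fixtures fully offline, including the ugly path. The `ReplayTransport` and `RateLimitError` names are illustrative:

```ts
// A fake Transport that replays canned fixtures, so tests never touch the network.
class RateLimitError extends Error {}

type Fixture = { status: number; body: unknown };

class ReplayTransport implements Transport {
  constructor(private fixtures: Map<string, Fixture>) {}

  async request<T>(method: string, path: string): Promise<T> {
    const fixture = this.fixtures.get(`${method} ${path}`);
    if (!fixture) throw new Error(`no fixture recorded for ${method} ${path}`);
    // Turn recorded statuses into the same errors the real transport would raise.
    if (fixture.status === 429) throw new RateLimitError();
    if (fixture.status === 401) throw new AuthError('token expired');
    if (fixture.status >= 500) throw new Error(`HTTP ${fixture.status}`);
    return fixture.body as T;
  }
}

// In a test: exercise both the happy path and the ugly path, offline.
const fixtures = new Map<string, Fixture>([
  ['GET /users/42', { status: 200, body: { id: '42', email: 'a@example.com' } }],
  ['GET /users/99', { status: 429, body: null }], // simulated rate limit
]);
const service = new UserService(new ReplayTransport(fixtures));
```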
4) Assume the API will disappear (because it might)
APIs get deprecated. Companies pivot. Products are acquired and shut down. Even “stable” providers can change terms, pricing, or availability.
So design the integration with an exit strategy.
What to do instead
- Keep API-specific code isolated behind a clear boundary
- Never block critical user flows on a third-party call
- Cache where it’s safe and appropriate
- Define fallback behaviour:
  - “try later” queues
  - degraded mode UI
  - last-known-good data
- Make removal possible without rewriting your system
Rule of thumb: your core product should degrade gracefully if the provider fails tomorrow.
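As an illustration, a thin wrapper (a hypothetical `ResilientUserService`, reusing `UserService` from pattern 1) can serve last-known-good data when the provider fails:

```ts
// Degraded mode: a critical flow never hard-depends on the third party.
class ResilientUserService {
  // In production this cache would live somewhere durable (Redis, a table);
  // an in-memory Map keeps the sketch small.
  private lastKnownGood = new Map<string, UserProfile>();

  constructor(private inner: UserService) {}

  async getUserProfile(id: string): Promise<UserProfile | null> {
    try {
      const fresh = await this.inner.getUserProfile(id);
      this.lastKnownGood.set(id, fresh); // cache where safe and appropriate
      return fresh;
    } catch {
      // Provider down, rate-limited, or gone entirely: degrade instead of failing.
      return this.lastKnownGood.get(id) ?? null;
    }
  }
}
```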
5) Engineer for rate limits and resilience from day one
Rate limits aren’t an error — they’re part of the contract. Ignoring them turns your integration into a production incident waiting to happen.
What to do instead
Implement resilience techniques deliberately:
- Exponential backoff + jitter on transient failures
- Circuit breakers to prevent cascading failure
- Request queues for non-urgent operations
- Idempotency where supported (especially for write operations)
- Structured logging that captures:
  - endpoint, status code, request IDs
  - rate limit headers
  - retry counts and latency
Also: monitor real usage. You can’t manage what you don’t measure.
Rule of thumb: treat external APIs like an unreliable dependency you must contain.
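For the first item on that list, here’s a minimal sketch of exponential backoff with full jitter, reusing `RateLimitError` from pattern 3; `withRetry` and `isTransient` are hypothetical helpers:

```ts
// Exponential backoff with full jitter for transient failures (429s, 5xx, timeouts).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !isTransient(err)) throw err;
      // Full jitter: random delay in [0, base * 2^attempt), capped at 30s.
      const delayMs = Math.random() * Math.min(30_000, baseDelayMs * 2 ** attempt);
      // Structured log line: retry counts and latency are what you'll graph later.
      console.warn(JSON.stringify({ event: 'retry', attempt, delayMs: Math.round(delayMs) }));
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// What counts as transient is provider-specific; this is one reasonable default.
function isTransient(err: unknown): boolean {
  return err instanceof RateLimitError || /HTTP 5\d\d|timeout/i.test(String(err));
}
```

Usage is a one-line wrap: `await withRetry(() => service.getUserProfile('42'))`.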
A practical checklist (use this on your next integration)
Before shipping, can you answer “yes” to these?
- HTTP is abstracted behind an interface (swappable client)
- Auth is isolated (tokens/signing/refresh are not scattered)
- Tests run offline using fixtures/mocks
- Failure modes are defined (timeouts, 429s, 401s, 5xx)
- Critical flows do not hard-depend on the provider
- Rate limits are respected (backoff, queueing, monitoring)
- Integration code is removable (clean boundary)
If not, your integration may work today, but it will be expensive to keep working.
Closing thought
Tools and protocols will keep changing — OpenAPI, GraphQL, new auth providers, new SDKs. But the underlying integration problems stay the same.
If you design for change, failure, and removal, your integration won’t just “work.” It will remain maintainable as the ecosystem shifts around it.
