    A Coding Guide to Build a Production-Ready Asynchronous Python SDK with Rate Limiting, In-Memory Caching, and Authentication

    June 23, 2025

    In this tutorial, we walk through building a robust, production-ready asynchronous Python SDK. We begin by installing and configuring the essential async HTTP libraries (aiohttp, nest-asyncio), then implement the core components: structured response objects, rolling-window rate limiting, in-memory caching with TTL, and a clean, dataclass-driven design. These pieces come together in an AdvancedSDK class that supports async context management, automatic wait-on-rate-limit behavior, JSON/auth header injection, and convenient HTTP-verb methods. Along the way, a demo harness against JSONPlaceholder illustrates caching efficiency, batch fetching under rate limits, and error handling, and shows how to extend the SDK via a fluent “builder” pattern for custom configuration.

    !pip install aiohttp nest-asyncio

    import asyncio
    import aiohttp
    import time
    import json
    from typing import Dict, List, Optional, Any, Union
    from dataclasses import dataclass, asdict
    from datetime import datetime, timedelta
    import hashlib
    import logging

    We install the required packages and import the asynchronous runtime, asyncio and aiohttp, alongside utilities for timing, JSON handling, dataclass modeling, cache keying (hashlib, datetime), and structured logging. Running !pip install aiohttp nest-asyncio before the imports makes sure both libraries are available in the notebook; nest-asyncio itself is applied later so that Colab’s already-running event loop can execute our async HTTP requests and rate-limited workflows.

    @dataclass
    class APIResponse:
        """Structured response object"""
        data: Any
        status_code: int
        headers: Dict[str, str]
        timestamp: datetime
       
        def to_dict(self) -> Dict:
            return asdict(self)

    The APIResponse dataclass encapsulates HTTP response details, payload (data), status code, headers, and the timestamp of retrieval into a single, typed object. The to_dict() helper converts the instance into a plain dictionary for easy logging, serialization, or downstream processing.
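    To see how this object is consumed downstream, here is a small, hypothetical usage sketch (the field values are made up); note that json.dumps needs default=str because the timestamp field is a datetime:

    import json
    from datetime import datetime

    # Hypothetical example: build a response by hand and serialize it.
    resp = APIResponse(
        data={"id": 1, "title": "hello"},
        status_code=200,
        headers={"Content-Type": "application/json"},
        timestamp=datetime.now(),
    )
    print(resp.to_dict()["status_code"])                       # 200
    print(json.dumps(resp.to_dict(), default=str, indent=2))   # default=str serializes the datetime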

    class RateLimiter:
        """Token bucket rate limiter"""
        def __init__(self, max_calls: int = 100, time_window: int = 60):
            self.max_calls = max_calls
            self.time_window = time_window
            self.calls = []
       
        def can_proceed(self) -> bool:
            now = time.time()
            self.calls = [call_time for call_time in self.calls if now - call_time < self.time_window]
           
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return True
            return False
       
        def wait_time(self) -> float:
            if not self.calls:
                return 0
            return max(0, self.time_window - (time.time() - self.calls[0]))

    The RateLimiter class enforces a rolling-window limit: despite the “token bucket” docstring, it is effectively a sliding-window log that tracks the timestamps of recent calls and allows up to max_calls within time_window seconds. When the limit is reached, can_proceed() returns False, and wait_time() reports how long to pause (until the oldest recorded call leaves the window) before making the next request.
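    To show the intended pattern in isolation, here is a small standalone sketch (the limit values are arbitrary, chosen only for illustration) that loops on can_proceed()/wait_time() before doing work:

    import asyncio

    async def limited_task(limiter: RateLimiter, n: int):
        # Keep yielding to the event loop until the limiter grants a slot.
        while not limiter.can_proceed():
            await asyncio.sleep(limiter.wait_time())
        print(f"task {n}: slot granted")

    async def main():
        limiter = RateLimiter(max_calls=2, time_window=1)  # 2 calls per rolling second
        await asyncio.gather(*(limited_task(limiter, i) for i in range(5)))

    asyncio.run(main())  # in Colab, apply nest_asyncio first or use top-level await instead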

    class Cache:
        """Simple in-memory cache with TTL"""
        def __init__(self, default_ttl: int = 300):
            self.cache = {}
            self.default_ttl = default_ttl
       
        def _generate_key(self, method: str, url: str, params: Dict = None) -> str:
            key_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
            return hashlib.md5(key_data.encode()).hexdigest()
       
        def get(self, method: str, url: str, params: Dict = None) -> Optional[APIResponse]:
            key = self._generate_key(method, url, params)
            if key in self.cache:
                response, expiry = self.cache[key]
                if datetime.now() < expiry:
                    return response
                del self.cache[key]
            return None
       
        def set(self, method: str, url: str, response: APIResponse, params: Dict = None, ttl: int = None):
            key = self._generate_key(method, url, params)
            expiry = datetime.now() + timedelta(seconds=ttl or self.default_ttl)
            self.cache[key] = (response, expiry)

    The Cache class provides a lightweight in-memory TTL cache for API responses by hashing the request signature (method, URL, params) into a unique key. It returns valid cached APIResponse objects before expiry and automatically evicts stale entries after their time-to-live has elapsed.
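    A brief sketch of the cache in isolation (the endpoint, params, and TTL below are illustrative only):

    from datetime import datetime

    cache = Cache(default_ttl=5)  # 5-second TTL, illustration only
    resp = APIResponse(data={"id": 1}, status_code=200, headers={}, timestamp=datetime.now())

    cache.set("GET", "/posts/1", resp, params={"userId": 1})
    hit = cache.get("GET", "/posts/1", params={"userId": 1})  # same signature -> cache hit
    miss = cache.get("GET", "/posts/2")                       # never cached -> None
    print(hit is resp, miss is None)                          # True True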

    class AdvancedSDK:
        """Advanced SDK with modern Python patterns"""
       
        def __init__(self, base_url: str, api_key: str = None, rate_limit: int = 100):
            self.base_url = base_url.rstrip('/')
            self.api_key = api_key
            self.session = None
            self.rate_limiter = RateLimiter(max_calls=rate_limit)
            self.cache = Cache()
            self.logger = self._setup_logger()
           
        def _setup_logger(self) -> logging.Logger:
            logger = logging.getLogger(f"SDK-{id(self)}")
            if not logger.handlers:
                handler = logging.StreamHandler()
                formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
                handler.setFormatter(formatter)
                logger.addHandler(handler)
                logger.setLevel(logging.INFO)
            return logger
       
        async def __aenter__(self):
            """Async context manager entry"""
            self.session = aiohttp.ClientSession()
            return self
       
        async def __aexit__(self, exc_type, exc_val, exc_tb):
            """Async context manager exit"""
            if self.session:
                await self.session.close()
       
        def _get_headers(self) -> Dict[str, str]:
            headers = {'Content-Type': 'application/json'}
            if self.api_key:
                headers['Authorization'] = f'Bearer {self.api_key}'
            return headers
       
        async def _make_request(self, method: str, endpoint: str, params: Dict = None,
                              data: Dict = None, use_cache: bool = True) -> APIResponse:
            """Core request method with rate limiting and caching"""
           
            if use_cache and method.upper() == 'GET':
                cached = self.cache.get(method, endpoint, params)
                if cached:
                    self.logger.info(f"Cache hit for {method} {endpoint}")
                    return cached
           
            while not self.rate_limiter.can_proceed():
                # Loop until a slot frees up; can_proceed() records the call once it returns True.
                wait_time = self.rate_limiter.wait_time()
                self.logger.warning(f"Rate limit hit, waiting {wait_time:.2f}s")
                await asyncio.sleep(wait_time)
           
            url = f"{self.base_url}/{endpoint.lstrip('/')}"
           
            try:
                async with self.session.request(
                    method=method.upper(),
                    url=url,
                    params=params,
                    json=data,
                    headers=self._get_headers()
                ) as resp:
                    response_data = await resp.json() if resp.content_type == 'application/json' else await resp.text()
                   
                    api_response = APIResponse(
                        data=response_data,
                        status_code=resp.status,
                        headers=dict(resp.headers),
                        timestamp=datetime.now()
                    )
                   
                    if use_cache and method.upper() == 'GET' and 200 <= resp.status < 300:
                        self.cache.set(method, endpoint, api_response, params)
                   
                    self.logger.info(f"{method.upper()} {endpoint} - Status: {resp.status}")
                    return api_response
                   
            except Exception as e:
                self.logger.error(f"Request failed: {str(e)}")
                raise
       
        async def get(self, endpoint: str, params: Dict = None, use_cache: bool = True) -> APIResponse:
            return await self._make_request('GET', endpoint, params=params, use_cache=use_cache)
       
        async def post(self, endpoint: str, data: Dict = None) -> APIResponse:
            return await self._make_request('POST', endpoint, data=data, use_cache=False)
       
        async def put(self, endpoint: str, data: Dict = None) -> APIResponse:
            return await self._make_request('PUT', endpoint, data=data, use_cache=False)
       
        async def delete(self, endpoint: str) -> APIResponse:
            return await self._make_request('DELETE', endpoint, use_cache=False)

    The AdvancedSDK class wraps everything together into a clean, async-first client: it manages an aiohttp session via async context managers, injects JSON and auth headers, and coordinates our RateLimiter and Cache under the hood. Its _make_request method centralizes GET/POST/PUT/DELETE logic, handling cache lookups, rate-limit waits, error logging, and response packing into APIResponse objects, while the get/post/put/delete helpers give us ergonomic, high-level calls.
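    As a quick illustration of the public surface, here is a hypothetical usage sketch; the base URL, endpoint, and API key are placeholders, not a real service:

    async def fetch_profile():
        # Placeholder base URL and API key, for illustration only.
        async with AdvancedSDK("https://api.example.com", api_key="YOUR_KEY", rate_limit=60) as sdk:
            resp = await sdk.get("/users/42", params={"expand": "profile"})
            if 200 <= resp.status_code < 300:
                return resp.data
            raise RuntimeError(f"Unexpected status: {resp.status_code}")

    # asyncio.run(fetch_profile())  # commented out: the placeholder URL is not a live endpoint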

    async def demo_sdk():
        """Demonstrate SDK capabilities"""
        print("🚀 Advanced SDK Demo")
        print("=" * 50)
       
        async with AdvancedSDK("https://jsonplaceholder.typicode.com") as sdk:
           
            print("n📥 Testing GET request with caching...")
            response1 = await sdk.get("/posts/1")
            print(f"First request - Status: {response1.status_code}")
            print(f"Title: {response1.data.get('title', 'N/A')}")
           
            response2 = await sdk.get("/posts/1")
            print(f"Second request (cached) - Status: {response2.status_code}")
           
            print("n📤 Testing POST request...")
            new_post = {
                "title": "Advanced SDK Tutorial",
                "body": "This SDK demonstrates modern Python patterns",
                "userId": 1
            }
            post_response = await sdk.post("/posts", data=new_post)
            print(f"POST Status: {post_response.status_code}")
            print(f"Created post ID: {post_response.data.get('id', 'N/A')}")
           
            print("n⚡ Testing batch requests with rate limiting...")
            tasks = []
            for i in range(1, 6):
                tasks.append(sdk.get(f"/posts/{i}"))
           
            results = await asyncio.gather(*tasks)
            print(f"Batch completed: {len(results)} requests")
            for i, result in enumerate(results, 1):
                print(f"  Post {i}: {result.data.get('title', 'N/A')[:30]}...")
           
            print("n❌ Testing error handling...")
            try:
                error_response = await sdk.get("/posts/999999")
                print(f"Error response status: {error_response.status_code}")
            except Exception as e:
                print(f"Handled error: {type(e).__name__}")
       
        print("n✅ Demo completed successfully!")
    
    
    async def run_demo():
        """Colab-friendly demo runner"""
        await demo_sdk()

    The demo_sdk coroutine exercises the SDK’s core features against the JSONPlaceholder API: it issues the same GET twice to show a cache hit, performs a POST, fires a batch of GETs under rate limiting, and exercises the error path, printing status codes and sample data at each step. The run_demo helper simply awaits demo_sdk(), so the demo can be launched from within a Colab notebook’s existing event loop.
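    In a notebook cell that already has a running event loop (as Colab and recent IPython do), the simplest way to launch it is top-level await:

    # Inside a Colab/IPython cell, coroutines can be awaited at the top level:
    await run_demo()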

    import nest_asyncio
    nest_asyncio.apply()
    
    
    if __name__ == "__main__":
        try:
            asyncio.run(demo_sdk())
        except RuntimeError:
            loop = asyncio.get_event_loop()
            loop.run_until_complete(demo_sdk())
    
    
    class SDKBuilder:
        """Builder pattern for SDK configuration"""
        def __init__(self, base_url: str):
            self.base_url = base_url
            self.config = {}
       
        def with_auth(self, api_key: str):
            self.config['api_key'] = api_key
            return self
       
        def with_rate_limit(self, calls_per_minute: int):
            self.config['rate_limit'] = calls_per_minute
            return self
       
        def build(self) -> AdvancedSDK:
            return AdvancedSDK(self.base_url, **self.config)

    Finally, we apply nest_asyncio to enable nested event loops in Colab, then run the demo via asyncio.run, falling back to the running loop’s run_until_complete if asyncio.run raises a RuntimeError. The same block also defines an SDKBuilder class, a fluent builder for configuring and instantiating the AdvancedSDK with custom authentication and rate-limit settings.
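    For completeness, a hypothetical usage of the builder; the base URL and key are placeholders:

    # Fluent configuration, then use the resulting client as an async context manager.
    sdk = (
        SDKBuilder("https://api.example.com")   # placeholder base URL
        .with_auth("YOUR_KEY")                  # placeholder API key
        .with_rate_limit(30)                    # 30 calls per 60-second window
        .build()
    )

    async def ping():
        async with sdk:
            return (await sdk.get("/health")).status_code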

    In conclusion, this SDK tutorial provides a scalable foundation for any RESTful integration, combining modern Python idioms (dataclasses, async/await, context managers) with practical tooling (rate limiter, cache, structured logging). By adapting the patterns shown here, particularly the separation of concerns between request orchestration, caching, and response modeling, teams can accelerate the development of new API clients while ensuring predictability, observability, and resilience.



    The post A Coding Guide to Build a Production-Ready Asynchronous Python SDK with Rate Limiting, In-Memory Caching, and Authentication appeared first on MarkTechPost.
