
    A Coding Implementation to Build an Interactive Transcript and PDF Analysis with Lyzr Chatbot Framework

    May 28, 2025

    In this tutorial, we introduce a streamlined approach for extracting, processing, and analyzing YouTube video transcripts using Lyzr, an advanced AI-powered framework designed to simplify interaction with textual data. Leveraging Lyzr’s intuitive ChatBot interface alongside the youtube-transcript-api and FPDF, users can effortlessly convert video content into structured PDF documents and conduct insightful analyses through dynamic interactions. Ideal for researchers, educators, and content creators, Lyzr accelerates the process of deriving meaningful insights, generating summaries, and formulating creative questions directly from multimedia resources.

    !pip install lyzr youtube-transcript-api fpdf2 ipywidgets
    !apt-get update -qq && apt-get install -y fonts-dejavu-core

    We set up the necessary environment for the tutorial. The first command installs essential Python libraries, including lyzr for AI-powered chat, youtube-transcript-api for transcript extraction, fpdf2 for PDF generation, and ipywidgets for creating interactive chat interfaces. The second command ensures the DejaVu Sans font is installed on the system to support full Unicode text rendering within the generated PDF files.
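    As a quick sanity check (a minimal sketch, assuming a Colab-style environment), we can confirm that the packages import cleanly and that the DejaVu font landed at the path used later in the tutorial:

    import importlib.util
    import os

    # Verify that each installed package can be found by Python.
    for pkg in ("lyzr", "youtube_transcript_api", "fpdf", "ipywidgets"):
        print(pkg, "OK" if importlib.util.find_spec(pkg) else "MISSING")

    # Verify that the Unicode font used for PDF generation is present.
    font_path = "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"
    print("DejaVuSans.ttf found:", os.path.exists(font_path))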

    import os
    import openai
    
    
    # Set the key here, or export OPENAI_API_KEY in the environment beforehand.
    os.environ.setdefault('OPENAI_API_KEY', "YOUR_OPENAI_API_KEY_HERE")
    openai.api_key = os.getenv("OPENAI_API_KEY")

    We configure OpenAI API key access for the tutorial. We import the os and openai modules, set the key via os.environ (or rely on an OPENAI_API_KEY variable already exported in the environment), and then read it into openai.api_key. This setup is essential for leveraging OpenAI’s models within the Lyzr framework.
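    If we prefer not to hard-code the key in the notebook, a minimal alternative sketch (assuming an interactive session) prompts for it with Python’s standard getpass module:

    import os
    from getpass import getpass

    # Prompt for the key so it never appears in the notebook source.
    if not os.getenv("OPENAI_API_KEY"):
        os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")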

    import json
    from lyzr import ChatBot
    from youtube_transcript_api import YouTubeTranscriptApi, TranscriptsDisabled, NoTranscriptFound, CouldNotRetrieveTranscript
    from fpdf import FPDF
    from ipywidgets import Textarea, Button, Output, Layout
    from IPython.display import display, Markdown
    import re

    Check out the full Notebook here

    We import the essential libraries required for the tutorial: json for data handling, Lyzr’s ChatBot for AI-driven chat capabilities, and YouTubeTranscriptApi for extracting transcripts from YouTube videos. We also bring in FPDF for PDF generation, ipywidgets for interactive UI components, and IPython.display for rendering Markdown content in notebooks, along with the re module for regular-expression operations in text processing.

    def transcript_to_pdf(video_id: str, output_pdf_path: str) -> bool:
        """
        Download YouTube transcript (manual or auto) and write it into a PDF
        using the system-installed DejaVuSans.ttf for full Unicode support.
        Fixed to handle long words and text formatting issues.
        """
        try:
            entries = YouTubeTranscriptApi.get_transcript(video_id)
        except (TranscriptsDisabled, NoTranscriptFound, CouldNotRetrieveTranscript):
            try:
                entries = YouTubeTranscriptApi.get_transcript(video_id, languages=['en'])
            except Exception:
                print(f"[!] No transcript for {video_id}")
                return False
        except Exception as e:
            print(f"[!] Error fetching transcript for {video_id}: {e}")
            return False
    
    
        text = "n".join(e['text'] for e in entries).strip()
        if not text:
            print(f"[!] Empty transcript for {video_id}")
            return False
    
    
        pdf = FPDF()
        pdf.add_page()
    
    
        font_path = "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"
        try:
            if os.path.exists(font_path):
                pdf.add_font("DejaVu", "", font_path)
                pdf.set_font("DejaVu", size=10)
            else:
                pdf.set_font("Arial", size=10)
        except Exception:
            pdf.set_font("Arial", size=10)
    
    
        pdf.set_margins(20, 20, 20)
        pdf.set_auto_page_break(auto=True, margin=25)
    
    
        def process_text_for_pdf(text):
            text = re.sub(r'\s+', ' ', text)
            text = text.replace('\n\n', '\n')
    
    
            processed_lines = []
            for paragraph in text.split('\n'):
                if not paragraph.strip():
                    continue
    
    
                words = paragraph.split()
                processed_words = []
                for word in words:
                    if len(word) > 50:
                        chunks = [word[i:i+50] for i in range(0, len(word), 50)]
                        processed_words.extend(chunks)
                    else:
                        processed_words.append(word)
    
    
                processed_lines.append(' '.join(processed_words))
    
    
            return processed_lines
    
    
        processed_lines = process_text_for_pdf(text)
    
    
        for line in processed_lines:
            if line.strip():
                try:
                    pdf.multi_cell(0, 8, line.encode('utf-8', 'replace').decode('utf-8'), align='L')
                    pdf.ln(2)
                except Exception as e:
                    print(f"[!] Warning: Skipped problematic line: {str(e)[:100]}...")
                    continue
    
    
        try:
            pdf.output(output_pdf_path)
            print(f"[+] PDF saved: {output_pdf_path}")
            return True
        except Exception as e:
            print(f"[!] Error saving PDF: {e}")
            return False

    Check out the full Notebook here

    This function, transcript_to_pdf, automates converting YouTube video transcripts into clean, readable PDF documents. It retrieves the transcript using the YouTubeTranscriptApi, gracefully handles exceptions such as unavailable transcripts, and formats the text to avoid issues like long words breaking the PDF layout. The function also ensures proper Unicode support by using the DejaVuSans font (if available) and optimizes text for PDF rendering by splitting overly long words and maintaining consistent margins. It returns True if the PDF is generated successfully or False if errors occur.
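    As a quick standalone test (a minimal sketch; the video ID below is a placeholder, so substitute any video that has an English transcript), the function can be exercised on its own before wiring up the full pipeline:

    # Placeholder video ID; replace with a real one before running.
    video_id = "VIDEO_ID_HERE"
    if transcript_to_pdf(video_id, f"{video_id}.pdf"):
        print(f"Transcript PDF ready: {video_id}.pdf")
    else:
        print("No transcript could be converted for this video.")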

    def create_interactive_chat(agent):
        input_area = Textarea(
            placeholder="Type a question…", layout=Layout(width='80%', height='80px')
        )
        send_button = Button(description="Send", button_style="success")
        output_area = Output(layout=Layout(
            border='1px solid gray', width='80%', height='200px', overflow='auto'
        ))
    
    
        def on_send(btn):
            question = input_area.value.strip()
            if not question:
                return
            with output_area:
                print(f">> You: {question}")
                try:
                    print("<< Bot:", agent.chat(question), "n")
                except Exception as e:
                    print(f"[!] Error: {e}n")
    
    
        send_button.on_click(on_send)
        display(input_area, send_button, output_area)
    

    Check out the full Notebook here

    This function, create_interactive_chat, builds a simple interactive chat interface within Colab. Using ipywidgets, it provides a text input area (Textarea) for typing questions, a send button (Button) to trigger the chat, and an output area (Output) to display the conversation. When the user clicks Send, the entered question is passed to the Lyzr ChatBot agent, which generates and displays a response. This lets users hold dynamic Q&A sessions grounded in the transcript analysis, making the interaction feel like a live conversation with the AI model.
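    For example (a minimal sketch, assuming a transcript PDF has already been generated by transcript_to_pdf), we can point a Lyzr PDF-chat agent at that file and open the widget-based chat directly:

    # Assumes "VIDEO_ID_HERE.pdf" was produced by transcript_to_pdf above.
    agent = ChatBot.pdf_chat(input_files=["VIDEO_ID_HERE.pdf"])
    create_interactive_chat(agent)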

    def main():
        video_ids = ["dQw4w9WgXcQ", "jNQXAC9IVRw"]
        processed = []
    
    
        for vid in video_ids:
            pdf_path = f"{vid}.pdf"
            if transcript_to_pdf(vid, pdf_path):
                processed.append((vid, pdf_path))
            else:
                print(f"[!] Skipping {vid} — no transcript available.")
    
    
        if not processed:
            print("[!] No PDFs generated. Please try other video IDs.")
            return
    
    
        first_vid, first_pdf = processed[0]
        print(f"[+] Initializing PDF-chat agent for video {first_vid}…")
        bot = ChatBot.pdf_chat(
            input_files=[first_pdf]
        )
    
    
        questions = [
            "Summarize the transcript in 2–3 sentences.",
            "What are the top 5 insights and why?",
            "List any recommendations or action items mentioned.",
            "Write 3 quiz questions to test comprehension.",
            "Suggest 5 creative prompts to explore further."
        ]
        responses = {}
        for q in questions:
            print(f"[?] {q}")
            try:
                resp = bot.chat(q)
            except Exception as e:
                resp = f"[!] Agent error: {e}"
            responses[q] = resp
            print(f"[/] {resp}n" + "-"*60 + "n")
    
    
        with open('responses.json','w',encoding='utf-8') as f:
            json.dump(responses,f,indent=2)
        md = "# Transcript Analysis Reportnn"
        for q,a in responses.items():
            md += f"## Q: {q}n{a}nn"
        with open('report.md','w',encoding='utf-8') as f:
            f.write(md)
    
    
        display(Markdown(md))
    
    
        if len(processed) > 1:
            print("[+] Generating comparison…")
            _, pdf1 = processed[0]
            _, pdf2 = processed[1]
            compare_bot = ChatBot.pdf_chat(
                input_files=[pdf1, pdf2]
            )
            comparison = compare_bot.chat(
                "Compare the main themes of these two videos and highlight key differences."
            )
            print("[+] Comparison Result:n", comparison)
    
    
        print("n=== Interactive Chat (Video 1) ===")
        create_interactive_chat(bot)
    

    Check out the full Notebook here

    Our main() function serves as the core driver for the entire tutorial pipeline. It processes a list of YouTube video IDs, converting available transcripts into PDF files using the transcript_to_pdf function. Once PDFs are generated, a Lyzr PDF-chat agent is initialized on the first PDF, allowing the model to answer predefined questions such as summarizing the content, identifying insights, and generating quiz questions. The answers are stored in a responses.json file and formatted into a Markdown report (report.md). If multiple PDFs are created, the function compares them using the Lyzr agent to highlight key differences between the videos. Finally, it launches an interactive chat interface with the user, enabling dynamic conversations based on the transcript content, showcasing the power of Lyzr for seamless PDF analysis and AI-driven interactions.
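    Because main() persists the question-and-answer pairs as plain JSON, they can be reloaded later for further processing, as in this small sketch:

    import json

    # Reload the saved Q&A pairs written by main().
    with open('responses.json', encoding='utf-8') as f:
        saved = json.load(f)

    for question, answer in saved.items():
        print(question)
        print(answer[:200])  # preview the first part of each answer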

    if __name__ == "__main__":
        main()
    

    We ensure that the main() function runs only when the script is executed directly, not when it’s imported as a module. It’s a best practice in Python scripts to control execution flow.

    In conclusion, by integrating Lyzr into our workflow as demonstrated in this tutorial, we can effortlessly transform YouTube videos into insightful, actionable knowledge. Lyzr’s intelligent PDF-chat capability simplifies extracting core themes and generating comprehensive summaries, and also enables engaging, interactive exploration of content through an intuitive conversational interface. Adopting Lyzr empowers users to unlock deeper insights and significantly enhances productivity when working with video transcripts, whether for academic research, educational purposes, or creative content analysis.


    Check out the Notebook here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.

    The post A Coding Implementation to Build an Interactive Transcript and PDF Analysis with Lyzr Chatbot Framework appeared first on MarkTechPost.
