Show HN: Local GLaDOS

https://github.com/dnhkng/GlaDOS


GLaDOS Personality Core

Prologue

"Science isn't about asking why. It's about asking, 'Why not?'" - Cave Johnson

GLaDOS is the AI antagonist from Valve's Portal series—a sardonic, passive-aggressive superintelligence who views humans as test subjects worthy of both study and mockery.

Back in 2022 when ChatGPT made its debut, I had a realization: we are living in the Sci-Fi future and can actually build her now. A demented, obsessive AI fixated on humanity, super intelligent yet utterly lacking sound judgment; so just like an LLM, right? 2026, and still no moon colonies or flying cars. But a passive-aggressive AI that controls your lights and runs experiments on you? That we can do.

The architecture borrows from Minsky's Society of Mind—rather than one monolithic prompt, multiple specialized agents (vision, memory, personality, planning) each contribute to a dynamic context. GLaDOS's "self" emerges from their combined output, assembled fresh for each interaction.

The hard part was latency. A round-trip response time of 600 milliseconds is the threshold—below it, conversation stops feeling stilted and starts to flow. Hitting that meant training a custom TTS model and ruthlessly cutting milliseconds from every part of the pipeline.

Since 2023 I've refactored the system multiple times as better models came out. The current version finally adds what I always wanted: vision, memory, and tool use via MCP.

She sees through a camera, hears through a microphone, speaks through a speaker, and judges you accordingly.

[Join our Discord!](https://discord.com/invite/ERTDKwpjNB) | [Sponsor the project](https://ko-fi.com/dnhkng)

Demo video: LocalGLaDOS.mp4

Vision

"We've both said a lot of things that you're going to regret" - GLaDOS

Most voice assistants wait for wake words. GLaDOS doesn't wait—she observes, thinks, and speaks when she has something to say. All the while, parts of her mind are tracking what she sees, monitoring system stats, and researching new neurotoxin recipes online.

Goals:

  • Proactive behavior: React to events (vision, sound, time) without being prompted
  • Emotional state: PAD model (Pleasure-Arousal-Dominance) for reactive mood
  • Persistent personality: HEXACO traits provide stable character across sessions
  • Multi-agent architecture: Subagents handle research, memory, emotions; main agent stays focused
  • Real-time conversation: Optimized latency, natural interruption handling

What's New

  • Emotions: PAD model for reactive mood + HEXACO traits for persistent personality (see the sketch after this list)
  • Long-term Memory: Facts, preferences, and conversation summaries persist across sessions
  • Observer Agent: Constitutional AI monitors behavior and self-adjusts within bounds
  • Vision: FastVLM gives her eyes. [Details](vision.md) | [Demo](https://www.youtube.com/watch?v=JDd9Rc4toEo)
  • Autonomy: She watches, waits, and speaks when she has something to say. [Details](autonomy.md)
  • MCP Tools: Extensible tool system for home automation, system info, etc. [Details](mcp.md)
  • 8GB SBC: Runs on a Rock5b with RK3588 NPU. [Branch](https://github.com/dnhkng/RKLLM-Gradio)
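
For a feel of how the emotion layer fits together, here is a generic PAD-over-HEXACO sketch (not the project's actual classes): mood is a three-value Pleasure-Arousal-Dominance vector that reacts to events and decays back toward a baseline set by the fixed personality traits.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalState:
    """Generic PAD-over-HEXACO sketch; not GLaDOS's actual implementation."""
    # Baseline derived from fixed HEXACO-style traits (persistent personality).
    baseline: tuple[float, float, float] = (-0.4, 0.3, 0.9)
    # Current Pleasure-Arousal-Dominance mood (reactive state).
    pad: list[float] = field(default_factory=lambda: [-0.4, 0.3, 0.9])

    def react(self, delta: tuple[float, float, float]) -> None:
        """An event nudges the mood, clamped to [-1, 1] per axis."""
        self.pad = [max(-1.0, min(1.0, p + d)) for p, d in zip(self.pad, delta)]

    def decay(self, rate: float = 0.1) -> None:
        """Each tick, mood relaxes back toward the trait-derived baseline."""
        self.pad = [p + rate * (b - p) for p, b in zip(self.pad, self.baseline)]

mood = EmotionalState()
mood.react((-0.3, 0.4, 0.0))  # user insulted the cake: displeasure and arousal up
mood.decay()                  # ...which slowly fades back to smug superiority
```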

Roadmap

"Federal regulations require me to warn you that this next test chamber... is looking pretty good.” - GLaDOS

There's still a lot to do. I'll keep swapping out models as they are released, then move on to animatronics once a good model with inverse kinematics comes out. There was a time when I would have coded that myself; these days it makes more sense to wait until a trained model is released!

  • Train GLaDOS voice
  • Personality that actually sounds like her
  • Vision via VLM
  • Autonomy (proactive behavior)
  • MCP tool system
  • Emotional state (PAD + HEXACO model)
  • Long-term memory
  • Implement streaming ASR (nvidia/multitalker-parakeet-streaming-0.6b-v1)
  • Observer agent (behavior adjustment)
  • 3D-printable enclosure
  • Animatronics

Architecture

"Let's be honest. Neither one of us knows what that thing does. Just put it in the corner and I'll deal with it later." - GLaDOS

```mermaid
flowchart TB
    subgraph Input
        mic[🎤 Microphone] --> vad[VAD] --> asr[ASR]
        text[⌨️ Text Input]
        tick[⏱️ Timer]
        cam[📷 Camera]--> vlm[VLM]
    end
    subgraph Minds["Subagents"]
        sensors[Sensors]
        weather[Weather]
        emotion[Emotion]
        news[News]
        memory[Memory]
    end
    ctx[📋 Context]
    subgraph Core["Main Agent"]
        llm[🧠 LLM]
        tts[TTS]
    end
    subgraph Output
        speaker[🔊 Speaker]
        logs[Logs]
        images[🖼️ Images]
        motors[⚙️ Animatronics]
    end
    asr -->|priority| llm
    text -->|priority| llm
    vlm --> ctx
    tick -->|autonomy| llm
    Minds -->|write| ctx
    ctx -->|read| llm
    llm --> tts --> speaker
    llm --> logs
    llm <-->|MCP| tools[Tools]
    tools --> images
    tools --> motors
```

GLaDOS runs a loop: each tick she reads her slots (weather, news, vision, mood), decides if she has something to say, and speaks. No wake word—if she has an opinion, you'll hear it.

Two lanes: Your speech jumps the queue (priority lane). The autonomy lane is just the loop running in the background. User always wins.
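
A minimal sketch of that two-lane dispatch: user speech and typed input land in the priority lane, autonomy ticks wait in the background lane. The names here are illustrative, not the project's actual internals.

```python
import queue

priority_lane = queue.Queue()  # ASR / text input
autonomy_lane = queue.Queue()  # tick-driven autonomy prompts

def next_request():
    """User always wins: drain the priority lane before touching autonomy."""
    try:
        return priority_lane.get_nowait()
    except queue.Empty:
        return autonomy_lane.get()  # block until a tick produces something
```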

Audio Pipeline
```mermaid
flowchart LR
    subgraph Capture["Audio Capture"]
        mic[Microphone<br/>16kHz]
        vad[Silero VAD<br/>32ms chunks]
        buffer[Pre-activation<br/>Buffer 800ms]
    end
    subgraph Recognition["Speech Recognition"]
        detect[Voice Detected<br/>VAD > 0.8]
        accumulate[Accumulate<br/>Speech]
        silence[Silence Detection<br/>640ms pause]
        asr[Parakeet ASR]
    end
    subgraph Interruption["Interruption Handling"]
        speaking{Speaking?}
        stop[Stop Playback]
        clip[Clip Response]
    end
    mic --> vad --> buffer
    buffer --> detect --> accumulate
    accumulate --> silence --> asr
    detect --> speaking
    speaking -->|Yes| stop --> clip
```

  • Microphone captures at 16kHz mono
  • Silero VAD processes 32ms chunks, triggers at probability > 0.8
  • Pre-activation buffer preserves 800ms before voice detected
  • Silence detection waits 640ms pause before finalizing
  • Interruption stops playback and clips the response in conversation history
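
A sketch of the gating logic implied by the numbers above (16kHz mono, 32ms chunks, 0.8 trigger, 800ms pre-buffer, 640ms silence). The VAD probability would come from Silero; everything else is illustrative, not the actual listener code.

```python
from collections import deque

CHUNK_MS = 32
PRE_BUFFER = deque(maxlen=800 // CHUNK_MS)  # 800 ms of pre-activation audio
SILENCE_LIMIT = 640 // CHUNK_MS             # finalize after 640 ms of quiet

speech: list[bytes] = []
silent_chunks = 0
active = False

def on_chunk(chunk: bytes, vad_prob: float) -> bytes | None:
    """Feed one 32 ms chunk; returns a finished utterance for ASR, or None."""
    global silent_chunks, active
    if not active:
        PRE_BUFFER.append(chunk)
        if vad_prob > 0.8:              # voice detected
            speech.extend(PRE_BUFFER)   # keep the audio from before the trigger
            active = True
    else:
        speech.append(chunk)
        silent_chunks = silent_chunks + 1 if vad_prob <= 0.8 else 0
        if silent_chunks >= SILENCE_LIMIT:  # long enough pause: hand to ASR
            utterance = b"".join(speech)
            speech.clear()
            PRE_BUFFER.clear()
            active, silent_chunks = False, 0
            return utterance
    return None
```
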
Thread Architecture
| Thread | Class | Daemon | Priority | Queue | Purpose |
|---|---|---|---|---|---|
| SpeechListener | `SpeechListener` | ✓ | INPUT | — | VAD + ASR |
| TextListener | `TextListener` | ✓ | INPUT | — | Text input |
| LLMProcessor | `LanguageModelProcessor` | ✗ | PROCESSING | `llm_queue_priority` | Main LLM |
| LLMProcessor-Auto-N | `LanguageModelProcessor` | ✗ | PROCESSING | `llm_queue_autonomy` | Autonomy LLM |
| ToolExecutor | `ToolExecutor` | ✗ | PROCESSING | `tool_calls_queue` | Tool execution |
| TTSSynthesizer | `TextToSpeechSynthesizer` | ✗ | OUTPUT | `tts_queue` | Voice synthesis |
| AudioPlayer | `SpeechPlayer` | ✗ | OUTPUT | `audio_queue` | Playback |
| AutonomyLoop | `AutonomyLoop` | ✓ | BACKGROUND | — | Tick orchestration |
| VisionProcessor | `VisionProcessor` | ✓ | BACKGROUND | `vision_request_queue` | Vision analysis |

Daemon threads can be killed on exit. Non-daemon threads must complete gracefully to preserve state (e.g., conversation history).

Shutdown order: INPUT → PROCESSING → OUTPUT → BACKGROUND → CLEANUP
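
An illustrative shutdown routine following that stage order. The stage grouping and the function name are assumptions for the sketch, not the actual GLaDOS API.

```python
import threading

def shutdown(threads_by_stage: dict[str, list[threading.Thread]]) -> None:
    for stage in ("INPUT", "PROCESSING", "OUTPUT", "BACKGROUND"):
        for worker in threads_by_stage.get(stage, []):
            if not worker.daemon:
                worker.join()  # non-daemon: wait so conversation history is saved
            # daemon threads are simply abandoned; process exit kills them
    # CLEANUP stage: flush logs, persist memory, close audio devices, etc.
```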

Context Building
```mermaid
flowchart TB
    subgraph Sources["Context Sources"]
        sys[System Prompt<br/>Personality]
        slots[Task Slots<br/>Weather, News, etc.]
        prefs[User Preferences]
        const[Constitutional<br/>Modifiers]
        mcp[MCP Resources]
        vision[Vision State]
    end
    subgraph Builder["Context Builder"]
        merge[Priority-Sorted<br/>Merge]
    end
    subgraph Final["LLM Request"]
        messages[System Messages]
        history[Conversation<br/>History]
        user[User Message]
    end
    Sources --> merge --> messages
    messages --> history --> user
```

What the LLM sees on each request:

  1. System prompt with personality
  2. Task slots (weather, news, vision state, emotion)
  3. User preferences from memory
  4. Constitutional modifiers (behavior adjustments from observer)
  5. MCP resources (dynamic tool descriptions)
  6. Conversation history (compacted when exceeding token threshold)
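
A hedged sketch of that priority-sorted merge: each source contributes a system message in the order listed, then history and the user turn are appended. Function and variable names are illustrative stand-ins.

```python
def build_request(sources: list[tuple[int, str]], history: list[dict],
                  user_text: str) -> list[dict]:
    messages = [{"role": "system", "content": text}
                for _, text in sorted(sources)]           # priority-sorted merge
    messages += history                                   # compacted history
    messages.append({"role": "user", "content": user_text})
    return messages

request = build_request(
    sources=[(1, "You are GLaDOS..."),               # system prompt / personality
             (2, "Weather: rain. Mood: smug."),      # task slots
             (3, "User prefers dark humor.")],       # preferences from memory
    history=[],
    user_text="What's the weather?",
)
```
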
Autonomy System
```mermaid
flowchart TB
    subgraph Triggers
        tick[⏱️ Time Tick]
        vision[📷 Vision Event]
        task[📋 Task Update]
    end
    subgraph Loop["Autonomy Loop"]
        bus[Event Bus]
        cooldown{Cooldown<br/>Passed?}
        build[Build Context<br/>from Slots]
        dispatch[Dispatch to<br/>LLM Queue]
    end
    subgraph Agents["Subagents"]
        emotion[Emotion Agent<br/>PAD Model]
        compact[Compaction Agent<br/>Token Management]
        observer[Observer Agent<br/>Behavior Adjustment]
        weather[Weather Agent]
        news[HN Agent]
    end
    Triggers --> bus --> cooldown
    cooldown -->|Yes| build --> dispatch
    Agents -->|write| slots[Task Slots]
    slots -->|read| build
```

Each subagent runs its own loop: timer or camera triggers it, it makes an LLM decision, and writes to a slot the main agent reads. Fully async—subagents never block the main conversation.
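
A minimal sketch of that slot pattern: a subagent periodically asks its own LLM a question and writes the answer into a shared slot the main agent reads while building context. The slot name and the `llm()` stub are hypothetical.

```python
import threading
import time

slots: dict[str, str] = {}
slots_lock = threading.Lock()

def llm(prompt: str) -> str:            # stand-in for a real LLM call
    return "Rain again. How fitting."

def weather_agent(interval_s: float = 900.0) -> None:
    while True:
        summary = llm("Summarize the current weather in one snarky line.")
        with slots_lock:
            slots["weather"] = summary  # main agent reads this; never blocked
        time.sleep(interval_s)

threading.Thread(target=weather_agent, daemon=True).start()
```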

See autonomy.md for details.

Tool Execution
```mermaid
sequenceDiagram
    participant LLM
    participant Executor as Tool Executor
    participant MCP as MCP Server
    participant Native as Native Tool
    LLM->>Executor: tool_call {name, args}
    alt MCP Tool (mcp.*)
        Executor->>MCP: call_tool(server, tool, args)
        MCP-->>Executor: result
    else Native Tool
        Executor->>Native: run(tool_call_id, args)
        Native-->>Executor: result
    end
    Executor->>LLM: {role: tool, content: result}
```

Native tools: speak, do_nothing, get_user_preferences, set_user_preferences

MCP tools: Prefixed with server name (e.g., mcp.system_info.get_cpu). Supports stdio, HTTP, and SSE transports.
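
A sketch of that routing rule: names prefixed with `mcp.` go to the matching MCP server, anything else is a native tool. `NATIVE_TOOLS` and `call_mcp` are illustrative stand-ins, not the actual executor API.

```python
NATIVE_TOOLS = {
    "speak": lambda args: f"(spoken) {args['text']}",
    "do_nothing": lambda args: "",
}

def execute(tool_call: dict, call_mcp) -> dict:
    name, args = tool_call["name"], tool_call.get("args", {})
    if name.startswith("mcp."):
        _, server, tool = name.split(".", 2)   # e.g. mcp.system_info.get_cpu
        result = call_mcp(server, tool, args)
    else:
        result = NATIVE_TOOLS[name](args)
    return {"role": "tool", "content": result}
```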

See mcp.md for configuration.

Components

"All these science spheres are made out of asbestos, by the way. Keeps out the rats. Let us know if you feel a shortness of breath, a persistent dry cough, or your heart stopping. Because that's not part of the test. That's asbestos." - Cave Johnson

| Component | Technology | Purpose | Status |
|---|---|---|---|
| Speech Recognition | Parakeet TDT (ONNX) | Speech-to-text, 16kHz streaming | ✅ |
| Voice Activity | Silero VAD (ONNX) | Detect speech, 32ms chunks | ✅ |
| Voice Synthesis | Kokoro / GLaDOS TTS | Text-to-speech, streaming | ✅ |
| Interruption | VAD + Playback Control | Talk over her, she stops | ✅ |
| Vision | FastVLM (ONNX) | Scene understanding, change detection | ✅ |
| LLM | OpenAI-compatible API | Reasoning, tool use, streaming | ✅ |
| Tools | MCP Protocol | Extensibility, stdio/HTTP/SSE | ✅ |
| Autonomy | Subagent Architecture | Proactive behavior, tick loop | ✅ |
| Conversation | ConversationStore | Thread-safe history | ✅ |
| Compaction | LLM Summarization | Token management | ✅ |
| Emotional State | PAD + HEXACO | Reactive mood, persistent personality | ✅ |
| Long-term Memory | MCP + Subagent | Facts, preferences, summaries | ✅ |
| Observer Agent | Constitutional AI | Behavior adjustment | ✅ |

✅ = Done | 🔨 = In progress

Quick Start

"The Enrichment Center is required to remind you that the Weighted Companion Cube cannot talk. In the event that it does talk The Enrichment Centre asks you to ignore its advice." - GLaDOS

  1. Install Ollama and grab a model:

    ollama pull llama3.2

  2. Clone and install:

    git clone https://github.com/dnhkng/GLaDOS.git
    cd GLaDOS
    python scripts/install.py
  3. Run:

    uv run glados          # Voice mode
    uv run glados tui      # Text interface

Installation

GPU Setup (recommended)

  • NVIDIA: Install [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)
  • AMD/Intel: Install the appropriate [ONNX Runtime](https://onnxruntime.ai/docs/install/)

Works without GPU, just slower.

LLM Backend

GLaDOS needs an LLM. Options:

  1. Ollama (easiest): ollama pull llama3.2
  2. Any OpenAI-compatible API

Configure in glados_config.yaml:

completion_url: "http://localhost:11434/v1/chat/completions"
model: "llama3.2"
api_key: ""  # if needed

Platform Notes

Linux:

sudo apt install libportaudio2

Windows: Install Python 3.12 from Microsoft Store.

macOS: Experimental. Check Discord for help.

Install

git clone https://github.com/dnhkng/GLaDOS.git
cd GLaDOS
python scripts/install.py

Usage

uv run glados                           # Voice mode
uv run glados tui                       # Text UI
uv run glados start --input-mode text   # Text only
uv run glados start --input-mode both   # Voice + text
uv run glados say "The cake is a lie"   # Just TTS

TUI Controls

Press Ctrl+P to open the command palette. Available commands:

| Command | What it does |
|---|---|
| Status | System overview |
| Speech Recognition | Toggle ASR on/off |
| Text-to-Speech | Toggle TTS on/off |
| Config | View configuration |
| Memory | Long-term memory stats |
| Knowledge | Manage user facts |

Keyboard Shortcuts:

  • Ctrl+P - Command palette
  • F1 - Help screen
  • Ctrl+D/L/S/A/U/M - Toggle panels (Dialog, Logs, Status, Autonomy, Queue, MCP)
  • Ctrl+I - Toggle right info panels
  • Ctrl+R - Restore all panels
  • Esc - Close dialogs

Configuration

"As part of a required test protocol, we will not monitor the next test chamber. You will be entirely on your own. Good luck." - GLaDOS

Change the LLM

Pull a different model with Ollama, then point glados_config.yaml at it:
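
The keys are the same ones from the LLM Backend section above; the model name here is just an example:

```yaml
completion_url: "http://localhost:11434/v1/chat/completions"
model: "llama3.2"   # swap in any model you've pulled
```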

Browse models: ollama.com/library

Change the Voice

“I'm speaking in an accent that is beyond her range of hearing.” - Wheatley

Kokoro voices in glados_config.yaml:

**Female US:** af_alloy, af_aoede, af_jessica, af_kore, af_nicole, af_nova, af_river, af_sarah, af_sky
**Female UK:** bf_alice, bf_emma, bf_isabella, bf_lily
**Male US:** am_adam, am_echo, am_eric, am_fenrir, am_liam, am_michael, am_onyx, am_puck
**Male UK:** bm_daniel, bm_fable, bm_george, bm_lewis
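
To switch voices, set the voice in glados_config.yaml. The key name below is an assumption for illustration; check configs/glados_config.yaml for the exact field:

```yaml
voice: "af_sky"   # hypothetical key name -- verify against configs/glados_config.yaml
```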

Custom Personality

Copy configs/glados_config.yaml, edit the personality:

personality_preprompt:
  - system: "You are a sarcastic AI who judges humans."
  - user: "What do you think of my code?"
  - assistant: "I've seen better output from a random number generator."

Run with:

uv run glados start --config configs/your_config.yaml

MCP Servers

Add tools in glados_config.yaml:

mcp_servers:
  - name: "system_info"
    transport: "stdio"
    command: "python"
    args: ["-m", "glados.mcp.system_info_server"]

Built-in: system_info, time_info, disk_info, network_info, process_info, power_info, memory
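
HTTP and SSE transports are also supported (see Tool Execution above). A hypothetical HTTP entry might look like the following; every field apart from name and transport is an assumption, so consult mcp.md for the real schema:

```yaml
mcp_servers:
  - name: "home_assistant"
    transport: "http"
    url: "http://homeassistant.local:8123/mcp"   # hypothetical endpoint field
```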

See mcp.md for Home Assistant integration.

TTS API Server

Expose Kokoro as an OpenAI-compatible TTS endpoint:

python scripts/install.py --api
./scripts/serve

Or Docker:

docker compose up -d --build

Generate speech:

curl -X POST http://localhost:5050/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello.", "voice": "glados"}' \
  --output speech.mp3

Troubleshooting

"No one will blame you for giving up. In fact, quitting at this point is a perfectly reasonable response." - GLaDOS

She keeps responding to herself: Use headphones or a mic with echo cancellation. Or set interruptible: false.

Windows DLL error: Install Visual C++ Redistributable.

Development

Explore the models:

jupyter notebook demo.ipynb

Star History

[Star History Chart](https://star-history.com/#dnhkng/GlaDOS&Date)

Sponsors
