🔬 Technical Deep-Dive

How QphiQ Actually Works

A transparent look at our architecture, what AI can and can't do, and honest answers about multi-agent systems.

🧠 What LLMs Can (and Can't) Do

Large Language Models (LLMs) like GPT-4, Claude, and Gemini are text-in, text-out machines. They receive text, process it, and generate text back. That's it. They cannot:

Cannot Access the Internet

LLMs don't browse the web; they only see what's in the prompt

Cannot Make API Requests

They can't fetch data from databases or external services

Cannot Read Files Directly

Files must be converted to text and put in the prompt

Cannot Query Databases

Your application code must handle data retrieval

💡 Key Insight

LLMs only "see" what your application code feeds them. When QphiQ shows AI analysis, the AI isn't fetching data—our backend code does, then packages it into a prompt for the AI to analyze.
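To make that concrete: in code, an LLM call is just text in, text out. Here is a minimal sketch using the Anthropic SDK; the model name is one example, and `paperText` stands in for whatever text the backend has already gathered.

// The model sees ONLY the text placed in `messages` - it fetches nothing on its own
const reply = await anthropic.messages.create({
  model: 'claude-3-haiku-20240307',
  max_tokens: 500,
  messages: [{ role: 'user', content: `Summarize these papers:\n${paperText}` }]
})
console.log(reply.content[0].text) // plain text back out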

⚙️ What Actually Happens

// QphiQ Backend (Simplified)
// Step 1: Our code makes API calls and parses the JSON responses
const openAlexData = await (await fetch('https://api.openalex.org/works?search=CRISPR')).json()
const semanticData = await (await fetch('https://api.semanticscholar.org/...')).json()

// Step 2: Our code handles cross-validation WITH real data
const crossValidated = mergePapers(openAlexData, semanticData)

// Step 3: Our code sends the validated data to the LLM for synthesis
const synthesis = await anthropic.messages.create({
  model: 'claude-3-haiku-20240307',
  max_tokens: 1024,
  messages: [{ role: 'user', content: `Analyze these papers: ${JSON.stringify(crossValidated)}` }]
})

// Step 4: Return combined results
return { papers: crossValidated, analysis: synthesis }

The Backend is the Middleman:

1. Fetches raw data from APIs (OpenAlex, Semantic Scholar, SEC EDGAR)
2. Packages that data into a prompt
3. Sends the prompt to the LLM
4. Returns the LLM's response to the user
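The cross-validation step (the mergePapers call in the code above) is ordinary data-layer code, not an LLM step. A minimal sketch of what such a merge could look like, assuming papers are matched by DOI; the field names are simplified assumptions, not QphiQ's production schema.

// Illustrative only: keep papers that BOTH sources confirm, matched by DOI
function mergePapers(openAlex, semanticScholar) {
  const byDoi = new Map()

  for (const work of openAlex.results ?? []) {
    // OpenAlex returns DOIs as full URLs, e.g. "https://doi.org/10.1234/abc"
    const doi = work.doi?.replace('https://doi.org/', '').toLowerCase()
    if (doi) byDoi.set(doi, { openAlex: work })
  }

  for (const paper of semanticScholar.data ?? []) {
    const doi = paper.externalIds?.DOI?.toLowerCase()
    if (doi && byDoi.has(doi)) byDoi.get(doi).semanticScholar = paper
  }

  // Only papers present in both sources survive cross-validation
  return [...byDoi.values()].filter(p => p.semanticScholar)
}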

🤖 Multi-Agent: What It Really Means

⚠️ Honest Assessment

In QphiQ's current form, "multi-agent" is primarily a presentation layer. Here's what's actually happening versus what it looks like:

What It Looks Like
  • 4 specialized agents debating
  • Agents reaching consensus
  • Real-time debate
  • Different expert perspectives

What It Actually Is
  • Same LLM called 4 times with different prompts
  • Your code averaging their outputs
  • Sequential API calls displayed with animation
  • Different system prompts to the same model

What the Code Actually Does:

// "Multi-agent" = same LLM with different prompts

const literatureAgent = await claude("You are a literature expert. Analyze...")
const trendsAgent = await claude("You are a trends analyst. Analyze...")
const authorAgent = await claude("You are an author expert. Analyze...")
const verifyAgent = await claude("You are a verification expert. Check...")
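The "consensus" on top of those four calls is also plain application code. A minimal sketch, assuming each agent's reply has already been parsed into a numeric score; the field names are illustrative, not QphiQ's actual schema.

// Illustrative only: "consensus" is arithmetic over the four replies
function buildConsensus(agentScores) {
  // e.g. agentScores = { literature: 0.82, trends: 0.74, author: 0.69, verify: 0.91 }
  const values = Object.values(agentScores)
  const average = values.reduce((sum, score) => sum + score, 0) / values.length
  return { consensusScore: average, perAgent: agentScores }
}
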
💡 Why This Still Has Value

A single well-written prompt could produce 80% of the same value. The multi-agent UI adds presentation clarity and demonstrates architectural thinking—valuable for showing engineering sophistication to enterprise buyers.
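For comparison, a hedged sketch of what that single-prompt version could look like, reusing the crossValidated data from the backend example; the prompt wording is illustrative.

// Illustrative only: one combined prompt covering all four "agent" roles
const combined = await anthropic.messages.create({
  model: 'claude-3-haiku-20240307',
  max_tokens: 2000,
  messages: [{
    role: 'user',
    content: `Analyze these papers as (1) a literature expert, (2) a trends analyst, ` +
             `(3) an author expert, and (4) a verifier. Note where the perspectives disagree.\n\n` +
             JSON.stringify(crossValidated)
  }]
})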

🎯 What "Real" Multi-Agent Would Look Like

QphiQ Today
  • All agents use same RAW data
  • Agents can't update state or react to each other
  • Same base LLM model with different prompts
  • "Consensus" = averaging scores

Real Multi-Agent (see the sketch below)
  • Each agent queries DIFFERENT data sources
  • Agents respond to each other's outputs
  • Different specialized models
  • Agents can disagree and flag conflicts

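Here is a hedged sketch of what that orchestration could look like. Nothing below is QphiQ's implementation; the agent roles and helper names are assumptions for illustration.

// Illustrative only: an orchestration loop where agents react to each other's output
// `agents` is any object of async functions, each backed by a different data source
async function realMultiAgent(question, agents) {
  const literature = await agents.literature(question)   // e.g. backed by OpenAlex
  const filings = await agents.filings(question)         // e.g. backed by SEC EDGAR

  // The verifier reads both outputs and can push back
  const review = await agents.verify({ literature, filings })

  if (review.conflicts.length > 0) {
    // Agents disagree: surface the conflict instead of averaging it away
    return { status: 'disputed', conflicts: review.conflicts, literature, filings }
  }
  return { status: 'agreed', answer: review.synthesis }
}
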
🏗️ Why We Built It This Way

🏛️ Architecture Ready

The structure is in place to evolve into true multi-agent when compute costs allow. Current implementation demonstrates the pattern.

Verified Data

100% of claims come from official sources, not AI hallucination. Cross-validation happens at the data layer, not the LLM layer.

🎨 Clear UX

The nested agent interface makes complex analysis navigable. Agent-like abstractions give users a mental model to work with.

📚 Educational

Shows enterprise buyers multi-agent concepts in action. Demonstrates deep understanding of AI architecture.

Transparency Builds Trust

We believe in being honest about what AI can and can't do. That's how we build products you can actually rely on.