
Scenario 5: Multi-Model Creative Comparison

Module: Parallel (parallel execution) · Priority: 🟢 P3 (medium) · Business value: offers multiple creative perspectives and improves the user experience

1. Business Background

1.1 Scenario Description

When doing creative work, a user may want to:

  • Compare different models' answers to the same question
  • Analyze the same topic from multiple angles
  • Generate content in several styles at once

1.2 Expected Outcome

Advantages:

  • With 3 models running in parallel, total latency = max(single-model latency)
  • Users can compare the styles of different models
  • More creative options to choose from
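The timing claim above can be sketched with `asyncio`: three simulated model calls run concurrently, so total elapsed time tracks the slowest call rather than the sum. The model names and delays below are illustrative stand-ins, not real API calls.

```python
import asyncio
import time

async def fake_model_call(name: str, delay: float) -> str:
    # Stand-in for a real LLM request; just sleeps for `delay` seconds.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def run_parallel() -> float:
    start = time.perf_counter()
    await asyncio.gather(
        fake_model_call("gpt-4o", 0.3),
        fake_model_call("claude-3.5", 0.2),
        fake_model_call("gemini-pro", 0.1),
    )
    return time.perf_counter() - start

elapsed = asyncio.run(run_parallel())
# Elapsed time is close to max(0.3, 0.2, 0.1) = 0.3s, not the 0.6s sum.
print(f"{elapsed:.2f}s")
```

With real models the same shape holds: total latency is bounded by the slowest model, which is what makes the parallel fan-out worthwhile.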

2. Application Scenarios

2.1 Creative Writing Comparison

2.2 Code Solution Comparison

2.3 Multi-Angle Analysis


3. Code Implementation

3.1 Parallel Creative Service

Create file: services/parallel_creative.py

python
"""并行创意服务

同时调用多个模型,对比输出结果。
"""
from typing import List, Dict, Any, Optional, TypedDict, Annotated
from operator import add
from dataclasses import dataclass, field
import os
import logging
import asyncio

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import START, StateGraph, END

logger = logging.getLogger(__name__)


@dataclass
class ModelConfig:
    """Model configuration."""
    name: str
    display_name: str
    description: str
    system_prompt: Optional[str] = None


# Preset model configurations
MODEL_CONFIGS = {
    "gpt-4o": ModelConfig(
        name="openai/gpt-4o",
        display_name="GPT-4o",
        description="OpenAI's latest flagship model, strong all-round ability",
        system_prompt="You are a highly creative assistant who writes in vivid, engaging language."
    ),
    "claude-3.5": ModelConfig(
        name="anthropic/claude-3.5-sonnet",
        display_name="Claude 3.5",
        description="Anthropic's advanced model, known for nuanced expression",
        system_prompt="You are a thoughtful assistant skilled at deep analysis and elegant writing."
    ),
    "gemini-pro": ModelConfig(
        name="google/gemini-pro-1.5",
        display_name="Gemini Pro",
        description="Google's large model, good at multi-angle thinking",
        system_prompt="You are a versatile assistant who approaches problems from different angles."
    ),
}


class ParallelState(TypedDict):
    """State for parallel execution."""
    prompt: str
    task_type: str  # creative, code, analysis
    results: Annotated[List[Dict], add]  # list reducer: parallel branches append, not overwrite
    final_output: str


class ParallelCreativeService:
    """Parallel creative service."""

    def __init__(self):
        self.api_key = os.getenv("OPENROUTER_API_KEY")
        self.base_url = os.getenv("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1")

    def _get_llm(self, model_name: str):
        """Return an LLM instance for the given model."""
        return ChatOpenAI(
            model=model_name,
            api_key=self.api_key,
            base_url=self.base_url,
            temperature=0.8  # higher temperature for creative tasks
        )

    def create_task_node(self, model_key: str):
        """Factory that builds a task node for one model."""
        config = MODEL_CONFIGS[model_key]

        def task_node(state: ParallelState) -> ParallelState:
            llm = self._get_llm(config.name)

            # Build the system prompt
            system_prompt = config.system_prompt or "You are a helpful assistant."

            # Adjust the prompt for the task type
            task_prompts = {
                "creative": f"{system_prompt}\n\nComplete the creative task in your own distinctive style.",
                "code": f"{system_prompt}\n\nProvide a clear, efficient code solution.",
                "analysis": f"{system_prompt}\n\nAnalyze the question from your area of expertise."
            }

            enhanced_prompt = task_prompts.get(state["task_type"], system_prompt)

            response = llm.invoke([
                SystemMessage(content=enhanced_prompt),
                HumanMessage(content=state["prompt"])
            ])

            return {
                "results": [{
                    "model": config.display_name,
                    "model_key": model_key,
                    "description": config.description,
                    "content": response.content
                }]
            }

        return task_node

    def aggregate_results(self, state: ParallelState) -> ParallelState:
        """Aggregate the per-model results."""
        results = state["results"]

        # Build the comparison output
        output_parts = ["# 📊 Multi-Model Comparison Results\n"]

        for result in results:
            output_parts.append(f"\n## {result['model']}\n")
            output_parts.append(f"*{result['description']}*\n\n")
            output_parts.append(result["content"])
            output_parts.append("\n---\n")

        return {"final_output": "\n".join(output_parts)}

    def build_graph(self, models: List[str], task_type: str = "creative"):
        """Build the parallel workflow graph."""
        if not models:
            models = ["gpt-4o", "claude-3.5", "gemini-pro"]

        builder = StateGraph(ParallelState)

        # Add one node per model
        for model_key in models:
            if model_key in MODEL_CONFIGS:
                builder.add_node(
                    f"task_{model_key}",
                    self.create_task_node(model_key)
                )

        # Add the aggregation node
        builder.add_node("aggregator", self.aggregate_results)

        # Fan-out edges: START points to every task node at once
        for model_key in models:
            if model_key in MODEL_CONFIGS:
                builder.add_edge(START, f"task_{model_key}")

        # Fan-in edges: every task node points to the aggregator
        for model_key in models:
            if model_key in MODEL_CONFIGS:
                builder.add_edge(f"task_{model_key}", "aggregator")

        builder.add_edge("aggregator", END)

        return builder.compile()

    def compare(
        self,
        prompt: str,
        models: Optional[List[str]] = None,
        task_type: str = "creative"
    ) -> Dict[str, Any]:
        """
        Run the multi-model comparison.

        Args:
            prompt: User prompt
            models: Models to use; defaults to gpt-4o, claude-3.5, gemini-pro
            task_type: Task type, one of creative/code/analysis

        Returns:
            Comparison results
        """
        models = models or ["gpt-4o", "claude-3.5", "gemini-pro"]
        graph = self.build_graph(models, task_type)

        result = graph.invoke({
            "prompt": prompt,
            "task_type": task_type,
            "results": [],
            "final_output": ""
        })

        return {
            "results": result["results"],
            "comparison": result["final_output"]
        }
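The fan-in step above relies on the `Annotated[List[Dict], add]` reducer in `ParallelState`: when several parallel branches each return a partial state containing `results`, LangGraph merges the lists with `operator.add` (list concatenation) instead of letting the last writer win. A minimal sketch of that merge, without LangGraph itself (the update dicts are illustrative):

```python
from operator import add

# Partial state updates, as each parallel branch would return them.
update_a = {"results": [{"model": "GPT-4o", "content": "draft A"}]}
update_b = {"results": [{"model": "Claude 3.5", "content": "draft B"}]}

state = {"results": []}
for update in (update_a, update_b):
    # The reducer concatenates lists rather than replacing the key.
    state["results"] = add(state["results"], update["results"])

print(len(state["results"]))  # → 2
```

Without the reducer annotation, each branch's write to `results` would overwrite the previous one and the aggregator would only ever see a single model's output.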

3.2 API Endpoint

python
# api/creative.py

from fastapi import APIRouter
from pydantic import BaseModel
from typing import List, Optional
from services.parallel_creative import ParallelCreativeService

router = APIRouter(prefix="/api", tags=["creative"])


class CompareRequest(BaseModel):
    prompt: str
    models: Optional[List[str]] = None
    task_type: str = "creative"


@router.post("/creative/compare")
async def compare_models(request: CompareRequest):
    """
    多模型对比

    同时调用多个模型,返回对比结果。
    """
    service = ParallelCreativeService()

    result = service.compare(
        prompt=request.prompt,
        models=request.models,
        task_type=request.task_type
    )

    return {"success": True, **result}

4. Frontend Display

4.1 Comparison Card Component

html
<!-- Multi-model comparison display -->
<div class="model-comparison-container">
    <div class="comparison-header">
        <h3>📊 Multi-Model Comparison</h3>
        <span class="model-count">3 models participating</span>
    </div>

    <div class="model-cards" id="model-cards">
        <!-- Cards are generated dynamically by JS -->
    </div>

    <div class="comparison-footer">
        <button class="btn-select-all">Select All</button>
        <button class="btn-merge">Merge Results</button>
    </div>
</div>

4.2 JavaScript Implementation

javascript
// static/js/model-comparison.js

class ModelComparison {
    constructor() {
        this.results = [];
        this.selectedModels = new Set();
    }

    async compare(prompt, models = ['gpt-4o', 'claude-3.5', 'gemini-pro'], taskType = 'creative') {
        // Show loading state
        this.showLoading(models);

        try {
            const response = await fetch('/api/creative/compare', {
                method: 'POST',
                headers: {'Content-Type': 'application/json'},
                body: JSON.stringify({
                    prompt,
                    models,
                    task_type: taskType
                })
            });

            const data = await response.json();
            if (data.success) {
                this.results = data.results;
                this.renderComparison();
            }
        } catch (error) {
            this.showError(error);
        }
    }

    showLoading(models) {
        const container = document.getElementById('model-cards');
        container.innerHTML = models.map(model => `
            <div class="model-card loading" data-model="${model}">
                <div class="model-header">
                    <span class="model-name">${this.getModelName(model)}</span>
                    <div class="loading-spinner"></div>
                </div>
                <div class="model-content">
                    <div class="skeleton-text"></div>
                    <div class="skeleton-text"></div>
                    <div class="skeleton-text short"></div>
                </div>
            </div>
        `).join('');
    }

    renderComparison() {
        const container = document.getElementById('model-cards');
        container.innerHTML = this.results.map((result, index) => `
            <div class="model-card" data-model="${result.model_key}">
                <div class="model-header">
                    <span class="model-name">${result.model}</span>
                    <label class="select-checkbox">
                        <input type="checkbox"
                               onchange="modelComparison.toggleSelect('${result.model_key}')">
                        <span>Select</span>
                    </label>
                </div>
                <div class="model-description">${result.description}</div>
                <div class="model-content">${this.formatContent(result.content)}</div>
                <div class="model-footer">
                    <button onclick="modelComparison.copyContent(${index})">📋 复制</button>
                    <button onclick="modelComparison.expandContent(${index})">📖 展开</button>
                </div>
            </div>
        `).join('');
    }

    formatContent(content) {
        // Basic formatting: truncate long content behind an expandable preview
        if (content.length > 500) {
            return `<div class="content-preview">${content.slice(0, 500)}...</div>
                    <div class="content-full" style="display:none">${content}</div>`;
        }
        return content;
    }

    toggleSelect(modelKey) {
        if (this.selectedModels.has(modelKey)) {
            this.selectedModels.delete(modelKey);
        } else {
            this.selectedModels.add(modelKey);
        }
    }

    copyContent(index) {
        const content = this.results[index].content;
        navigator.clipboard.writeText(content);
        this.showToast('Copied to clipboard');
    }

    expandContent(index) {
        const card = document.querySelector(`[data-model="${this.results[index].model_key}"]`);
        const preview = card.querySelector('.content-preview');
        const full = card.querySelector('.content-full');

        if (preview && full) {
            preview.style.display = 'none';
            full.style.display = 'block';
        }
    }

    getModelName(key) {
        const names = {
            'gpt-4o': 'GPT-4o',
            'claude-3.5': 'Claude 3.5',
            'gemini-pro': 'Gemini Pro'
        };
        return names[key] || key;
    }

    showToast(message) {
        const toast = document.createElement('div');
        toast.className = 'toast';
        toast.textContent = message;
        document.body.appendChild(toast);
        setTimeout(() => toast.remove(), 3000);
    }
}

const modelComparison = new ModelComparison();

4.3 CSS Styles

css
/* static/css/model-comparison.css */

.model-comparison-container {
    margin: 20px 0;
    border-radius: 12px;
    overflow: hidden;
    box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}

.comparison-header {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    color: white;
    padding: 16px 20px;
    display: flex;
    justify-content: space-between;
    align-items: center;
}

.model-count {
    background: rgba(255, 255, 255, 0.2);
    padding: 4px 12px;
    border-radius: 20px;
    font-size: 12px;
}

.model-cards {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
    gap: 0;
}

.model-card {
    border-right: 1px solid #eee;
    background: white;
}

.model-card:last-child {
    border-right: none;
}

.model-card.loading {
    pointer-events: none;
}

.model-header {
    background: #f8f9fa;
    padding: 12px 16px;
    display: flex;
    justify-content: space-between;
    align-items: center;
    border-bottom: 1px solid #eee;
}

.model-name {
    font-weight: 600;
    color: #333;
}

.model-description {
    padding: 8px 16px;
    font-size: 12px;
    color: #666;
    background: #fafafa;
}

.model-content {
    padding: 16px;
    font-size: 14px;
    line-height: 1.6;
    max-height: 400px;
    overflow-y: auto;
}

.model-footer {
    padding: 12px 16px;
    border-top: 1px solid #eee;
    display: flex;
    gap: 10px;
}

.model-footer button {
    background: none;
    border: 1px solid #ddd;
    padding: 6px 12px;
    border-radius: 4px;
    cursor: pointer;
    font-size: 12px;
}

.model-footer button:hover {
    background: #f5f5f5;
}

/* Loading animation */
.loading-spinner {
    width: 20px;
    height: 20px;
    border: 2px solid #f3f3f3;
    border-top: 2px solid #667eea;
    border-radius: 50%;
    animation: spin 1s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

.skeleton-text {
    background: linear-gradient(90deg, #f0f0f0 25%, #e0e0e0 50%, #f0f0f0 75%);
    background-size: 200% 100%;
    animation: shimmer 1.5s infinite;
    height: 16px;
    margin: 10px 0;
    border-radius: 4px;
}

.skeleton-text.short {
    width: 60%;
}

@keyframes shimmer {
    0% { background-position: -200% 0; }
    100% { background-position: 200% 0; }
}

/* Select checkbox */
.select-checkbox {
    display: flex;
    align-items: center;
    gap: 6px;
    font-size: 12px;
    color: #666;
    cursor: pointer;
}

5. Usage Examples

5.1 User Interaction Flow

5.2 Real-World Example

Scenario: creative writing

User input: "Write a piece of product copy recommending a smartwatch"

markdown
# 📊 Multi-Model Comparison Results

## GPT-4o
*OpenAI's latest flagship model, strong all-round ability*

## Smartwatch × Future Living

Don't want to miss important messages, but don't want to be tied to your phone?
With [smartwatch name], everything is within reach at a flick of the wrist.

💪 24-hour health monitoring
📱 Instant message notifications
🎯 50+ workout modes

**Buy now and enjoy the early-bird price!**

---

## Claude 3.5
*Anthropic's advanced model, known for nuanced expression*

In a busy life, we are always looking for the companion that helps us stay composed.

Imagine: waking up and seeing the weather and your schedule with a glance at your wrist;
tracking heart rate and pace in real time mid-run;
checking for urgent messages between meetings with a single look...

That is [smartwatch name]: not just a watch, but the smart assistant on your wrist.

---

## Gemini Pro
*Google's large model, good at multi-angle thinking*

### 🌟 [Smartwatch name] - Wrist Tech, Redefined

| Feature | Benefit |
|------|------|
| 🔋 7-day battery | No frequent charging |
| 💧 50 m water resistance | Swim with it, worry-free |
| 🎨 100+ watch faces | Switch styles on a whim |

> "Technology that makes life simpler."

---

6. Performance Analysis

6.1 Timing Comparison

Time saved:

  • Serial: 9 seconds (3 + 3 + 3)
  • Parallel: 4 seconds (slowest call plus aggregation overhead)
  • Savings: ≈56%
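The savings figure follows directly from summing versus taking the max of the per-model latencies. Using the illustrative durations above (3s per model, with about 1s of aggregation overhead on the parallel path):

```python
durations = [3.0, 3.0, 3.0]      # illustrative per-model call times, in seconds
serial = sum(durations)           # 9.0s: calls run one after another
parallel = max(durations) + 1.0   # 4.0s: slowest call plus ~1s aggregation overhead

savings = (serial - parallel) / serial
print(f"{savings:.0%}")  # → 56%
```

The gain grows with the number of models, since serial time scales with the sum while parallel time stays pinned to the slowest single call.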

7. Implementation Plan

| Step | Task | Estimate |
|------|------|----------|
| 1 | Create services/parallel_creative.py | 2h |
| 2 | Add the API endpoint | 1h |
| 3 | Frontend comparison card component | 2h |
| 4 | Testing and tuning | 1h |
| | **Total** | **6h (0.75 days)** |