Collecting and recording user feedback is essential for understanding the real-world quality of your GenAI application. MLflow provides a structured approach for capturing feedback as trace assessments, enabling you to track quality over time, identify areas for improvement, and build evaluation datasets from production data.
Prerequisites

Choose the appropriate installation method for your environment:

Production

For production deployments, install the mlflow-tracing package:

pip install --upgrade mlflow-tracing

The mlflow-tracing package is optimized for production use, with minimal dependencies and better performance characteristics.

Development

For development environments, install the full MLflow package with the Databricks extras:

pip install --upgrade "mlflow[databricks]>=3.1"

The full mlflow[databricks] package includes all the features needed for local development and experimentation on Databricks.

The log_feedback API is available in both packages, so you can collect user feedback regardless of which installation method you choose.
Note

Collecting user feedback requires MLflow 3. MLflow 2.x is not supported due to performance limitations and the absence of features that are essential for production use.
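If you want to fail fast on an unsupported version, a minimal startup check is sketched below. This is an optional pattern, not part of the MLflow API, and it assumes the packaging library is available in your environment:

import mlflow
from packaging.version import Version

# Guard against accidentally running on MLflow 2.x, which is not supported.
assert Version(mlflow.__version__) >= Version("3.0"), (
    f"MLflow 3 is required for feedback collection; found {mlflow.__version__}"
)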
Why collect user feedback?

User feedback provides ground truth about how your application actually performs:

- Real-world quality signals - Understand how actual users perceive your application's outputs
- Continuous improvement - Identify patterns in negative feedback to guide development
- Training data creation - Use feedback to build high-quality evaluation datasets
- Quality monitoring - Track satisfaction metrics over time and across user segments
- Model fine-tuning - Use feedback data to improve the underlying models
Feedback types

MLflow supports various types of feedback through its assessment system:

| Feedback type | Description | Common use cases |
|---|---|---|
| Binary feedback | Simple thumbs up/down or correct/incorrect | Quick user satisfaction signals |
| Numeric scores | Ratings on a scale (for example, 1-5 stars) | Detailed quality assessment |
| Categorical feedback | Multiple-choice selections | Classifying issue or response types |
| Text feedback | Free-form comments | Detailed user explanations |
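Each of these types maps directly to the value argument of mlflow.log_feedback. The sketch below illustrates the four value shapes; the trace ID, feedback names, and user ID are placeholders:

import mlflow
from mlflow.entities import AssessmentSource

source = AssessmentSource(source_type="HUMAN", source_id="user-123")  # placeholder user ID
trace_id = "tr-..."  # placeholder: use a real trace ID

# Binary feedback: thumbs up / thumbs down
mlflow.log_feedback(trace_id=trace_id, name="thumbs_up", value=True, source=source)

# Numeric score: for example, a 1-5 star rating
mlflow.log_feedback(trace_id=trace_id, name="stars", value=4, source=source)

# Categorical feedback: one of a fixed set of labels
mlflow.log_feedback(trace_id=trace_id, name="issue_type", value="wrong_format", source=source)

# Text feedback: a free-form comment
mlflow.log_feedback(trace_id=trace_id, name="comment", value="Helpful, but too verbose.", source=source)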
Understanding the feedback data model

In MLflow, user feedback is captured through the Feedback entity, a type of Assessment that can be attached to a trace or to specific spans. The Feedback entity provides a structured way to store:

- Value: the actual feedback (boolean, numeric, text, or structured data)
- Source: information about who or what provided the feedback (a human user, an LLM judge, or code)
- Rationale: an optional explanation of the feedback
- Metadata: additional context, such as timestamps or custom attributes

Understanding this data model helps you design feedback collection systems that integrate seamlessly with MLflow's evaluation and monitoring capabilities. For details on the Feedback entity schema and all available fields, see the "Feedback" section of the tracing data model.
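A single mlflow.log_feedback call can populate all four of these fields. Here is a hedged sketch; the metadata keys are illustrative examples, not a required schema:

import mlflow
from mlflow.entities import AssessmentSource

mlflow.log_feedback(
    trace_id="tr-...",  # placeholder trace ID
    name="user_feedback",
    value=False,  # Value: the feedback itself
    source=AssessmentSource(  # Source: who or what provided it
        source_type="HUMAN",
        source_id="user-123",  # placeholder user ID
    ),
    rationale="The cited document does not mention pricing.",  # Rationale
    metadata={"app_version": "1.4.2", "channel": "web"},  # Metadata: custom context
)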
End-user feedback collection

When implementing feedback collection in a production environment, you need to link user feedback to specific traces. There are two approaches:

- Using client request IDs - Generate your own unique IDs when processing requests and reference them later when feedback arrives
- Using MLflow trace IDs - Use the trace IDs that MLflow generates automatically
Understanding the feedback collection flow

Both approaches follow a similar pattern:

1. During the initial request: the application generates a unique client request ID, or retrieves the MLflow-generated trace ID
2. After the response is received: the user can provide feedback by referencing either ID
3. Feedback is logged: MLflow's log_feedback API creates an assessment attached to the original trace
4. Analysis and monitoring: feedback can be queried and analyzed across all traces
Implementing feedback collection

Approach 1: Using MLflow trace IDs

The simplest approach is to use the trace ID that MLflow automatically generates for each trace. You can retrieve this ID while the request is being processed and return it to the client:

Backend implementation
import mlflow
from fastapi import FastAPI, Query
from mlflow.entities import AssessmentSource
from pydantic import BaseModel
from typing import Optional

app = FastAPI()


class ChatRequest(BaseModel):
    message: str


class ChatResponse(BaseModel):
    response: str
    trace_id: str  # Include the trace ID in the response


@app.post("/chat", response_model=ChatResponse)
@mlflow.trace  # Ensure an active span exists so the trace ID can be retrieved below
def chat(request: ChatRequest):
    """
    Process a chat request and return the trace ID for feedback collection.
    """
    # Your GenAI application logic here
    response = process_message(request.message)  # Replace with your actual processing logic

    # Get the current trace ID
    trace_id = mlflow.get_current_active_span().trace_id

    return ChatResponse(
        response=response,
        trace_id=trace_id
    )


class FeedbackRequest(BaseModel):
    is_correct: bool  # True for thumbs up, False for thumbs down
    comment: Optional[str] = None


@app.post("/feedback")
def submit_feedback(
    trace_id: str = Query(..., description="The trace ID from the chat response"),
    feedback: FeedbackRequest = ...,
    user_id: Optional[str] = Query(None, description="User identifier")
):
    """
    Collect user feedback using the MLflow trace ID.
    """
    # Log the feedback directly using the trace ID
    mlflow.log_feedback(
        trace_id=trace_id,
        name="user_feedback",
        value=feedback.is_correct,
        source=AssessmentSource(
            source_type="HUMAN",
            source_id=user_id
        ),
        rationale=feedback.comment
    )

    return {
        "status": "success",
        "trace_id": trace_id,
    }
Frontend implementation example

Here is an example frontend implementation for a React-based application:
// React example for chat with feedback
import React, { useState } from 'react';

function ChatWithFeedback() {
  const [message, setMessage] = useState('');
  const [response, setResponse] = useState('');
  const [traceId, setTraceId] = useState(null);
  const [feedbackSubmitted, setFeedbackSubmitted] = useState(false);
  const userId = 'user-123'; // Replace with your user identification logic

  const sendMessage = async () => {
    try {
      const res = await fetch('/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message }),
      });
      const data = await res.json();
      setResponse(data.response);
      setTraceId(data.trace_id);
      setFeedbackSubmitted(false);
    } catch (error) {
      console.error('Chat error:', error);
    }
  };

  const submitFeedback = async (isCorrect, comment = null) => {
    if (!traceId || feedbackSubmitted) return;

    try {
      const params = new URLSearchParams({
        trace_id: traceId,
        ...(userId && { user_id: userId }),
      });
      const res = await fetch(`/feedback?${params}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          is_correct: isCorrect,
          comment: comment,
        }),
      });
      if (res.ok) {
        setFeedbackSubmitted(true);
        // Optionally show success message
      }
    } catch (error) {
      console.error('Feedback submission error:', error);
    }
  };

  return (
    <div>
      <input value={message} onChange={(e) => setMessage(e.target.value)} placeholder="Ask a question..." />
      <button onClick={sendMessage}>Send</button>

      {response && (
        <div>
          <p>{response}</p>
          <div className="feedback-buttons">
            <button onClick={() => submitFeedback(true)} disabled={feedbackSubmitted}>
              👍
            </button>
            <button onClick={() => submitFeedback(false)} disabled={feedbackSubmitted}>
              👎
            </button>
          </div>
          {feedbackSubmitted && <span>Thanks for your feedback!</span>}
        </div>
      )}
    </div>
  );
}
Approach 2: Using client request IDs

For more control over request tracking, you can use your own unique client request IDs. This approach is useful when you need to maintain your own request tracking system or integrate with existing infrastructure.

This approach requires you to implement request tracking in which each trace carries a client_request_id attribute. For details on how to attach a client request ID to a trace during the initial request, see Adding context to traces.
Backend implementation
import mlflow
import uuid
from fastapi import FastAPI, HTTPException, Query, Request
from mlflow.client import MlflowClient
from mlflow.entities import AssessmentSource
from pydantic import BaseModel
from typing import Optional

app = FastAPI()


class ChatRequest(BaseModel):
    message: str


class ChatResponse(BaseModel):
    response: str
    client_request_id: str  # Include the client request ID in the response


@app.post("/chat", response_model=ChatResponse)
@mlflow.trace  # Ensure an active trace exists so the client request ID can be attached
def chat(request: ChatRequest):
    """
    Process a chat request and set a client request ID for later feedback collection.
    """
    # Sample: Generate a unique client request ID
    # Normally, this ID would be your app's backend existing ID for this interaction
    client_request_id = f"req-{uuid.uuid4().hex[:8]}"

    # Attach the client request ID to the current trace
    mlflow.update_current_trace(client_request_id=client_request_id)

    # Your GenAI application logic here
    response = process_message(request.message)  # Replace with your actual processing logic

    return ChatResponse(
        response=response,
        client_request_id=client_request_id
    )


class FeedbackRequest(BaseModel):
    is_correct: bool  # True for thumbs up, False for thumbs down
    comment: Optional[str] = None


@app.post("/feedback")
def submit_feedback(
    request: Request,
    client_request_id: str = Query(..., description="The request ID from the original interaction"),
    feedback: FeedbackRequest = ...
):
    """
    Collect user feedback for a specific interaction.

    This endpoint:
    1. Finds the trace using the client request ID
    2. Logs the feedback as an MLflow assessment
    """
    client = MlflowClient()

    # Find the trace using the client request ID
    experiment = client.get_experiment_by_name("/Shared/production-app")
    traces = client.search_traces(
        experiment_ids=[experiment.experiment_id],
        filter_string=f"attributes.client_request_id = '{client_request_id}'",
        max_results=1
    )

    if not traces:
        raise HTTPException(status_code=500, detail="Unexpected error: request not found")

    # Log the feedback as an assessment
    # Assessments are the structured way to attach feedback to traces
    mlflow.log_feedback(
        trace_id=traces[0].info.trace_id,
        name="user_feedback",
        value=feedback.is_correct,
        source=AssessmentSource(
            source_type="HUMAN",  # Indicates this is human feedback
            source_id=request.headers.get("X-User-ID")  # Link feedback to the user who provided it
        ),
        rationale=feedback.comment  # Optional explanation from the user
    )

    return {
        "status": "success",
        "trace_id": traces[0].info.trace_id,
    }
Frontend implementation example

Here is an example frontend implementation for a React-based application. When using client request IDs, the frontend needs to store and manage these IDs:
// React example with session-based request tracking
import React, { useState } from 'react';

function ChatWithRequestTracking() {
  const [message, setMessage] = useState('');
  const [conversations, setConversations] = useState([]);
  const [sessionId] = useState(() => `session-${Date.now()}`);
  const getUserId = () => 'user-123'; // Replace with your user identification method

  const sendMessage = async () => {
    try {
      const res = await fetch('/chat', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-Session-ID': sessionId,
        },
        body: JSON.stringify({ message }),
      });
      const data = await res.json();

      // Store conversation with request ID
      setConversations((prev) => [
        ...prev,
        {
          id: data.client_request_id,
          message: message,
          response: data.response,
          timestamp: new Date(),
          feedbackSubmitted: false,
        },
      ]);
      setMessage('');
    } catch (error) {
      console.error('Chat error:', error);
    }
  };

  const submitFeedback = async (requestId, isCorrect, comment = null) => {
    try {
      const params = new URLSearchParams({
        client_request_id: requestId,
      });
      const res = await fetch(`/feedback?${params}`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-User-ID': getUserId(),
        },
        body: JSON.stringify({
          is_correct: isCorrect,
          comment: comment,
        }),
      });
      if (res.ok) {
        // Mark feedback as submitted
        setConversations((prev) =>
          prev.map((conv) => (conv.id === requestId ? { ...conv, feedbackSubmitted: true } : conv)),
        );
      }
    } catch (error) {
      console.error('Feedback submission error:', error);
    }
  };

  return (
    <div>
      <div className="chat-history">
        {conversations.map((conv) => (
          <div key={conv.id} className="conversation">
            <div className="user-message">{conv.message}</div>
            <div className="bot-response">{conv.response}</div>
            <div className="feedback-section">
              <button onClick={() => submitFeedback(conv.id, true)} disabled={conv.feedbackSubmitted}>
                👍
              </button>
              <button onClick={() => submitFeedback(conv.id, false)} disabled={conv.feedbackSubmitted}>
                👎
              </button>
              {conv.feedbackSubmitted && <span>✓ Feedback received</span>}
            </div>
          </div>
        ))}
      </div>
      <div className="chat-input">
        <input
          value={message}
          onChange={(e) => setMessage(e.target.value)}
          onKeyPress={(e) => e.key === 'Enter' && sendMessage()}
          placeholder="Type your message..."
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}
Key implementation details

AssessmentSource: the AssessmentSource object identifies who or what provided the feedback:

- source_type: either "HUMAN" for user feedback or "LLM_JUDGE" for automated evaluation
- source_id: identifies the specific user or system that provided the feedback

Feedback storage: feedback is stored as a trace assessment, which means:

- It is permanently associated with the specific interaction
- It can be queried together with the trace data
- It is visible in the MLflow UI when you view the trace
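For example, human feedback and automated evaluations share the same source structure and differ only in these two fields. A brief sketch with placeholder IDs:

from mlflow.entities import AssessmentSource

# Feedback provided by an end user
human_source = AssessmentSource(source_type="HUMAN", source_id="user-123")

# The same structure for an automated LLM judge
judge_source = AssessmentSource(source_type="LLM_JUDGE", source_id="quality-judge-v1")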
Handling different feedback types

You can extend either approach to support more complex feedback. Here is an example using trace IDs:
import mlflow
from fastapi import Query
from typing import Optional
from mlflow.entities import AssessmentSource

# Extends the FastAPI `app` defined in the earlier examples


@app.post("/detailed-feedback")
def submit_detailed_feedback(
    trace_id: str,
    accuracy: int = Query(..., ge=1, le=5, description="Accuracy rating from 1-5"),
    helpfulness: int = Query(..., ge=1, le=5, description="Helpfulness rating from 1-5"),
    relevance: int = Query(..., ge=1, le=5, description="Relevance rating from 1-5"),
    user_id: str = Query(..., description="User identifier"),
    comment: Optional[str] = None
):
    """
    Collect multi-dimensional feedback with separate ratings for different aspects.
    Each aspect is logged as a separate assessment for granular analysis.
    """
    # Log each dimension as a separate assessment
    dimensions = {
        "accuracy": accuracy,
        "helpfulness": helpfulness,
        "relevance": relevance
    }

    for dimension, score in dimensions.items():
        mlflow.log_feedback(
            trace_id=trace_id,
            name=f"user_{dimension}",
            value=score / 5.0,  # Normalize to a 0-1 scale
            source=AssessmentSource(
                source_type="HUMAN",
                source_id=user_id
            ),
            rationale=comment if dimension == "accuracy" else None
        )

    return {
        "status": "success",
        "trace_id": trace_id,
        "feedback_recorded": dimensions
    }
Handling feedback with streaming responses

When you use streaming responses (Server-Sent Events or WebSockets), the trace ID is not available until the stream completes. This creates a unique challenge for feedback collection that calls for a different approach.

Why streaming is different

In a traditional request-response pattern, you receive the complete response and the trace ID together. With streaming:

- Tokens arrive incrementally: the response builds up over time as tokens stream from the LLM
- Trace completion is delayed: the trace ID is only generated after the entire stream finishes
- The feedback UI must wait: users cannot provide feedback until they have both the complete response and the trace ID
Backend implementation with SSE

Here is how to implement streaming with the trace ID delivered at the end of the stream:
import asyncio
import json
from datetime import datetime
from typing import AsyncGenerator

import mlflow
from fastapi.responses import StreamingResponse

# Assumes `app` and `ChatRequest` are defined as in the earlier examples,
# and that `your_llm_stream_function` is your async LLM token generator.


@app.post("/chat/stream")
async def chat_stream(request: ChatRequest):
    """
    Stream chat responses with the trace ID sent at completion.
    """
    async def generate() -> AsyncGenerator[str, None]:
        try:
            # Start an MLflow trace for the stream
            with mlflow.start_span(name="streaming_chat") as span:
                # Attach request metadata to the trace as tags
                mlflow.update_current_trace(
                    tags={
                        "request_message": request.message,
                        "stream_start_time": datetime.now().isoformat(),
                    }
                )

                # Stream tokens from your LLM
                full_response = ""
                async for token in your_llm_stream_function(request.message):
                    full_response += token
                    yield f"data: {json.dumps({'type': 'token', 'content': token})}\n\n"
                    await asyncio.sleep(0.01)  # Prevent overwhelming the client

                # Log the complete response to the trace
                span.set_attribute("response", full_response)
                span.set_attribute("token_count", len(full_response.split()))

                # Get the trace ID after completion
                trace_id = span.trace_id

                # Send the trace ID as the final event
                yield f"data: {json.dumps({'type': 'done', 'trace_id': trace_id})}\n\n"

        except Exception as e:
            # Log the error to the trace if one is still active
            if mlflow.get_current_active_span():
                mlflow.update_current_trace(tags={"error": str(e)})
            yield f"data: {json.dumps({'type': 'error', 'error': str(e)})}\n\n"

    return StreamingResponse(
        generate(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "X-Accel-Buffering": "no",  # Disable proxy buffering
        }
    )
Frontend implementation for streaming

Handle the streaming events and enable feedback only once the trace ID has been received:
// React hook for streaming chat with feedback
import React, { useState, useCallback } from 'react';

function useStreamingChat() {
  const [isStreaming, setIsStreaming] = useState(false);
  const [streamingContent, setStreamingContent] = useState('');
  const [traceId, setTraceId] = useState(null);
  const [error, setError] = useState(null);

  const sendStreamingMessage = useCallback(async (message) => {
    // Reset state
    setIsStreaming(true);
    setStreamingContent('');
    setTraceId(null);
    setError(null);

    try {
      const response = await fetch('/chat/stream', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message }),
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        // Keep the last incomplete line in the buffer
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.startsWith('data: ')) {
            try {
              const data = JSON.parse(line.slice(6));
              switch (data.type) {
                case 'token':
                  setStreamingContent((prev) => prev + data.content);
                  break;
                case 'done':
                  setTraceId(data.trace_id);
                  setIsStreaming(false);
                  break;
                case 'error':
                  setError(data.error);
                  setIsStreaming(false);
                  break;
              }
            } catch (e) {
              console.error('Failed to parse SSE data:', e);
            }
          }
        }
      }
    } catch (error) {
      setError(error.message);
      setIsStreaming(false);
    }
  }, []);

  return {
    sendStreamingMessage,
    streamingContent,
    isStreaming,
    traceId,
    error,
  };
}

// Component using the streaming hook
function StreamingChatWithFeedback() {
  const [message, setMessage] = useState('');
  const [feedbackSubmitted, setFeedbackSubmitted] = useState(false);
  const { sendStreamingMessage, streamingContent, isStreaming, traceId, error } = useStreamingChat();

  const handleSend = () => {
    if (message.trim()) {
      setFeedbackSubmitted(false);
      sendStreamingMessage(message);
      setMessage('');
    }
  };

  const submitFeedback = async (isPositive) => {
    if (!traceId || feedbackSubmitted) return;

    try {
      const response = await fetch(`/feedback?trace_id=${traceId}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          is_correct: isPositive,
          comment: null,
        }),
      });

      if (response.ok) {
        setFeedbackSubmitted(true);
      }
    } catch (error) {
      console.error('Feedback submission failed:', error);
    }
  };

  return (
    <div className="streaming-chat">
      <div className="chat-messages">
        {streamingContent && (
          <div className="message assistant">
            {streamingContent}
            {isStreaming && <span className="typing-indicator">...</span>}
          </div>
        )}
        {error && <div className="error-message">Error: {error}</div>}
      </div>

      {/* Feedback buttons - only enabled when the trace ID is available */}
      {streamingContent && !isStreaming && traceId && (
        <div className="feedback-section">
          <span>Was this response helpful?</span>
          <button onClick={() => submitFeedback(true)} disabled={feedbackSubmitted} className="feedback-btn positive">
            👍 Yes
          </button>
          <button onClick={() => submitFeedback(false)} disabled={feedbackSubmitted} className="feedback-btn negative">
            👎 No
          </button>
          {feedbackSubmitted && <span className="feedback-thanks">Thank you!</span>}
        </div>
      )}

      <div className="chat-input-section">
        <input
          type="text"
          value={message}
          onChange={(e) => setMessage(e.target.value)}
          onKeyPress={(e) => e.key === 'Enter' && !isStreaming && handleSend()}
          placeholder="Type your message..."
          disabled={isStreaming}
        />
        <button onClick={handleSend} disabled={isStreaming || !message.trim()}>
          {isStreaming ? 'Streaming...' : 'Send'}
        </button>
      </div>
    </div>
  );
}
Key considerations for streaming

When implementing feedback collection with streaming responses, keep the following in mind:

Trace ID timing: the trace ID only becomes available after streaming completes. Design your UI to handle this gracefully by disabling the feedback controls until the trace ID is received.

Event structure: use a consistent event format with a type field to distinguish content tokens, completion events, and errors. This makes parsing and handling events more reliable.

State management: track the streaming content and the trace ID separately. Reset all state at the start of each new interaction to prevent stale-data issues.

Error handling: include error events in the stream so that failures are handled gracefully. Where possible, make sure errors are logged to the trace for debugging.

Buffer management:

- Use the X-Accel-Buffering: no header to disable proxy buffering
- Implement proper line buffering on the frontend to handle partial SSE messages
- Consider implementing reconnection logic for network interruptions
Performance optimization:

- Add a slight delay between tokens (asyncio.sleep(0.01)) to avoid overwhelming the client
- Batch tokens if they arrive too quickly, as shown in the sketch below
- Consider implementing a backpressure mechanism for slow clients
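As a rough illustration of the batching idea, the helper below groups fast-arriving tokens into batched SSE "token" events, flushing when a small batch fills up or a short time window elapses. The batch size and interval are arbitrary illustrative values, not MLflow recommendations:

import json
import time
from typing import AsyncGenerator, AsyncIterator


async def batched_sse_events(
    token_stream: AsyncIterator[str],
    max_batch_size: int = 5,
    max_wait_seconds: float = 0.05,
) -> AsyncGenerator[str, None]:
    """Group fast-arriving tokens into batched SSE 'token' events."""
    batch: list[str] = []
    last_flush = time.monotonic()

    async for token in token_stream:
        batch.append(token)
        # Flush when the batch is full or the time window has elapsed
        if len(batch) >= max_batch_size or time.monotonic() - last_flush >= max_wait_seconds:
            yield f"data: {json.dumps({'type': 'token', 'content': ''.join(batch)})}\n\n"
            batch.clear()
            last_flush = time.monotonic()

    if batch:  # Flush any remaining tokens when the stream ends
        yield f"data: {json.dumps({'type': 'token', 'content': ''.join(batch)})}\n\n"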
Analyzing feedback data

Once feedback is collected, you can analyze it to gain insight into application quality and user satisfaction.

Viewing feedback in the Trace UI

Feedback attached to a trace is visible in the MLflow Trace UI when you open that trace.

Retrieving traces with feedback via the SDK

First, retrieve traces from a specific time range:
from datetime import datetime, timedelta

from mlflow.client import MlflowClient


def get_recent_traces(experiment_name: str, hours: int = 24):
    """Get traces from the last N hours."""
    client = MlflowClient()

    # Calculate the cutoff time
    cutoff_time = datetime.now() - timedelta(hours=hours)
    cutoff_timestamp_ms = int(cutoff_time.timestamp() * 1000)

    # Resolve the experiment and query its traces
    experiment = client.get_experiment_by_name(experiment_name)
    traces = client.search_traces(
        experiment_ids=[experiment.experiment_id],
        filter_string=f"trace.timestamp_ms > {cutoff_timestamp_ms}"
    )

    return traces
Analyzing feedback patterns via the SDK

Extract and analyze feedback from the traces:
def analyze_user_feedback(traces):
    """Analyze feedback patterns from traces."""
    client = MlflowClient()

    # Initialize counters
    total_traces = len(traces)
    traces_with_feedback = 0
    positive_count = 0
    negative_count = 0

    # Process each trace
    for trace in traces:
        # Get full trace details including assessments
        trace_detail = client.get_trace(trace.info.trace_id)

        if trace_detail.data.assessments:
            traces_with_feedback += 1

            # Count positive/negative feedback
            for assessment in trace_detail.data.assessments:
                if assessment.name == "user_feedback":
                    if assessment.value:
                        positive_count += 1
                    else:
                        negative_count += 1

    # Calculate metrics
    if traces_with_feedback > 0:
        feedback_rate = (traces_with_feedback / total_traces) * 100
        positive_rate = (positive_count / traces_with_feedback) * 100
    else:
        feedback_rate = 0
        positive_rate = 0

    return {
        "total_traces": total_traces,
        "traces_with_feedback": traces_with_feedback,
        "feedback_rate": feedback_rate,
        "positive_rate": positive_rate,
        "positive_count": positive_count,
        "negative_count": negative_count
    }


# Example usage
traces = get_recent_traces("/Shared/production-genai-app", hours=24)
results = analyze_user_feedback(traces)
print(f"Feedback rate: {results['feedback_rate']:.1f}%")
print(f"Positive feedback: {results['positive_rate']:.1f}%")
print(f"Total feedback: {results['traces_with_feedback']} out of {results['total_traces']} traces")
Analyzing multi-dimensional feedback

To review more detailed feedback that includes ratings:
def analyze_ratings(traces):
    """Analyze rating-based feedback."""
    client = MlflowClient()
    ratings_by_dimension = {}

    for trace in traces:
        trace_detail = client.get_trace(trace.info.trace_id)

        if trace_detail.data.assessments:
            for assessment in trace_detail.data.assessments:
                # Look for rating assessments
                if assessment.name.startswith("user_") and assessment.name != "user_feedback":
                    dimension = assessment.name.replace("user_", "")
                    if dimension not in ratings_by_dimension:
                        ratings_by_dimension[dimension] = []
                    ratings_by_dimension[dimension].append(assessment.value)

    # Calculate averages
    average_ratings = {}
    for dimension, scores in ratings_by_dimension.items():
        if scores:
            average_ratings[dimension] = sum(scores) / len(scores)

    return average_ratings


# Example usage
ratings = analyze_ratings(traces)
for dimension, avg_score in ratings.items():
    print(f"{dimension}: {avg_score:.2f}/1.0")
Production considerations

For production deployments, see the production observability with tracing guide, which covers:

- Implementing feedback collection endpoints
- Linking feedback to traces with client request IDs
- Setting up real-time quality monitoring
- Best practices for high-volume feedback processing
Next steps

Continue your journey with these recommended actions and tutorials.

Reference guides

Explore detailed documentation for the concepts and features mentioned in this guide.

- Logging assessments - Learn how feedback is stored as assessments
- Tracing data model - Understand assessments and trace structure
- Querying traces via the SDK - Advanced techniques for analyzing feedback