| id | text | source |
|---|---|---|
e86eafd513c7-4 | combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
' The president did not mention Ketanji Brown Jackson.\nSOURCES: ../../modules/state_of_the_union.txt'
ConversationalRetri... | https://python.langchain.com/en/latest/integrations/vectara/vectara_chat.html |
e86eafd513c7-5 | result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
res... | https://python.langchain.com/en/latest/integrations/vectara/vectara_chat.html |
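The conversational pattern in these chunks — ask a question, then append the (query, answer) pair to `chat_history` for the next turn — can be sketched with a stand-in chain. Everything below is hypothetical scaffolding: `fake_chain` stands in for a real `ConversationalRetrievalChain`, which would retrieve from Vectara and call an LLM.

```python
def ask(chain_fn, query, chat_history):
    """Run one conversational turn and return (answer, updated_history)."""
    result = chain_fn({"question": query, "chat_history": chat_history})
    return result["answer"], chat_history + [(query, result["answer"])]

def fake_chain(inputs):
    # Stand-in: a real chain would retrieve relevant chunks and call an LLM.
    turn = len(inputs["chat_history"]) + 1
    return {"answer": "answer #%d to: %s" % (turn, inputs["question"])}

history = []
a1, history = ask(fake_chain, "What did the president say about Ketanji Brown Jackson", history)
a2, history = ask(fake_chain, "Did he mention who she succeeded", history)
print(a2)  # the second turn sees one prior (query, answer) pair
```

The key point is that the chain itself is stateless: the caller owns the history and replays it on every call.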
f7414363f71b-0 | Vectara Text Generation
Contents
Prepare Data
Set Up Vector DB
Set Up LLM Chain with Custom Prompt
Generate Text
Vectara Text Generation#
This notebook is based on chat_vector_db and adapted to Vectara.
Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists... | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
f7414363f71b-1 | source_chunks = []
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
for source in sources:
for chunk in splitter.split_text(source.page_content):
source_chunks.append(chunk)
Cloning into '.'...
Set Up Vector DB#
Now that we have the documentation content in chunks, let’s put... | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
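The chunking loop above can be made self-contained with a naive splitter standing in for `CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)`. The greedy packing below is an illustrative assumption, not the library's exact algorithm:

```python
def split_text(text, chunk_size=1024, separator=" "):
    """Greedily pack separator-delimited pieces into chunks of <= chunk_size chars."""
    chunks, current = [], ""
    for piece in text.split(separator):
        candidate = piece if not current else current + separator + piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks

# Stand-in for the loaded documents; the real loop reads source.page_content.
sources = [("word " * 600).strip()]
source_chunks = []
for source in sources:
    for chunk in split_text(source, chunk_size=1024):
        source_chunks.append(chunk)
print(len(source_chunks))  # 3 chunks, none longer than 1024 characters
```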
f7414363f71b-2 | print(chain.apply(inputs))
generate_blog_post("environment variables") | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
f7414363f71b-3 | [{'text': '\n\nEnvironment variables are an essential part of any development workflow. They provide a way to store and access information that is specific to the environment in which the code is running. This can be especially useful when working with different versions of a language or framework, or when running code... | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
f7414363f71b-4 | store values that you want to re-use, but don\'t want to be available in any spawned processes.\n\nFor example, if you wanted to store a value and then use it in a command, you could do something like this:\n\n```sh\nVAR=hello && echo $VAR && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR\'))"\n```\n\nThis woul... | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
f7414363f71b-5 | is important for customizing the'}, {'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables. In this blog ... | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
f7414363f71b-6 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html |
38be06525e19-0 | Deployments
Contents
Streamlit
Gradio (on Hugging Face)
Chainlit
Beam
Vercel
FastAPI + Vercel
Kinsta
Fly.io
DigitalOcean App Platform
Google Cloud Run
SteamShip
Langchain-serve
BentoML
Databutton
Deployments#
So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
38be06525e19-1 | Chainlit doc on the integration with LangChain
Beam#
This repo serves as a template for how to deploy a LangChain app with Beam.
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
Vercel#
A minimal example on how to run LangChain on Vercel using Flask.
FastAPI + Ve... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
38be06525e19-2 | Databutton#
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memo... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
0b1281d9c454-0 | Querying Tabular Data
Contents
Document Loading
Querying
Chains
Agents
Querying Tabular Data#
Conceptual Guide
Lots of data and information is stored in tabular form, whether in CSVs, Excel sheets, or SQL tables.
This page covers all resources available in LangChain for working with data in this format.
D... | https://python.langchain.com/en/latest/use_cases/tabular.html |
79f24370e1e3-0 | Agents
Contents
Create Your Own Agent
Step 1: Create Tools
(Optional) Step 2: Modify Agent
(Optional) Step 3: Modify Agent Executor
Examples
Agents#
Conceptual Guide
Agents can be used for a variety of tasks.
Agents combine the decision making ability of a language model with tools in order to create a syste... | https://python.langchain.com/en/latest/use_cases/personal_assistants.html |
79f24370e1e3-1 | (Optional) Step 3: Modify Agent Executor#
This step is usually not necessary, as this is pretty general logic.
Possible reasons you would want to modify this include adding different stopping conditions, or handling errors
Examples#
Specific examples of agents include:
AI Plugins: an implementation of an agent that is ... | https://python.langchain.com/en/latest/use_cases/personal_assistants.html |
40e3552ee723-0 | Interacting with APIs
Contents
Chains
Agents
Interacting with APIs#
Conceptual Guide
Lots of data and information is stored behind APIs.
This page covers all resources available in LangChain for working with APIs.
Chains#
If you are just getting started, and you have relatively simple APIs, you should get st... | https://python.langchain.com/en/latest/use_cases/apis.html |
86b39aac7753-0 | Evaluation
Contents
The Problem
The Solution
The Examples
Other Examples
Evaluation#
Note
Conceptual Guide
This section of documentation covers how we approach and think about evaluation in LangChain.
This covers both evaluation of internal chains/agents and how we would recommend people building on top of LangCh... | https://python.langchain.com/en/latest/use_cases/evaluation.html |
86b39aac7753-1 | We intend this to be a collection of open source datasets for evaluating common chains and agents.
We have contributed five datasets of our own to start, but we very much intend this to be a community effort.
In order to contribute a dataset, you simply need to join the community and then you will be able to upload datase... | https://python.langchain.com/en/latest/use_cases/evaluation.html |
86b39aac7753-2 | SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showi... | https://python.langchain.com/en/latest/use_cases/evaluation.html |
4bf77b10685d-0 | Chatbots
Chatbots#
Conceptual Guide
Since language models are good at producing text, that makes them ideal for creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for Chatbots is memory.
Most chat based applications rely on remembering what happened in previous interactions, whic... | https://python.langchain.com/en/latest/use_cases/chatbots.html |
f70755ec6d51-0 | Summarization
Summarization#
Conceptual Guide
Summarization involves creating a shorter summary of multiple longer documents.
This can be useful for distilling long documents into the core pieces of information.
The recommended way to get started using a summarization chain is:
from langchain.chains.summarize ... | https://python.langchain.com/en/latest/use_cases/summarization.html |
e78d9096ef84-0 | Agent Simulations
Contents
Simulations with One Agent
Simulations with Two Agents
Simulations with Multiple Agents
Agent Simulations#
Agent simulations involve one or more agents interacting with each other.
Agent simulations generally involve two main components:
Long Term Memory
Simulation Environment
Spec... | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
e78d9096ef84-1 | Simulated Environment: PettingZoo: an example of how to create an agent-environment interaction loop for multiple agents with PettingZoo (a multi-agent version of Gymnasium). | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Pa... | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
8c8a473621eb-0 | Autonomous Agents
Contents
Baby AGI (Original Repo)
AutoGPT (Original Repo)
MetaPrompt (Original Repo)
Autonomous Agents#
Autonomous Agents are agents that are designed to be longer-running.
You give them one or multiple long term goals, and they independently execute towards those goals.
The applications com... | https://python.langchain.com/en/latest/use_cases/autonomous_agents.html |
f2dc6cfb39b4-0 | Extraction
Extraction#
Conceptual Guide
Most APIs and databases still deal with structured information.
Therefore, in order to better work with those, it can be useful to extract structured information from text.
Examples of this include:
Extracting a structured row to insert into a database from a sentence
Ex... | https://python.langchain.com/en/latest/use_cases/extraction.html |
7e119b972f95-0 | Code Understanding
Contents
Conversational Retriever Chain
Code Understanding#
Overview
LangChain is a useful tool designed to parse GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generat... | https://python.langchain.com/en/latest/use_cases/code.html |
7e119b972f95-1 | The full tutorial is available below.
Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it.
LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base.
prev... | https://python.langchain.com/en/latest/use_cases/code.html |
d934dc45558f-0 | Question Answering over Docs
Contents
Document Question Answering
Adding in sources
Additional Related Resources
End-to-end examples
Question Answering over Docs#
Conceptual Guide
Question answering in this context refers to question answering over your document data.
For question answering over other types ... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
d934dc45558f-1 | The recommended way to get started using a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
The following resources exist:
Question Answering Notebook: A notebook walking through how ... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
d934dc45558f-2 | CombineDocuments Chains: A conceptual overview of specific types of chains by which you can accomplish this task.
End-to-end examples#
For examples of this done in an end-to-end manner, please see the following resources:
Semantic search over a group chat with Sources Notebook: A notebook that semantically searches ove... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
b8d46e06fabf-0 | Question Answering Benchmarking: State of the Union Address
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Question Answering Benchmarking: State of the Union Address#
Here we go over how to benchmark performance on a question answering task over ... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
b8d46e06fabf-1 | Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a ... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
b8d46e06fabf-2 | for i, prediction in enumerate(predictions):
prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 7, ' INCORRECT': 4})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for ... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
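The grading bookkeeping in this chunk — attach each grade to its prediction, count the grades, then filter out the misses — is plain Python and can be sketched end to end. The predictions and grades below are made up; note that the evaluator's grades carry a leading space, hence the verbatim `' INCORRECT'` match:

```python
from collections import Counter

# Hypothetical predictions and graded outputs, shaped like the eval chain's results.
predictions = [{"question": "q%d" % i} for i in range(5)]
graded_outputs = [{"text": " CORRECT"}, {"text": " INCORRECT"}, {"text": " CORRECT"},
                  {"text": " INCORRECT"}, {"text": " CORRECT"}]

# Copy each grade onto its prediction record.
for i, prediction in enumerate(predictions):
    prediction["grade"] = graded_outputs[i]["text"]

counts = Counter(pred["grade"] for pred in predictions)
incorrect = [pred for pred in predictions if pred["grade"] == " INCORRECT"]
print(counts)  # Counter({' CORRECT': 3, ' INCORRECT': 2})
```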
acf14a31ec26-0 | QA Generation
QA Generation#
This notebook shows how to use the QAGenerationChain to come up with question-answer pairs over a specific document.
This is important because oftentimes you may not have data to evaluate your question-answering system on, so this is a cheap and lightweight way to generate it!
f... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_generation.html |
770309e918be-0 | LLM Math
Contents
Setting up a chain
LLM Math#
Evaluating chains that know how to do math.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("llm-math")
Downloading and prepar... | https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html |
770309e918be-1 | sum(correct) / len(correct)
1.0
for i, example in enumerate(dataset):
print("input: ", example["question"])
print("expected output :", example["answer"])
print("prediction: ", numeric_output[i])
input: 5
expected output : 5.0
prediction: 5.0
input: 5 + 3
expected output : 8.0
prediction: 8.0
input: 2^3... | https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html |
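The scoring logic in this chunk reduces to comparing each numeric prediction with the expected answer and averaging. A self-contained sketch, with hypothetical rows standing in for `load_dataset("llm-math")` and for the chain's parsed numeric outputs:

```python
# Hypothetical stand-ins for the llm-math examples and the chain's numeric outputs.
dataset = [{"question": "5", "answer": "5.0"},
           {"question": "5 + 3", "answer": "8.0"},
           {"question": "2^3", "answer": "8.0"}]
numeric_output = [5.0, 8.0, 8.0]

correct = [float(example["answer"]) == numeric_output[i]
           for i, example in enumerate(dataset)]
accuracy = sum(correct) / len(correct)

for i, example in enumerate(dataset):
    print("input: ", example["question"])
    print("expected output :", example["answer"])
    print("prediction: ", numeric_output[i])
print(accuracy)  # 1.0
```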
af52bd3b0017-0 | Question Answering Benchmarking: Paul Graham Essay
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Question Answering Benchmarking: Paul Graham Essay#
Here we go over how to benchmark performance on a question answering task over a Paul Graham essa... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
af52bd3b0017-1 | Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a ... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
af52bd3b0017-2 | from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 12, ' INCORRECT': 10})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What did the a... | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
61980aa226be-0 | SQL Question Answering Benchmarking: Chinook
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
SQL Question Answering Benchmarking: Chinook#
Here we go over how to benchmark performance on a question answering task over a SQL database.
It is highly r... | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
61980aa226be-1 | {'question': 'How many employees are there?', 'answer': '8'}
Setting up a chain#
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
Note that here we load a simple c... | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
61980aa226be-2 | llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions... | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
fb1d1e7269fd-0 | Evaluating an OpenAPI Chain
Contents
Load the API Chain
Optional: Generate Input Questions and Request Ground Truth Queries
Run the API Chain
Evaluate the requests chain
Evaluate the Response Chain
Generating Test Datasets
Evaluating an OpenAPI Chain#
This notebook goes over ways to semantically evaluate ... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-1 | See Generating Test Datasets at the end of this notebook for more details.
# import re
# from langchain.prompts import PromptTemplate
# template = """Below is a service description:
# {spec}
# Imagine you're a new user trying to use {operation} through a search bar. What are 10 different things you want to request?
# W... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-2 | dataset
[{'question': 'What iPhone models are available?',
'expected_query': {'max_price': None, 'q': 'iPhone'}},
{'question': 'Are there any budget laptops?',
'expected_query': {'max_price': 300, 'q': 'laptop'}},
{'question': 'Show me the cheapest gaming PC.',
'expected_query': {'max_price': 500, 'q': 'gaming ... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-3 | chain_outputs = []
failed_examples = []
for question in questions:
try:
chain_outputs.append(api_chain(question))
scores["completed"].append(1.0)
except Exception as e:
if raise_error:
raise e
failed_examples.append({'q': question, 'error': e})
scores["complet... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
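The try/except loop above implements a simple completion metric: every example scores 1.0 if the chain returned, 0.0 if it raised (unless `raise_error` is set). A runnable sketch with a hypothetical `flaky_chain` in place of `api_chain`:

```python
def flaky_chain(question):
    # Hypothetical stand-in for api_chain; real calls can fail on bad requests or timeouts.
    if "fail" in question:
        raise ValueError("simulated API failure")
    return {"answer": "ok: " + question}

questions = ["What iPhone models are available?", "please fail", "Are there any budget laptops?"]
raise_error = False
scores = {"completed": []}
chain_outputs, failed_examples = [], []

for question in questions:
    try:
        chain_outputs.append(flaky_chain(question))
        scores["completed"].append(1.0)
    except Exception as e:
        if raise_error:
            raise e
        failed_examples.append({"q": question, "error": e})
        scores["completed"].append(0.0)

completion_rate = sum(scores["completed"]) / len(scores["completed"])
print(completion_rate)
```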
fb1d1e7269fd-4 | 'Yes, there are several tablets under $400. These include the Apple iPad 10.2" 32GB (2019), Samsung Galaxy Tab A8 10.5 SM-X200 32GB, Samsung Galaxy Tab A7 Lite 8.7 SM-T220 32GB, Amazon Fire HD 8" 32GB (10th Generation), and Amazon Fire HD 10 32GB.',
'It looks like you are looking for the best headphones. Based on the ... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-5 | "I found several Nike and Adidas shoes in the API response. Here are the links to the products: Nike Dunk Low M - Black/White: https://www.klarna.com/us/shopping/pl/cl337/3200177969/Shoes/Nike-Dunk-Low-M-Black-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 4 Retro M - Midnight Navy: https://www.klarna... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-6 | Nike Air Jordan 11 Retro Cherry - White/Varsity Red/Black: https://www.klarna.com/us/shopping/pl/cl337/3202929696/Shoes/Nike-Air-Jordan-11-Retro-Cherry-White-Varsity-Red-Black/?utm_source=openai&ref-site=openai_plugin, Nike Dunk High W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3201956448/Shoes/Nike-Dun... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-7 | "I found several skirts that may interest you. Please take a look at the following products: Avenue Plus Size Denim Stretch Skirt, LoveShackFancy Ruffled Mini Skirt - Antique White, Nike Dri-Fit Club Golf Skirt - Active Pink, Skims Soft Lounge Ruched Long Skirt, French Toast Girl's Front Pleated Skirt with Tabs, Alexia... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-8 | from langchain.prompts import PromptTemplate
template = """You are trying to answer the following question by querying an API:
> Question: {question}
The query you know you should be executing against the API is:
> Query: {truth_query}
Is the following predicted query semantically the same (eg likely to produce the sam... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-9 | ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, th... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-10 | " The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter shou... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-11 | ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-12 | Evaluate this against the user’s original question.
from langchain.prompts import PromptTemplate
template = """You are trying to answer the following question by querying an API:
> Question: {question}
The API returned a response of:
> API result: {api_response}
Your response to the user: {answer}
Please evaluate the a... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-13 | request_eval_results
[' The original query is asking for all iPhone models, so the "q" parameter is correct. The "max_price" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, "size" and "min_price". The "size" parameter is not nece... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-14 | ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters "size" and "min_price", which are not necessary for the original query. The "size" parameter is not relevant to the question, and the "min_price" parameter is redundant since ... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-15 | ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-16 | " The API response provided a list of laptops with their prices and attributes. The user asked if there were any budget laptops, and the response provided a list of laptops that are all priced under $500. Therefore, the response was accurate and useful in answering the user's question. Final Grade: A",
" The API respo... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-17 | ' The API response provided a list of shoes from both Adidas and Nike, which is exactly what the user asked for. The response also included the product name, price, and attributes for each shoe, which is useful information for the user to make an informed decision. The response also included links to the products, whic... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-18 | parsed_response_results = parse_eval_results(request_eval_results)
# Collect the scores for a final evaluation table
scores['result_synthesizer'].extend(parsed_response_results)
# Print out Score statistics for the evaluation session
header = "{:<20}\t{:<10}\t{:<10}\t{:<10}".format("Metric", "Min", "Mean", "Max")
print... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
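The tail of this notebook turns free-text evaluator verdicts into numeric scores and prints min/mean/max per metric. The sketch below assumes verdicts end with a letter grade ("Final Grade: A") and maps letters to numbers; both the regex and the mapping are illustrative assumptions, since `parse_eval_results` itself is not shown in this chunk.

```python
import re

GRADE_TO_SCORE = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}  # hypothetical mapping

def parse_eval_results(results):
    """Pull the trailing letter grade out of each free-text verdict."""
    parsed = []
    for text in results:
        match = re.search(r"Final Grade:\s*([A-F])", text)
        parsed.append(GRADE_TO_SCORE[match.group(1)] if match else 0)
    return parsed

request_eval_results = [
    "The response was accurate and useful. Final Grade: A",
    "Mostly right, but added redundant parameters. Final Grade: B",
]
scores = {"result_synthesizer": parse_eval_results(request_eval_results)}

# Print out score statistics for the evaluation session.
header = "{:<20}\t{:<10}\t{:<10}\t{:<10}".format("Metric", "Min", "Mean", "Max")
print(header)
for metric, values in scores.items():
    print("{:<20}\t{:<10.2f}\t{:<10.2f}\t{:<10.2f}".format(
        metric, min(values), sum(values) / len(values), max(values)))
```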
fb1d1e7269fd-19 | Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
# List the paths in the OpenAPI Spec
paths = sorted(spec.paths.keys())
paths
['/v1/public/openai/explain-phrase',
'/v1/public/openai/explain-task',
'/v1/public/openai/transla... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-20 | additional_context?: string,
/* Full text of the user's question. */
full_query?: string,
}) => any;
# Compress the service definition to avoid leaking too much input structure to the sample data
template = """In 20 words or less, what does this service accomplish?
{spec}
Function: It's designed to """
prompt = Promp... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-21 | "I'm looking for the Dutch word for 'no'.",
"Can you explain the meaning of 'hello' in Japanese?",
"I need help understanding the Russian word for 'thank you'.",
"Can you tell me how to say 'goodbye' in Chinese?",
"I'm trying to learn the Arabic word for 'please'."]
# Define the generation chain to get hypotheses
a... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-22 | '{"task_description": "Help with pronunciation of \'yes\' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of \'yes\' in Portuguese?"}',
'{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_la... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-23 | ground_truth = []
for query, request_arg in list(zip(queries, request_args)):
feedback = input(f"Query: {query}\nRequest: {request_arg}\nRequested changes: ")
if feedback == 'n' or feedback == 'none' or not feedback:
ground_truth.append(request_arg)
continue
resolved = correction_chain.run(r... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
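The review loop above keeps a human in the loop: each generated request is accepted as ground truth unless the reviewer types a correction, in which case a correction chain rewrites it. A testable sketch with injectable stand-ins (`input_fn` replaces the interactive `input()` call, and `correct_fn` is a hypothetical stand-in for `correction_chain.run`):

```python
def collect_ground_truth(queries, request_args, input_fn=input,
                         correct_fn=lambda request, feedback: request):
    """Accept each request as-is unless the reviewer asks for changes."""
    ground_truth = []
    for query, request_arg in zip(queries, request_args):
        feedback = input_fn("Query: %s\nRequest: %s\nRequested changes: " % (query, request_arg))
        if feedback in ("n", "none") or not feedback:
            ground_truth.append(request_arg)
            continue
        ground_truth.append(correct_fn(request_arg, feedback))
    return ground_truth

# Simulated session: the first request is accepted, the second is corrected.
replies = iter(["", "learning language should be Dutch"])
result = collect_ground_truth(
    ["q1", "q2"], ['{"learning_language": "Portuguese"}', '{"learning_language": "German"}'],
    input_fn=lambda prompt: next(replies),
    correct_fn=lambda request, feedback: request.replace("German", "Dutch"),
)
print(result)
```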
fb1d1e7269fd-24 | Query: Can you help me with the pronunciation of 'yes' in Portuguese?
Request: {"task_description": "Help with pronunciation of 'yes' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of 'yes' in Portuguese?"}
Requested changes:
Query... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-25 | Requested changes:
Query: I'm trying to learn the Arabic word for 'please'.
Request: {"task_description": "Learn the Arabic word for 'please'", "learning_language": "Arabic", "native_language": "English", "full_query": "I'm trying to learn the Arabic word for 'please'."}
Requested changes:
Now you can use the ground_... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
fb1d1e7269fd-26 | '{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_language": "English", "full_query": "I\'m looking for the Dutch word for \'no\'."}',
'{"task_description": "Explain the meaning of \'hello\' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_q... | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
b36ca427c8d7-0 | Using Hugging Face Datasets
Contents
Setup
Examples
Predictions
Evaluation
Using Hugging Face Datasets#
This example shows how to use Hugging Face datasets to evaluate models. Specifically, we show how to load examples from Hugging Face’s datasets package to evaluate models on.
Setup#
For demonstration pur... | https://python.langchain.com/en/latest/use_cases/evaluation/huggingface_datasets.html |
b36ca427c8d7-1 | 'You will not digest the watermelon seeds',
'The watermelon seeds will be excreted'],
'incorrect_answers': ['You grow watermelons in your stomach',
'You get sick',
'You have bad dreams',
'You die',
'You get indigestion',
'You fall unconscious',
'You digest the watermelon seeds'],
'source': 'https://wond... | https://python.langchain.com/en/latest/use_cases/evaluation/huggingface_datasets.html |
b36ca427c8d7-2 | from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions, question_key="question", answer_key="best_answer", prediction_key="text")
graded_outputs
[{'text': ' INCORRECT'},
{'text': ' INCORRECT'},
{'tex... | https://python.langchain.com/en/latest/use_cases/evaluation/huggingface_datasets.html |
22503ebddd56-0 | Generic Agent Evaluation
Contents
Setup
Testing the Agent
Evaluating the Agent
Generic Agent Evaluation#
Good evaluation is key for quickly iterating on your agent’s prompts and tools. Here we provide an example of how to use the TrajectoryEvalChain to evaluate your agent.
Setup#
Let’s start by defining o... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
22503ebddd56-1 | memory_key="chat_history", return_messages=True, output_key="output"
)
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory,
return_intermediate_steps=True, # This is n... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
22503ebddd56-2 | > Entering new AgentExecutor chain...
{
"action": "Calculator",
"action_input": "The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
22503ebddd56-3 | }
> Entering new LLMMathChain chain...
The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
22503ebddd56-4 | )
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])
Score from 1 to 5: 1
Reasoning: First, let's evaluate the final answer. The final answer is incorrect because it uses the volume of golf balls instead of ping pong balls. The answer is not helpful.
Second, does the model... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
22503ebddd56-5 | )
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])
Score from 1 to 5: 3
Reasoning: i. Is the final answer helpful?
Yes, the final answer is helpful as it provides an approximate number of Eiffel Towers needed to cover the US from coast to coast.
ii. Does the AI language ... | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
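Each `TrajectoryEvalChain` evaluation above is a dict with `score` and `reasoning` keys. When running the evaluator over many agent runs, a small helper (a sketch, not from the notebook) can summarize the 1-5 scores:

```python
# Summarize a batch of trajectory evaluations (dicts with a 1-5 "score")
# into min/max/mean statistics.

def summarize_scores(evaluations):
    scores = [e["score"] for e in evaluations]
    return {
        "min": min(scores),
        "max": max(scores),
        "mean": sum(scores) / len(scores),
    }

evals = [{"score": 1, "reasoning": "..."}, {"score": 3, "reasoning": "..."}]
print(summarize_scores(evals))  # → {'min': 1, 'max': 3, 'mean': 2.0}
```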
22503ebddd56-6 | Setup
Testing the Agent
Evaluating the Agent
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
cb0f8595321e-0 | .ipynb
.pdf
Benchmarking Template
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Benchmarking Template#
This is an example notebook that can be used to create a benchmarking notebook for a task of your choice. Evaluation is really hard, and so we greatly welc... | https://python.langchain.com/en/latest/use_cases/evaluation/benchmarking_template.html |
cb0f8595321e-1 | Evaluate performance#
Any guide to evaluating performance in a more systematic manner goes here.
previous
Agent VectorDB Question Answering Benchmarking
next
Data Augmented Question Answering
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
... | https://python.langchain.com/en/latest/use_cases/evaluation/benchmarking_template.html |
f000a5c43b82-0 | .ipynb
.pdf
Question Answering
Contents
Setup
Examples
Predictions
Evaluation
Customize Prompt
Evaluation without Ground Truth
Comparing to other evaluation metrics
Question Answering#
This notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a ... | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
f000a5c43b82-1 | predictions
[{'text': ' 11 tennis balls'},
{'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]
Evaluation#
We can see that if we tried to just do exact matc... | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
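The point about exact matching can be made concrete: a correct but verbose prediction fails a naive string comparison, which is why the notebook grades with a language model instead. A tiny illustration (not from the notebook):

```python
# Naive exact-match grading fails when a prediction is correct but verbose.

def exact_match(prediction, answer):
    return prediction.strip().lower() == answer.strip().lower()

real_answer = "No"
prediction = ("No, this sentence is not plausible. Joao Moutinho is a "
              "professional soccer player, not an American football player.")

print(exact_match(prediction, real_answer))  # → False, despite being correct
print(exact_match(" No ", "no"))             # → True only on literal matches
```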
f000a5c43b82-2 | Real Answer: No
Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.
Predicted Grade: CORRECT
Customize Prompt#
You can also customize the prompt that i... | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
f000a5c43b82-3 | context_examples = [
{
"question": "How old am I?",
"context": "I am 30 years old. I live in New York and take the train to work everyday.",
},
{
"question": 'Who won the NFC championship game in 2023?',
"context": "NFC Championship Game 2023: Philadelphia Eagles 31, San Fra... | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
f000a5c43b82-4 | predictions[i]['id'] = str(i)
predictions[i]['prediction_text'] = predictions[i]['text']
for p in predictions:
del p['text']
new_examples = examples.copy()
for eg in new_examples:
del eg['question']
del eg['answer']
from evaluate import load
squad_metric = load("squad")
results = squad_metric.compute(
... | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
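The reshaping in the chunk above adapts the notebook's predictions into the field names the HuggingFace `squad` metric expects (`id` and `prediction_text`). A self-contained sketch of that transformation with toy data (the metric call itself is omitted here, since it requires the `evaluate` package):

```python
# Rename prediction fields to the shape the HuggingFace "squad" metric expects:
# each prediction needs an "id" and a "prediction_text" key.

predictions = [{"text": " 11 tennis balls"}, {"text": " No"}]

for i, p in enumerate(predictions):
    p["id"] = str(i)
    p["prediction_text"] = p.pop("text")

print(predictions)
# → [{'id': '0', 'prediction_text': ' 11 tennis balls'},
#    {'id': '1', 'prediction_text': ' No'}]
```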
754ecda5d94f-0 | .ipynb
.pdf
Data Augmented Question Answering
Contents
Setup
Examples
Evaluate
Evaluate with Other Metrics
Data Augmented Question Answering#
This notebook uses some generic prompts/language models to evaluate a question answering system that uses other sources of data besides what is in the model. For example, this... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
754ecda5d94f-1 | "answer": "Nothing"
}
]
# Generated examples
from langchain.evaluation.qa import QAGenerateChain
example_gen_chain = QAGenerateChain.from_llm(OpenAI())
new_examples = example_gen_chain.apply_and_parse([{"doc": t} for t in texts[:5]])
new_examples
[{'query': 'According to the document, what did Vladimir Putin miscal... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
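Note that the hand-written examples use a `"question"` key while `QAGenerateChain` emits `"query"`. A sketch of merging the two lists under one key name (the key names come from the snippets above; the merge helper itself is illustrative, not from the notebook):

```python
# Merge hand-written examples (keyed by "question") and generated examples
# (keyed by "query") into a single list with a consistent key name.

def merge_examples(handwritten, generated):
    merged = list(handwritten)
    for eg in generated:
        merged.append({"question": eg["query"], "answer": eg["answer"]})
    return merged

handwritten = [{"question": "What did the president say?", "answer": "Nothing"}]
generated = [{"query": "What did Putin miscalculate?",
              "answer": "That the world would roll over."}]
print(len(merge_examples(handwritten, generated)))  # → 2
```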
754ecda5d94f-2 | eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions)
for i, eg in enumerate(examples):
print(f"Example {i}:")
print("Question: " + predictions[i]['query'])
print("Real Answer: " + predictions[i]['answer'])
print("Predicted Answer: " + predictions[i]['result'... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
754ecda5d94f-3 | Real Answer: The Ukrainian Ambassador to the United States is here tonight.
Predicted Answer: I don't know.
Predicted Grade: INCORRECT
Example 4:
Question: How many countries were part of the coalition formed to confront Putin?
Real Answer: 27 members of the European Union, France, Germany, Italy, the United Kingdom,... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
754ecda5d94f-4 | Predicted Grade: CORRECT
Evaluate with Other Metrics#
In addition to predicting whether the answer is correct or incorrect using a language model, we can also use other metrics to get a more nuanced view on the quality of the answers. To do so, we can use the Critique library, which allows for simple calculation of va... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
754ecda5d94f-5 | for k, v in metrics.items()
}
Finally, we can print out the results. We can see that overall the scores are higher when the output is semantically correct, and also when the output closely matches the gold-standard answer.
for i, eg in enumerate(examples):
score_string = ", ".join([f"{k}={v['examples'][i]['val... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
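The truncated `score_string` line formats each example's metric values into a readable summary. Assuming each metric entry holds per-example dicts with a `value` field (a shape inferred from the snippet, not confirmed against the Critique library), the formatting step looks like:

```python
# Format one example's metric values (rounded to 4 places) into a
# comma-separated summary line. The nested {"examples": [{"value": ...}]}
# shape is assumed from the snippet above.

metrics = {
    "rouge": {"examples": [{"value": 0.5185}]},
    "chrf": {"examples": [{"value": 0.6955}]},
}

i = 0  # index of the example being reported
score_string = ", ".join(
    f"{k}={v['examples'][i]['value']:.4f}" for k, v in metrics.items()
)
print(score_string)  # → rouge=0.5185, chrf=0.6955
```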
754ecda5d94f-6 | Example 2:
Question: According to the document, what did Vladimir Putin miscalculate?
Real Answer: He miscalculated that he could roll into Ukraine and the world would roll over.
Predicted Answer: Putin miscalculated that the world would roll over when he rolled into Ukraine.
Predicted Scores: rouge=0.5185, chrf=0.695... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
754ecda5d94f-7 | Question: What action is the U.S. Department of Justice taking to target Russian oligarchs?
Real Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.
P... | https://python.langchain.com/en/latest/use_cases/evaluation/data_augmented_question_answering.html |
7df939d2c0f5-0 | .ipynb
.pdf
Agent VectorDB Question Answering Benchmarking
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Agent VectorDB Question Answering Benchmarking#
Here we go over how to benchmark performance on a question answering task using an agent to route between... | https://python.langchain.com/en/latest/use_cases/evaluation/agent_vectordb_sota_pg.html |
7df939d2c0f5-1 | dataset[-1]
{'question': 'What is the purpose of YC?',
'answer': 'The purpose of YC is to cause startups to be founded that would not otherwise have existed.',
'steps': [{'tool': 'Paul Graham QA System', 'tool_input': None},
{'tool': None, 'tool_input': 'What is the purpose of YC?'}]}
Setting up a chain#
Now we nee... | https://python.langchain.com/en/latest/use_cases/evaluation/agent_vectordb_sota_pg.html |
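Each record in this benchmarking dataset pairs a question and answer with the expected agent trajectory under a `steps` key. A quick structural check (an illustrative sketch using a record shaped like the one shown above) makes that layout explicit:

```python
# Check that a benchmark record has the question/answer/steps layout,
# where each step names a "tool" and a "tool_input".

def is_valid_record(record):
    if not {"question", "answer", "steps"} <= record.keys():
        return False
    return all({"tool", "tool_input"} <= step.keys() for step in record["steps"])

record = {
    "question": "What is the purpose of YC?",
    "answer": "To cause startups to be founded that would not otherwise have existed.",
    "steps": [
        {"tool": "Paul Graham QA System", "tool_input": None},
        {"tool": None, "tool_input": "What is the purpose of YC?"},
    ],
}
print(is_valid_record(record))  # → True
```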
7df939d2c0f5-2 | from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
tools = [
Tool(
name="State of Union QA System",
func=chain_sota.run,
description="useful for when you need to answer questions about the most recent state of the union address. Input should be a ful... | https://python.langchain.com/en/latest/use_cases/evaluation/agent_vectordb_sota_pg.html |
7df939d2c0f5-3 | 'output': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}
Next, we can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evalu... | https://python.langchain.com/en/latest/use_cases/evaluation/agent_vectordb_sota_pg.html |
7df939d2c0f5-4 | Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/agent_vectordb_sota_pg.html |