From f2d10946646c1fca450941fa2769bf208776d38b Mon Sep 17 00:00:00 2001
From: yosufzaizb
Date: Tue, 5 Mar 2024 20:31:53 +0000
Subject: [PATCH 1/3] SQL chatbot for structured data

---
 .../notebooks/AzureAIStudio_sql_chatbot.ipynb | 644 ++++++++++++++++++
 1 file changed, 644 insertions(+)
 create mode 100644 tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb

diff --git a/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb
new file mode 100644
index 0000000..4c5b643
--- /dev/null
+++ b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb
@@ -0,0 +1,644 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "66bdb4fe-7ae4-4b3f-8b61-0004d49baa91",
+ "metadata": {},
+ "source": [
+ "# Creating a Chatbot for Structured Data using SQL"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "431e4421-0b41-4a12-9811-0d7a030cf0f9",
+ "metadata": {},
+ "source": [
+ "### Objectives"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8aee4c83-bb83-442b-a158-61962f43c80a",
+ "metadata": {},
+ "source": [
+ "In this tutorial you will learn how to:\n",
+ "- Set up an Azure SQL database\n",
+ "- Create a SQL table and query it\n",
+ "- Create a chatbot, utilizing LangChain's SQL agent to connect the bot to a database"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3d2aa60a-cf87-4083-80fa-e9dc9179dcc8",
+ "metadata": {},
+ "source": [
+ "### Table of Contents"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3bad1638-6fcd-4299-b714-48c7cfd865ff",
+ "metadata": {},
+ "source": [
+ "- [Summary](#summary)\n",
+ "- [Install Packages](#packages)\n",
+ "- [Create Azure SQL Database](#azure_db)\n",
+ "- [Create Azure SQL Table](#azure_table)\n",
+ "- [Submitting a Query](#query)\n",
+ "- [Setting up a Chatbot](#chatbot)\n",
+ "- [Cleaning up Resources](#cleanup)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "83ce92a3-dff9-4a30-8f65-4c5b75349119",
+ "metadata": {},
+ "source": [
+ "### Summary "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "84a3fa39-a341-4a0c-a17d-a8c657759117",
+ "metadata": {},
+ "source": [
+ "**Generative AI (GenAI)** is a groundbreaking technology that generates human-like texts, images, code, and other forms of content. Although this is all true the focus of many GenAI techniques or implementations have been on unstructured data such as PDF's, text docs, image files, websites, etc. where it is required to set a parameter called *top K*. Top K utilizes an algorithm to only retrieve the top scored pieces of content or docs that is relevant to the users ask. This limits the amount of data the model is presented putting a disadvantage for users that may want to gather information from structured data like CSV and JSON files because they typically want all the occurrences relevant data appears. \n",
+ "\n",
+ "An example would be if you had a table that lists different types of apples, where they originate, and their colors and you want a list of red apples that originate from the US the model would only give you partial amount of the data you need because it is limited to looking for the top relevant data which may be limited to only finding the top 4 or 20 names of apples (depending on how you have configured your model) instead of listing them all. \n",
+ "\n",
+ "The technique that is laid our in this tutorial utilizes **SQL databases** and asks the model to create a query based on the ask of the user. It will then submit that query to the database and present the user with the results. This will not only give us all the information we need but will also decrease the chances of hitting our token limit."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "79baaa6a-b851-45b5-9002-68af981fb145",
+ "metadata": {
+ "nteract": {
+ "transient": {
+ "deleting": false
+ }
+ }
+ },
+ "source": [
+ "### Install Packages "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "db3117be-a03f-490f-84ed-b322d9df992e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pip install 'pyodbc' 'fast_to_sql' 'sqlalchemy'\n",
+ "pip install --upgrade \"langchain-openai\" \"langchain\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "094bf011-77ca-41ba-a42c-b2b95f890fc7",
+ "metadata": {},
+ "source": [
+ "### Create Azure SQL Database "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "64e959a6-4515-49cd-bdf8-b0da0544c10a",
+ "metadata": {},
+ "source": [
+ "Follow the instructions [here](https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-create-quickstart?view=azuresql&tabs=azure-portal) to create a single database in Azure SQL Database. Note that for this tutorial's database the field **Use existing data** was set to **None**."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "35d70a02-7a15-4c32-b453-3a97752f9755",
+ "metadata": {},
+ "source": [
+ "### Create Azure SQL Table "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "69ff0cb2-89cb-4c34-a7c0-19cd09b1d3fb",
+ "metadata": {},
+ "source": [
+ "Now that we have our SQL database, we will connect to it using the Python package `pyodbc`, which allows us to commit changes to our database and to query tables."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5776725d-df8e-4a74-8b01-eb4f33a74b83",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pyodbc\n",
+ "\n",
+ "# fill in your Azure SQL server name, admin user, password, and database name\n",
+ "server_name = \"\"\n",
+ "user = \"\"\n",
+ "password = \"\"\n",
+ "database = \"\"\n",
+ "driver = '{ODBC Driver 18 for SQL Server}'\n",
+ "\n",
+ "conn = pyodbc.connect('DRIVER='+driver+';SERVER='+server_name+'.database.windows.net;PORT=1433;DATABASE='+database+';UID='+user+';PWD='+password)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "506c4b63-276e-438a-a1e5-f4b16ff34cfd",
+ "metadata": {},
+ "source": [
+ "Now that we are connected to our database, we can upload our data as a table. In this example we are using a CSV file from Kaggle that can be downloaded from [here](https://www.kaggle.com/datasets/henryshan/2023-data-scientists-salary).\n",
+ "\n",
+ "**Tip:** If you are using a JSON file you can use the command `pd.read_json` to load in the data frame."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6db341b4-62c0-4bb4-a9b6-925b7ebbeccd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "\n",
+ "# read the csv file using read_csv and store the data frame in a variable called df\n",
+ "df = pd.read_csv('ds_salaries.csv')\n",
+ "\n",
+ "# view the data\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d071bf99-2f0d-4450-935c-94732a1f27f7",
+ "metadata": {},
+ "source": [
+ "**Tip:** If you receive a **timeout error**, wait a couple of minutes and then run the above code again."
+ ]
+ },
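+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "(Optional) Before loading the data frame into the database it can help to confirm that it looks the way you expect. The quick checks below are a minimal sketch that assumes the `df` variable loaded above; the exact columns and types will depend on the dataset you downloaded."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional sanity checks before writing the data frame to the database\n",
+ "print(df.shape)         # number of rows and columns\n",
+ "print(df.dtypes)        # pandas dtypes that will be mapped to SQL column types\n",
+ "print(df.isna().sum())  # count of missing values in each column"
+ ]
+ },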
+ {
+ "cell_type": "markdown",
+ "id": "48f72cf4-b307-47b1-a3e4-7b5809ec7715",
+ "metadata": {},
+ "source": [
+ "The second Python package we are using is `fast_to_sql` **(fts)**, which allows us to easily create tables from our data. Usually you would have to write a SQL query that spells out the columns, data types, and values of the table, but **fts** does that work for us."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "41c6cba4-f6f0-4210-a116-1a0242804fae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from fast_to_sql import fast_to_sql as fts\n",
+ "table_name = \"ds_salaries\"\n",
+ "create_table = fts(df, table_name, conn, if_exists=\"replace\", temp=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8beb69bb-864c-41e9-b5d2-0ed9625861b8",
+ "metadata": {},
+ "source": [
+ "Now we will commit our change to make it permanent."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "298dbb17-b78a-4b72-8ac2-db18581a8cc8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "conn.commit()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5d68afeb-3166-4729-9dd4-7c67e84f7673",
+ "metadata": {},
+ "source": [
+ "### Submiting a Query "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "217c8b47-ac37-4c28-aa5b-01a7cc997e7a",
+ "metadata": {},
+ "source": [
+ "To submit a query to our database we first need to create a **cursor** from our connection, which allows us to process data row by row.\n",
+ "\n",
+ "**Tip:** At any time you can close the connection to your database using the command `conn.close()`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5f0f9982-87fa-43c0-9b5a-0c349efad1e6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cursor = conn.cursor()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a67e7f2a-c081-47db-a699-e64987f8ed58",
+ "metadata": {},
+ "source": [
+ "Now we can finally submit a query to our database! The query below counts the number of workers that worked in 2023. We use the `execute` command to send the query to the database. The cursor is an **iterable**, so we loop over it to see the query result. The result you should receive is **1785**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "329f1fa3-a542-4033-926b-58e55085c73b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "query = \"SELECT COUNT(work_year) FROM ds_salaries WHERE work_year = '2023';\"\n",
+ "\n",
+ "cursor.execute(query)\n",
+ "for row in cursor:\n",
+ "    print(f'QUERY RESULT: {str(row)}')"
+ ]
+ },
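+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can also pull results back with the standard `pyodbc` cursor methods instead of looping over the cursor. The cell below is a small optional sketch that assumes the `cursor` created above and re-runs the same count query; `fetchone()` returns a single row, so the count is its first element."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# fetch the result directly instead of iterating over the cursor\n",
+ "cursor.execute(\"SELECT COUNT(work_year) FROM ds_salaries WHERE work_year = '2023';\")\n",
+ "row = cursor.fetchone()  # one row, e.g. (1785, )\n",
+ "print(f'QUERY RESULT: {row[0]}')"
+ ]
+ },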
+ {
+ "cell_type": "markdown",
+ "id": "5666452a-54be-414f-a29d-561f91f6de82",
+ "metadata": {},
+ "source": [
+ "Another way to output our query results is to collect them into a list, using the Python string method `replace` to get rid of the parentheses around each row. The query below lists the column names of our table."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "117d2be0-6e06-4f57-81e7-51ce2548e71a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "query=\"\"\"SELECT name FROM sys.columns WHERE object_id = OBJECT_ID('ds_salaries') \n",
+ "\"\"\"\n",
+ "cursor.execute(query)\n",
+ "\n",
+ "result = [str(row).replace(\"('\", \"\").replace(\"',)\", \"\") for row in cursor]\n",
+ "\n",
+ "print(result)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9523a38d-16b7-4c34-a8bc-af64ae696853",
+ "metadata": {},
+ "source": [
+ "### Setting Up A Chatbot "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0b2aa0b8-fcc1-440d-b5fe-4762f0ec7f86",
+ "metadata": {},
+ "source": [
+ "For our chatbot we will be utilizing LangChain to connect our model to our database."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f2225a78-8919-4821-ba29-d14f8445ace0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# load in the required tools\n",
+ "from langchain_openai import AzureChatOpenAI\n",
+ "from sqlalchemy import create_engine\n",
+ "from langchain.agents import AgentType, create_sql_agent\n",
+ "from langchain.sql_database import SQLDatabase\n",
+ "from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4d09ef93-36e5-41e5-89e3-f7b34355b6da",
+ "metadata": {},
+ "source": [
+ "Enter your Azure OpenAI endpoint and key. For this tutorial we used GPT-3.5."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2ad907b4-37f2-4536-b87d-1132d8dad04e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"\"\n",
+ "os.environ[\"AZURE_OPENAI_API_KEY\"] = \"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1e44d275-e7cc-4ad0-acc3-d61f8b97f0aa",
+ "metadata": {},
+ "source": [
+ "Set our model to the variable `llm` and enter the deployment name that was chosen when the model was deployed; this will connect LangChain to our model. We are also setting the **temperature** to **0** because we don't want any randomness or creativity in the model's answers, only what is in the data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0b026497-73b5-4d93-8cf4-3e0fd4a5c72c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model_name=\"\"\n",
+ "\n",
+ "llm = AzureChatOpenAI(\n",
+ "    openai_api_version=\"2023-05-15\",\n",
+ "    azure_deployment=model_name,\n",
+ "    temperature = 0\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "51f83b06-e938-459e-8f31-5223326ead34",
+ "metadata": {},
+ "source": [
+ "The first step in connecting our model to our database is to create an engine that helps LangChain connect to our SQL database, using a package called `sqlalchemy`. The package takes the same information as the connection we made before, but the driver format is a little different: in this package it does not require curly brackets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f4862c17-e38d-4220-9ccd-c1b295d6e401",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "driver = \"ODBC Driver 18 for SQL Server\""
+ ]
+ },
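+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If your user name, password, or driver name contain characters that are not URL safe (spaces, `@`, `:` and so on), the connection string built in the next step can break. The optional sketch below uses Python's standard `urllib.parse.quote_plus` to pre-encode those values; it assumes the `user`, `password`, and `driver` variables defined earlier, and if you use it you should substitute the encoded values into the connection string that follows."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from urllib.parse import quote_plus\n",
+ "\n",
+ "# illustrative only: URL-encode values that may contain special characters\n",
+ "# before they are placed into the SQLAlchemy connection string below\n",
+ "user_enc = quote_plus(user)\n",
+ "password_enc = quote_plus(password)\n",
+ "driver_enc = quote_plus(driver)"
+ ]
+ },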
+ {
+ "cell_type": "markdown",
+ "id": "c59d18f0-2d93-47e0-a743-f48275df47ed",
+ "metadata": {},
+ "source": [
+ "The database information is entered as a connection string and then turned into our database engine using the command `create_engine`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c37e37ca-f6f0-4f5b-a94f-5906a25d6681",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "py_connectionString=f\"mssql+pyodbc://{user}:{password}@{server_name}.database.windows.net/{database}?driver={driver}\"\n",
+ "db_engine = create_engine(py_connectionString)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a696eeb-6a6e-4a32-994d-7460083413c2",
+ "metadata": {},
+ "source": [
+ "Now that we have established a connection to the database, we use the LangChain class `SQLDatabase` to pass that connection to LangChain. Notice that we leave the schema as **\"dbo\"**, which stands for database owner and is the default schema for all users unless some other schema is specified. The dbo schema cannot be dropped."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "91adaf0e-1b38-4dee-988d-ceb2f3992b21",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "db = SQLDatabase(db_engine, view_support=True, schema=\"dbo\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f3dce9b8-b054-4dad-a1c5-c53f6bcc5f04",
+ "metadata": {},
+ "source": [
+ "Let's run a test query below to ensure we are connected!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4263a3d5-488e-44ee-8616-79e63758a141",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(db.dialect)\n",
+ "db.run(\"SELECT COUNT(*) FROM ds_salaries WHERE work_year = 2023 AND experience_level = 'SE' \")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "46ad4ab1-84ac-43e2-b152-038abadb7184",
+ "metadata": {},
+ "source": [
+ "The last step will be to create a SQL agent. The SQL agent gives our bot the following instructions:\n",
+ "1. Take in the user's question and survey the SQL table mentioned in the question\n",
+ "2. Create a SQL query based on the columns that hold information relevant to the question\n",
+ "3. Submit the query to our database and present the results to the user\n",
+ "\n",
+ "There is no need to write a prompt because the agent already supplies one.\n",
+ "\n",
+ "**Tip**: If you do not want to see the agent's reasoning and only want the answer, set `verbose` to `False` (e.g., `verbose=False`)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "10b17ab9-b1b2-4315-97be-06cb39572fcd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
+ "\n",
+ "agent_executor = create_sql_agent(\n",
+ "    llm=llm,\n",
+ "    toolkit=toolkit,\n",
+ "    verbose=True,\n",
+ "    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
+ ")"
+ ]
+ },
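+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optionally, you can wrap the agent in a small helper so that only the final answer is returned. This is a sketch rather than part of the original walkthrough: it assumes the `agent_executor` created above and that, as in recent LangChain versions, the executor returns a dictionary with an `output` key. The cells that follow call the agent directly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional helper (not used elsewhere in this notebook): send a\n",
+ "# natural-language question to the SQL agent and return the final answer\n",
+ "def ask_database(question):\n",
+ "    result = agent_executor.invoke({'input': question})\n",
+ "    return result['output']"
+ ]
+ },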
+ {
+ "cell_type": "markdown",
+ "id": "faa22b65-e518-4e24-9154-e42bae28940e",
+ "metadata": {},
+ "source": [
+ "Now we can ask our bot questions about our data! Notice how in the question below we mention that the table we are looking at is **ds_salaries**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8f968c90-4151-4d4f-b5a9-516ca34a7a58",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "question = \"count the number of employees that worked in 2023 and have an experience level of SE in table ds_salaries.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2d80a800-ba4f-431d-bab2-1e4b3e50f8da",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "agent_executor.invoke(question)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "116c547b-c569-4843-a6a9-e81c6c0f8252",
+ "metadata": {},
+ "source": [
+ "### Cleaning Up Resources "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5a66120b-79a4-4a5a-a78b-125cbb1e8aac",
+ "metadata": {},
+ "source": [
+ "Don't forget to turn off or delete any notebooks or compute resources! Below you will find instructions to delete the SQL database, starting with closing the connection to the database."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0dfaedac-271b-4277-b34b-eac1c4c5fc62",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "conn.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d21fcfc6-23e0-40ca-bb25-6f000db03aad",
+ "metadata": {},
+ "source": [
+ "We will be using Azure CLI commands, which first require us to log in. Run the command below and follow the steps in the output."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5fd0edc8-af4d-469f-8eb5-81ffec8d3033",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "! az login"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0190471e-7238-4a89-a276-e9ab4ff61f62",
+ "metadata": {},
+ "source": [
+ "Next we will delete our database. Wait for the command to output **'Finished'**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "27ff38bb-3a93-4c0e-90ee-dd34b8a37d9c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "resource_group=\"\"\n",
+ "!az sql db delete --name {database} --resource-group {resource_group} --server {server_name} -y"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5ecc188e-c4eb-40a2-8854-03027d99e079",
+ "metadata": {},
+ "source": [
+ "For the next command you will also need your subscription ID, which can be found in the `id` field of the output of the following command:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "34b2beb0-6cd0-4a89-ac0b-1cafcc7f762d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!az sql server list --resource-group {resource_group}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e70cea5d-ad6e-49f9-bac7-255f3cf3d147",
+ "metadata": {},
+ "source": [
+ "Finally, delete your SQL server and wait for the command to output **'Finished'**."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2ca10666-81a4-47f2-b168-6fd0a590d4a3", + "metadata": {}, + "outputs": [], + "source": [ + "subscription_id=''\n", + "!az sql server delete --name {server_name} --resource-group {resource_group} --subscription {subscription_id} -y" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1f91700b-bfe3-452c-b5a0-0b8fed115fd8", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernel_info": { + "name": "python310-sdkv2" + }, + "kernelspec": { + "display_name": "Python 3.10 - SDK v2", + "language": "python", + "name": "python310-sdkv2" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.11" + }, + "microsoft": { + "ms_spell_check": { + "ms_spell_check_language": "en" + } + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 6fdd900cfe633ba8abf1f94440c92f48049e04ea Mon Sep 17 00:00:00 2001 From: Kyle O'Connell Date: Tue, 5 Mar 2024 16:55:09 -0500 Subject: [PATCH 2/3] modified format slightly --- .../notebooks/AzureAIStudio_sql_chatbot.ipynb | 33 +++++++++---------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb index 4c5b643..d61a7f5 100644 --- a/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb +++ b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb @@ -8,12 +8,24 @@ "# Creating a Chatbot for Structured Data using SQL" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Overview\n", + "**Generative AI (GenAI)** is a groundbreaking technology that generates human-like texts, images, code, and other forms of content. Although this is all true the focus of many GenAI techniques or implementations have been on unstructured data such as PDF's, text docs, image files, websites, etc. where it is required to set a parameter called *top K*. Top K utilizes an algorithm to only retrieve the top scored pieces of content or docs that is relevant to the users ask. This limits the amount of data the model is presented putting a disadvantage for users that may want to gather information from structured data like CSV and JSON files because they typically want all the occurrences relevant data appears. \n", + "\n", + "An example would be if you had a table that lists different types of apples, where they originate, and their colors and you want a list of red apples that originate from the US the model would only give you partial amount of the data you need because it is limited to looking for the top relevant data which may be limited to only finding the top 4 or 20 names of apples (depending on how you have configured your model) instead of listing them all. \n", + "\n", + "The technique that is laid our in this tutorial utilizes **SQL databases** and asks the model to create a query based on the ask of the user. It will then submit that query to the database and present the user with the results. This will not only give us all the information we need but will also decrease the chances of hitting our token limit." 
+ ] + }, { "cell_type": "markdown", "id": "431e4421-0b41-4a12-9811-0d7a030cf0f9", "metadata": {}, "source": [ - "### Objectives" + "## Learning Objectives" ] }, { @@ -32,7 +44,7 @@ "id": "3d2aa60a-cf87-4083-80fa-e9dc9179dcc8", "metadata": {}, "source": [ - "### Table of Contents" + "## Table of Contents" ] }, { @@ -51,22 +63,9 @@ }, { "cell_type": "markdown", - "id": "83ce92a3-dff9-4a30-8f65-4c5b75349119", "metadata": {}, "source": [ - "### Summary " - ] - }, - { - "cell_type": "markdown", - "id": "84a3fa39-a341-4a0c-a17d-a8c657759117", - "metadata": {}, - "source": [ - "**Generative AI (GenAI)** is a groundbreaking technology that generates human-like texts, images, code, and other forms of content. Although this is all true the focus of many GenAI techniques or implementations have been on unstructured data such as PDF's, text docs, image files, websites, etc. where it is required to set a parameter called *top K*. Top K utilizes an algorithm to only retrieve the top scored pieces of content or docs that is relevant to the users ask. This limits the amount of data the model is presented putting a disadvantage for users that may want to gather information from structured data like CSV and JSON files because they typically want all the occurrences relevant data appears. \n", - "\n", - "An example would be if you had a table that lists different types of apples, where they originate, and their colors and you want a list of red apples that originate from the US the model would only give you partial amount of the data you need because it is limited to looking for the top relevant data which may be limited to only finding the top 4 or 20 names of apples (depending on how you have configured your model) instead of listing them all. \n", - "\n", - "The technique that is laid our in this tutorial utilizes **SQL databases** and asks the model to create a query based on the ask of the user. It will then submit that query to the database and present the user with the results. This will not only give us all the information we need but will also decrease the chances of hitting our token limit." + "## Get Started" ] }, { @@ -505,7 +504,7 @@ "id": "116c547b-c569-4843-a6a9-e81c6c0f8252", "metadata": {}, "source": [ - "### Cleaning Up Resources " + "## Clean Up " ] }, { From cc27af3abfa3a633f7af45ef5e764f5cb8d54608 Mon Sep 17 00:00:00 2001 From: yosufzaizb Date: Wed, 6 Mar 2024 15:12:09 +0000 Subject: [PATCH 3/3] Added Prereq and conclusion section --- .../notebooks/AzureAIStudio_sql_chatbot.ipynb | 49 ++++++++++++++++--- 1 file changed, 42 insertions(+), 7 deletions(-) diff --git a/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb index d61a7f5..0609d1f 100644 --- a/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb +++ b/tutorials/notebooks/GenAI/notebooks/AzureAIStudio_sql_chatbot.ipynb @@ -5,11 +5,12 @@ "id": "66bdb4fe-7ae4-4b3f-8b61-0004d49baa91", "metadata": {}, "source": [ - "# Creating a Chatbot for Structured Data using SQL" + "# Creating a chatbot for structured data using SQL" ] }, { "cell_type": "markdown", + "id": "4d7509ad", "metadata": {}, "source": [ "## Overview\n", @@ -20,12 +21,28 @@ "The technique that is laid our in this tutorial utilizes **SQL databases** and asks the model to create a query based on the ask of the user. It will then submit that query to the database and present the user with the results. 
This will not only give us all the information we need but will also decrease the chances of hitting our token limit." ] }, + { + "cell_type": "markdown", + "id": "6c69574a-dc53-414c-9606-97c1f871f603", + "metadata": {}, + "source": [ + "## Prerequisites" + ] + }, + { + "cell_type": "markdown", + "id": "7497c624-f592-4061-8dbf-8a9e2baf7fb2", + "metadata": {}, + "source": [ + "We assume you have access to Azure AI Studio, Azure SQL Databases, and have already deployed an LLM. For this tutorial we used **gpt 3.5** and used the **Python 3.10** kernel within our Azure Jupyter notebook." + ] + }, { "cell_type": "markdown", "id": "431e4421-0b41-4a12-9811-0d7a030cf0f9", "metadata": {}, "source": [ - "## Learning Objectives" + "## Learning objectives" ] }, { @@ -58,14 +75,16 @@ "- [Create Azure SQL Table](#azure_table)\n", "- [Submitting a Query](#query)\n", "- [Setting up a Chatbot](#chatbot)\n", + "- [Conclusion](#conclusion)\n", "- [Cleaning up Resources](#cleanup)" ] }, { "cell_type": "markdown", + "id": "3d98bdb4", "metadata": {}, "source": [ - "## Get Started" + "## Get started" ] }, { @@ -79,7 +98,7 @@ } }, "source": [ - "### Install Packages " + "### Install packages " ] }, { @@ -220,7 +239,7 @@ "id": "5d68afeb-3166-4729-9dd4-7c67e84f7673", "metadata": {}, "source": [ - "### Submiting a Query " + "### Submiting a query " ] }, { @@ -294,7 +313,7 @@ "id": "9523a38d-16b7-4c34-a8bc-af64ae696853", "metadata": {}, "source": [ - "### Setting Up A Chatbot " + "### Setting up a chatbot " ] }, { @@ -499,12 +518,28 @@ "agent_executor.invoke(question)" ] }, + { + "cell_type": "markdown", + "id": "032bc4e2-9a10-4e20-b775-3e34dc3683ee", + "metadata": {}, + "source": [ + "## Conclusion " + ] + }, + { + "cell_type": "markdown", + "id": "edac27e0-45dd-450b-bb78-fa341a667575", + "metadata": {}, + "source": [ + "In this notebook you learned how to set up a Azure SQL database and connect your model to the database using langchain tools, creating a chatbot that can read and retrieve data from structured data formats." + ] + }, { "cell_type": "markdown", "id": "116c547b-c569-4843-a6a9-e81c6c0f8252", "metadata": {}, "source": [ - "## Clean Up " + "## Clean up " ] }, {