PR4: Write code to test your data after labeling (can use Cleanlab or Deepchecks) #14

Merged 3 commits on Sep 11, 2024
22 changes: 22 additions & 0 deletions homework_4/pr4/README.md
@@ -0,0 +1,22 @@
# PR4: Write code to test your data after labeling (can use Cleanlab or Deepchecks)


# Cleanlab Discoveries

Collaborator: great!


**Duplicate Issues**

- Cleanlab identified 6 near-duplicate examples in our dataset (3 unique pairs, each reported in both directions).
- All of them belong to category 4 or category 8.
- The flagged pairs are listed in `duplicate_issues.csv`.

**Label Issues**

- Cleanlab identified 4 label issues in our dataset.
- All of them have label scores of roughly 0.20 or below, which is quite low.
- The mislabeled emails were given category 4 or category 2.
- Detailed analysis of the label issues can be found in `label_issues_scores.csv` and `label_issues.csv`.

**Outlier Issues**

- Cleanlab identified 1 outlier issue in our dataset.
- It belongs to category 1 and has an outlier score below 0.20.
- Detailed analysis of the outlier issue can be found in `outlier_issues_scores.csv` and `outlier_issues.csv`.
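
**Reproducing the summary**

The counts above come from the Datalab run in `main.py`. A minimal sketch of how the same summary can be printed from the fitted `Datalab` object (method names follow the cleanlab 2.6 `Datalab` API; assumes `lab.find_issues(...)` has already been called as in `main.py`):

```python
# Assumes `lab` is the Datalab instance from main.py, after lab.find_issues(...).
summary = lab.get_issue_summary()        # one row per issue type: count and overall score
print(summary)

label_issues = lab.get_issues("label")   # per-example table for a single issue type
print(int(label_issues["is_label_issue"].sum()), "label issues flagged")
```

Alternatively, `lab.report()` prints a combined report covering all detected issue types.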
7 changes: 7 additions & 0 deletions homework_4/pr4/duplicate_issues.csv
@@ -0,0 +1,7 @@
Original_Email,Original_Category,Duplicate_Email,Duplicate_Category
"Sehr geehrte Damen und Herren, ich möchte um die Kopie meines Vertrags bitten.",8,"Sehr geehrte Damen und Herren, ich möchte eine Kopie meines Vertrags anfordern.",8
"Guten Tag, ich möchte meinen Vertrag schnellstmöglich kündigen.",4,"Guten Tag, ich möchte den Vertrag so schnell wie möglich kündigen.",4
"Guten Tag, ich möchte meine Bestellung stornieren.",4,"Guten Tag, ich möchte meine Bestellung stornieren.",4
"Sehr geehrte Damen und Herren, ich möchte eine Kopie meines Vertrags anfordern.",8,"Sehr geehrte Damen und Herren, ich möchte um die Kopie meines Vertrags bitten.",8
"Guten Tag, ich möchte meine Bestellung stornieren.",4,"Guten Tag, ich möchte meine Bestellung stornieren.",4
"Guten Tag, ich möchte den Vertrag so schnell wie möglich kündigen.",4,"Guten Tag, ich möchte meinen Vertrag schnellstmöglich kündigen.",4
5 changes: 5 additions & 0 deletions homework_4/pr4/label_issues.csv
@@ -0,0 +1,5 @@
Email,Category
"Sehr geehrter Kundenservice, ich möchte mein Internet-Abo zum Monatsende kündigen.",4
"Ich habe den Service von Ihnen bereits gekündigt, aber ich erhalte weiterhin Rechnungen.",4
"Guten Tag, können Sie mir bitte die Zahlungseingangsbestätigung zusenden?",2
"Guten Tag, ich habe ein Problem mit der letzten Abbuchung.",2
5 changes: 5 additions & 0 deletions homework_4/pr4/label_issues_scores.csv
@@ -0,0 +1,5 @@
is_label_issue,label_score,given_label,predicted_label
True,0.20127963476428865,4,6
True,0.1453738242128867,4,2
True,0.14309154875404048,2,5
True,0.09542877980390857,2,6
56 changes: 56 additions & 0 deletions homework_4/pr4/main.py
@@ -0,0 +1,56 @@
import pandas as pd
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

from cleanlab import Datalab

import warnings
warnings.filterwarnings("ignore")


def main():
    # Read the labeled email dataset into a pandas DataFrame
    df = pd.read_parquet('synthetic_reviews.parquet')

    raw_texts, labels = df["Email"].values, df["Category"].values
    num_classes = len(set(labels))

    # Embed the (German) email texts with a multilingual sentence-transformer
    transformer = SentenceTransformer('distiluse-base-multilingual-cased-v2')
    text_embeddings = transformer.encode(raw_texts)

    # Out-of-sample predicted probabilities via cross-validation
    model = LogisticRegression(max_iter=400)
    pred_probs = cross_val_predict(model, text_embeddings, labels, method="predict_proba")

    # Run Cleanlab's Datalab checks on the predictions and embeddings
    data_dict = {"texts": raw_texts, "labels": labels}
    lab = Datalab(data_dict, label_name="labels", verbosity=0)
    lab.find_issues(pred_probs=pred_probs, features=text_embeddings)

    # Export flagged label issues: the emails themselves and their Cleanlab scores
    label_issues = lab.get_issues("label")
    label_issues_idx = label_issues[label_issues["is_label_issue"] == True].index.to_numpy()
    label_issues_df = df.iloc[label_issues_idx]
    label_issues_df.to_csv('label_issues.csv', index=False)
    label_issues[label_issues["is_label_issue"] == True].to_csv('label_issues_scores.csv', index=False)

    # Export flagged outliers
    outlier_issues = lab.get_issues("outlier")
    outlier_issues_idx = outlier_issues[outlier_issues["is_outlier_issue"] == True].index.to_numpy()
    outlier_issues_df = df.iloc[outlier_issues_idx]
    outlier_issues_df.to_csv('outlier_issues.csv', index=False)
    outlier_issues[outlier_issues["is_outlier_issue"] == True].to_csv('outlier_issues_scores.csv', index=False)

    # Export near-duplicate pairs: each flagged example is paired with the members
    # of its near_duplicate_sets entry (each set holds a single index in this dataset)
    duplicate_issues = lab.get_issues("near_duplicate")
    duplicate_issues_idx = duplicate_issues[duplicate_issues["is_near_duplicate_issue"] == True].index.to_numpy()
    duplicate_issues_idx_2 = duplicate_issues[duplicate_issues["is_near_duplicate_issue"] == True].near_duplicate_sets.to_numpy()

    duplicate_issues_idx_2 = [item for sublist in duplicate_issues_idx_2 for item in sublist]

    duplicates_df = pd.concat([df.loc[duplicate_issues_idx].reset_index(drop=True),
                               df.loc[duplicate_issues_idx_2].reset_index(drop=True)], axis=1)
    duplicates_df.columns = ['Original_Email', 'Original_Category', 'Duplicate_Email', 'Duplicate_Category']
    duplicates_df.to_csv('duplicate_issues.csv', index=False)

Collaborator: nice! Could you add a report to the README? What was Cleanlab able to find?


Author: Done



if __name__ == '__main__':
    main()
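
Note that `main.py` also relies on pandas, scikit-learn, sentence-transformers, and a parquet engine such as pyarrow, even though only cleanlab is pinned in `requirements.txt`. As a quick sanity check of the exported reports, the CSVs can be read back with pandas; a minimal sketch (assumes `main.py` has already been run in this directory):

```python
import pandas as pd

# Lowest label_score = most confidently mislabeled examples
scores = pd.read_csv("label_issues_scores.csv").sort_values("label_score")
print(scores[["label_score", "given_label", "predicted_label"]])

# Near-duplicate pairs are reported once in each direction, so 6 rows = 3 unique pairs
pairs = pd.read_csv("duplicate_issues.csv")
print(len(pairs), "near-duplicate rows")
```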
2 changes: 2 additions & 0 deletions homework_4/pr4/outlier_issues.csv
@@ -0,0 +1,2 @@
Email,Category
Ich habe Fragen zu Ihrer Geschäftslösung und wie wir sie in unserem Unternehmen einsetzen können.,1
2 changes: 2 additions & 0 deletions homework_4/pr4/outlier_issues_scores.csv
@@ -0,0 +1,2 @@
is_outlier_issue,outlier_score
True,0.18030228
1 change: 1 addition & 0 deletions homework_4/pr4/requirements.txt
@@ -0,0 +1 @@
cleanlab==2.6.6
Binary file added homework_4/pr4/synthetic_reviews.parquet
Binary file not shown.