Documentation #28

Merged · 3 commits · Jun 20, 2024
5 changes: 4 additions & 1 deletion .gitignore
```diff
@@ -29,6 +29,7 @@ share/python-wheels/
 *.egg
 MANIFEST
 
+
 # PyInstaller
 # Usually these files are written by a python script from a template
 # before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -134,4 +135,6 @@ dmypy.json
 .vscode/
 
 #Mac Preview
-.DS_Store
+.DS_Store
+src/oxonfair/version.py
+src/oxonfair/version.py
```
32 changes: 14 additions & 18 deletions sklearn.md
```diff
@@ -87,23 +87,19 @@ Evaluate on the test set using
 
 Evaluate fairness using standard metrics with:
 
-fpred.evalaute_fairness()
-
-|                                                          |   original |   updated |
-|:---------------------------------------------------------|-----------:|----------:|
-| Class Imbalance                                          |  0.172661  | 0.172661  |
-| Demographic Parity                                       |  0.154614  | 0.0984474 |
-| Disparate Impact                                         |  0.325866  | 0.517006  |
-| Maximal Group Difference in Accuracy                     |  0.111118  | 0.102622  |
-| Maximal Group Difference in Recall                       |  0.146103  | 0.0195962 |
-| Maximal Group Difference in Conditional Acceptance Rate  |  0.411181  | 0.255626  |
-| Maximal Group Difference in Acceptance Rate              |  0.03979   | 0.144976  |
-| Maximal Group Difference in Specificity                  |  0.07428   | 0.033638  |
-| Maximal Group Difference in Conditional Rejectance Rate  |  0.0351948 | 0.0964846 |
-| Maximal Group Difference in Rejection Rate               |  0.101413  | 0.120799  |
-| Treatment Equality                                       |  0.172428  | 0.28022   |
-| Generalized Entropy                                      |  0.102481  | 0.105529  |
-
-Call `fpredict.predict( )`, and `fpredict.predict_proba( )` to score new data.
+fpred.evaluate_fairness()
+
+|                                                  |   original |    updated |
+|:-------------------------------------------------|-----------:|-----------:|
+| Statistical Parity                               |  0.157001  | 0.0783781  |
+| Predictive Parity                                |  0.0182043 | 0.092402   |
+| Equal Opportunity                                |  0.170043  | 0.00632223 |
+| Average Group Difference in False Negative Rate  |  0.170043  | 0.00632223 |
+| Equalized Odds                                   |  0.126215  | 0.0170495  |
+| Conditional Use Accuracy                         |  0.0526152 | 0.104521   |
+| Average Group Difference in Accuracy             |  0.10703   | 0.104349   |
+| Treatment Equality                               |  0.294522  | 0.131856   |
+
+Call `fpredict.predict()`, and `fpredict.predict_proba()` to score new data.
 
 Once the base predictor has been trained, and the object built, you can use the fair predictor in the same way as with autogluon. See [README.md](./README.md) for details.
```
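For reference, here is how the documented calls fit together end to end, pieced from this diff and the updated tests below. This is a sketch only: the `gm` import path and the `predictor`/`val_dict`/`val_groups`/`test_dict` fixtures are assumptions mirroring the test suite, not necessarily the documented sklearn API.

```python
import oxonfair
from oxonfair import group_metrics as gm  # `gm` alias as used in the tests; exact import path assumed


def fairness_workflow(predictor, val_dict, val_groups, test_dict):
    """Sketch of the documented flow; fixture names mirror this PR's tests."""
    fpred = oxonfair.FairPredictor(predictor, val_dict, val_groups)
    # Maximise accuracy subject to an equal-opportunity gap below 0.005,
    # the tolerance the updated tests assert against.
    fpred.fit(gm.accuracy, gm.equal_opportunity, 0.005)
    fpred.evaluate_fairness()  # produces the original/updated table shown above
    return fpred.predict(test_dict), fpred.predict_proba(test_dict)
```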
2 changes: 0 additions & 2 deletions src/oxonfair/version.py

This file was deleted.

8 changes: 4 additions & 4 deletions tests/unittests/test_callable.py
```diff
@@ -27,9 +27,9 @@ def square_align(array):
 def test_runs(use_fast=True):
     fpred = oxonfair.FairPredictor(sigmoid, val_dict, val_groups, inferred_groups=square_align,
                                    use_fast=use_fast)
-    fpred.fit(gm.accuracy, gm.equal_opportunity, 0.002)
+    fpred.fit(gm.accuracy, gm.equal_opportunity, 0.005)
     tmp = np.asarray(fpred.evaluate(metrics={'eo': gm.equal_opportunity}))[0, 1]
-    assert tmp < 0.002
+    assert tmp < 0.005
     fpred.plot_frontier()
     fpred.plot_frontier(test_dict)
     fpred.evaluate()
@@ -50,9 +50,9 @@ def test_runs_hybrid():
 
 def test_fairdeep(use_fast=True, use_true_groups=False):
     fpred = oxonfair.DeepFairPredictor(val_target, val, val_groups, use_fast=use_fast, use_actual_groups=use_true_groups)
-    fpred.fit(gm.accuracy, gm.equal_opportunity, 0.002)
+    fpred.fit(gm.accuracy, gm.equal_opportunity, 0.005)
     tmp = np.asarray(fpred.evaluate(metrics={'eo': gm.equal_opportunity}))[0, 1]
-    assert tmp < 0.002
+    assert tmp < 0.005
     fpred.plot_frontier()
     fpred.plot_frontier(test_dict)
     fpred.evaluate()
```
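The loosened threshold reads consistently in both hunks: the third positional argument to `fit` appears to act as a bound on the constraint metric, and each test then asserts that the measured equal-opportunity gap stays under that same value. A hedged illustration of the pattern, continuing from the fixtures in the diff above:

```python
import numpy as np

BOUND = 0.005  # loosened from 0.002 in this PR

# Fit with the constraint bound, then verify the achieved violation;
# `fpred` and `gm` are the fixtures/aliases from test_callable.py.
fpred.fit(gm.accuracy, gm.equal_opportunity, BOUND)
gap = np.asarray(fpred.evaluate(metrics={'eo': gm.equal_opportunity}))[0, 1]
assert gap < BOUND, f"equal-opportunity gap {gap:.4f} exceeds {BOUND}"
```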