chore: update versions (#17)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
zaripych and github-actions[bot] authored Jan 28, 2024
1 parent 1d26b8c commit ddb9704
Showing 6 changed files with 63 additions and 86 deletions.
6 changes: 0 additions & 6 deletions .changeset/chilly-games-fail.md

This file was deleted.

30 changes: 0 additions & 30 deletions .changeset/chilly-pumpkins-build.md

This file was deleted.

14 changes: 0 additions & 14 deletions .changeset/little-pumpkins-fix.md

This file was deleted.

35 changes: 0 additions & 35 deletions .changeset/tasty-readers-knock.md

This file was deleted.

62 changes: 62 additions & 0 deletions packages/refactor-bot/CHANGELOG.md
@@ -1,5 +1,67 @@
# refactor-bot

## 0.0.4

### Patch Changes

- [#16](https://github.com/zaripych/gpt-refactor-bot/pull/16) [`54d866e`](https://github.com/zaripych/gpt-refactor-bot/commit/54d866e2a215f75a0c65c4002fe9e191b4f015cf) Thanks [@zaripych](https://github.com/zaripych)! - fix: if an identifier is not found, provide LLM with suggestion to reduce specificity
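
  A minimal sketch of what this implies (the function name and message wording below are illustrative assumptions, not the actual implementation):

  ```ts
  /**
   * Illustrative sketch: when a fully-qualified identifier lookup fails,
   * suggest a less specific query that the LLM can retry with.
   */
  function suggestLessSpecificIdentifier(identifier: string): string | undefined {
      const parts = identifier.split('.');
      if (parts.length <= 1) {
          return undefined;
      }
      const lastPart = parts[parts.length - 1];
      return (
          `Identifier "${identifier}" was not found. ` +
          `Try a less specific query, for example "${lastPart}".`
      );
  }
  ```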

- [#16](https://github.com/zaripych/gpt-refactor-bot/pull/16) [`54d866e`](https://github.com/zaripych/gpt-refactor-bot/commit/54d866e2a215f75a0c65c4002fe9e191b4f015cf) Thanks [@zaripych](https://github.com/zaripych)! - feat: improve benchmarking command

Introduces changes to the report generated by the refactor bot so that we can gather better benchmark stats.

The benchmark command now outputs `promptTokens` and `completionTokens`.

The report generated by the benchmark command has been improved to include a difference comparison, outliers, and a list of the refactors with the lowest scores.

Example:

```sh
Benchmark results

METRIC │ A │ B │ DIFF
────────────────────────┼───────────┼───────────┼──────────
numberOfRuns │ 9.00 │ 10.00 │
score │ 0.83 │ 1.00 │ +17.28%
acceptedRatio │ 0.81 │ 1.00 │ +18.52%
totalTokens │ 44688.67 │ 50365.90 │ +12.70%
totalPromptTokens │ 40015.44 │ 48283.30 │ +20.66%
totalCompletionTokens │ 4673.22 │ 2082.60 │ -55.44%
wastedTokensRatio │ 0.09 │ 0.00 │ -9.49%
durationMs │ 286141.39 │ 171294.32 │ -40.14%
```
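
  As a rough guide to reading the `DIFF` column (inferred from the sample output above, not taken from the benchmark code): count-like metrics such as tokens and `durationMs` appear to be relative changes from A to B, while ratio-like metrics appear to be absolute percentage-point differences.

  ```ts
  // Illustrative sketch only, based on the sample output above.
  // Assumption: token and duration metrics use a relative change, while
  // score/acceptedRatio/wastedTokensRatio use percentage-point differences.
  function relativeChangePercent(a: number, b: number): number {
      return ((b - a) / a) * 100;
  }

  function percentagePointChange(a: number, b: number): number {
      return (b - a) * 100;
  }

  // relativeChangePercent(44688.67, 50365.9)    ~= +12.70 (totalTokens)
  // relativeChangePercent(286141.39, 171294.32) ~= -40.14 (durationMs)
  ```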

- [#16](https://github.com/zaripych/gpt-refactor-bot/pull/16) [`54d866e`](https://github.com/zaripych/gpt-refactor-bot/commit/54d866e2a215f75a0c65c4002fe9e191b4f015cf) Thanks [@zaripych](https://github.com/zaripych)! - fix: fail if eslint is not properly configured or installed instead of ignoring the errors

Previously, if eslint was not properly configured or installed, the refactor bot would ignore the errors because it failed to analyze the `stderr` of the `eslint` command.

It now properly fails with a message that explains the problem.

This should lead to better outcomes when configuring the refactor bot for the first time.
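
  A minimal sketch of the kind of check this implies, assuming eslint is invoked as a child process and its JSON output is parsed; the exact command and error messages are assumptions, not the refactor bot's actual code:

  ```ts
  import { spawnSync } from 'node:child_process';

  // Illustrative sketch: run eslint and fail loudly when it cannot run at all,
  // instead of silently treating unparseable output as "no lint issues".
  function runEslint(files: string[]): string {
      const result = spawnSync('npx', ['eslint', '--format', 'json', ...files], {
          encoding: 'utf8',
      });

      if (result.error || result.status === null) {
          throw new Error(
              'Failed to run eslint. Check that eslint is installed and ' +
                  'configured for this repository before running the refactor bot.'
          );
      }

      try {
          // eslint exits with a non-zero status when there are lint errors,
          // but its JSON output should still be parseable in that case.
          JSON.parse(result.stdout);
      } catch {
          throw new Error(
              `eslint did not produce parseable output. stderr was:\n${result.stderr}`
          );
      }

      return result.stdout;
  }
  ```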

- [#18](https://github.com/zaripych/gpt-refactor-bot/pull/18) [`1d26b8c`](https://github.com/zaripych/gpt-refactor-bot/commit/1d26b8cfe7dc956c01d5a1418942fdbd7ffbdc47) Thanks [@zaripych](https://github.com/zaripych)! - feat: introducing experimental chunky edit strategy

This strategy allows the LLM to perform edits via find-and-replace operations, which reduces the total number of completion tokens. Completion tokens are typically priced at twice the cost of prompt tokens. In addition to reducing cost, this strategy also significantly improves the performance of the refactoring.
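
  A minimal sketch of the kind of edit format such a strategy could use (the chunk shape and function below are illustrative assumptions, not the actual implementation):

  ```ts
  type FindReplaceChunk = {
      // exact text currently in the file, including enough surrounding context
      find: string;
      // text to put in its place
      replace: string;
  };

  // Illustrative sketch: apply LLM-produced find-and-replace chunks to a file's
  // contents instead of asking the model to regenerate the whole file, which is
  // what keeps the number of completion tokens low.
  function applyChunks(contents: string, chunks: FindReplaceChunk[]): string {
      return chunks.reduce((text, chunk) => {
          if (!text.includes(chunk.find)) {
              throw new Error('Cannot apply edit: the text to find is not present in the file');
          }
          return text.replace(chunk.find, chunk.replace);
      }, contents);
  }
  ```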

Here are benchmark results for the `chunky-edit` strategy:

```sh
METRIC │ A │ B │ DIFF
────────────────────────┼───────────┼───────────┼──────────
numberOfRuns │ 9.00 │ 10.00 │
score │ 0.83 │ 1.00 │ +17.28%
acceptedRatio │ 0.81 │ 1.00 │ +18.52%
totalTokens │ 44688.67 │ 50365.90 │ +12.70%
totalPromptTokens │ 40015.44 │ 48283.30 │ +20.66%
totalCompletionTokens │ 4673.22 │ 2082.60 │ -55.44%
wastedTokensRatio │ 0.09 │ 0.00 │ -9.49%
durationMs │ 286141.39 │ 171294.32 │ -40.14%
```

While it does seem to improve the score, this should just be considered variance introduced by the randomness of the LLM. The main outcomes of this strategy are the reduction in the number of completion tokens and the improvement in performance.

There might be other side effects, likely depending on the type of refactor, so this strategy is still experimental and must be selectively opted into via the `--experiment-chunky-edit-strategy` CLI option.
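
  A hypothetical opt-in invocation (only the flag name comes from this entry; the command and the way the CLI is launched are assumptions):

  ```sh
  # hypothetical example - the exact command and other options are assumptions
  npx refactor-bot refactor --experiment-chunky-edit-strategy
  ```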

## 0.0.3

### Patch Changes
2 changes: 1 addition & 1 deletion packages/refactor-bot/package.json
@@ -1,6 +1,6 @@
{
"name": "refactor-bot",
"version": "0.0.3",
"version": "0.0.4",
"description": "Refactor your codebase using ChatGPT, ts-morph and Plan and Execute techniques",
"keywords": [
"gpt",
