feat(search): Adds logic to download search results #4893
Conversation
Force-pushed from 535f37a to e8f6fd3
Force-pushed from b3ba8d5 to 8721c41
Semgrep found 3 findings from the `avoid-pickle` rule (Avoid using pickle). Semgrep found 1 finding: Detected direct use of jinja2. If not done properly, this may bypass HTML escaping, which opens up the application to cross-site scripting (XSS) vulnerabilities. Prefer using the Flask method 'render_template()' and templates with a '.html' extension in order to prevent XSS.
Force-pushed from 8721c41 to ae29bba
This commit refactors the search module by moving helper functions from `view.py` to `search_utils.py`. This improves code organization and makes these helper functions reusable across different modules.
Force-pushed from ae29bba to 92cddf5
I gave this a once-over and it feels about right. Concerns I'll highlight for you guys to consider:
- Memory: We're putting the CSV in memory, which sure is handy. I think this is fine b/c it'll be pretty small, a couple hundred KB, right? This must be fine, but it's on my mind.
- The fields in the result might be annoying with columns that aren't normalized to human values (like `SOURCE: CR` or something, and `local_path: /recap/gov.xxxx.pdf` instead of `https://storage.courtlistener.com/recap/gov.xxx.pdf`). I didn't see code to fix that, but it's probably something we should do if we can. This CSV is supposed to be for humans, in theory. (A rough sketch of what I mean follows this comment.)
I appreciate the refactor, but I'd suggest it in a separate PR in the future, so it's not mixed in.
But this looks about right to me otherwise. :)
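To illustrate the normalization idea from the second point above (nothing here is in the PR; the field names, URL handling, and SOURCE mapping are placeholders):

# Illustrative sketch only; real field names and the SOURCE mapping may differ.
STORAGE_PREFIX = "https://storage.courtlistener.com"
SOURCE_LABELS = {"CR": "Human-readable label for CR"}  # placeholder mapping


def humanize_row(row: dict) -> dict:
    """Rewrite machine-oriented values into human-readable ones for the CSV."""
    if row.get("local_path"):
        # e.g. /recap/gov.xxxx.pdf -> https://storage.courtlistener.com/recap/gov.xxxx.pdf
        row["local_path"] = f"{STORAGE_PREFIX}/{row['local_path'].lstrip('/')}"
    if row.get("source") in SOURCE_LABELS:
        row["source"] = SOURCE_LABELS[row["source"]]
    return row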
@ERosendo this looks good and is on the right track. I’ve left some comments and suggestions in the code, along with additional feedback here:
- In addition to Mike's comment about normalizing values for humans, a related suggestion: the CSV headers don't maintain a fixed order when the CSV is generated, and I found it difficult to determine when results belong to the same "Case," particularly when matching child documents. It might be a good idea to ensure that the headers are fixed for each search type and to prioritize key headers that help identify whether the results belong to the same case. For instance, in RECAP, the headers could start like this (a rough sketch of one way to pin the order follows this comment):
  `docket_id, docket_number, pacer_case_id, court_exact, case_name, document_number, attachment_number, ...`
- Highlighted fields in the results are always represented as a list of terms, even though we are only highlighting a single fragment. Most of the HL fields are not naturally lists of terms; however, some fields, such as citations in case law, can be lists and are also highlighted. As in the frontend, perhaps you could use the `render_string_or_list` filter, or a modified version of it, to render HL fields as strings instead of lists when they are not multi-fields?
- Regarding Judge Search, I noticed that the CSV only contains fields from `PersonDocument`, and some fields currently rendered as flat fields in the frontend (such as "Appointers" and other similar fields extracted from the database in `merge_unavailable_fields_on_parent_document`) are not included. Is this behavior expected?
- I'd recommend adding at least one integration test to help catch and prevent future regressions related to the suggestions and bugs mentioned in the comments.
Thank you!
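To illustrate the fixed-header idea from the first bullet above (a sketch only; the constant name and the helper are placeholders, not existing code):

# Placeholder sketch: a fixed, per-search-type column order so the CSV headers
# are stable and the case-identifying fields always come first.
RECAP_CSV_HEADER_PRIORITY = [
    "docket_id",
    "docket_number",
    "pacer_case_id",
    "court_exact",
    "case_name",
    "document_number",
    "attachment_number",
]


def order_csv_headers(all_keys: set[str], priority: list[str]) -> list[str]:
    """Put the priority columns first, then the remaining keys in sorted order."""
    remaining = sorted(all_keys - set(priority))
    return [key for key in priority if key in all_keys] + remaining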
keys = set(
    [
        *DocketDocument.__dict__["_fields"].keys(),
        *ESRECAPDocument.__dict__["_fields"].keys(),
    ]
)
I think this can be simplified as:
keys = {
    *DocketDocument.__dict__["_fields"].keys(),
    *ESRECAPDocument.__dict__["_fields"].keys(),
}
Similar for SEARCH_TYPES.OPINION
    search result.
    """
    csv_rows: list[dict[str, Any]] = []
    while len(csv_rows) <= settings.MAX_SEARCH_RESULTS_EXPORTED:
Could you please explain whether the while loop is required here? I did some testing, and I think this could work without the while loop, avoiding the additional queries for each page needed to reach `MAX_SEARCH_RESULTS_EXPORTED` results, since you're already using `do_es_search` with `MAX_SEARCH_RESULTS_EXPORTED` rows.
To make this work for RECAP, you may need to pass a parameter to `do_es_search` to indicate the query is for CSV generation. This would ensure the `rows` variable for both RECAP and Dockets within `do_es_search` matches `MAX_SEARCH_RESULTS_EXPORTED`, since it is currently overridden for RECAP to 10 documents per page by the setting `rows = settings.RECAP_SEARCH_PAGE_SIZE`.
Additionally, since CSV exports don't require count queries, you could use this same parameter to skip the count queries in `fetch_es_results`. This would avoid the overhead of the parent and child count queries currently performed via the Multi-Search API. A rough sketch of what this flag could look like is below.
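Purely to illustrate the flag idea (the real `do_es_search` and `fetch_es_results` have different signatures; every parameter name and body here is a placeholder):

# Hypothetical sketch only; not the project's actual functions.
from django.conf import settings


def fetch_es_results(search_query, rows, skip_counts=False):
    if not skip_counts:
        # Normal page views still issue the parent/child count queries via the
        # Multi-Search API; CSV exports would skip them.
        pass
    return search_query[:rows].execute()


def do_es_search(search_query, for_csv_export=False):
    rows = settings.RECAP_SEARCH_PAGE_SIZE  # normal RECAP page size (10 docs)
    if for_csv_export:
        # Fetch the full export cap in a single query instead of paginating.
        rows = settings.MAX_SEARCH_RESULTS_EXPORTED
    return fetch_es_results(search_query, rows, skip_counts=for_csv_export)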
flat_results = []
for result in results.object_list:
    parent_dict = result.to_dict()
    child_docs = parent_dict.pop("child_docs")
I tested a search query that returned some results for empty dockets, and it failed on this line:
`KeyError: 'child_docs'`
Instead of `pop` we should use `get`:
child_docs = parent_dict.get("child_docs")
if child_docs:
    flat_results.extend(
        [
            parent_dict | doc["_source"].to_dict()
I noticed that you want to include HL in the CSV results, correct?
The highlights are being shown correctly on highlighted child fields, and in parent fields for parent documents with no matched child documents. However, highlights are not being shown on parent fields for results that matched child documents.
I think the issue lies in this line, where `parent_dict` fields are merged with the child document's fields. Since child documents also contain parent fields, but parent fields are not highlighted in child documents, those fields override the highlighted parent fields from `parent_dict`. A possible direction is sketched below.
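One possible direction, just as a sketch (it keeps the existing flattening approach but reverses the merge order so the parent's highlighted values win over the un-highlighted copies carried in each child document):

# Sketch: start from the child document's fields and merge the parent's fields
# last, so highlighted parent values survive the merge; any child field that
# intentionally shares a name with a parent field would need special handling.
flat_results.extend(
    [doc["_source"].to_dict() | parent_dict for doc in child_docs]
)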
Hi {{username}},

Your requested search results are attached as a CSV file.
This PR implements the backend logic for exporting search results (#599).
Key changes:
- Introduces a new rate limiter to throttle CSV export requests to 5 per day.
- Adds a new setting named `MAX_SEARCH_RESULTS_EXPORTED` (default: 250) to control the maximum number of rows included in the generated CSV file.
- Refactors the `view.py` file within the search module. Helper functions related to fetching Elasticsearch results have been moved to the `search_utils.py` file for better organization and clarity.
- Introduces two new helper functions: `fetch_es_results_for_csv` and `get_headers_for_search_export`.
- Adds a new task that takes the `user_id` and the `query` string as input. It then sends an email with a CSV file containing at most `MAX_SEARCH_RESULTS_EXPORTED` rows (a rough sketch of such a task follows this list).
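For context on the last bullet, here is a minimal sketch of what such a task could look like. The task name and the use of `fetch_es_results_for_csv` / `get_headers_for_search_export` follow the description above, but their signatures and the body are assumptions, not the PR's actual implementation:

# Hypothetical sketch based on the description above; not the PR's actual code.
import csv
import io

from celery import shared_task
from django.conf import settings
from django.contrib.auth.models import User
from django.core.mail import EmailMessage


@shared_task
def email_search_results(user_id: int, query: str) -> None:
    """Build a CSV capped at MAX_SEARCH_RESULTS_EXPORTED rows and email it."""
    user = User.objects.get(pk=user_id)

    # Assumed helpers from this PR; their real signatures may differ.
    rows = fetch_es_results_for_csv(query)[: settings.MAX_SEARCH_RESULTS_EXPORTED]
    headers = get_headers_for_search_export(query)

    # Build the CSV in memory; at a few hundred rows it stays small.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=headers, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)

    message = EmailMessage(
        subject="Your search results are ready",
        body="Your requested search results are attached as a CSV file.",
        to=[user.email],
    )
    message.attach("search_results.csv", buffer.getvalue(), "text/csv")
    message.send()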