BlockingError: No records have been blocked together. #1179
What if you try lowering your threshold? (Assuming you do have a bunch of training data being used.)
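For reference, the threshold in the PostgreSQL example is the one passed to deduper.cluster; lowering it keeps lower-confidence pairs in the clusters. A minimal sketch, assuming the record_pairs helper and read_cur cursor names from that example:

```python
# Minimal sketch of lowering the clustering threshold, assuming the
# record_pairs() helper and read_cur cursor from the PostgreSQL example.
# A lower threshold keeps candidate pairs with lower match probabilities.
clustered_dupes = deduper.cluster(
    deduper.score(record_pairs(read_cur)),
    threshold=0.3,  # example value; the original script uses 0.5
)
```

Note that this threshold only affects clustering of pairs that have already been scored, so it will not by itself avoid a BlockingError raised during scoring.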
Also, I have found that 'dirty' data in the training data can cause the process to halt with that error.
I have tried everything, including lowering the threshold and using clean data, but nothing has worked so far. However, I may have traced the error to this line of code: b_data = deduper.fingerprinter(full_data). The lines before it, including full_data = ((row['donor_id'], row) for row in read_cur), are working as expected and I am able to print them out and inspect them.
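One way to check whether that call is actually producing any blocks is to peek at the first few (block_key, record_id) pairs the fingerprinter yields. A hedged diagnostic sketch, assuming the full_data and read_cur names from the PostgreSQL example (it consumes the cursor, so rerun the query before the real blocking pass):

```python
from itertools import islice

# Diagnostic sketch: peek at the first few blocking keys. If this prints
# nothing, no records are being blocked, which is what later surfaces as
# BlockingError at scoring time. Assumes full_data is an iterable of
# (record_id, record_dict) pairs, as in the PostgreSQL example.
full_data = ((row['donor_id'], row) for row in read_cur)
for block_key, record_id in islice(deduper.fingerprinter(full_data), 10):
    print(block_key, record_id)
```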
@fgregg please help with this, thanks!
Please provide a reproducible example.
That is not reproducible. I need the data, the training data, and the settings file.
If you cannot share that because of privacy issues, we offer consulting services.
Okay, thank you. One more question: when I print out full_data, this is the output I get. Is this the expected format?
Yes.
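For context, the shape the fingerprinter expects is an iterable of (record_id, record_dict) pairs, where the dict keys match the fields dedupe was trained on. An illustrative sketch with made-up field names and values:

```python
# Illustrative only: the field names and values below are invented, not from
# the original data. Each element is a (record_id, record_dict) pair.
full_data_example = [
    (1, {'name': 'john smith', 'address': '123 main st', 'city': 'chicago'}),
    (2, {'name': 'j. smith', 'address': '123 main street', 'city': 'chicago'}),
]
```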
I am getting the following blocking error: BlockingError: No records have been blocked together. Is the data you are trying to match like the data you trained on? If so, try adding more training data.
I figured the issue comes from this line of code:
clustered_dupes = deduper.cluster(deduper.score(record_pairs(read_cur)), threshold=0.5)
I am using the PostgreSQL example.
BlockingError Traceback (most recent call last)
Cell In[3], line 398
396 print('clustering...')
397 record_pairs_data = record_pairs(read_cur)
--> 398 clustered_dupes = deduper.cluster(deduper.score(record_pairs_data), threshold=0.5)
421 print('writing results...')
422 with write_con:
File ~/dss_home/code-envs/python/pyconda_38/lib/python3.8/site-packages/dedupe/api.py:125, in IntegralMatching.score(self, pairs)
116 """
117 Scores pairs of records. Returns pairs of tuples of records id and
118 associated probabilities that the pair of records are match
(...)
122
123 """
124 try:
--> 125 matches = core.scoreDuplicates(
126 pairs, self.data_model.distances, self.classifier, self.num_cores
127 )
128 except RuntimeError:
129 raise RuntimeError(
130 """
131 You need to either turn off multiprocessing or protect
(...)
134 https://docs.python.org/3/library/multiprocessing.html#the-spawn-and-forkserver-start-methods"""
135 )
File ~/dss_home/code-envs/python/pyconda_38/lib/python3.8/site-packages/dedupe/core.py:126, in scoreDuplicates(record_pairs, featurizer, classifier, num_cores)
124 first, record_pairs = peek(record_pairs)
125 if first is None:
--> 126 raise BlockingError(
127 "No records have been blocked together. "
128 "Is the data you are trying to match like "
129 "the data you trained on? If so, try adding "
130 "more training data."
131 )
133 record_pairs_queue: _Queue = Queue(2)
134 exception_queue: _Queue = Queue()
BlockingError: No records have been blocked together. Is the data you are trying to match like the data you trained on? If so, try adding more training data.
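The traceback shows the error is raised inside core.scoreDuplicates as soon as it peeks at the pairs iterator and finds it empty. A hedged diagnostic sketch, assuming the record_pairs helper and read_cur cursor from the PostgreSQL example, to confirm whether any candidate pairs actually reach the scoring step (it consumes the cursor, so rerun the query before the real run):

```python
from itertools import islice

# Diagnostic sketch: materialize a few candidate pairs before scoring.
# An empty list here is exactly the condition that core.scoreDuplicates
# turns into BlockingError.
sample = list(islice(record_pairs(read_cur), 5))
if not sample:
    print('record_pairs produced no candidate pairs; '
          'check the blocking_map population step upstream')
else:
    print(f'{len(sample)} sample pairs found, e.g.: {sample[0]}')
```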