TFLite converter efficientnet-b0 error #511
@n-berezina-nn Natalia, could you help figure out why the efficientnet-b0 model from OpenModelZoo does not convert to the tflite format?
@n-berezina-nn, @FenixFly, I converted the model with the following script:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('./efficientnet-b0/saved_model/')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS
]
converter.allow_custom_ops = True
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

I run inference with the following command line:

```
python3 ./inference_tensorflowlite.py -m ./efficientnet-b0/model.tflite -i ./data/ -b 1 -t classification --output_names logits -l ./labels/image_net_synset.txt --input_names sub[1,224,224,3]
```

and get the following output:

```
2024-03-20 22:07:52.309100: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-20 22:07:52.309335: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-20 22:07:52.311233: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-20 22:07:52.334658: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-20 22:07:52.738092: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[ INFO ] Loading network files:
/home/itmm/Documents/kustikova_v/public/efficientnet-b0/efficientnet-b0/model.tflite
INFO: Created TensorFlow Lite delegate for select TF ops.
INFO: TfLiteFlexDelegate delegate: 64 nodes delegated out of 270 nodes with 64 partitions.
2024-03-20 22:07:52.998300: E tensorflow/core/framework/node_def_util.cc:676] NodeDef mentions attribute use_inter_op_parallelism which is not in the op definition: Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID", "EXPLICIT"]; attr=explicit_paddings:list(int),default=[]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]> This may be expected if your graph generating binary is newer than this binary. Unknown attributes will be ignored. NodeDef: {{node DepthwiseConv2dNative}}
[ INFO ] Shape for input layer sub:0: 1x224x224x3
[ INFO ] Preparing input data: ['/home/itmm/Documents/kustikova_v/data/']
[ INFO ] Starting inference (1 iterations)
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[ ERROR ] Traceback (most recent call last):
File "/home/itmm/Documents/kustikova_v/upstream/dl-benchmark/src/inference/./inference_tensorflowlite.py", line 286, in main
result, inference_time = inference_tflite(interpreter, args.number_iter, io.get_slice_input, args.time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/itmm/Documents/kustikova_v/upstream/dl-benchmark/src/inference/./inference_tensorflowlite.py", line 191, in inference_tflite
interpreter.allocate_tensors()
File "/home/itmm/miniconda/envs/tflite_converter_env/lib/python3.11/site-packages/tensorflow/lite/python/interpreter.py", line 531, in allocate_tensors
return self._interpreter.AllocateTensors()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Encountered unresolved custom op: swish_f320.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom Node number 2 (swish_f320) failed to prepare.Encountered unresolved custom op: swish_f320.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom Node number 2 (swish_f320) failed to prepare.
```

If I understand this correctly, it means that the converter decided that the operator …
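A quick way to check which operators actually ended up in the converted flatbuffer (and whether the swish function survived as an unresolved custom op) is to dump the interpreter's node list before calling `allocate_tensors()`, which is where the failure above occurs. A minimal sketch; note that `_get_ops_details()` is a private TensorFlow Lite helper whose output format may change between TF versions, and the model path is the one used earlier in this thread:

```python
import tensorflow as tf

# Creating the interpreter succeeds even with an unresolved custom op;
# only allocate_tensors() fails, so the graph can still be inspected.
interpreter = tf.lite.Interpreter(model_path='./efficientnet-b0/model.tflite')

# _get_ops_details() returns one dict per node; a custom op such as
# swish_f32 appears here under its custom op name.
for op in interpreter._get_ops_details():
    print(op['index'], op['op_name'])
```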
@FenixFly, @n-berezina-nn, I compared in Netron the OMZ model in saved_model format that we want to convert against the tflite model that was sent over. The OMZ model contains an explicit swish_f32 layer (diagram below), while the model that was sent has no such transformations at all. This means that either a different model was used for the conversion, or the model was somehow transformed before being converted.
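As a scriptable complement to the Netron comparison, the function library of the saved_model graph can be listed directly; if the custom swish is present, a function named like `__inference_swish_f32_730` shows up there. A minimal sketch, assuming the OMZ model exposes the usual `serving_default` signature:

```python
import tensorflow as tf

# Load the saved_model and take the serving signature's graph.
loaded = tf.saved_model.load('./efficientnet-b0/saved_model/')
concrete_fn = loaded.signatures['serving_default']

# The graph's function library contains every captured tf.function;
# a custom swish implementation appears as __inference_swish_f32_NNN.
for fn in concrete_fn.graph.as_graph_def().library.function:
    print(fn.signature.name)
```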
I am trying to convert the TF model efficientnet-b0 from OpenModelZoo. tflite_converter.py fails with an error about the unsupported layer __inference_swish_f32_730. I have tried different tensorflow versions, most recently tensorflow==2.12.0 with tensorflow-addons==0.19, tensorflow-estimator==2.12.0 and tensorflow-probability==0.19.
Conversion command line:
Script output with the error (excerpt; full output at https://gist.github.com/FenixFly/3c523abc679934a7df6b67ad006c7ad6):
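One detail worth noting when reproducing this: with `converter.allow_custom_ops = True` (as in the script earlier in the thread) the converter silently writes unresolved ops such as swish_f32 into the flatbuffer, and the failure only surfaces later in `allocate_tensors()`. Leaving the flag at its default `False` makes `convert()` itself fail with an error naming the offending ops, which is usually easier to debug. A minimal sketch with the same paths as above:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('./efficientnet-b0/saved_model/')
# Fall back to select TF ops where possible, but keep allow_custom_ops at
# its default (False): any op that can be lowered neither to a builtin nor
# to a select TF op then fails the conversion immediately, instead of
# deferring the error to interpreter.allocate_tensors().
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
```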