
[Errno 2] No such file or directory #22

Open
bobcat7080 opened this issue Feb 23, 2024 · 2 comments

@bobcat7080

I am running into these errors and I'm not sure why:
Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: 'C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\data\output\1.srt'
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 96, in extract_audio_with_srt
subs = pysrt.open(srt_file)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 150, in process_audio_files
extract_audio_with_srt(audio_file_path, srt_file, speaker_segments_dir)
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 180, in main
process_audio_files(input_folder, settings)
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 183, in <module>
main()
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\data\output\1.srt'
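The FileNotFoundError comes from split_audio.py calling pysrt.open() on an .srt file that the earlier whisperx transcription step never wrote (that step itself crashed, as the second traceback shows). A minimal defensive sketch, assuming a hypothetical guard wrapped around the pysrt.open call (the helper name is mine, not from the repo):

```python
import os

def load_srt_if_present(srt_file):
    """Open an .srt with pysrt only if it actually exists.

    Hypothetical guard for extract_audio_with_srt: if transcription
    failed and no .srt was written, skip the file instead of raising
    FileNotFoundError (Errno 2).
    """
    if not os.path.isfile(srt_file):
        print(f"Warning: missing subtitle file {srt_file}; skipping.")
        return None
    import pysrt  # lazy import; third-party dependency of split_audio.py
    return pysrt.open(srt_file)
```

A caller would check for None and move on to the next audio file, so one failed transcription doesn't abort the whole batch.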

CUDA is available. Running on GPU.
The torchaudio backend is switched to 'soundfile'. Note that 'sox_io' is not supported on Windows.
The torchaudio backend is switched to 'soundfile'. Note that 'sox_io' is not supported on Windows.
Lightning automatically upgraded your loaded checkpoint from v1.5.4 to v2.2.0.post0. To apply the upgrade to your files permanently, run python -m pytorch_lightning.utilities.upgrade_checkpoint C:\Users\bobca\.cache\torch\whisperx-vad-segmentation.bin
Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
Model was trained with torch 1.10.0+cu102, yours is 2.0.0+cu118. Bad things might happen unless you revert torch to 1.x.

Performing transcription...
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Scripts\whisperx.exe\__main__.py", line 7, in <module>
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\transcribe.py", line 176, in cli
result = model.transcribe(audio, batch_size=batch_size, chunk_size=chunk_size, print_progress=print_progress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 218, in transcribe
for idx, out in enumerate(self.__call__(data(audio, vad_segments), batch_size=batch_size, num_workers=num_workers)):
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\pt_utils.py", line 124, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\base.py", line 1102, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 152, in _forward
outputs = self.model.generate_segment_batched(model_inputs['inputs'], self.tokenizer, self.options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 47, in generate_segment_batched
encoder_output = self.encode(features)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 86, in encode
return self.model.encode(features, to_cpu=to_cpu)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Library cublas64_12.dll is not found or cannot be loaded

@lobsterchan27

I think I'm having the same issue; any help would be appreciated. I had a previous issue from #16 (comment) and now have this one.

@lobsterchan27

I got it working by reinstalling CUDA 12.
