Conceptualizing subtitles
To understand something, our mind needs to build an image of how it actually works. That image can come from reading, from having things explained, or from looking at pictures. Sometimes even familiar concepts benefit from being explained from scratch, because that can unlock new points of view. Here we are going all the way in and using all of these ways 😄 🚀.
Subtitles can be seen as a sequence of phrases and words distributed in a non-linear manner over a timeline. These words are usually associated with the audio track, although this may depend on the media content. Theoretically, we can draw a distinction:
- Subtitles are for people who do not understand the language being spoken and therefore need some support to understand it;
- Captions are for people who have hearing issues and cannot hear the audio correctly. For them, additional details might be required;
In practice, technically speaking, both are rendered in the same way: the only difference is the amount of information that captions carry. Captions may include extra details, such as sound descriptions. Hence, the difference lies in the "data" provided to be shown. The typical situation can be seen in the picture below.
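To make this concrete, here is a minimal sketch of the idea (the type and field names are illustrative, not taken from any specific library or standard): a caption is just a cue with extra data, and the renderer only reads whatever data is there.

```typescript
// Illustrative model: both subtitles and captions are "cues" rendered the
// same way; captions simply carry extra data (e.g. sound descriptions).
interface Cue {
  startTime: number; // seconds on the media timeline
  endTime: number;
  text: string;
  soundDescription?: string; // extra detail typical of captions, e.g. "[door slams]"
}

// Rendering does not distinguish subtitles from captions:
// it only shows the data a cue provides.
function render(cue: Cue): string {
  return cue.soundDescription
    ? `${cue.soundDescription}\n${cue.text}`
    : cue.text;
}

const subtitle: Cue = { startTime: 1, endTime: 3, text: "Hello!" };
const caption: Cue = { ...subtitle, soundDescription: "[cheerful voice]" };
```

The same `render` function handles both: the caption simply produces an extra line, because it has more data to show.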
Once this data is identified as such, rather than treated as random binary data, it can be seen as a "track": a sequence of data with a specific meaning, ordered along the timeline.
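Under that definition, a track can be sketched as an ordered container of cues with an attached meaning (again, the names here are illustrative assumptions, not a real API):

```typescript
interface Cue {
  startTime: number;
  endTime: number;
  text: string;
}

// A track is a sequence of cues with a specific meaning (its kind),
// kept ordered by start time along the timeline.
interface Track {
  kind: "subtitles" | "captions";
  cues: Cue[];
}

// Inserting a cue keeps the track's ordering invariant: cues may arrive
// in any order, but the track always exposes them timeline-ordered.
function addCue(track: Track, cue: Cue): void {
  track.cues.push(cue);
  track.cues.sort((a, b) => a.startTime - b.startTime);
}

const track: Track = { kind: "subtitles", cues: [] };
addCue(track, { startTime: 5, endTime: 7, text: "second" });
addCue(track, { startTime: 1, endTime: 3, text: "first" });
```

Even though the cues were added out of order, the track remains "subsequently ordered", which is what makes it a track rather than raw data.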
< more to come... >