Engineering:Subtitles

From HandWiki
Short description: Textual representation of events and speech in motion imagery
Film with subtitles in English. Quotation dashes are used to differentiate speakers.

Subtitles are text representing the contents of the audio in a film, television show, opera or other audiovisual media. Subtitles might provide a transcription or translation of spoken dialogue. Although naming conventions can vary, captions are subtitles that include written descriptions of other elements of the audio like music or sound effects. Captions are thus especially helpful to people who are deaf or hard-of-hearing. Subtitles may also add information that is not present in the audio. Localized subtitles provide cultural context to viewers. For example, a subtitle could be used to explain to an audience unfamiliar with sake that it is a type of Japanese wine. Lastly, subtitles are sometimes used for humor, as in Annie Hall, where subtitles show the characters' inner thoughts, which contradict what they are saying in the audio.

Creating, delivering, and displaying subtitles is a complicated and multi-step endeavor. First, the text of the subtitles needs to be written. When there is plenty of time to prepare, this process can be done by hand. However, for media produced in real-time, like live television, it may be done by stenographers or using automated speech recognition. Subtitles written by fans, rather than more official sources, are referred to as fansubs. Regardless of who does the writing, they must include information on when each line of text should be displayed.

Second, subtitles need to be distributed to the audience. Open subtitles are added directly to recorded video frames and thus cannot be removed once added. On the other hand, closed subtitles are stored separately, allowing subtitles in different languages to be used without changing the video itself. In either case, a wide variety of technical approaches and formats are used to encode the subtitles.

Third, subtitles need to be displayed to the audience. Open subtitles are always shown whenever the video is played because they are part of it. However, displaying closed subtitles is optional since they are overlaid onto the video by whatever is playing it. For example, media player software might be used to combine closed subtitles with the video itself. In some theaters or venues, a dedicated screen or screens are used to display subtitles. If that dedicated screen is above rather than below the main display area, the subtitles are called surtitles.

Methods

Sometimes, mainly at film festivals, subtitles may be shown on a separate display below the screen, thus saving the filmmaker from creating a subtitled copy for perhaps just one showing.

Creation, delivery, and display of subtitles

Professional subtitlers usually work with specialized computer software and hardware where the video is digitally stored on a hard disk, making each frame instantly accessible. Besides creating the subtitles, the subtitler usually tells the computer software the exact positions where each subtitle should appear and disappear. For cinema films, this task is traditionally done by separate technicians. The result is a subtitle file containing the actual subtitles and position markers indicating where each subtitle should appear and disappear. These markers are usually based on timecode if it is a work for electronic media (e.g., TV, video, DVD) or on film length (measured in feet and frames) if the subtitles are to be used for traditional cinema film.
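The two kinds of position marker described above can be sketched in a few lines. The example below is a minimal illustration in Python; the cue text and times are invented, the SubRip (.srt) timecode layout is used as one widely known example of a subtitle-file format, and the 16-frames-per-foot figure assumed for film length is the conventional value for 35mm film:

```python
def ms_to_timecode(ms: int) -> str:
    """Format milliseconds as an HH:MM:SS,mmm timecode (SubRip-style marker)."""
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

def frames_to_feet_and_frames(frame: int, frames_per_foot: int = 16) -> str:
    """Express a frame count as feet+frames, as used to measure 35mm film length."""
    feet, frames = divmod(frame, frames_per_foot)
    return f"{feet}+{frames:02d}"

def srt_cue(index: int, start_ms: int, end_ms: int, text: str) -> str:
    """Build one subtitle cue: index, in/out timecode markers, then the text."""
    return f"{index}\n{ms_to_timecode(start_ms)} --> {ms_to_timecode(end_ms)}\n{text}\n"

print(srt_cue(1, 3_500, 6_000, "- Where have you been?\n- At the cinema."))
```

A cue like this tells the playback equipment exactly when each subtitle should appear and disappear, which is precisely the information the subtitler's software stores alongside the subtitle text.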

The finished subtitle file is used to add the subtitles to the picture, either:

  • directly into the picture (open subtitles);
  • embedded in the vertical blanking interval and later superimposed on the picture by the end user with the help of an external decoder or a decoder built into the TV (closed subtitles on TV or video);
  • or converted (rendered) to tiff or bmp graphics that are later superimposed on the picture by the end user's equipment (closed subtitles on DVD or as part of a DVB broadcast).

Subtitles can also be created by individuals using freely available subtitle-creation software like Subtitle Workshop for Windows, MovieCaptioner for Mac/Windows, and Subtitle Composer for Linux, and then hardcoded onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.

Automatic captioning

Some programs and online software allow automatic captions, mainly using speech-to-text features.

For example, on YouTube, automatic captions are available in English, Dutch, French, German, Italian, Japanese, Korean, Portuguese, Russian, Indonesian, Spanish, Turkish, Ukrainian and Vietnamese. If automatic captions are available for the language, they will automatically be published on the video.[1][2]

Same-language captions

Same-language captions, i.e., without translation, were primarily intended as an aid for people who are deaf or hard-of-hearing.

Closed captions

Main page: Engineering:Closed captioning
The "CC in a TV" symbol, created by Jack Foley while he was senior graphic designer at WGBH (the Boston public broadcaster that invented captioning for television), is in the public domain, so anyone who captions TV programs can use it.

Closed captioning is the American term for closed subtitles specifically intended for people who are deaf or hard-of-hearing. These are a transcription rather than a translation, and usually also contain lyrics and descriptions of important non-dialogue audio such as (SIGHS), (WIND HOWLING), ("SONG TITLE" PLAYING), (KISSES), (THUNDER RUMBLING) and (DOOR CREAKING). From the expression "closed captions", the word "caption" has in recent years come to mean a subtitle intended for the deaf or hard-of-hearing, be it "open" or "closed". In British English, "subtitles" usually refers to subtitles for the deaf or hard-of-hearing (SDH); however, the term "SDH" is sometimes used when there is a need to make a distinction between the two.

Real time

Programs such as news bulletins, current affairs programs, sports, some talk shows, and political and special events utilize real time or online captioning.[3] Live captioning is increasingly common, especially in the United Kingdom and the United States, as a result of regulations stipulating that virtually all TV must eventually be accessible to people who are deaf and hard-of-hearing.[4] In practice, however, these "real time" subtitles typically lag the audio by several seconds due to the inherent delay in transcribing, encoding, and transmitting the subtitles. Real time subtitles are also prone to typographic errors or mishearing of the spoken words, with no time available to correct them before transmission.

Pre-prepared

Some programs may be prepared in their entirety several hours before broadcast, but with insufficient time to prepare a timecoded caption file for automatic play-out. Pre-prepared captions look similar to offline captions, although the accuracy of cueing may be compromised slightly as the captions are not locked to program timecode.[3]

Newsroom captioning involves the automatic transfer of text from the newsroom computer system to a device which outputs it as captions. It does work, but its suitability as an exclusive system would only apply to programs which had been scripted in their entirety on the newsroom computer system, such as short interstitial updates.[3]

In the United States and Canada, some broadcasters have used it exclusively and simply left uncaptioned sections of the bulletin for which a script was unavailable.[3] Newsroom captioning limits captions to pre-scripted materials and, therefore, does not cover 100% of the news, weather and sports segments of a typical local news broadcast which are typically not pre-scripted. This includes last-second breaking news or changes to the scripts, ad-lib conversations of the broadcasters, and emergency or other live remote broadcasts by reporters in-the-field. By failing to cover items such as these, newsroom style captioning (or use of the teleprompter for captioning) typically results in coverage of less than 30% of a local news broadcast.[5]

Live

Communication access real-time translation (CART) stenographers, who use a computer with either stenotype or Velotype keyboards to transcribe stenographic input for presentation as captions within two or three seconds of the corresponding audio, must caption anything which is purely live and unscripted;[3] however, more recent developments include operators using speech recognition software and re-voicing the dialogue. Speech recognition technology has advanced so quickly in the United States that about 50% of all live captioning was done through speech recognition as of 2005.[citation needed] Real-time captions look different from offline captions, as they are presented as a continuous flow of text as people speak.[3]

Stenography is a system of rendering words phonetically, and English, with its multitude of homophones (e.g., there, their, they're), is particularly unsuited to easy transcriptions. Stenographers working in courts and inquiries usually have 24 hours in which to deliver their transcripts. Consequently, they may enter the same phonetic stenographic codes for a variety of homophones, and fix up the spelling later. Real-time stenographers must deliver their transcriptions accurately and immediately. They must therefore develop techniques for keying homophones differently, and be unswayed by the pressures of delivering accurate product on immediate demand.[3]
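The homophone problem can be seen in miniature with a toy dictionary. In the sketch below (Python; the stroke codes are invented for illustration and do not follow any real stenographic theory), an offline dictionary can let one phonetic code stand for several spellings, to be fixed up later, whereas a real-time dictionary must commit to one deliberately distinct stroke per spelling, because there is no editing pass before transmission:

```python
# Offline: one phonetic code covers all homophones; the correct spelling
# is chosen later, during the 24-hour editing window.
offline_dictionary = {
    "THAIR": ["there", "their", "they're"],
}

# Real time: each spelling gets its own distinct stroke, since the output
# goes to air immediately and cannot be corrected.
realtime_dictionary = {
    "THR": "there",
    "THAIR": "their",
    "THAIR*": "they're",
}

def caption(strokes, dictionary):
    """Translate a stroke sequence immediately, with no later editing pass."""
    return " ".join(dictionary[s] for s in strokes)

print(caption(["THAIR", "THR"], realtime_dictionary))  # their there
```

The design point is that the real-time mapping must be unambiguous at keying time: disambiguation work that an offline stenographer defers to the transcript stage is pushed back onto the captioner's trained stroke choices.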

Submissions to recent captioning-related inquiries have revealed concerns from broadcasters about captioning sports. In November 1998, the Australian Caption Centre submitted to the National Working Party on Captioning (NWPC) three examples of sport captioning, each performed on tennis, rugby league and swimming programs:

  • Heavily reduced: Captioners ignore commentary and provide only scores and essential information such as "try" or "out".
  • Significantly reduced: Captioners use QWERTY input to type summary captions yielding the essence of what the commentators are saying, delayed due to the limitations of QWERTY input.
  • Comprehensive realtime: Captioners use stenography to caption the commentary in its entirety.[3]

The NWPC concluded that the standard they accept is the comprehensive real-time method, which gives them access to the commentary in its entirety. Also, not all sports are live. Many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.[3]

Hybrid

Because different programs are produced under different conditions, captioning methodology must be determined on a case-by-case basis. Some bulletins may have a high incidence of truly live material, or the captioning facility may be given insufficient access to video feeds and scripts, making stenography unavoidable. Other bulletins may be pre-recorded just before going to air, making pre-prepared text preferable.[3]

News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.[3]

Current affairs programs usually require stenographic assistance. Even though the segments which comprise a current affairs program may be produced in advance, they are usually done so just before on-air time and their duration makes QWERTY input of text unfeasible.[3]

News bulletins, on the other hand, can often be captioned without stenographic input (unless there are live crosses or ad-libbing by the presenters). This is because:

  • Most items are scripted on the newsroom computer system and this text can be electronically imported into the captioning system.
  • Individual news stories are of short duration, so even if they are made available only just prior to broadcast, there is still time to input the text via QWERTY.[3]

Offline

For non-live, or pre-recorded programs, television program providers can choose offline captioning. Captioners gear offline captioning toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.[6]

Offline captioning involves a five-step design and editing process, and does much more than simply display the text of a program. Offline captioning helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.[6]

Subtitles for the deaf or hard-of-hearing (SDH)

Subtitles for the deaf or hard-of-hearing (SDH) is an American term introduced by the DVD industry.[7] It refers to regular subtitles in the original language where important non-dialogue information has been added, as well as speaker identification, which may be useful when the viewer cannot otherwise visually tell who is saying what.

The only significant difference for the user between SDH subtitles and closed captions is their appearance: SDH subtitles usually are displayed with the same proportional font used for the translation subtitles on the DVD; however, closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor as many users have no difficulty reading SDH subtitles, which are displayed as text with a contrasting outline. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still be preferable to DVD subtitles, since many SDH subtitles present all of the text centered (an example of this is DVDs and Blu-ray Discs manufactured by Warner Bros.), while closed captions usually specify position on the screen: centered, left align, right align, top, etc. This is helpful for speaker identification and overlapping conversation. Some SDH subtitles (such as the subtitles of newer Universal Studios DVDs/Blu-ray Discs and most 20th Century Fox Blu-ray Discs, and some Columbia Pictures DVDs) do have positioning, but it is not as common.

DVDs for the U.S. market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for viewers who may not be hearing impaired but whose first language may not be English (although they are usually an exact transcript and not simplified); and closed caption data that is decoded by the end-user's closed caption decoder. Most anime releases in the U.S. only include translations of the original material as subtitles; therefore, SDH subtitles of English dubs ("dubtitles") are uncommon.[8][9]

High-definition disc media (HD DVD, Blu-ray Disc) uses SDH subtitles as the sole method because technical specifications do not require HD to support line 21 closed captions. Some Blu-ray Discs, however, are said to carry a closed caption stream that only displays through standard-definition connections. Many HDTVs allow the end-user to customize the captions, including the ability to remove the black band.

Song lyrics are not always captioned, as additional copyright permissions may be required to reproduce the lyrics on-screen as part of the subtitle track. In October 2015, major studios and Netflix were sued over this practice, citing claims of false advertising (as the work is henceforth not completely subtitled) and civil rights violations (under California's Unruh Civil Rights Act, guaranteeing equal rights for people with disabilities). Judge Stephen Victor Wilson dismissed the suit in September 2016, ruling that allegations of civil rights violations did not present evidence of intentional discrimination against viewers with disabilities, and that allegations over misrepresenting the extent of subtitles "fall far short of demonstrating that reasonable consumers would actually be deceived as to the amount of subtitled content provided, as there are no representations whatsoever that all song lyrics would be captioned, or even that the content would be 'fully' captioned."[10][11]

Use by hearing people for convenience

Although same-language subtitles and captions are produced primarily with the deaf and hard-of-hearing in mind, many others use them for convenience. Subtitles are increasingly popular among younger viewers for improved understanding and faster comprehension. Subtitles allow viewers to understand dialogue that is poorly enunciated, delivered quietly, in unfamiliar dialects, or spoken by background characters. A 2021 UK survey found that 80% of viewers between 18 and 25 regularly used subtitles, while less than a quarter of those between 56 and 75 did.[12][13][14]

Same-language subtitling

Same language subtitling (SLS) is the use of synchronized captioning of musical lyrics (or any text with an audio/video source) as a repeated reading activity. The basic reading activity involves students viewing a short subtitled presentation projected onscreen, while completing a response worksheet. To be really effective, the subtitling should have high-quality synchronization of audio and text; ideally, the subtitling should change color in syllabic synchronization with the audio, and the text should be at a level that challenges students' language abilities.[15][16] Studies (including those by the University of Nottingham and the What Works Clearinghouse of the United States Department of Education) have found that use of subtitles can help promote reading comprehension in school-aged children.[17] Same-language captioning can improve literacy and reading growth across a broad range of reading abilities.[18][19] It is used for this purpose by national television broadcasters in China and in India such as Doordarshan.[citation needed][18][20]

Asia

In some Asian television programming, captioning is considered a part of the genre, and has evolved beyond simply capturing what is being said. The captions are used artistically; it is common to see the words appear one by one as they are spoken, in a multitude of fonts, colors, and sizes that capture the spirit of what is being said. Languages like Japanese also have a rich vocabulary of onomatopoeia which is used in captioning.

Chinese-speaking world

In some East Asian countries, especially Chinese-speaking ones, subtitling is common in all taped television programs and films. In these countries, written text remains mostly uniform while regional dialects in the spoken form can be mutually unintelligible. Therefore, subtitling offers a distinct advantage to aid comprehension. With subtitles, programs in Mandarin or any dialect can be understood by viewers unfamiliar with it.

According to HK Magazine, the practice of captioning in Standard Chinese was pioneered in Hong Kong during the 1960s by Run Run Shaw of Shaw Brothers Studio. In a bid to reach the largest audience possible, Shaw had already recorded his films in Mandarin, reasoning that it would be the most universal variety of Chinese. However, this did not guarantee that the films could be understood by non-Mandarin-speaking audiences, and dubbing into different varieties was seen as too costly. The decision was thus made to include Standard Chinese subtitles in all Shaw Brothers films. As the films were made in British-ruled Hong Kong, Shaw also decided to include English subtitles to reach English speakers in Hong Kong and to allow for exports outside Asia.[21]

Japanese reality television

On-screen subtitles as seen in Japanese variety and other reality television shows are more for decorative purposes, something that is not seen in television in Europe and the Americas. Some shows even place sound effects over those subtitles. This practice of subtitling has spread to neighbouring countries including South Korea and Taiwan. ATV in Hong Kong once practiced this style of decorative subtitles on its variety shows while it was owned by Want Want Holdings in Taiwan (which also owns CTV and CTI) during 2009.

Translation

Translation is the conversion of one language into another in written or spoken form. Subtitles can be used to translate dialogue from a foreign language into the native language of the audience. Subtitling is not only the quickest and cheapest method of translating content, but it is also usually preferred because the audience can hear the original dialogue and the actors' voices.

Subtitle translation can differ from the translation of written text. Usually, during the process of creating subtitles for a film or television program, the subtitle translator analyzes the picture and each sentence of the audio; the translator may or may not have access to a written transcript of the dialogue. Especially in the field of commercial subtitles, the subtitle translator often interprets what is meant, rather than translating the manner in which the dialogue is stated; that is, the meaning is more important than the form. The audience does not always appreciate this, as it can be frustrating for people who are familiar with some of the spoken language; spoken language may contain verbal padding or culturally implied meanings that cannot be conveyed in the written subtitles. The subtitle translator may also condense the dialogue to achieve an acceptable reading speed, again putting purpose before form.

Especially in fansubs, the subtitle translator may translate both form and meaning. The subtitle translator may also choose to display a note in the subtitles, usually in parentheses ("(" and ")"), or as a separate block of on-screen text—this allows the subtitle translator to preserve form and achieve an acceptable reading speed; that is, the subtitle translator may leave a note on the screen, even after the character has finished speaking, to both preserve form and facilitate understanding. For example, Japanese has multiple first-person pronouns (see Japanese pronouns) and each pronoun is associated with a different degree of politeness. In order to compensate during the English translation process, the subtitle translator may reformulate the sentence, add appropriate words and/or use notes.

Subtitling

Real-time

Real-time translation subtitling usually involves an interpreter and a stenographer working concurrently, whereby the former quickly translates the dialogue while the latter types; this form of subtitling is rare. The unavoidable delay, typing errors, lack of editing, and high cost mean that real-time translation subtitling is in low demand. Allowing the interpreter to directly speak to the viewers is usually both cheaper and quicker; however, the translation is not accessible to people who are deaf and hard-of-hearing.

Offline

Some subtitlers purposely provide edited subtitles or captions to match the needs of their audience, for learners of the spoken dialogue as a second or foreign language, visual learners, beginning readers who are deaf or hard of hearing and for people with learning and/or mental disabilities. For example, for many of its films and television programs, PBS displays standard captions representing speech from the program audio, word-for-word, if the viewer selects "CC1" by using the television remote control or on-screen menu; however, they also provide edited captions to present simplified sentences at a slower rate, if the viewer selects "CC2". Programs with a diverse audience also often have captions in another language. This is common with popular Latin American soap operas in Spanish. Since CC1 and CC2 share bandwidth, the U.S. Federal Communications Commission (FCC) recommends translation subtitles be placed in CC3. CC4, which shares bandwidth with CC3, is also available, but programs seldom use it.

Subtitles vs. dubbing and lectoring

The two alternative methods of 'translating' films in a foreign language are dubbing, in which other actors record over the voices of the original actors in a different language, and lectoring, a form of voice-over for fictional material where a narrator tells the audience what the actors are saying while their voices can be heard in the background. Lectoring is common for television in Russia, Poland, and a few other East European countries, while cinemas in these countries commonly show films dubbed or subtitled.

The preference for dubbing or subtitling in various countries is largely based on decisions made in the late 1920s and early 1930s. With the arrival of sound film, the film importers in Germany, Italy, France, Switzerland, Luxembourg, Austria, San Marino, Liechtenstein, Monaco, the Czech Republic, Slovakia, Hungary, Belarus, Ukraine, Russia, Andorra, Spain and the United Kingdom decided to dub the foreign voices, while the rest of Europe elected to display the dialogue as translated subtitles. The choice was largely due to financial reasons (subtitling is more economical and quicker than dubbing), but during the 1930s it also became a political preference in Germany, Italy and Spain: an expedient form of censorship that ensured foreign views and ideas could be stopped from reaching the local audience, since dubbing makes it possible to create a dialogue which is totally different from the original. In larger German cities a few "special cinemas" use subtitling instead of dubbing.

Dubbing is still the norm and favored form in these countries, but the proportion of subtitling is slowly growing, mainly to save cost and turnaround time, but also due to a growing acceptance among younger generations, who are better readers and increasingly have a basic knowledge of English (the dominant language in film and TV) and thus prefer to hear the original dialogue.

Nevertheless, in Spain, for example, only public TV channels show subtitled foreign films, usually at late night. It is extremely rare that any Spanish TV channel shows subtitled versions of TV programs, series or documentaries. With the advent of digital land broadcast TV, it has become common practice in Spain to provide optional audio and subtitle streams that allow watching dubbed programs with the original audio and subtitles. In addition, only a small proportion of cinemas show subtitled films. Films with dialogue in Galician, Catalan or Basque are always dubbed, not subtitled, when they are shown in the rest of the country. Some non-Spanish-speaking TV stations subtitle interviews in Spanish; others do not.

In many Latin American countries, local network television will show dubbed versions of English-language programs and movies, while cable stations (often international) more commonly broadcast subtitled material. Preference for subtitles or dubbing varies according to individual taste and reading ability, and theaters may order two prints of the most popular films, allowing moviegoers to choose between dubbing or subtitles. Animation and children's programming, however, are nearly universally dubbed, as in other regions.

Since the introduction of the DVD and, later, the Blu-ray Disc, some high-budget films include options for both subtitles and dubbing. Often in such cases, the translations are made separately, rather than the subtitles being a verbatim transcript of the dubbed scenes of the film. While this allows for the smoothest possible flow of the subtitles, it can be frustrating for someone attempting to learn a foreign language.

In the traditional subtitling countries, dubbing is generally regarded as something strange and unnatural and is only used for animated films and TV programs intended for pre-school children. As animated films are "dubbed" even in their original language and ambient noise and effects are usually recorded on a separate sound track, dubbing a low quality production into a second language produces little or no noticeable effect on the viewing experience. In dubbed live-action television or film, however, viewers are often distracted by the fact that the audio does not match the actors' lip movements. Furthermore, the dubbed voices may seem detached, inappropriate for the character, or overly expressive, and some ambient sounds may not be transferred to the dubbed track, creating a less enjoyable viewing experience.

Subtitling as a practice

A map of Europe showing which audiovisual translation methods are preferred in each country.
  Subtitles: Countries in Europe where dubbing is only used for children's programs and family films; otherwise, subtitles are solely used.
  Mixed areas: Countries in Europe occasionally using either multi-voice voice-over translations (Bulgaria) or dubbing (Turkey, Northern Cyprus); otherwise, subtitles are solely used, with the exception of children's programs and family films.
  Voice-over: Countries in Europe usually using voice-over translations which feature one or two voices, while lowering the volume of the original soundtrack; examples are Poland, Russia and Lithuania. This method is mainly used in television broadcasts and home media, but these countries generally also use dubbing or subtitles in other contexts.
  General dubbing: Countries in Europe where dubbing is used for most foreign-language films and TV series.
  Countries which occasionally produce their own dubbings, but generally use dubbed versions from neighboring countries produced in mutually intelligible languages (Belarus and Slovakia).
  Belgium: The Dutch-speaking region occasionally produces its own dubbing versions (but usually uses the same ones as the Netherlands), otherwise solely subtitles. The French-speaking region of Wallonia and the German-speaking region of East Belgium use exclusively full-cast dubbing for both films and TV series.

In several countries or regions, nearly all foreign-language TV programs are subtitled rather than dubbed.

It is also common for television services in minority languages to subtitle their programs in the dominant language as well. Examples include the Welsh-language S4C and Irish-language TG4, which subtitle in English, and the Swedish-language Yle Fem in Finland, which subtitles in the majority language, Finnish.

In Wallonia (Belgium) films are usually dubbed, but sometimes they are played on two channels at the same time: one dubbed (on La Une) and the other subtitled (on La Deux), but this is no longer done as frequently due to low ratings.

In Australia, one free-to-air (FTA) network, SBS, airs its foreign-language shows subtitled in English.

Categories

Subtitles in the same language on the same production can be in different categories:

  • Hearing Impaired subtitles (sometimes abbreviated as HI or SDH) are intended for people who are hearing impaired, providing information about music, environmental sounds and off-screen speakers (e.g. when a doorbell rings or a gunshot is heard). In other words, they indicate the kinds and the sources of the sounds coming from the movie, and usually put this information inside brackets to demarcate it from actors' dialogues. For example: [sound of typing on a keyboard], [mysterious music], [glass breaks], [woman screaming].
  • Narrative is the most common type of subtitle, in which spoken dialogue is displayed. These are most commonly used to translate a film with one spoken language into the text of a second language.
  • Forced subtitles are common on movies and only provide subtitles when the characters speak a foreign or alien language, or a sign, flag, or other text in a scene is not translated in the localization and dubbing process. In some cases, foreign dialogue may be left untranslated if the movie is meant to be seen from the point of view of a particular character who does not speak the language in question. For example, in Steven Spielberg's Amistad the dialogue of the Spanish slave traders is subtitled, while African languages are left untranslated.[24]
  • Content subtitles are a North American secondary-industry (non-Hollywood, often low-budget) staple. They add content dictation that is missing from filmed action or dialogue. Because such films generally have low budgets, it is often more feasible to add overlay subtitles to fill in information. They are most commonly seen on America's Maverick films as forced subtitles, and on Canada's MapleLeaf films as optional subtitles. Content subtitles also appear at the beginning of some higher-budget films (e.g., Star Wars) or at the end of a film (e.g., Gods and Generals).
  • Titles only are typically used by dubbed programs and provide only the text for any untranslated on-screen text. They are most commonly forced (see above).
  • Bonus subtitles are an additional set of text blurbs that are added to DVDs. They are similar to Blu-ray Discs' in-movie content or to the "info nuggets" in VH1 Pop-up Video. Often shown in popup or balloon form, they point out background, behind-the-scenes information relative to what is appearing on screen, often indicating filming and performance mistakes in continuity or consistency.
  • Localized subtitles are a separate subtitle track that uses expanded references (e.g., "The sake [a Japanese wine] was excellent, as was the wasabi") or can replace the standardized subtitle track with a localized form that replaces references to local custom (e.g., from above, "The wine was excellent, as was the spicy dip").
  • Extended/Expanded subtitles combine the standard subtitle track with the localization subtitle track. Originally found only on Celestial DVDs in the early 2000s, the format has expanded to many export-intended releases from China, Japan, India, and Taiwan. The term "Expanded Subtitles" is owned by Celestial, with "Extended Subtitles" being used by other companies.

Types

Subtitles exist in two forms: open subtitles are "open to all" and cannot be turned off by the viewer; closed subtitles are designed for a certain group of viewers and can usually be turned on/off or selected by the viewer – examples being teletext pages, U.S. closed captions (608/708), DVB bitmap subtitles, and DVD/Blu-ray subtitles.

While distributing content, subtitles can appear in one of three types:

  • Hard (also known as hardsubs or open subtitles). The subtitle text is irreversibly merged into the original video frames, so no special equipment or software is required for playback. Hence, complex transition effects and animation can be implemented, such as karaoke song lyrics using various colors, fonts, sizes and animation (like a bouncing ball) to follow the lyrics. However, these subtitles cannot be turned off, as they are now part of the original frames, unless the original video is also included in the distribution, and it is thus impossible to have several variants of subtitling, such as in multiple languages.
  • Prerendered (also known as closed) subtitles are separate video frames that are overlaid on the original video stream while playing. Prerendered subtitles are used on DVD and Blu-ray (though they are contained in the same file as the video stream). It is possible to turn them off or switch among multiple language tracks, but the player has to support such subtitles to display them. Also, such subtitles are usually encoded as images with minimal bitrate and number of colors, and they usually lack anti-aliased font rasterization. Changing such subtitles is hard, but special OCR software, such as SubRip, exists to convert them to "soft" ones.
  • Soft (also known as softsubs or closed subtitles) are, like closed captions, separate instructions, usually specially marked-up text with time stamps to be optionally displayed during playback. This requires player support and, moreover, there are multiple incompatible (but usually reciprocally convertible) subtitle file formats, but it enables greater versatility in post-production. Softsubs are relatively easy to create and change, and are thus frequently used for fansubs. Text rendering quality can vary depending on the player, but is generally higher than for prerendered subtitles. Also, some formats introduce text-encoding troubles for the end user, especially if different languages are used simultaneously (for example, Latin and Asian scripts). A time-stamped subtitle track also allows for accurate timekeeping after pausing a video recording, which would otherwise cause a discrepancy between the elapsed duration of the recording (counted from the clock time memorized at the start) and the real clock time. Camcorders may record additional metadata such as technical parameters (aperture, exposure value, exposure duration, photosensitivity, etc.).[25]

In another categorization, digital video subtitles are sometimes called internal if they are embedded in a single video file container along with the video and audio streams, and external if they are distributed as a separate file (less convenient, but such a file is easier to edit or change).
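The timing model that softsub formats share can be illustrated with SubRip (.srt), one of the simplest and most widely supported formats: each cue is a sequence number, a timing line, and one or more lines of text. The following is a minimal parser sketch; the sample cues are invented for illustration.

```python
import re

# Minimal parser for the SubRip (.srt) softsub format: numbered cues,
# an "HH:MM:SS,mmm --> HH:MM:SS,mmm" timing line, then the cue text.
TIME = r"(\d{2}):(\d{2}):(\d{2}),(\d{3})"

def to_ms(h, m, s, ms):
    """Convert an SRT timestamp's fields to elapsed milliseconds."""
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def parse_srt(text):
    """Return a list of (start_ms, end_ms, text) cues from SRT source."""
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        m = re.match(TIME + r" --> " + TIME, lines[1])
        start = to_ms(*m.groups()[:4])
        end = to_ms(*m.groups()[4:])
        cues.append((start, end, "\n".join(lines[2:])))
    return cues

sample = """\
1
00:00:01,000 --> 00:00:04,000
- Hello.
- Hi there.

2
00:00:05,500 --> 00:00:07,250
Goodbye.
"""

cues = parse_srt(sample)
print(cues[0])   # (1000, 4000, '- Hello.\n- Hi there.')
```

Because the cues carry explicit elapsed-time stamps rather than frame numbers, the same subtitle file works regardless of the video's frame rate, which is one reason elapsed-time formats dominate the table below.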

Comparison table
Feature | Hard | Prerendered | Soft
Can be turned off/on | No | Yes | Yes
Multiple subtitle variants (for example, languages) | Yes, though all displayed at the same time | Yes | Yes
Editable | No | Difficult, but possible | Yes
Player requirements | None | Majority of players support DVD subtitles | Usually requires installation of special software, unless national regulators mandate its distribution
Visual appearance, colors, font quality | Low to high, depending on video resolution/compression | Low | Low to high, depending on player and subtitle file format
Transitions, karaoke and other special effects | Highest | Low | Depends on player and subtitle file format
Distribution | Inside original video | Separate low-bitrate video stream, commonly multiplexed | Relatively small subtitle file or instructions stream, multiplexed or separate
Additional overhead | None, though subtitles added by re-encoding of the original video may degrade overall image quality, and the sharp edges of text may introduce artifacts in surrounding video | High | Low

Subtitle formats

For software video players

Sortable table
Name | Extension | Type | Text styling | Metadata | Timings | Timing precision
AQTitle | .aqt | Text | Yes | Yes | Framings | As frames
EBU-TT-D[26] | N/A | XML | Yes | Yes | Elapsed time | Unlimited
Gloss Subtitle | .gsub | HTML/XML | Yes | Yes | Elapsed time | 10 milliseconds
JACOSub[27] | .jss | Text with markup | Yes | No | Elapsed time | As frames
MicroDVD | .sub | Text | No | No | Framings | As frames
MPEG-4 Timed Text | .ttxt (or mixed with A/V stream) | XML | Yes | No | Elapsed time | 1 millisecond
MPSub | .sub | Text | No | Yes | Sequential time | 10 milliseconds
Ogg Writ | N/A (embedded in Ogg container) | Text | Yes | Yes | Sequential granules | Dependent on bitstream
Phoenix Subtitle | .pjs | Text | No | No | Framings | As frames
PowerDivX | .psb | Text | No | No | Elapsed time | 1 second
RealText[28] | .rt | HTML | Yes | No | Elapsed time | 10 milliseconds
SAMI | .smi | HTML | Yes | Yes | Framings | As frames
Spruce subtitle format[29] | .stl | Text | Yes | Yes | Sequential time+frames | Sequential time+frames
Structured Subtitle Format | .ssf | XML | Yes | Yes | Elapsed time | 1 millisecond
SubRip | .srt | Text | Yes | No | Elapsed time | 1 millisecond
(Advanced) SubStation Alpha[30] | .ssa or .ass (advanced) | Text | Yes | Yes | Elapsed time | 10 milliseconds
SubViewer | .sub or .sbv | Text | No | Yes | Elapsed time | 10 milliseconds
Universal Subtitle Format | .usf | XML | Yes | Yes | Elapsed time | 1 millisecond
VobSub | .sub + .idx | Image | N/A | N/A | Elapsed time | 1 millisecond
WebVTT | .vtt | HTML | Yes | Yes | Elapsed time | 1 millisecond
XSUB | N/A (embedded in .divx container) | Image | N/A | N/A | Elapsed time | 1 millisecond

There are still many more uncommon formats. Most of them are text-based and have the extension .txt.
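Several of the text formats in the table differ only in small syntactic details. For instance, converting SubRip to WebVTT (both with 1 ms timing precision) mostly amounts to prepending the WEBVTT header and changing the decimal separator in the timing lines. The following is a naive sketch that ignores styling, positioning and character-encoding concerns:

```python
def srt_to_vtt(srt_text):
    """Naively convert SubRip (.srt) cues to WebVTT: prepend the
    mandatory WEBVTT header and switch the millisecond separator in
    timing lines from ',' to '.'. SRT cue numbers happen to be valid
    WebVTT cue identifiers, so they can be left in place."""
    out = []
    for line in srt_text.splitlines():
        if "-->" in line:   # timing line, e.g. 00:00:01,000 --> 00:00:04,000
            line = line.replace(",", ".")
        out.append(line)
    return "WEBVTT\n\n" + "\n".join(out)

vtt = srt_to_vtt("1\n00:00:01,000 --> 00:00:04,000\nHello.\n")
print(vtt.splitlines()[3])   # 00:00:01.000 --> 00:00:04.000
```

A production converter would also handle SRT's basic HTML-like tags and strip any styling the target player cannot render; this sketch only illustrates why the two formats are "usually reciprocally convertible", as noted above.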

For media

For cinema movies shown in a theatre:

  • Cinema
  • D-Cinema: digital projection of movie in DCP format

For movies on DVD Video:

  • DVD-Video subtitles (related to VobSub)
  • Blu-ray Disc subtitles (related to PGS)

For TV broadcast:

  • Teletext
  • DVB Subtitles (DVB-SUB)
  • Philips Overlay Graphics Text
  • Imitext

Subtitles created for TV broadcast are stored in a variety of file formats. The majority of these formats are proprietary to the vendors of subtitle insertion systems.

Broadcast subtitle formats include: .ESY, .XIF, .X32, .PAC, .RAC, .CHK, .AYA, .890, .CIP, .CAP, .ULT, .USF, .CIN, .L32, .ST4, .ST7, .TIT, .STL

The EBU format defined by Technical Reference 3264-E[31] is an 'open' format intended for subtitle exchange between broadcasters. Files in this format have the extension .stl (not to be confused with the text-based "Spruce subtitle format" mentioned above, which also uses the .stl extension).

For internet delivery:

The Timed Text format, currently a "Candidate Recommendation" of the W3C (called DFXP[32]), is also proposed as an 'open' format for subtitle exchange and distribution to media players, such as Microsoft Silverlight.
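A Timed Text (TTML/DFXP) document is plain XML, so a minimal file can be generated with standard tools. The sketch below uses Python's standard library; the element names and namespace follow the W3C TTML specification, while the cue content is invented for illustration, and real documents typically also carry head, styling and layout sections.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a Timed Text (TTML/DFXP) document. The TTML
# namespace and element names are per the W3C spec; the single cue
# ("Hello.") is invented example content.
TTML = "http://www.w3.org/ns/ttml"
XML = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("", TTML)  # serialize TTML as the default namespace

tt = ET.Element(f"{{{TTML}}}tt", {f"{{{XML}}}lang": "en"})
div = ET.SubElement(ET.SubElement(tt, f"{{{TTML}}}body"), f"{{{TTML}}}div")

# One cue: a <p> element whose begin/end attributes carry the timing.
cue = ET.SubElement(div, f"{{{TTML}}}p",
                    {"begin": "00:00:01.000", "end": "00:00:04.000"})
cue.text = "Hello."

doc = ET.tostring(tt, encoding="unicode")
print(doc)
```

Because the timing lives in ordinary XML attributes, TTML files can be validated, transformed and styled with generic XML tooling, which is much of the appeal of an XML-based exchange format.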

Reasons for not subtitling a foreign language

In most cases when a foreign language is spoken in a film, subtitles are used to translate the dialogue for the viewer. However, there are occasions when foreign dialogue is left unsubtitled (and thus incomprehensible to most of the target audience). This is often done if the movie is seen predominantly from the viewpoint of a particular character who does not speak the language. Such absence of subtitles allows the audience to feel a similar sense of incomprehension and alienation to what the character feels. An example of this is seen in Not Without My Daughter: the Persian dialogue spoken by the Iranian characters is not subtitled because the main character, Betty Mahmoody, does not speak Persian and the audience sees the film from her viewpoint.

A variation of this was used in the video game Max Payne 3. Subtitles are used for the English, Spanish (Chapter 11 only) and Portuguese (Chapters 1, 2, 3, 5, 6, 7, 9, 10, 12, 13 and 14 only) dialogue, but the Portuguese is left untranslated[33] as the main character does not understand the language.

Subtitles as a source of humor

Occasionally, movies will use subtitles as a source of humor, parody and satire.

  • In Annie Hall, the characters of Woody Allen and Diane Keaton are having a conversation; their real thoughts are shown in subtitles.
  • In Austin Powers in Goldmember, Japanese dialogue is subtitled using white type that blends in with white objects in the background. An example is when white binders turn the subtitle "I have a huge rodent problem" into "I have a huge rod." After many cases of this, Mr. Roboto says "Why don't I just speak English?", in English. In the same film, Austin and Nigel Powers directly speak in Cockney English to make the content of their conversation unintelligible; subtitles appear for the first part of the conversation, but then cease and are replaced with a series of question marks.
  • In Yellow Submarine, the Beatles use the subtitles of "All you need is love" to defeat a giant glove.
  • In The Impostors, one character speaks in a foreign language, while another character hides under the bed. Although the hidden character cannot understand what is being spoken, he can read the subtitles. Since the subtitles are overlaid on the film, they appear to be reversed from his point of view. His attempt to puzzle out these subtitles enhances the humor of the scene.
  • The movie Airplane! and its sequel feature two inner-city African Americans speaking in heavily accented slang, which another character refers to as if it were a foreign language: "Jive". Subtitles translate their speech, which is full of colorful expressions and mild profanity, into bland standard English, but the typical viewer can understand enough of what they are saying to recognize the incongruity.
  • In Cars 2, Susie Chef and Mater speak Chinese with English subtitles and Luigi, Mama Lopolino and Uncle Topolino speak Italian with English subtitles.
  • In parodies of the German film Downfall, incorrect subtitles are deliberately used, often with offensive and humorous results.
  • In the Carl Reiner comedy The Man with Two Brains, after stopping Dr. Michael Hfuhruhurr (Steve Martin) for speeding, a German police officer realizes that Hfuhruhurr can speak English. He asks his colleague in their squad car to turn off the subtitles, gesturing toward the bottom of the screen and commenting that "This is better — we have more room down there now".
  • In the opening credits of Monty Python and the Holy Grail, the Swedish subtitler switches to English and promotes his country, until the introduction is cut off and the subtitler is "sacked". In the DVD version of the same film, viewers could choose, instead of the hearing-impaired and local-language subtitle tracks, lines from Shakespeare's Henry IV, Part 2 that vaguely resemble the lines actually being spoken in the film, offered as subtitles for "people who hate the film".
  • In Scary Movie 4, there is a scene where the actors speak in faux Japanese (nonsensical words which mostly consist of Japanese company names), but the content of the subtitles is the "real" conversation.
  • In Not Another Teen Movie, the nude foreign exchange student character Areola speaks lightly accented English, but her dialogue is subtitled anyway. Also, the text is spaced in such a way that a view of her bare breasts is unhindered.
  • In Trainspotting, the leading characters have a conversation in a crowded club. To understand what is being said, the entire dialogue is subtitled.
  • Simon Ellis' 2000 short film Telling Lies juxtaposes a soundtrack of a man telling lies on the telephone against subtitles which expose the truth.[34]
  • Animutations commonly use subtitles to present the comical "fake lyrics" (English words that sound close to what is actually being sung in the song in the non-English language). These fake lyrics are a major staple of the Animutation genre.
  • Lock, Stock and Two Smoking Barrels contains a scene spoken entirely in cockney rhyming slang that is subtitled in standard English.
  • In an episode of The Angry Beavers, at one point Norbert begins to speak with such a heavy European accent that his words are subtitled on the bottom of the screen. Daggett actually touches the subtitles, shoving them out of the way.
  • In the American theatrical versions of Night Watch and Day Watch, Russian dialogue is translated by subtitles that are designed to match the depicted events. For instance, subtitles dissolve in water like blood, tremble along with a shaking floor or are cut apart by a sword.
  • The film Crank contains a scene where Jason Statham's character understands an Asian character's line of dialogue from reading the on-screen subtitle. The subtitle is even in reverse when his character reads the line. Later, an exclamation made by another Asian character is subtitled, but both the spoken words and the subtitles are in Chinese.
  • In Fatal Instinct, also directed by Carl Reiner, one scene involves two characters talking about their murder plan in Yiddish to prevent anyone from knowing about it, only to be foiled by a man on a bench reading the on-screen subtitles.
  • Ken Loach released the film Riff-Raff into American theatres with subtitles not only so people could understand the thick Scottish accents, but also to make fun of what he believes to be many Americans' need for them (mentioned in the theatrical trailer). Many of Loach's films contain traditional dialect, with some (e.g. The Price of Coal) requiring subtitles even when shown on television in England.
  • In Bobby Lee's "Tae Do", a parody of Korean dramas in a Mad TV episode, the subtitles make more sense of the story than the Korean language being spoken. The subtitles are made to appear as though written by someone with a poor understanding of grammar and are often intentionally made longer than what they actually say in the drama. For example, an actor says "Sarang" ("I love you"), but the subtitle is so long that it covers the whole screen.
  • In the television series Skithouse, a journalist interviews a group of Afghan terrorists in English, but one of them is subtitled and notices it. He becomes angry, taking it as an insult that he is the only one to be subtitled.[35]
  • In the Mel Brooks film Robin Hood: Men in Tights, the thoughts of the overweight Broomhilde's (Megan Cavanaugh) horse Farfelkugel are shown as subtitles when Broomhilde attempts to jump onto the saddle from a balcony, as Maid Marian had done gracefully moments earlier. As Farfelkugel shudders, the subtitles show "She must be kidding!"
  • In the television series Drawn Together, the character Ling-Ling can only be understood through English subtitles, as his dialogue is delivered in a nonexistent language referred to as "Japorean" by Abbey DiGregorio, the voice actress for the character.
  • In the television series Green Acres episode "Lisa's Mudder Comes for a Visit" (season 5 episode 1), Lisa and her mother converse in Hungarian, with English subtitles. First, Lisa looks down and corrects the subtitles, "No no no, I said you hadn't changed a bit! We have a lot of trouble here with subtitles", and they change. Mother's Japanese chauffeur asks "I begga pardon – I bringa bags inna house?" that elicits a gong sound and Japanese subtitles. This is followed by Mother's Great Dane barking with the subtitles "I've seen better doghouses than this" with Lisa responding "We're not interested in what the dog says", and the subtitles disappear. Later, the subtitles ask farmhand Eb if they will be needing any more subtitles for the episode.
  • In the UK television series Top Gear, in episode 6 of series 13, the presenters purposely mistranslate the song sung by Carla Bruni, having her supposedly declare hatred towards the trio of presenters ("but mainly James May") for destroying what is claimed to be her own Morris Marina.
  • In Vance Joy's music video for "Riptide", a woman is shown singing the lyrics to the song. At many points, the lyrics "I got a lump in my throat cause you're gonna sing the words wrong"[36] are deliberately mis-subtitled as "I got a lump in my throat cause you gone and sank the worlds wolf".[37]
  • In "Weird Al" Yankovic's music video for "Smells Like Nirvana", the second verse is subtitled as a way to mock the supposed unintelligibility of the song. One of the lines is "It's hard to bargle nawdle zouss???" (with three question marks), which has no meaning, but is explained by the following line, "With all these marbles in my mouth". While singing the latter, Yankovic indeed spits out a couple of marbles.

One unintentional source of humor in subtitles comes from illegal DVDs produced in non-English-speaking countries (especially China). These DVDs often contain poorly worded subtitle tracks, possibly produced by machine translation, with humorous results. One of the better-known examples is a copy of Star Wars: Episode III – Revenge of the Sith whose opening title was subtitled, "Star war: The backstroke of the west".[38]

See also


Note

  1. Indonesian has a diglossic situation: newscasters and public officials from the country use a "high" or formal variety which is relatively intelligible to Malaysians; popular media like soap operas (sinetron), however, use a "low", Betawified informal register which has a lesser degree of intelligibility.[23]

References

  1. Use automatic captioning, YouTube.
  2. Forster, Peter (2018-01-18). "How to Add Subtitles to Video Automatically". Subtitles. https://subtitles.love/blog/add-subtitles-to-video-automatically/. 
  3. "Submissions to the captioning standards review | Department of Communications, Information Technology and the Arts" (Microsoft Word). 1999-02-26. http://www.dcita.gov.au/media_broadcasting/consultation_and_submissions/captioning_standards_review/submissions_to_the_captioning_standards_review. 
  4. "National Captioning Institute". http://www.ncicap.org/caphist.asp. 
  5. "Caption Colorado". 2002. http://www.captioncolorado.com/about/index.html. ""Real-time" vs. Newsroom Captioning
    Caption Colorado offers "real-time" closed captioning that utilizes unique technologies coupled with the talents of highly skilled captioners who use stenographic court reporting machines to transcribe the audio on the fly, as the words are spoken by the broadcasters. real-time captioning is not limited to pre-scripted materials and, therefore, covers 100% of the news, weather and sports segments of a typical local news broadcast. It will cover such things as the weather and sports segments which are typically not pre-scripted, last second breaking news or changes to the scripts, ad lib conversations of the broadcasters, emergency or other live remote broadcasts by reporters in the field. By failing to cover items such as these, newsroom style captioning (or use of the TelePrompTer for captioning) typically results in coverage of less than 30% of a local news broadcast. … 2002"
     
  6. "Caption Colorado". 2002. http://www.captioncolorado.com/products/offline.html. "Offline Captioning
    For non-live, or pre-recorded programs, you can choose from two presentation styles models for offline captioning or transcription needs in English or Spanish.

    Premiere Offline Captioning
    Premiere Offline Captioning is geared toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.

    Premiere Offline involves a five-step design and editing process, and does much more than simply display the text of a program. Premiere Offline helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Premiere Offline is the preferred presentation style for entertainment-type programming. … 2002"
     
  7. Edelberg, Elisa (2017-06-19). "Closed Captions v. Subtitles for the Deaf and Hard of Hearing (SDH)" (in en). https://www.3playmedia.com/2017/06/19/whats-the-difference-subtitles-for-the-deaf-and-hard-of-hearing-sdh-v-closed-captions/. 
  8. U.S. Federal Communications Commission (FCC) (2008-05-01). Closed Captioning and the DTV Transition (swf). Washington, D.C. Event occurs at 1m58s. In addition to passing through closed caption signals, many converter boxes also include the ability to take over the captioning role that the tuner plays in your analog TV set. To determine whether your converter box is equipped to generate captions in this way, you should refer to the user manual that came with the converter box. If your converter box is equipped to generate captions in this way, then follow the instructions that came with the converter box to turn the captioning feature on/off via your converter box or converter box remote control. When you access the closed captions in this way, you also will be able to change the way your digital captions look. The converter box will come with instructions on how to change the caption size, font, caption color, background color, and opacity. This ability to adjust your captions is something you cannot do now with an analog television and analog captions.
  9. The Digital TV Transition – Audio and Video (2008-05-01). "What you need to know about the DTV Transition in American Sign Language: Part 3 – Closed Captioning – Flash Video". The Digital TV Transition: What You Need to Know About DTV. U.S. Federal Communications Commission (FCC). http://www.dtv.gov/video/DTV_ASL-Part3.html.  Details
  10. Patten, Dominic (2015-10-20). "Hollywood Studios & Netflix Blasted For Civil Rights Violations In Song-Captioning Class Action" (in en). https://deadline.com/2015/10/hollywood-studios-netflix-class-action-lawsuit-hearing-impaired-civil-right-violations-1201588229/. 
  11. "Netflix and film studios face lawsuit over song captioning for deaf" (in en-GB). The Guardian. 2015-10-20. ISSN 0261-3077. https://www.theguardian.com/film/2015/oct/20/netflix-film-studios-lawsuit-song-captioning-deaf-skyfall. 
  12. Kelly, Guy (2022-07-24). "How Generation Z became obsessed with subtitles" (in en-GB). The Telegraph. ISSN 0307-1235. https://www.telegraph.co.uk/tv/0/how-generation-z-became-obsessed-subtitles/. 
  13. Kehe, Jason (2018-06-26). "The Real Reason You Use Closed Captions for Everything Now". Wired. ISSN 1059-1028. https://www.wired.com/story/closed-captions-everywhere/. 
  14. Farley, Rebecca. "Get Over Your Fear Of Subtitles, Please" (in en). https://www.refinery29.com/en-us/tv-closed-captions-movie-subtitles-benefits. 
  15. "McCall, W. (2008). Same-Language-Subtitling and Karaoke: The Use of Subtitled Music as a Reading Activity in a High School Special Education Classroom. In K. McFerrin et al. (Eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2008 (pp. 1190–1195). Chesapeake, VA: AACE". http://go.editlib.org/p/27350. 
  16. Gannon, Jack. 1981. Deaf Heritage–A Narrative History of Deaf America, Silver Spring, MD: National Association of the Deaf, p. 266-270
  17. "Closed Captioning Gives Literacy a Boost". Education Week. 2015-07-21. https://www.edweek.org/ew/articles/2015/07/21/closed-captioning-gives-literacy-a-boost.html. 
  18. Brij Kothari from Ashoka.org. Accessed on February 10, 2009
  19. Biswas, Ranjita (2005). Hindi film songs can boost literacy rates in India
  20. Biswas, Ranjita (2005). Hindi film songs can boost literacy rates in India from the Asian Film Foundation website. Accessed on February 10, 2009
  21. Mr. Know-It-All (2015-05-21). "Ask Mr. Know-It-All: Why do all films in Hong Kong have subtitles?". HK Magazine. https://www.scmp.com/magazines/hk-magazine/article/2037102/ask-mr-know-it-all-why-do-all-films-hong-kong-have-subtitles. 
  22. Pavelic, Boris (27 January 2012). "Croatian TV Risks Row Over Serbian Film". Zagreb: Balkan Insight. https://balkaninsight.com/2012/01/27/croatian-tv-may-soon-translate-serbian-films/. 
  23. Sneddon, J.N. (2003). "Diglossia in Indonesian". Bijdragen tot de Taal-, Land- en Volkenkunde 159 (4): 519–549. doi:10.1163/22134379-90003741. ISSN 0006-2294. https://www.jstor.org/stable/27868068. 
  24. "MISSING HEAVEN, MAKING HELL" (in en). 1998-01-05. https://www.washingtonexaminer.com/weekly-standard/missing-heaven-making-hell. 
  25. "Sony Digital Video Recorder Handycam Operating Guide – DCR-HC52/HC54 (MiniDV)" (in en). Sony. 2008. pp. 34. https://c.searspartsdirect.com/mmh/lis_pdf/OWNM/L0802253.pdf. 
  26. EBU (2015). "EBU-TT-D Subtitling Distribution Format". European Broadcasting Union. http://tech.ebu.ch/publications/tech3380. 
  27. Alex Matulich (1997–2002). "JACOsub Script File Format Specification". Unicorn Research Corporation. http://unicorn.us.com/jacosub/jscripts.html. 
  28. "RealText Authoring Guide". Real. RealNetworks. 1998–2000. http://service.real.com/help/library/guides/realtext/realtext.htm. 
  29. "Spruce Subtitle Format". Internet Archive Wayback Machine. http://geocities.com/McPoodle43/DVDMaestro/stl_format.html. 
  30. "ASS File Format Specification". http://www.tcax.org/docs/ass-specs.htm. 
  31. "Specification of the EBU Subtitling data exchange format". European Broadcasting Union. February 1991. http://tech.ebu.ch/docs/tech/tech3264.pdf. 
  32. Philippe Le Hégaret; Sean Hayes (6 September 2012). "Mission". Timed Text Working Group. http://www.w3.org/AudioVideo/TT/. 
  33. "(Xbox 360 Review) Max Payne 3". The Entertainment Depot. http://www.entdepot.com/2012/06/01/xbox-360-review-max-payne-3/. 
  34. "BBC – Film Network". https://www.bbc.co.uk/dna/filmnetwork/A7079510. 
  35. Skithouse: News report from Iraq. YouTube. 5 August 2007. Archived from the original on 2021-11-07.
  36. "Vance Joy – Riptide Lyrics – MetroLyrics". metrolyrics.com. http://www.metrolyrics.com/riptide-lyrics-vance-joy.html. 
  37. Vance Joy – 'Riptide' Official Video. YouTube. 2 April 2013. Archived from the original on 2021-11-07.
  38. jeremy (6 July 2005). "episode iii, the backstroke of the west". winterson.com. Google, Inc. http://winterson.com/2005/06/episode-iii-backstroke-of-west.html. 

External links