Manipulating video using canvas – Web APIs | MDN

This document uses two canvas elements, with the IDs c1 and c2. Canvas c1 is used to display the current frame of the original video, while c2 is used to display the video after performing the chroma-keying effect; c2 is preloaded with the still image that will replace the green background in the video.
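The markup might look something like the following sketch. The canvas IDs c1 and c2 come from the article; the video element's ID, the file names, and the dimensions are assumptions for illustration:

```html
<body onload="processor.doLoad()">
  <div>
    <!-- The source video containing the green screen -->
    <video id="video" src="video.ogv" controls></video>
  </div>
  <div>
    <!-- c1 shows the current frame; c2 shows the keyed result -->
    <canvas id="c1" width="160" height="96"></canvas>
    <canvas id="c2" width="160" height="96"></canvas>
  </div>
</body>
```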

The doLoad() method is called when the HTML document initially loads. This method's job is to prepare the variables needed by the chroma-key processing code, and to set up an event listener so we can detect when the user starts playing the video.
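A sketch of doLoad() along these lines follows. The canvas IDs c1 and c2 come from the article; the video element's ID ("video") and the use of a processor object literal are assumptions based on the surrounding prose:

```javascript
const processor = {
  // Called when the document loads. Grabs the elements we need and
  // waits for the user to start playback.
  doLoad() {
    // The canvas ids c1 and c2 come from the article; the video
    // element's id is an assumption.
    this.video = document.getElementById("video");
    this.c1 = document.getElementById("c1");
    this.ctx1 = this.c1.getContext("2d");
    this.c2 = document.getElementById("c2");
    this.ctx2 = this.c2.getContext("2d");

    // When playback starts, record the half-size dimensions and
    // begin the per-frame processing loop (the timer callback
    // described below).
    this.video.addEventListener("play", () => {
      this.width = this.video.videoWidth / 2;
      this.height = this.video.videoHeight / 2;
      this.timerCallback();
    });
  },
};
```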

This code grabs references to the elements in the HTML document that are of particular interest, namely the video element and the two canvas elements. It also fetches references to the graphics contexts for each of the two canvases. These will be used when we're actually performing the chroma-keying effect.

Then addEventListener() is called to begin watching the video element so that we receive notification when the user presses the play button. In response to the user beginning playback, this code fetches the width and height of the video and halves each (we will be drawing the video at half size when we perform the chroma-keying effect), then calls the timerCallback() method to start watching the video and computing the visual effect.

The timer callback

The timer callback is called initially when the video starts playing (when the "play" event occurs), and then takes responsibility for re-establishing itself periodically in order to launch the keying effect for each frame.
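A minimal sketch of this callback, shown as a standalone object so it runs on its own; computeFrame here is a stub standing in for the per-frame chroma-key work described in the next section:

```javascript
const processor = {
  video: null, // assigned in doLoad() when the page is set up

  // Stub for the per-frame chroma-key work covered below.
  computeFrame() {},

  // Runs once per frame while the video is playing, then
  // reschedules itself.
  timerCallback() {
    if (this.video.paused || this.video.ended) {
      return; // playback has stopped; no more frames to process
    }
    this.computeFrame();
    // Ask to be called again as soon as possible; a real player
    // would derive this delay from the video's frame rate.
    setTimeout(() => this.timerCallback(), 0);
  },
};
```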

The last thing the callback does is call setTimeout() to schedule itself to be called again as soon as possible. In the real world, you would probably schedule this based on knowledge of the video's frame rate.

Manipulating the video frame data

First, the current frame of video is copied into the graphics context ctx1 of the first canvas, specifying as the width and height the values we previously saved in order to draw the frame at half size. Note that you can pass the video element directly into the context's drawImage() method to draw the current video frame into the context.

Next, a copy of the raw graphics data for the current frame of video is fetched by calling the getImageData() method on the first context. This provides raw 32-bit RGBA pixel data we can then manipulate. The number of pixels in the image is computed by dividing the total size of the frame's image data by four, since each pixel occupies four bytes (red, green, blue, and alpha).

A for loop then scans through the frame's pixels, pulling out the red, green, and blue values for each pixel and comparing them against predetermined thresholds used to detect the green screen, which will be replaced with the still background image imported from foo.png.

Every pixel whose values fall within the range considered part of the green screen has its alpha value replaced with zero, indicating that the pixel is entirely transparent. As a result, the final image has the entire green-screen area 100% transparent, so that when it's drawn into the destination context, the result is an overlay onto the static backdrop.
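The loop described above can be sketched as a pure function over the flat RGBA array; the threshold values here are illustrative guesses, not necessarily the article's:

```javascript
// Zero the alpha of every pixel that looks like green screen.
// `data` is the flat RGBA array from getImageData(); the function
// mutates it in place and returns it. Threshold values are
// illustrative assumptions.
function chromaKey(data) {
  const pixelCount = data.length / 4; // 4 bytes per pixel (R, G, B, A)
  for (let i = 0; i < pixelCount; i++) {
    const r = data[i * 4];
    const g = data[i * 4 + 1];
    const b = data[i * 4 + 2];
    // Treat bright, green-dominant pixels as part of the green screen.
    if (r > 100 && g > 100 && b < 43) {
      data[i * 4 + 3] = 0; // fully transparent
    }
  }
  return data;
}
```

In the full effect, this loop would run on the ImageData fetched from the first context, and the modified frame would then be written into the second context (for example with putImageData()), where the transparent region lets the preloaded backdrop show through.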