video.tools¶
This module regroups advanced, useful (and less useful) functions for editing videos, in alphabetical order.
Credits¶
Contains different functions to make end and opening credits, even though it is difficult to fill everyone's needs in this matter.
-
class
moviepy.video.tools.credits.
CreditsClip
(creditfile, width, stretch=30, color='white', stroke_color='black', stroke_width=2, font='Impact-Normal', font_size=60, bg_color=None, gap=0)[source]¶ Bases:
moviepy.video.VideoClip.TextClip
Credits clip.
- Parameters:
- creditfile
A string or path like object pointing to a text file whose content must be as follows:
# This is a comment
# The next line says : leave 4 blank lines
.blank 4

..Executive Story Editor
MARCEL DURAND

..Associate Producers
MARTIN MARCEL
DIDIER MARTIN

..Music Supervisor
JEAN DIDIER
- width
Total width of the credits text in pixels
- gap
Horizontal gap in pixels between the jobs and the names
- color
Color of the text. See TextClip.list('color') for a list of acceptable names.
- font
Name of the font to use. See TextClip.list('font') for the list of fonts you can use on your computer.
- font_size
Size of the font to use.
- stroke_color
Color of the stroke (=contour line) of the text. If None, there will be no stroke.
- stroke_width
Width of the stroke, in pixels. Can be a float, like 1.5.
- bg_color
Color of the background. If None, the background will be transparent.
- Returns:
- image
An ImageClip instance that looks like the following and can be scrolled to make credits:

Executive Story Editor    MARCEL DURAND
Associate Producers       MARTIN MARCEL
                          DIDIER MARTIN
Music Supervisor          JEAN DIDIER
-
accel_decel
(new_duration=None, abruptness=1.0, soonness=1.0)¶ Accelerates and decelerates a clip, useful for GIF making.
- Parameters:
- new_duration : float
Duration of the new transformed clip. If None, it will be that of the current clip.
- abruptness : float
Slope shape of the acceleration-deceleration function. The effect depends on the value of the parameter:
- -1 < abruptness < 0 : speed up, down, up.
- abruptness == 0 : no effect.
- abruptness > 0 : speed down, up, down.
- soonness : float
For positive abruptness, determines how soon the transformation occurs. Should be a positive number.
- Raises:
- ValueError
When the soonness argument is lower than 0.
Examples
The following graphs show functions generated by different combinations of arguments; the slope of each curve represents the speed of the generated video, with the linear function (in red) corresponding to a combination that produces no transformation.
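Since the graphs themselves cannot be reproduced here, the shape of the time warp can be sketched numerically. The function below is a minimal sketch of one possible acceleration-deceleration warp, not MoviePy's exact implementation: it maps a time on the new clip's axis back to a time on the original clip, with abruptness shaping the S-curve and soonness skewing it toward the start.

```python
def accel_decel_warp(t, old_duration, new_duration, abruptness=1.0, soonness=1.0):
    """Map a time t in [0, new_duration] to a time in [0, old_duration]."""
    a = 1.0 + abruptness

    def f(u):
        # S-shaped curve on [0, 1]; reduces to the identity when
        # abruptness == 0 (no transformation, the red linear case).
        def f1(u):
            return (2 * u) ** a / 2

        return f1(u) if u < 0.5 else 1 - f1(1 - u)

    return old_duration * f((t / new_duration) ** soonness)
```

The warp always fixes the endpoints (0 maps to 0, new_duration maps to old_duration) and stays monotone, which is what makes it usable as a time transformation.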
-
add_mask
()¶ Add a mask VideoClip to the VideoClip.
Returns a copy of the clip with a completely opaque mask (made of ones). This makes computations slower than with a None mask, but can be useful in many cases. Set constant_size to False for clips whose image size varies over time.
-
add_mask_if_none
(clip)¶ Add a mask to the clip if there is none.
-
afx
(fun, *args, **kwargs)¶ Transform the clip’s audio.
Return a new clip whose audio has been transformed by fun.
-
property
aspect_ratio
¶ Returns the aspect ratio of the video.
-
audio_delay
(offset=0.2, n_repeats=8, decay=1)¶ Repeats the audio a certain number of times at constant intervals, multiplying the volume levels using a linear space in the range from 1 to the decay argument value.
- Parameters:
- offset : float, optional
Gap between repetition start times, in seconds.
- n_repeats : int, optional
Number of repetitions (not including the clip itself).
- decay : float, optional
Multiplication factor for the volume level of the last repetition. Each repetition will have a value on the linear function between 1 and this value, increasing or decreasing constantly. Keep in mind that the last repetition will be muted if this is 0, and if it is greater than 1, the volume will increase for each repetition.
Examples
>>> from moviepy import *
>>> audioclip = AudioFileClip('myaudio.wav').fx(
...     audio_delay, offset=.2, n_repeats=10, decay=.2
... )
>>> # stereo A note
>>> make_frame = lambda t: np.array(
...     [np.sin(440 * 2 * np.pi * t), np.sin(880 * 2 * np.pi * t)]
... ).T
>>> clip = AudioClip(make_frame=make_frame, duration=0.1, fps=44100)
>>> clip = audio_delay(clip, offset=.2, n_repeats=11, decay=0)
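The "linear space" of volume levels described above can be illustrated with numpy. This is a sketch of the volume schedule only (one level for the original plus each repetition), not of the full delay effect:

```python
import numpy as np

def delay_volumes(n_repeats, decay):
    # Levels for the original clip plus each repetition, linearly
    # interpolated from 1 down (or up) to ``decay``.
    return np.linspace(1, decay, n_repeats + 1)

# decay=0: the last repetition is muted, matching the note above.
volumes = delay_volumes(11, 0)
```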
-
audio_fadein
(duration)¶ Return an audio (or video) clip that is first mute, then the sound arrives progressively over duration seconds.
- Parameters:
- duration : float
How long it takes for the sound to return to its normal level.
Examples
>>> clip = VideoFileClip("media/chaplin.mp4")
>>> clip.fx(audio_fadein, "00:00:06")
-
audio_fadeout
(duration)¶ Return a sound clip where the sound fades out progressively over duration seconds at the end of the clip.
- Parameters:
- duration : float
How long it takes for the sound to reach zero level at the end of the clip.
Examples
>>> clip = VideoFileClip("media/chaplin.mp4")
>>> clip.fx(audio_fadeout, "00:00:06")
-
audio_loop
(n_loops=None, duration=None)¶ Loops over an audio clip.
Returns an audio clip that plays the given clip either n_loops times, or during duration seconds.
Examples
>>> from moviepy import *
>>> videoclip = VideoFileClip('myvideo.mp4')
>>> music = AudioFileClip('music.ogg')
>>> audio = afx.audio_loop(music, duration=videoclip.duration)
>>> videoclip.with_audio(audio)
-
audio_normalize
()¶ Return a clip whose volume is normalized to 0 dB.
Return an audio (or video) clip whose audio volume is normalized so that the maximum volume is at 0 dB, the maximum achievable volume.
Examples
>>> from moviepy import *
>>> videoclip = VideoFileClip('myvideo.mp4').fx(afx.audio_normalize)
-
blackwhite
(RGB=None, preserve_luminosity=True)¶ Desaturates the picture, making it black and white. The RGB parameter sets the weights of the different color channels. If RGB is ‘CRT_phosphor’, a special set of values is used. preserve_luminosity maintains the sum of the RGB weights at 1.
-
blink
(duration_on, duration_off)¶ Makes the clip blink. At each blink it will be displayed for duration_on seconds and disappear for duration_off seconds. Will only work in composite clips.
-
blit_on
(picture, t)¶ Returns the result of blitting the clip’s frame at time t onto the given picture, the position of the clip being given by the clip’s pos attribute. Meant for compositing.
-
close
()¶ Release any resources that are in use.
-
copy
()¶ Mixed copy of the clip.
Returns a shallow copy of the clip whose mask and audio will be shallow copies of the clip’s mask and audio if they exist.
This method is intensively used to produce new clips every time there is an outplace transformation of the clip (clip.resize, clip.subclip, etc.)
Acts like a deepcopy, except that readers and other possibly unpicklable objects are not copied.
-
crop
(x1=None, y1=None, x2=None, y2=None, width=None, height=None, x_center=None, y_center=None)¶ Returns a new clip in which just a rectangular subregion of the original clip is conserved. x1, y1 indicate the top left corner and x2, y2 the lower right corner of the cropped region. All coordinates are in pixels. Float numbers are accepted.
To crop an arbitrary rectangle:
>>> crop(clip, x1=50, y1=60, x2=460, y2=275)
Only remove the part above y=30:
>>> crop(clip, y1=30)
Crop a rectangle that starts 10 pixels from the left and is 200 px wide:
>>> crop(clip, x1=10, width=200)
Crop a rectangle centered in x,y=(300,400), width=50, height=150 :
>>> crop(clip, x_center=300 , y_center=400, width=50, height=150)
Any combination of the above should work, like for this rectangle centered in x=300, with explicit y-boundaries:
>>> crop(clip, x_center=300, width=400, y1=100, y2=600)
-
crossfadein
(duration)¶ Makes the clip appear progressively over duration seconds. Only works when the clip is included in a CompositeVideoClip.
-
crossfadeout
(duration)¶ Makes the clip disappear progressively over duration seconds. Only works when the clip is included in a CompositeVideoClip.
-
cutout
(start_time, end_time)¶ Returns a clip playing the content of the current clip but skipping the extract between start_time and end_time, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
If the original clip has a duration attribute set, the duration of the returned clip is automatically computed as duration - (end_time - start_time).
The resulting clip’s audio and mask will also be cut out if they exist.
- Parameters:
- start_time : float or tuple or str
Moment from which frames will be ignored in the resulting output.
- end_time : float or tuple or str
Moment until which frames will be ignored in the resulting output.
-
even_size
()¶ Crops the clip to make dimensions even.
-
fadein
(duration, initial_color=None)¶ Makes the clip progressively appear from some color (black by default) over duration seconds at the beginning of the clip. Can be used for masks too, where the initial color must be a number between 0 and 1.
For cross-fading (progressive appearance or disappearance of a clip over another clip), see transfx.crossfadein.
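The fading factor itself is simple to sketch: a linear ramp from 0 to 1 over duration seconds, blending initial_color with the real frame. The linear interpolation below is an illustrative assumption, a sketch of the idea rather than MoviePy's code:

```python
def fade_factor(t, duration):
    """Fraction of the real frame visible at time t (0 = all initial color)."""
    return min(1.0, t / duration)

def faded_pixel(t, duration, pixel, initial=0):
    # Blend the initial color with the real pixel value.
    f = fade_factor(t, duration)
    return (1 - f) * initial + f * pixel
```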
-
fadeout
(duration, final_color=None)¶ Makes the clip progressively fade to some color (black by default) over duration seconds at the end of the clip. Can be used for masks too, where the final color must be a number between 0 and 1.
For cross-fading (progressive appearance or disappearance of a clip over another clip), see transfx.crossfadeout.
-
fill_array
(pre_array, shape=(0, 0))¶ TODO: needs documentation.
-
freeze
(t=0, freeze_duration=None, total_duration=None, padding_end=0)¶ Momentarily freeze the clip at time t.
Set t='end' to freeze the clip at the end (actually it will freeze on the frame at time clip.duration - padding_end - 1 / clip.fps). With freeze_duration you can specify the duration of the freeze. With total_duration you can specify the total duration of the clip and the freeze (the duration of the freeze is then computed automatically). One of them must be provided.
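The duration bookkeeping can be sketched in plain Python. This is an illustration of the rule stated above (exactly one of freeze_duration or total_duration must be given, and the final clip lasts the original duration plus the freeze), not the library's code:

```python
def freeze_durations(clip_duration, freeze_duration=None, total_duration=None):
    """Return (freeze_duration, final_duration) for a freeze effect."""
    if (freeze_duration is None) == (total_duration is None):
        raise ValueError(
            "Provide exactly one of freeze_duration or total_duration"
        )
    if freeze_duration is None:
        # The freeze fills the gap between the clip and the target total.
        freeze_duration = total_duration - clip_duration
    return freeze_duration, clip_duration + freeze_duration
```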
-
freeze_region
(t=0, region=None, outside_region=None, mask=None)¶ Freezes one region of the clip while the rest remains animated.
You can choose one of three methods by providing either region, outside_region, or mask.
- Parameters:
- t
Time at which to freeze the region.
- region
A tuple (x1, y1, x2, y2) defining the region of the screen (in pixels) to be frozen. You can provide outside_region or mask instead.
- outside_region
A tuple (x1, y1, x2, y2) defining the region of the screen (in pixels) which will be the only non-frozen region.
- mask
If not None, overlays a frozen version of the clip on the current clip, with the provided mask. In other words, the “visible” pixels in the mask indicate the frozen region in the final picture.
-
fx
(func, *args, **kwargs)¶ Returns the result of func(self, *args, **kwargs), for instance
>>> new_clip = clip.fx(resize, 0.2, method="bilinear")
is equivalent to
>>> new_clip = resize(clip, 0.2, method="bilinear")
The motivation of fx is to keep the name of the effect near its parameters when the effects are chained:
>>> from moviepy.video.fx import multiply_volume, resize, mirrorx
>>> clip.fx(multiply_volume, 0.5).fx(resize, 0.3).fx(mirrorx)
>>> # Is equivalent to, but clearer than
>>> mirrorx(resize(multiply_volume(clip, 0.5), 0.3))
-
gamma_corr
(gamma)¶ Gamma-correction of a video clip.
-
get_frame
(t)¶ Gets a numpy array representing the RGB picture of the clip, or the (mono or stereo) value for a sound clip, at time t.
- Parameters:
- t : float or tuple or str
Moment of the clip whose frame will be returned.
-
property
h
¶ Returns the height of the video.
-
headblur
(fx, fy, r_zone, r_blur=None)¶ Returns a filter that will blur a moving part (a head?) of the frames.
The position of the blur at time t is defined by (fx(t), fy(t)), the radius of the blurred zone by r_zone and the intensity of the blurring by r_blur.
Requires OpenCV for the circling and the blurring. Automatically deals with the case where part of the image goes offscreen.
-
image_transform
(image_func, apply_to=None)¶ Image-transformation filter.
Does the same as VideoClip.image_transform, but for ImageClip the transformed clip is computed once and for all at the beginning, and not for each ‘frame’.
-
invert_colors
()¶ Returns the color-inversed clip.
The values of all pixels are replaced with (255-v), or (1-v) for masks. Black becomes white, green becomes purple, etc.
-
is_playing
(t)¶ If t is a time, returns true if t is between the start and the end of the clip. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t is a numpy array, returns False if none of the t is in the clip, else returns a vector [b_1, b_2, b_3...] where b_i is true if t_i is in the clip.
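The scalar/array behaviour described above can be sketched with numpy. This is a simplified model that assumes start and end are plain floats, not the method's actual implementation:

```python
import numpy as np

def is_playing(t, start, end):
    """Scalar t: bool. Array t: False if no element is inside [start, end],
    otherwise a boolean vector with one entry per element."""
    t = np.asarray(t)
    inside = (t >= start) & (t <= end)
    if t.ndim == 0:
        return bool(inside)
    return inside if inside.any() else False
```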
-
iter_frames
(fps=None, with_times=False, logger=None, dtype=None)¶ Iterates over all the frames of the clip.
Returns each frame of the clip as a HxWxN Numpy array, where N=1 for mask clips and N=3 for RGB clips.
This function is not really meant for video editing. It provides an easy way to do frame-by-frame treatment of a video, for fields like science, computer vision…
- Parameters:
- fps : int, optional
Frames per second for clip iteration. Optional if the clip already has an fps attribute.
- with_times : bool, optional
If True, yields tuples of (t, frame) where t is the current time for the frame; otherwise yields only a frame object.
- logger : str, optional
Either "bar" for a progress bar, or None, or any Proglog logger.
- dtype : type, optional
Type to cast the Numpy array frames to. Use dtype="uint8" when using the pictures to write videos, images...
Examples
>>> # prints the maximum of red that is contained
>>> # on the first line of each frame of the clip.
>>> from moviepy import VideoFileClip
>>> myclip = VideoFileClip('myvideo.mp4')
>>> print([frame[0,:,0].max() for frame in myclip.iter_frames()])
-
static
list
(arg)¶ Returns a list of all valid entries for the font or color argument of TextClip.
-
loop
(n=None, duration=None)¶ Returns a clip that plays the current clip in an infinite loop. Ideal for clips coming from GIFs.
- Parameters:
- n
Number of times the clip should be played. If None, the clip will loop indefinitely (i.e. with no set duration).
- duration
Total duration of the clip. Can be specified instead of n.
-
lum_contrast
(lum=0, contrast=0, contrast_threshold=127)¶ Luminosity-contrast correction of a clip.
-
make_loopable
(overlap_duration)¶ Makes the clip fade in progressively at its own end, so that it can be looped indefinitely.
- Parameters:
- overlap_durationfloat
Duration of the fade-in (in seconds).
-
margin
(margin_size=None, left=0, right=0, top=0, bottom=0, color=(0, 0, 0), opacity=1.0)¶ Draws an external margin all around the frame.
- Parameters:
- margin_size : int, optional
If not None, the new clip has a margin of margin_size pixels on the left, right, top, and bottom.
- left : int, optional
If margin_size=None, margin size for the new clip on the left side.
- right : int, optional
If margin_size=None, margin size for the new clip on the right side.
- top : int, optional
If margin_size=None, margin size for the new clip on the top side.
- bottom : int, optional
If margin_size=None, margin size for the new clip on the bottom side.
- color : tuple, optional
Color of the margin.
- opacity : float, optional
Opacity of the margin. Setting this value to 0 yields transparent margins.
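The resulting frame size follows directly from these parameters. A sketch of the size arithmetic only (the drawing of the margin itself is not shown):

```python
def margined_size(w, h, margin_size=None, left=0, right=0, top=0, bottom=0):
    """New (width, height) after adding margins around a w x h frame."""
    if margin_size is not None:
        # A single margin_size overrides the per-side values.
        left = right = top = bottom = margin_size
    return w + left + right, h + top + bottom
```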
-
mask_and
(other_clip)¶ Returns the logical ‘and’ (minimum pixel color values) between two masks.
The result has the duration of the clip it has been applied to, if any.
- Parameters:
- other_clip : ImageClip or np.ndarray
Clip used to mask the original clip.
Examples
>>> clip = ColorClip(color=(255, 0, 0), size=(1, 1))  # red
>>> mask = ColorClip(color=(0, 255, 0), size=(1, 1))  # green
>>> masked_clip = clip.fx(mask_and, mask)  # black
>>> masked_clip.get_frame(0)
[[[0 0 0]]]
-
mask_color
(color=None, threshold=0, stiffness=1)¶ Returns a new clip with a mask for transparency where the original clip is of the given color.
You can also have a “progressive” mask by specifying a non-null distance threshold. In this case, if the distance between a pixel and the given color is d, the transparency will be
d**stiffness / (threshold**stiffness + d**stiffness)
which is 1 when d >> threshold and 0 for d << threshold, the stiffness of the effect being parametrized by stiffness.
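The transparency formula above can be evaluated directly. A numpy sketch, assuming Euclidean distance in RGB space as the color-distance metric (that metric is an assumption, not confirmed by this page):

```python
import numpy as np

def color_mask(frame, color, threshold, stiffness):
    """Per-pixel transparency in [0, 1]: ~0 near ``color``, ~1 far from it."""
    # Euclidean distance d between each pixel and the key color.
    d = np.sqrt(((frame.astype(float) - np.array(color)) ** 2).sum(axis=-1))
    # The formula from the docstring above.
    return d ** stiffness / (threshold ** stiffness + d ** stiffness)
```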
-
mask_or
(other_clip)¶ Returns the logical ‘or’ (maximum pixel color values) between two masks.
The result has the duration of the clip it has been applied to, if any.
- Parameters:
- other_clip : ImageClip or np.ndarray
Clip used to mask the original clip.
Examples
>>> clip = ColorClip(color=(255, 0, 0), size=(1, 1))  # red
>>> mask = ColorClip(color=(0, 255, 0), size=(1, 1))  # green
>>> masked_clip = clip.fx(mask_or, mask)  # yellow
>>> masked_clip.get_frame(0)
[[[255 255 0]]]
-
mirror_x
(apply_to='mask')¶ Flips the clip horizontally (and its mask too, by default).
-
mirror_y
(apply_to='mask')¶ Flips the clip vertically (and its mask too, by default).
-
multiply_color
(factor)¶ Multiplies the clip’s colors by the given factor. Can be used to decrease or increase the clip’s brightness.
-
multiply_speed
(factor=None, final_duration=None)¶ Returns a clip playing the current clip but at a speed multiplied by factor.
Instead of factor, one can indicate the desired final_duration of the clip, and the factor will be computed automatically. The same effect is applied to the clip’s audio and mask if any.
-
multiply_stereo_volume
(left=1, right=1)¶ For a stereo audio clip, this function enables changing the volumes of the left and right channels separately (with the factors left and right). Returns a stereo audio clip in which the left and right volumes are controllable.
Examples
>>> from moviepy import AudioFileClip
>>> music = AudioFileClip('music.ogg')
>>> audio_r = music.multiply_stereo_volume(left=0, right=1)  # mute left channel
>>> audio_h = music.multiply_stereo_volume(left=0.5, right=0.5)  # half audio
-
multiply_volume
(factor, start_time=None, end_time=None)¶ Returns a clip with audio volume multiplied by the value factor. Can be applied to both audio and video clips.
- Parameters:
- factor : float
Volume multiplication factor.
- start_time : float, optional
Time from the beginning of the clip at which the volume transformation begins to take effect, in seconds. By default, at the beginning.
- end_time : float, optional
Time from the beginning of the clip at which the volume transformation stops taking effect, in seconds. By default, at the end.
Examples
>>> from moviepy import AudioFileClip
>>>
>>> music = AudioFileClip('music.ogg')
>>> doubled_audio_clip = music.multiply_volume(2)  # doubles audio volume
>>> half_audio_clip = music.multiply_volume(0.5)  # half audio
>>>
>>> # silenced clip from t=2s to t=3s
>>> silenced_clip = music.multiply_volume(0, start_time=2, end_time=3)
-
property
n_frames
¶ Returns the number of frames of the video.
-
on_color
(size=None, color=(0, 0, 0), pos=None, col_opacity=None)¶ Place the clip on a colored background.
Returns a clip made of the current clip overlaid on a color clip of a possibly bigger size. Can serve to flatten transparent clips.
- Parameters:
- size
Size (width, height) in pixels of the final clip. By default it will be the size of the current clip.
- color
Background color of the final clip ([R,G,B]).
- pos
Position of the clip in the final clip. ‘center’ is the default.
- col_opacity
Parameter in 0..1 indicating the opacity of the colored background.
-
painting
(saturation=None, black=None)¶ Transforms any photo into some kind of painting. saturation tells at which point the colors of the result should be flashy. black gives the amount of black lines wanted. Requires Scikit-image or Scipy installed.
-
preview
(*args, **kwargs)¶ NOT AVAILABLE: clip.preview requires importing from moviepy.editor
-
requires_duration
(clip)¶ Raises an error if the clip has no duration.
-
resize
(new_size=None, height=None, width=None, apply_to_mask=True)¶ Returns a video clip that is a resized version of the clip.
- Parameters:
- new_size : tuple or float or function, optional
Can be either:
- (width, height) in pixels.
- A scaling factor, like 0.5.
- A function of time returning one of these.
- width : int, optional
Width of the new clip in pixels. The height is then computed so that the width/height ratio is conserved.
- height : int, optional
Height of the new clip in pixels. The width is then computed so that the width/height ratio is conserved.
Examples
>>> myClip.resize((460,720))  # New resolution: (460,720)
>>> myClip.resize(0.6)  # width and height multiplied by 0.6
>>> myClip.resize(width=800)  # height computed automatically.
>>> myClip.resize(lambda t: 1+0.02*t)  # slow swelling of the clip
-
rotate
(angle, unit='deg', resample='bicubic', expand=True, center=None, translate=None, bg_color=None)¶ Rotates the specified clip by angle degrees (or radians) anticlockwise. If the angle is not a multiple of 90 (degrees), or if center, translate, or bg_color is not None, the package pillow must be installed, and there will be black borders. You can make them transparent with:
>>> new_clip = clip.add_mask().rotate(72)
- Parameters:
- clip : VideoClip
A video clip.
- angle : float
Either a value or a function angle(t) representing the angle of rotation.
- unit : str, optional
Unit of the parameter angle (either “deg” for degrees or “rad” for radians).
- resample : str, optional
An optional resampling filter. One of “nearest”, “bilinear”, or “bicubic”.
- expand : bool, optional
If true, expands the output image to make it large enough to hold the entire rotated image. If false or omitted, makes the output image the same size as the input image.
- translate : tuple, optional
An optional post-rotate translation (a 2-tuple).
- center : tuple, optional
Optional center of rotation (a 2-tuple). Origin is the upper left corner.
- bg_color : tuple, optional
An optional color for the area outside the rotated image. Only has an effect if expand is true.
-
save_frame
(filename, t=0, with_mask=True)¶ Save a clip’s frame to an image file.
Saves the frame of the clip corresponding to time t in filename. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
- Parameters:
- filename : str
Name of the file in which the frame will be stored.
- t : float or tuple or str, optional
Moment of the frame to be saved. By default, the first frame will be saved.
- with_mask : bool, optional
If True, the mask is saved in the alpha layer of the picture (only works with PNGs).
-
scroll
(w=None, h=None, x_speed=0, y_speed=0, x_start=0, y_start=0, apply_to='mask')¶ Scrolls a clip horizontally or vertically, e.g. to make end credits.
- Parameters:
- w, h
The width and height of the final clip. They default to clip.w and clip.h.
- x_speed, y_speed
- x_start, y_start
- apply_to
-
static
search
(string, arg)¶ Returns the list of all valid entries which contain string for the argument arg of TextClip, for instance
>>> # Find all the available fonts which contain "Courier"
>>> print(TextClip.search('Courier', 'font'))
-
show
(*args, **kwargs)¶ NOT AVAILABLE: clip.show requires importing from moviepy.editor
-
slide_in
(duration, side)¶ Makes the clip arrive from one side of the screen.
Only works when the clip is included in a CompositeVideoClip, and if the clip has the same size as the whole composition.
- Parameters:
- clip : moviepy.Clip.Clip
A video clip.
- duration : float
Time taken for the clip to be fully visible.
- side : str
Side of the screen where the clip comes from. One of ‘top’, ‘bottom’, ‘left’ or ‘right’.
Examples
>>> from moviepy import *
>>>
>>> clips = [... make a list of clips]
>>> slided_clips = [
...     CompositeVideoClip([clip.fx(transfx.slide_in, 1, "left")])
...     for clip in clips
... ]
>>> final_clip = concatenate_videoclips(slided_clips, padding=-1)
>>>
>>> clip = ColorClip(
...     color=(255, 0, 0), duration=1, size=(300, 300)
... ).with_fps(60)
>>> final_clip = CompositeVideoClip([transfx.slide_in(clip, 1, "right")])
-
slide_out
(duration, side)¶ Makes the clip go away by one side of the screen.
Only works when the clip is included in a CompositeVideoClip, and if the clip has the same size as the whole composition.
- Parameters:
- clip : moviepy.Clip.Clip
A video clip.
- duration : float
Time taken for the clip to fully disappear.
- side : str
Side of the screen where the clip goes. One of ‘top’, ‘bottom’, ‘left’ or ‘right’.
Examples
>>> clips = [... make a list of clips]
>>> slided_clips = [
...     CompositeVideoClip([clip.fx(transfx.slide_out, 1, "left")])
...     for clip in clips
... ]
>>> final_clip = concatenate_videoclips(slided_clips, padding=-1)
>>>
>>> clip = ColorClip(
...     color=(255, 0, 0), duration=1, size=(300, 300)
... ).with_fps(60)
>>> final_clip = CompositeVideoClip([transfx.slide_out(clip, 1, "right")])
-
subclip
(start_time=0, end_time=None)¶ Returns a clip playing the content of the current clip between times start_time and end_time, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
The mask and audio of the resulting subclip will be subclips of the mask and audio of the original clip, if they exist.
- Parameters:
- start_time : float or tuple or str, optional
Moment that will be chosen as the beginning of the produced clip. If negative, it is reset to clip.duration + start_time.
- end_time : float or tuple or str, optional
Moment that will be chosen as the end of the produced clip. If not provided, it is assumed to be the duration of the clip (potentially infinite). If negative, it is reset to clip.duration + end_time. For instance:
>>> # cut the last two seconds of the clip:
>>> new_clip = clip.subclip(0, -2)
If end_time is provided or if the clip has a duration attribute, the duration of the returned clip is set automatically.
-
subfx
(fx, start_time=0, end_time=None, **kwargs)¶ Apply a transformation to a part of the clip.
Returns a new clip in which the function fx (clip -> clip) has been applied to the subclip between times start_time and end_time (in seconds).
Examples
>>> # The scene between times t=3s and t=6s in ``clip`` will be
>>> # played twice slower in ``new_clip``
>>> new_clip = clip.subfx(lambda c: c.multiply_speed(0.5), 3, 6)
-
supersample
(d, n_frames)¶ Replaces each frame at time t by the mean of n_frames equally spaced frames taken in the interval [t-d, t+d]. This results in motion blur.
-
time_mirror
()¶ Returns a clip that plays the current clip backwards. The clip must have its duration attribute set. The same effect is applied to the clip’s audio and mask if any.
-
time_symmetrize
()¶ Returns a clip that plays the current clip once forwards and then once backwards. This is very practical for making videos that loop well, e.g. to create animated GIFs. This effect is automatically applied to the clip’s mask and audio if they exist.
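The time mapping is easy to sketch: the new clip lasts twice as long, and times past the original duration are mirrored back. An illustration of the rule, not MoviePy's code:

```python
def symmetrized_time(t, duration):
    """Map a time in the doubled clip back to a time in the original clip."""
    # Forward pass for t < duration, mirrored backward pass afterwards.
    return t if t < duration else 2 * duration - t
```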
-
time_transform
(time_func, apply_to=None, keep_duration=False)¶ Time-transformation filter.
Applies a transformation to the clip’s timeline (see Clip.time_transform).
This method does nothing for ImageClips (but it may affect their masks or their audio). The result is still an ImageClip.
-
to_ImageClip
(t=0, with_mask=True, duration=None)¶ Returns an ImageClip made out of the clip’s frame at time t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
-
to_RGB
()¶ Return a non-mask video clip made from the mask video clip.
-
to_mask
(canal=0)¶ Return a mask video clip made from the clip.
-
transform
(func, apply_to=None, keep_duration=True)¶ General transformation filter.
Equivalent to VideoClip.transform. The result is no longer an ImageClip; it has the class VideoClip (since it may be animated).
-
property
w
¶ Returns the width of the video.
-
with_audio
(audioclip)¶ Attach an AudioClip to the VideoClip.
Returns a copy of the VideoClip instance, with the audio attribute set to audioclip, which must be an AudioClip instance.
-
with_duration
(duration, change_end=True)¶ Returns a copy of the clip, with the duration attribute set to duration, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.
If change_end is False, the start attribute of the clip will be modified as a function of the duration and the preset end of the clip.
- Parameters:
- duration : float
New duration attribute value for the clip.
- change_end : bool, optional
If True, the end attribute value of the clip will be adjusted to the new duration using clip.start + duration.
-
with_end
(t)¶ Returns a copy of the clip, with the end attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.
- Parameters:
- t : float or tuple or str
New end attribute value for the clip.
-
with_fps
(fps, change_duration=False)¶ Returns a copy of the clip with a new default fps for functions like write_videofile, iter_frames, etc.
- Parameters:
- fps : int
New fps attribute value for the clip.
- change_duration : bool, optional
If change_duration=True, then the video speed will change to match the new fps (conserving all frames 1:1). For example, if the fps is halved in this mode, the duration will be doubled.
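With change_duration=True the frame count is conserved, so the new duration follows directly from the fps ratio. A sketch of that arithmetic, assuming exact 1:1 frame conservation as stated above:

```python
def rescaled_duration(duration, old_fps, new_fps):
    """Duration after retiming the same frames from old_fps to new_fps."""
    n_frames = duration * old_fps  # frames are conserved 1:1
    return n_frames / new_fps
```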
-
with_is_mask
(is_mask)¶ Sets whether the clip is a mask or not.
- Parameters:
- is_mask : bool
New is_mask attribute value for the clip.
-
with_layer
(layer)¶ Set the clip’s layer in compositions. Clips with a greater layer attribute will be displayed on top of others.
Note: only has an effect when the clip is used in a CompositeVideoClip.
-
with_make_frame
(mf)¶ Change the clip’s get_frame.
Returns a copy of the VideoClip instance, with the make_frame attribute set to mf.
-
with_mask
(mask)¶ Set the clip’s mask.
Returns a copy of the VideoClip with the mask attribute set to mask, which must be a greyscale (values in 0-1) VideoClip.
-
with_memoize
(memoize)¶ Sets whether the clip should keep the last frame read in memory.
- Parameters:
- memoizebool
Indicates if the clip should keep the last frame read in memory.
-
with_opacity
(opacity)¶ Set the opacity/transparency level of the clip.
Returns a semi-transparent copy of the clip, in which the mask is multiplied by opacity (any float, normally between 0 and 1).
-
with_position
(pos, relative=False)¶ Set the clip’s position in compositions.
Sets the position that the clip will have when included in compositions. The argument pos can be either a couple (x, y) or a function t -> (x, y). x and y mark the location of the top left corner of the clip, and can be of several types.
Examples
>>> clip.with_position((45,150))  # x=45, y=150
>>>
>>> # clip horizontally centered, at the top of the picture
>>> clip.with_position(("center","top"))
>>>
>>> # clip is at 40% of the width, 70% of the height:
>>> clip.with_position((0.4,0.7), relative=True)
>>>
>>> # clip's position is horizontally centered, and moving up!
>>> clip.with_position(lambda t: ('center', 50+t))
-
with_start
(t, change_end=True)¶ Returns a copy of the clip, with the start attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
These changes are also applied to the audio and mask clips of the current clip, if they exist.
- Parameters:
- t : float or tuple or str
New start attribute value for the clip.
- change_end : bool, optional
Indicates if the end attribute value must be changed accordingly, if possible. If change_end=True and the clip has a duration attribute, the end attribute of the clip will be updated to start + duration. If change_end=False and the clip has an end attribute, the duration attribute of the clip will be updated to end - start.
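The change_end bookkeeping can be sketched in plain Python. This is an illustration of the rule above, not the library's code:

```python
def shift_start(start, duration=None, end=None, change_end=True):
    """Return (start, duration, end) after setting a new start time."""
    if change_end and duration is not None:
        # Keep duration fixed, move the end with the start.
        end = start + duration
    elif not change_end and end is not None:
        # Keep the end fixed, recompute the duration.
        duration = end - start
    return start, duration, end
```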
-
without_audio
()¶ Remove the clip’s audio.
Return a copy of the clip with audio set to None.
-
write_gif
(filename, fps=None, program='imageio', opt='nq', fuzz=1, loop=0, dispose=False, colors=None, tempfiles=False, logger='bar', pixel_format=None)¶ Write the VideoClip to a GIF file.
Converts a VideoClip into an animated GIF using ImageMagick or ffmpeg.
- Parameters:
- filename
Name of the resulting gif file, as a string or a path-like object.
- fps
Number of frames per second (see note below). If it isn’t provided, the function will look for the clip’s fps attribute (VideoFileClip, for instance, has one).
- program
Software to use for the conversion, either ‘imageio’ (this will use the library FreeImage through ImageIO), or ‘ImageMagick’, or ‘ffmpeg’.
- opt
Optimization to apply. If program=’imageio’, opt must be either ‘wu’ (Wu) or ‘nq’ (Neuquant). If program=’ImageMagick’, either ‘optimizeplus’ or ‘OptimizeTransparency’.
- fuzz
(ImageMagick only) Compresses the GIF by considering that the colors that are less than fuzz% different are in fact the same.
- tempfiles
Writes every frame to a file instead of keeping them all in RAM. Useful on computers with little RAM. Can only be used with ‘ImageMagick’ or ‘ffmpeg’.
- progress_bar
If True, displays a progress bar
- pixel_format
Pixel format for the output gif file. If not specified, ‘rgb24’ will be used as the default format unless clip.mask exists, in which case ‘rgba’ will be used. This option is only accepted if program=ffmpeg or when tempfiles=True.
Notes
The gif will play the clip in real time (you can only change the frame rate). If you want the gif to play slower than the clip, you can use:
>>> # slow down clip 50% and make it a gif
>>> myClip.multiply_speed(0.5).to_gif('myClip.gif')
-
write_images_sequence
(name_format, fps=None, with_mask=True, logger='bar')¶ Writes the videoclip to a sequence of image files.
- Parameters:
- name_format
A filename specifying the numbering format and extension of the pictures. For instance “frame%03d.png” for filenames indexed with 3 digits and PNG format. Also possible: “some_folder/frame%04d.jpeg”, etc.
- fps
Number of frames per second to consider when writing the clip. If not specified, the clip’s fps attribute will be used if it has one.
- with_mask
If True, saves the clip’s mask (if any) as an alpha channel (PNGs only).
- logger
Either
"bar"
for progress bar orNone
or any Proglog logger.
- Returns:
- names_list
A list of all the files generated.
Notes
The resulting image sequence can be read using e.g. the class
ImageSequenceClip
.
-
write_videofile
(filename, fps=None, codec=None, bitrate=None, audio=True, audio_fps=44100, preset='medium', audio_nbytes=4, audio_codec=None, audio_bitrate=None, audio_bufsize=2000, temp_audiofile=None, temp_audiofile_path='', remove_temp=True, write_logfile=False, threads=None, ffmpeg_params=None, logger='bar', pixel_format=None)¶ Write the clip to a videofile.
- Parameters:
- filename
Name of the video file to write in, as a string or a path-like object. The extension must correspond to the “codec” used (see below), or simply be ‘.avi’ (which will work with any codec).
- fps
Number of frames per second in the resulting video file. If None is provided, and the clip has an fps attribute, this fps will be used.
- codec
Codec to use for image encoding. Can be any codec supported by ffmpeg. If the filename has extension ‘.mp4’, ‘.ogv’ or ‘.webm’, the codec will be set accordingly, but you can still set it if you don’t like the default. For other extensions, the output filename must be set accordingly.
Some examples of codecs are:
'libx264' (default codec for file extension .mp4) makes well-compressed videos (quality tunable using ‘bitrate’).
'mpeg4' (other codec for extension .mp4) can be an alternative to 'libx264', and produces higher quality videos by default.
'rawvideo' (use file extension .avi) will produce a video of perfect quality, of possibly very large size.
'png' (use file extension .avi) will produce a video of perfect quality, of smaller size than with rawvideo.
'libvorbis' (use file extension .ogv) is a nice video format, completely free/open source. However, not everyone has the codecs installed by default on their machine.
'libvpx' (use file extension .webm) is a small video format well suited for web videos (with HTML5). Open source.
- audio
Either True, False, or a file name. If True and the clip has an audio clip attached, this audio clip will be incorporated as a soundtrack in the movie. If audio is the name of an audio file, this audio file will be incorporated as a soundtrack in the movie.
- audio_fps
frame rate to use when generating the sound.
- temp_audiofile
the name of the temporary audiofile, as a string or path-like object, to be created and then used to write the complete video, if any.
- temp_audiofile_path
the location that the temporary audiofile is placed, as a string or path-like object. Defaults to the current working directory.
- audio_codec
Which audio codec should be used. Examples are ‘libmp3lame’ for ‘.mp3’, ‘libvorbis’ for ‘ogg’, ‘libfdk_aac’ for ‘m4a’, ‘pcm_s16le’ for 16-bit wav and ‘pcm_s32le’ for 32-bit wav. Default is ‘libmp3lame’, unless the video extension is ‘ogv’ or ‘webm’, in which case the default is ‘libvorbis’.
- audio_bitrate
Audio bitrate, given as a string like ‘50k’, ‘500k’, ‘3000k’. Will determine the size/quality of audio in the output file. Note that it is mainly an indicative goal; the bitrate won’t necessarily match this value in the final file.
- preset
Sets the time that FFMPEG will spend optimizing the compression. Choices are: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo. Note that this does not impact the quality of the video, only the size of the video file. So choose ultrafast when you are in a hurry and file size does not matter.
- threads
Number of threads to use for ffmpeg. Can speed up the writing of the video on multicore computers.
- ffmpeg_params
Any additional ffmpeg parameters you would like to pass, as a list of terms, like [‘-option1’, ‘value1’, ‘-option2’, ‘value2’].
- write_logfile
If true, will write log files for the audio and the video. These will be files ending with ‘.log’ with the name of the output file in them.
- logger
Either
"bar"
for progress bar orNone
or any Proglog logger.- pixel_format
Pixel format for the output video file.
Examples
>>> from moviepy import VideoFileClip
>>> clip = VideoFileClip("myvideo.mp4").subclip(100,120)
>>> clip.write_videofile("my_new_video.mp4")
>>> clip.close()
Drawing¶
Deals with making images (np arrays). It provides drawing methods that are difficult to do with the existing Python libraries.
-
moviepy.video.tools.drawing.
blit
(im1, im2, pos=None, mask=None)[source]¶ Blit an image over another.
Blits im1 on im2 at position pos=(x, y), using the mask if provided.
-
moviepy.video.tools.drawing.
circle
(screensize, center, radius, color=1.0, bg_color=0, blur=1)[source]¶ Draw an image with a circle.
Draws a circle of color color, on a background of color bg_color, on a screen of size screensize, at the position center=(x, y), with a radius radius, slightly blurred on the border by blur pixels.
- Parameters:
- screensizetuple or list
Size of the canvas.
- centertuple or list
Center of the circle.
- radiusfloat
Radius of the circle, in pixels.
- bg_colortuple or float, optional
Color for the background of the canvas. As default, black.
- blurfloat, optional
Blur for the border of the circle.
Examples
>>> from moviepy.video.tools.drawing import circle
>>>
>>> circle(
...     (5, 5),  # size
...     (2, 2),  # center
...     2,       # radius
... )
array([[0.        , 0.        , 0.        , 0.        , 0.        ],
       [0.        , 0.58578644, 1.        , 0.58578644, 0.        ],
       [0.        , 1.        , 1.        , 1.        , 0.        ],
       [0.        , 0.58578644, 1.        , 0.58578644, 0.        ],
       [0.        , 0.        , 0.        , 0.        , 0.        ]])
-
moviepy.video.tools.drawing.
color_gradient
(size, p1, p2=None, vector=None, radius=None, color_1=0.0, color_2=1.0, shape='linear', offset=0)[source]¶ Draw a linear, bilinear, or radial gradient.
The result is a picture of size size, whose color varies gradually from color_1 in position p1 to color_2 in position p2. If it is an RGB picture, the result must be transformed into a ‘uint8’ array to be displayed normally.
- Parameters:
- sizetuple or list
Size (width, height) in pixels of the final image array.
- p1tuple or list
Position of the first coordinate of the gradient, in pixels (x, y). The color ‘before’ p1 is color_1, and it gradually changes in the direction of p2 until it is color_2 when it reaches p2.
- p2tuple or list, optional
Position of the second coordinate of the gradient, in pixels (x, y): the limit point for color_1 and color_2.
- vectortuple or list, optional
A vector (x, y) in pixels that can be provided instead of p2. p2 is then defined as (p1 + vector).
- color_1tuple or list, optional
Starting color for the gradient. As default, black. Either floats between 0 and 1 (for gradients used in masks) or [R, G, B] arrays (for colored gradients).
- color_2tuple or list, optional
Color for the second point in the gradient. As default, white. Either floats between 0 and 1 (for gradients used in masks) or [R, G, B] arrays (for colored gradients).
- shapestr, optional
Shape of the gradient. Can be either "linear", "bilinear" or "circular". In a linear gradient the color varies in one direction, from point p1 to point p2. In a bilinear gradient it also varies symmetrically from p1 in the other direction. In a circular gradient it goes from color_1 to color_2 in all directions.
- radiusfloat, optional
If shape="radial", the radius of the gradient is defined with the parameter radius, in pixels.
- offsetfloat, optional
Real number between 0 and 1 indicating the fraction of the vector at which the gradient actually starts. For instance, if offset is 0.9 in a gradient going from p1 to p2, then the gradient will only occur near p2 (before that, everything is of color color_1). If the offset is 0.9 in a radial gradient, the gradient will occur in the region located between 90% and 100% of the radius; this creates a blurry disc of radius d(p1, p2).
- Returns:
- image
A Numpy array of dimensions (width, height, n_colors), of type float, representing the image of the gradient.
Examples
>>> color_gradient((10, 1), (0, 0), p2=(10, 0))  # from white to black
[[1.  0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1]]
>>>
>>> color_gradient(  # from red to green
...     (10, 1),  # size
...     (0, 0),   # p1
...     p2=(10, 0),
...     color_1=(255, 0, 0),  # red
...     color_2=(0, 255, 0),  # green
... )
[[[  0.  255.    0. ]
  [ 25.5 229.5   0. ]
  [ 51.  204.    0. ]
  [ 76.5 178.5   0. ]
  [102.  153.    0. ]
  [127.5 127.5   0. ]
  [153.  102.    0. ]
  [178.5  76.5   0. ]
  [204.   51.    0. ]
  [229.5  25.5   0. ]]]
-
moviepy.video.tools.drawing.
color_split
(size, x=None, y=None, p1=None, p2=None, vector=None, color_1=0, color_2=1.0, gradient_width=0)[source]¶ Make an image split in 2 colored regions.
Returns an array of size
size
divided in two regions called 1 and 2 in what follows, and which will have colors color_1 and color_2 respectively.- Parameters:
- xint, optional
If provided, the image is split horizontally in x, the left region being region 1.
- yint, optional
If provided, the image is split vertically in y, the top region being region 1.
- p1, p2: tuple or list, optional
Positions (x1, y1), (x2, y2) in pixels, where the numbers can be floats. Region 1 is defined as the whole region on the left when going from p1 to p2.
- p1, vector: tuple or list, optional
p1 is (x1, y1) and vector is (v1, v2), where the numbers can be floats. Region 1 is then the region on the left when starting in position p1 and going in the direction given by vector.
- gradient_widthfloat, optional
.- gradient_widthfloat, optional
If not zero, the split is not sharp, but gradual over a region of width
gradient_width
(in pixels). This is preferable in many situations (for instance for antialiasing).
Examples
>>> size = [200, 200]
>>>
>>> # an image with all pixels with x<50 =0, the others =1
>>> color_split(size, x=50, color_1=0, color_2=1)
>>>
>>> # an image with all pixels with y<50 red, the others green
>>> color_split(size, y=50, color_1=[255, 0, 0], color_2=[0, 255, 0])
>>>
>>> # an image split along an arbitrary line
>>> color_split(size, p1=[20, 50], p2=[25, 70], color_1=0, color_2=1)
Segmenting¶
Subtitles¶
Experimental module for subtitles support.
-
class
moviepy.video.tools.subtitles.
SubtitlesClip
(subtitles, make_textclip=None, encoding=None)[source]¶ Bases:
moviepy.video.VideoClip.VideoClip
A Clip that serves as “subtitle track” in videos.
One particularity of this class is that the images of the subtitle texts are not generated beforehand, but only if needed.
- Parameters:
- subtitles
Either the name of a file as a string or path-like object, or a list
- encoding
Optional, specifies srt file encoding. Any standard Python encoding is allowed (listed at https://docs.python.org/3.8/library/codecs.html#standard-encodings)
Examples
>>> from moviepy.video.tools.subtitles import SubtitlesClip
>>> from moviepy.video.io.VideoFileClip import VideoFileClip
>>> from moviepy.video.VideoClip import TextClip
>>> from moviepy.video.compositing.CompositeVideoClip import CompositeVideoClip
>>> generator = lambda text: TextClip(text, font='Georgia-Regular',
...                                   font_size=24, color='white')
>>> sub = SubtitlesClip("subtitles.srt", generator)
>>> sub = SubtitlesClip("subtitles.srt", generator, encoding='utf-8')
>>> myvideo = VideoFileClip("myvideo.avi")
>>> final = CompositeVideoClip([myvideo, sub])
>>> final.write_videofile("final.mp4", fps=myvideo.fps)
-
accel_decel
(new_duration=None, abruptness=1.0, soonness=1.0)¶ Accelerates and decelerates a clip, useful for GIF making.
- Parameters:
- new_durationfloat
Duration for the new transformed clip. If None, will be that of the current clip.
- abruptnessfloat
Slope shape in the acceleration-deceleration function. It will depend on the value of the parameter:
-1 < abruptness < 0: speed up, down, up.
abruptness == 0: no effect.
abruptness > 0: speed down, up, down.
- soonnessfloat
For positive abruptness, determines how soon the transformation occurs. Should be a positive number.
- Raises:
- ValueError
When the soonness argument is lower than 0.
Examples
The following graphs show functions generated by different combinations of arguments. The value of the slope represents the speed of the generated video; the linear function (in red) corresponds to a combination that produces no transformation.
-
add_mask
()¶ Add a mask VideoClip to the VideoClip.
Returns a copy of the clip with a completely opaque mask (made of ones). This makes computations slower compared to having a None mask, but can be useful in many cases.
Set constant_size to False for clips with a moving image size.
-
add_mask_if_none
(clip)¶ Add a mask to the clip if there is none.
-
afx
(fun, *args, **kwargs)¶ Transform the clip’s audio.
Return a new clip whose audio has been transformed by
fun
.
-
property
aspect_ratio
¶ Returns the aspect ratio of the video.
-
audio_delay
(offset=0.2, n_repeats=8, decay=1)¶ Repeats the audio a certain number of times at constant intervals, multiplying the volume levels using a linear space in the range from 1 to the decay argument value.
- Parameters:
- offsetfloat, optional
Gap between repetitions start times, in seconds.
- n_repeatsint, optional
Number of repetitions (without including the clip itself).
- decayfloat, optional
Multiplication factor for the volume level of the last repetition. Each repetition will have a value in the linear function between 1 and this value, increasing or decreasing constantly. Keep in mind that the last repetition will be muted if this is 0, and if it is greater than 1, the volume will increase for each repetition.
Examples
>>> from moviepy import *
>>> videoclip = AudioFileClip('myaudio.wav').fx(
...     audio_delay, offset=.2, n_repeats=10, decay=.2
... )
>>> # stereo A note
>>> make_frame = lambda t: np.array(
...     [np.sin(440 * 2 * np.pi * t), np.sin(880 * 2 * np.pi * t)]
... ).T
>>> clip = AudioClip(make_frame=make_frame, duration=0.1, fps=44100)
>>> clip = audio_delay(clip, offset=.2, n_repeats=11, decay=0)
-
audio_fadein
(duration)¶ Return an audio (or video) clip that is first mute, then the sound arrives progressively over
duration
seconds.- Parameters:
- durationfloat
How long does it take for the sound to return to its normal level.
Examples
>>> clip = VideoFileClip("media/chaplin.mp4")
>>> clip.fx(audio_fadein, "00:00:06")
-
audio_fadeout
(duration)¶ Return a sound clip where the sound fades out progressively over
duration
seconds at the end of the clip.- Parameters:
- durationfloat
How long does it take for the sound to reach the zero level at the end of the clip.
Examples
>>> clip = VideoFileClip("media/chaplin.mp4")
>>> clip.fx(audio_fadeout, "00:00:06")
-
audio_loop
(n_loops=None, duration=None)¶ Loops over an audio clip.
Returns an audio clip that plays the given clip either n_loops times, or during duration seconds.
Examples
>>> from moviepy import *
>>> videoclip = VideoFileClip('myvideo.mp4')
>>> music = AudioFileClip('music.ogg')
>>> audio = afx.audio_loop(music, duration=videoclip.duration)
>>> videoclip.with_audio(audio)
-
audio_normalize
()¶ Return a clip whose volume is normalized to 0db.
Return an audio (or video) clip whose audio volume is normalized so that the maximum volume is at 0db, the maximum achievable volume.
Examples
>>> from moviepy import *
>>> videoclip = VideoFileClip('myvideo.mp4').fx(afx.audio_normalize)
-
blackwhite
(RGB=None, preserve_luminosity=True)¶ Desaturates the picture, making it black and white. The parameter RGB allows setting weights for the different color channels. If RGB is ‘CRT_phosphor’, a special set of values is used. preserve_luminosity maintains the sum of the RGB weights at 1.
-
blink
(duration_on, duration_off)¶ Makes the clip blink. At each blink it will be displayed
duration_on
seconds and disappearduration_off
seconds. Will only work in composite clips.
-
blit_on
(picture, t)¶ Returns the result of the blit of the clip’s frame at time t on the given picture, the position of the clip being given by the clip’s
pos
attribute. Meant for compositing.
-
close
()¶ Release any resources that are in use.
-
copy
()¶ Mixed copy of the clip.
Returns a shallow copy of the clip whose mask and audio will be shallow copies of the clip’s mask and audio if they exist.
This method is intensively used to produce new clips every time there is an outplace transformation of the clip (clip.resize, clip.subclip, etc.)
Acts like a deepcopy except for the fact that readers and other possible unpickleables objects are not copied.
-
crop
(x1=None, y1=None, x2=None, y2=None, width=None, height=None, x_center=None, y_center=None)¶ Returns a new clip in which just a rectangular subregion of the original clip is conserved. x1,y1 indicate the top left corner and x2,y2 the lower right corner of the cropped region. All coordinates are in pixels. Float numbers are accepted.
To crop an arbitrary rectangle:
>>> crop(clip, x1=50, y1=60, x2=460, y2=275)
Only remove the part above y=30:
>>> crop(clip, y1=30)
Crop a rectangle that starts 10 pixels left and is 200px wide
>>> crop(clip, x1=10, width=200)
Crop a rectangle centered in x,y=(300,400), width=50, height=150 :
>>> crop(clip, x_center=300 , y_center=400, width=50, height=150)
Any combination of the above should work, like for this rectangle centered in x=300, with explicit y-boundaries:
>>> crop(clip, x_center=300, width=400, y1=100, y2=600)
-
crossfadein
(duration)¶ Makes the clip appear progressively, over
duration
seconds. Only works when the clip is included in a CompositeVideoClip.
-
crossfadeout
(duration)¶ Makes the clip disappear progressively, over
duration
seconds. Only works when the clip is included in a CompositeVideoClip.
-
cutout
(start_time, end_time)¶ Returns a clip playing the content of the current clip but skips the extract between
start_time
andend_time
, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.If the original clip has a
duration
attribute set, the duration of the returned clip is automatically computed as duration - (end_time - start_time).
The resulting clip’s
audio
andmask
will also be cutout if they exist.- Parameters:
- start_timefloat or tuple or str
Moment from which frames will be ignored in the resulting output.
- end_timefloat or tuple or str
Moment until which frames will be ignored in the resulting output.
-
even_size
()¶ Crops the clip to make dimensions even.
-
fadein
(duration, initial_color=None)¶ Makes the clip progressively appear from some color (black by default), over
duration
seconds at the beginning of the clip. Can be used for masks too, where the initial color must be a number between 0 and 1.For cross-fading (progressive appearance or disappearance of a clip over another clip, see
transfx.crossfadein
-
fadeout
(duration, final_color=None)¶ Makes the clip progressively fade to some color (black by default), over
duration
seconds at the end of the clip. Can be used for masks too, where the final color must be a number between 0 and 1.For cross-fading (progressive appearance or disappearance of a clip over another clip, see
transfx.crossfadeout
-
fill_array
(pre_array, shape=(0, 0))¶ TODO: needs documentation.
-
freeze
(t=0, freeze_duration=None, total_duration=None, padding_end=0)¶ Momentarily freeze the clip at time t.
Set t=’end’ to freeze the clip at the end (actually it will freeze on the frame at time clip.duration - padding_end - 1/clip.fps). With freeze_duration you can specify the duration of the freeze. With total_duration you can specify the total duration of the clip and the freeze (i.e. the duration of the freeze is automatically computed). One of them must be provided.
-
freeze_region
(t=0, region=None, outside_region=None, mask=None)¶ Freezes one region of the clip while the rest remains animated.
You can choose one of three methods by providing either region, outside_region, or mask.
- Parameters:
- t
Time at which to freeze the region.
- region
A tuple (x1, y1, x2, y2) defining the region of the screen (in pixels) which will be frozen. You can provide outside_region or mask instead.
- outside_region
A tuple (x1, y1, x2, y2) defining the region of the screen (in pixels) which will be the only non-frozen region.
- mask
If not None, will overlay a frozen version of the clip on the current clip, with the provided mask. In other words, the “visible” pixels in the mask indicate the frozen region in the final picture.
-
fx
(func, *args, **kwargs)¶ Returns the result of
func(self, *args, **kwargs)
, for instance>>> new_clip = clip.fx(resize, 0.2, method="bilinear")
is equivalent to
>>> new_clip = resize(clip, 0.2, method="bilinear")
The motivation of fx is to keep the name of the effect near its parameters when the effects are chained:
>>> from moviepy.video.fx import multiply_volume, resize, mirrorx
>>> clip.fx(multiply_volume, 0.5).fx(resize, 0.3).fx(mirrorx)
>>> # Is equivalent, but clearer than
>>> mirrorx(resize(multiply_volume(clip, 0.5), 0.3))
-
gamma_corr
(gamma)¶ Gamma-correction of a video clip.
-
get_frame
(t)¶ Gets a numpy array representing the RGB picture of the clip, or (mono or stereo) value for a sound clip, at time
t
.- Parameters:
- tfloat or tuple or str
Moment of the clip whose frame will be returned.
-
property
h
¶ Returns the height of the video.
-
headblur
(fx, fy, r_zone, r_blur=None)¶ Returns a filter that will blur a moving part (a head?) of the frames.
The position of the blur at time t is defined by (fx(t), fy(t)), the radius of the blurred zone by r_zone, and the radius of the blur itself by r_blur.
Requires OpenCV for the circling and the blurring. Automatically deals with the case where part of the image goes offscreen.
-
image_transform
(image_func, apply_to=None)¶ Modifies the images of a clip by replacing the frame get_frame(t) by another frame, image_func(get_frame(t)).
-
in_subclip
(start_time=None, end_time=None)[source]¶ Returns a sequence of [(t1,t2), text] covering all the given subclip from start_time to end_time. The first and last times will be cropped so as to be exactly start_time and end_time if possible.
-
invert_colors
()¶ Returns the color-inversed clip.
The values of all pixels are replaced with (255-v), or (1-v) for masks. Black becomes white, green becomes purple, etc.
-
is_playing
(t)¶ If
t
is a time, returns true if t is between the start and the end of the clip.t
can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Ift
is a numpy array, returns False if none of thet
is in the clip, else returns a vector [b_1, b_2, b_3…] where b_i is true if t_i is in the clip.
-
iter_frames
(fps=None, with_times=False, logger=None, dtype=None)¶ Iterates over all the frames of the clip.
Returns each frame of the clip as a HxWxN Numpy array, where N=1 for mask clips and N=3 for RGB clips.
This function is not really meant for video editing. It provides an easy way to do frame-by-frame treatment of a video, for fields like science, computer vision…
- Parameters:
- fpsint, optional
Frames per second for clip iteration. Is optional if the clip already has a
fps
attribute.- with_timesbool, optional
If True, yield tuples of (t, frame) where t is the current time for the frame, otherwise only a frame object.
- loggerstr, optional
Either
"bar"
for progress bar orNone
or any Proglog logger.- dtypetype, optional
Type to cast Numpy array frames. Use
dtype="uint8"
when using the pictures to write video, images…
Examples
>>> # prints the maximum of red that is contained
>>> # on the first line of each frame of the clip.
>>> from moviepy import VideoFileClip
>>> myclip = VideoFileClip('myvideo.mp4')
>>> print([frame[0,:,0].max() for frame in myclip.iter_frames()])
-
loop
(n=None, duration=None)¶ Returns a clip that plays the current clip in an infinite loop. Ideal for clips coming from GIFs.
- Parameters:
- n
Number of times the clip should be played. If None, the clip will loop indefinitely (i.e. with no set duration).
- duration
Total duration of the clip. Can be specified instead of n.
-
lum_contrast
(lum=0, contrast=0, contrast_threshold=127)¶ Luminosity-contrast correction of a clip.
-
make_loopable
(overlap_duration)¶ Makes the clip fade in progressively at its own end, this way it can be looped indefinitely.
- Parameters:
- overlap_durationfloat
Duration of the fade-in (in seconds).
-
margin
(margin_size=None, left=0, right=0, top=0, bottom=0, color=(0, 0, 0), opacity=1.0)¶ Draws an external margin all around the frame.
- Parameters:
- margin_sizeint, optional
If not
None
, then the new clip has a margin size of sizemargin_size
in pixels on the left, right, top, and bottom.- leftint, optional
If
margin_size=None
, margin size for the new clip in left direction.- rightint, optional
If
margin_size=None
, margin size for the new clip in right direction.- topint, optional
If
margin_size=None
, margin size for the new clip in top direction.- bottomint, optional
If
margin_size=None
, margin size for the new clip in bottom direction.- colortuple, optional
Color of the margin.
- opacityfloat, optional
Opacity of the margin. Setting this value to 0 yields transparent margins.
-
mask_and
(other_clip)¶ Returns the logical ‘and’ (minimum pixel color values) between two masks.
The result has the duration of the clip to which has been applied, if it has any.
- Parameters:
- other_clip ImageClip or np.ndarray
Clip used to mask the original clip.
Examples
>>> clip = ColorClip(color=(255, 0, 0), size=(1, 1)) # red >>> mask = ColorClip(color=(0, 255, 0), size=(1, 1)) # green >>> masked_clip = clip.fx(mask_and, mask) # black >>> masked_clip.get_frame(0) [[[0 0 0]]]
-
mask_color
(color=None, threshold=0, stiffness=1)¶ Returns a new clip with a mask for transparency where the original clip is of the given color.
You can also have a “progressive” mask by specifying a non-null distance threshold
threshold
. In this case, if the distance between a pixel and the given color is d, the transparency will bed**stiffness / (threshold**stiffness + d**stiffness)
which is 1 when d>>threshold and 0 for d<<threshold, the stiffness of the effect being parametrized by
stiffness
-
mask_or
(other_clip)¶ Returns the logical ‘or’ (maximum pixel color values) between two masks.
The result has the duration of the clip to which has been applied, if it has any.
- Parameters:
- other_clip ImageClip or np.ndarray
Clip used to mask the original clip.
Examples
>>> clip = ColorClip(color=(255, 0, 0), size=(1, 1)) # red >>> mask = ColorClip(color=(0, 255, 0), size=(1, 1)) # green >>> masked_clip = clip.fx(mask_or, mask) # yellow >>> masked_clip.get_frame(0) [[[255 255 0]]]
-
mirror_x
(apply_to='mask')¶ Flips the clip horizontally (and its mask too, by default).
-
mirror_y
(apply_to='mask')¶ Flips the clip vertically (and its mask too, by default).
-
multiply_color
(factor)¶ Multiplies the clip’s colors by the given factor. Can be used to decrease or increase the clip’s brightness.
-
multiply_speed
(factor=None, final_duration=None)¶ Returns a clip playing the current clip but at a speed multiplied by factor.
Instead of factor, one can indicate the desired final_duration of the clip, and the factor will be automatically computed. The same effect is applied to the clip’s audio and mask, if any.
-
multiply_stereo_volume
(left=1, right=1)¶ For a stereo audioclip, this function enables changing the volume of the left and right channels separately (with the factors left and right). Returns a stereo audio clip in which the volume of each channel is controllable.
Examples
>>> from moviepy import AudioFileClip
>>> music = AudioFileClip('music.ogg')
>>> audio_r = music.multiply_stereo_volume(left=0, right=1)  # mute left channel
>>> audio_h = music.multiply_stereo_volume(left=0.5, right=0.5)  # half volume
-
multiply_volume
(factor, start_time=None, end_time=None)¶ Returns a clip with audio volume multiplied by the value factor. Can be applied to both audio and video clips.
- Parameters:
- factorfloat
Volume multiplication factor.
- start_timefloat, optional
Time from the beginning of the clip until the volume transformation begins to take effect, in seconds. By default at the beginning.
- end_timefloat, optional
Time from the beginning of the clip at which the volume transformation stops taking effect, in seconds. By default at the end.
Examples
>>> from moviepy import AudioFileClip
>>>
>>> music = AudioFileClip('music.ogg')
>>> doubled_audio_clip = music.multiply_volume(2)  # doubles audio volume
>>> half_audio_clip = music.multiply_volume(0.5)  # half audio
>>>
>>> # silence the clip between t=2s and t=3s
>>> silenced_clip = music.multiply_volume(0, start_time=2, end_time=3)
-
property
n_frames
¶ Returns the number of frames of the video.
-
on_color
(size=None, color=(0, 0, 0), pos=None, col_opacity=None)¶ Place the clip on a colored background.
Returns a clip made of the current clip overlaid on a color clip of a possibly bigger size. Can serve to flatten transparent clips.
- Parameters:
- size
Size (width, height) in pixels of the final clip. By default it will be the size of the current clip.
- color
Background color of the final clip ([R,G,B]).
- pos
Position of the clip in the final clip. ‘center’ is the default
- col_opacity
Parameter in 0..1 indicating the opacity of the colored background.
-
painting
(saturation=None, black=None)¶ Transforms any photo into some kind of painting. saturation tells at which point the colors of the result should be flashy. black gives the amount of black lines wanted. Requires Scikit-image or Scipy installed.
-
preview
(*args, **kwargs)¶ NOT AVAILABLE: clip.preview requires importing from moviepy.editor
-
requires_duration
(clip)¶ Raises an error if the clip has no duration.
-
resize
(new_size=None, height=None, width=None, apply_to_mask=True)¶ Returns a video clip that is a resized version of the clip.
- Parameters:
- new_sizetuple or float or function, optional
Can be either:
(width, height) in pixels.
A scaling factor, like 0.5.
A function of time returning one of these.
- widthint, optional
Width of the new clip in pixels. The height is then computed so that the width/height ratio is conserved.
- heightint, optional
Height of the new clip in pixels. The width is then computed so that the width/height ratio is conserved.
Examples
>>> myClip.resize((460, 720))  # new resolution: (460, 720)
>>> myClip.resize(0.6)  # width and height multiplied by 0.6
>>> myClip.resize(width=800)  # height computed automatically
>>> myClip.resize(lambda t: 1 + 0.02 * t)  # slow swelling of the clip
-
rotate
(angle, unit='deg', resample='bicubic', expand=True, center=None, translate=None, bg_color=None)¶ Rotates the specified clip by angle degrees (or radians) anticlockwise. If the angle is not a multiple of 90 (degrees), or if center, translate, or bg_color is not None, the package pillow must be installed, and there will be black borders. You can make them transparent with:

>>> new_clip = clip.add_mask().rotate(72)
- Parameters:
- clipVideoClip
A video clip.
- anglefloat
Either a value or a function angle(t) representing the angle of rotation.
- unitstr, optional
Unit of parameter angle (either “deg” for degrees or “rad” for radians).
- resamplestr, optional
An optional resampling filter. One of “nearest”, “bilinear”, or “bicubic”.
- expandbool, optional
If true, expands the output image to make it large enough to hold the entire rotated image. If false, the output image keeps the same size as the input image.
- translatetuple, optional
An optional post-rotate translation (a 2-tuple).
- centertuple, optional
Optional center of rotation (a 2-tuple). Origin is the upper left corner.
- bg_colortuple, optional
An optional color for area outside the rotated image. Only has effect if
expand
is true.
-
save_frame
(filename, t=0, with_mask=True)¶ Save a clip’s frame to an image file.
Saves the frame of clip corresponding to time
t
infilename
.t
can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.- Parameters:
- filenamestr
Name of the file in which the frame will be stored.
- tfloat or tuple or str, optional
Moment of the frame to be saved. As default, the first frame will be saved.
- with_maskbool, optional
If True, the mask is saved in the alpha layer of the picture (only works with PNGs).
-
scroll
(w=None, h=None, x_speed=0, y_speed=0, x_start=0, y_start=0, apply_to='mask')¶ Scrolls a clip horizontally or vertically, e.g. to make end credits.
- Parameters:
- w, h
The width and height of the final clip. Default to clip.w and clip.h.
- x_speed, y_speed
Horizontal and vertical scrolling speeds, in pixels per second.
- x_start, y_start
Initial position of the top-left corner of the scrolled region, in pixels.
- apply_to
Component of the clip the scroll should also be applied to (e.g. 'mask').
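The time-varying offset these parameters produce can be sketched as follows (`scroll_offset` is a hypothetical helper, not MoviePy API; it assumes the offset grows linearly with time):

```python
def scroll_offset(t, x_speed=0, y_speed=0, x_start=0, y_start=0):
    """Sketch: top-left position of the scrolled window at time t,
    moving at (x_speed, y_speed) pixels per second from the start point."""
    return (x_start + x_speed * t, y_start + y_speed * t)

# End-credits style: content scrolling vertically at 30 px/s
print(scroll_offset(2, y_speed=30))  # (0, 60)
```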
-
show
(*args, **kwargs)¶ NOT AVAILABLE: clip.show requires importing from moviepy.editor
-
slide_in
(duration, side)¶ Makes the clip arrive from one side of the screen.
Only works when the clip is included in a CompositeVideoClip, and if the clip has the same size as the whole composition.
- Parameters:
- clipmoviepy.Clip.Clip
A video clip.
- durationfloat
Time taken for the clip to be fully visible
- sidestr
Side of the screen where the clip comes from. One of ‘top’, ‘bottom’, ‘left’ or ‘right’.
Examples
>>> from moviepy import *
>>>
>>> clips = [... make a list of clips]
>>> slided_clips = [
...     CompositeVideoClip([clip.fx(transfx.slide_in, 1, "left")])
...     for clip in clips
... ]
>>> final_clip = concatenate_videoclips(slided_clips, padding=-1)
>>>
>>> clip = ColorClip(
...     color=(255, 0, 0), duration=1, size=(300, 300)
... ).with_fps(60)
>>> final_clip = CompositeVideoClip([transfx.slide_in(clip, 1, "right")])
-
slide_out
(duration, side)¶ Makes the clip go away by one side of the screen.
Only works when the clip is included in a CompositeVideoClip, and if the clip has the same size as the whole composition.
- Parameters:
- clipmoviepy.Clip.Clip
A video clip.
- durationfloat
Time taken for the clip to fully disappear.
- sidestr
Side of the screen where the clip goes. One of ‘top’, ‘bottom’, ‘left’ or ‘right’.
Examples
>>> clips = [... make a list of clips]
>>> slided_clips = [
...     CompositeVideoClip([clip.fx(transfx.slide_out, 1, "left")])
...     for clip in clips
... ]
>>> final_clip = concatenate_videoclips(slided_clips, padding=-1)
>>>
>>> clip = ColorClip(
...     color=(255, 0, 0), duration=1, size=(300, 300)
... ).with_fps(60)
>>> final_clip = CompositeVideoClip([transfx.slide_out(clip, 1, "right")])
-
subclip
(start_time=0, end_time=None)¶ Returns a clip playing the content of the current clip between times
start_time
andend_time
, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.The
mask
andaudio
of the resulting subclip will be subclips ofmask
andaudio
the original clip, if they exist.- Parameters:
- start_timefloat or tuple or str, optional
Moment that will be chosen as the beginning of the produced clip. If negative, it is reset to clip.duration + start_time
.- end_timefloat or tuple or str, optional
Moment that will be chosen as the end of the produced clip. If not provided, it is assumed to be the duration of the clip (potentially infinite). If negative, it is reset to clip.duration + end_time. For instance:

>>> # cut the last two seconds of the clip:
>>> new_clip = clip.subclip(0, -2)
If
end_time
is provided or if the clip has a duration attribute, the duration of the returned clip is set automatically.
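The negative-time convention described above can be sketched with a small pure-Python helper (`resolve_subclip_times` is a hypothetical name, not part of MoviePy):

```python
def resolve_subclip_times(duration, start_time=0, end_time=None):
    """Sketch of subclip's boundary resolution: negative times count
    back from the end of the clip; a missing end_time defaults to the
    clip's duration."""
    if start_time < 0:
        start_time = duration + start_time
    if end_time is None:
        end_time = duration
    elif end_time < 0:
        end_time = duration + end_time
    return (start_time, end_time)

# Cutting the last two seconds of a 10 s clip, as in the example above:
print(resolve_subclip_times(10, 0, -2))  # (0, 8)
```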
-
subfx
(fx, start_time=0, end_time=None, **kwargs)¶ Apply a transformation to a part of the clip.

Returns a new clip in which the function fx (clip -> clip) has been applied to the subclip between times start_time and end_time (in seconds).

Examples

>>> # The scene between times t=3s and t=6s in ``clip`` will
>>> # be played twice slower in ``new_clip``
>>> new_clip = clip.subfx(lambda c: c.multiply_speed(0.5), 3, 6)
-
supersample
(d, n_frames)¶ Replaces each frame at time t by the mean of n_frames equally spaced frames taken in the interval [t-d, t+d]. This results in motion blur.
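The averaging this describes can be sketched directly with NumPy (`supersampled_frame` is a hypothetical helper, not MoviePy API):

```python
import numpy as np

def supersampled_frame(get_frame, t, d, n_frames):
    """Sketch of supersample: replace the frame at time t by the mean
    of n_frames frames evenly spaced over [t - d, t + d], which
    simulates motion blur."""
    timings = np.linspace(t - d, t + d, n_frames)
    return np.stack([get_frame(ti) for ti in timings]).mean(axis=0)

# A toy "clip" whose frames are uniform gray levels equal to t:
get_frame = lambda t: np.full((2, 2), t)
print(supersampled_frame(get_frame, 1.0, 0.5, 3))  # all values 1.0
```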
-
time_mirror
()¶ Returns a clip that plays the current clip backwards. The clip must have its
duration
attribute set. The same effect is applied to the clip’s audio and mask if any.
-
time_symmetrize
()¶ Returns a clip that plays the current clip once forwards and then once backwards. This is very practical for making videos that loop well, e.g. to create animated GIFs. This effect is automatically applied to the clip’s mask and audio if they exist.
-
time_transform
(time_func, apply_to=None, keep_duration=False)¶ Returns a Clip instance playing the content of the current clip but with a modified timeline, time
t
being replaced by another time time_func(t).- Parameters:
- time_funcfunction
A function
t -> new_t
.- apply_to{“mask”, “audio”, [“mask”, “audio”]}, optional
Can be either ‘mask’, or ‘audio’, or [‘mask’,’audio’]. Specifies if the filter
transform
should also be applied to the audio or the mask of the clip, if any.- keep_durationbool, optional
False
(default) if the transformation modifies theduration
of the clip.
Examples
>>> # plays the clip (and its mask and sound) twice as fast
>>> new_clip = clip.time_transform(lambda t: 2*t, apply_to=['mask', 'audio'])
>>>
>>> # plays the clip starting at t=3, and backwards:
>>> new_clip = clip.time_transform(lambda t: 3-t)
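The core idea is simple composition: the new clip's frame at time t is the old clip's frame at time_func(t). A minimal sketch (`time_transformed` is a hypothetical helper, not the MoviePy method itself):

```python
def time_transformed(get_frame, time_func):
    """Sketch of time_transform: the returned frame function plays the
    original timeline remapped through time_func."""
    return lambda t: get_frame(time_func(t))

# A toy frame function that just labels each time:
frame_at = lambda t: "frame@%.1f" % t

twice_as_fast = time_transformed(frame_at, lambda t: 2 * t)
backwards_from_3 = time_transformed(frame_at, lambda t: 3 - t)
print(twice_as_fast(1.5))      # frame@3.0
print(backwards_from_3(1))     # frame@2.0
```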
-
to_ImageClip
(t=0, with_mask=True, duration=None)¶ Returns an ImageClip made out of the clip’s frame at time
t
, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
-
to_RGB
()¶ Return a non-mask video clip made from the mask video clip.
-
to_mask
(canal=0)¶ Return a mask video clip made from the clip.
-
transform
(func, apply_to=None, keep_duration=True)¶ General processing of a clip.
Returns a new Clip whose frames are a transformation (through function
func
) of the frames of the current clip.- Parameters:
- funcfunction
A function with signature (gf,t -> frame) where
gf
will represent the current clip’sget_frame
method, i.e.gf
is a function (t->image). Parameter t is a time in seconds, frame is a picture (=Numpy array) which will be returned by the transformed clip (see examples below).- apply_to{“mask”, “audio”, [“mask”, “audio”]}, optional
Can be either
'mask'
, or'audio'
, or['mask','audio']
. Specifies if the filter should also be applied to the audio or the mask of the clip, if any.- keep_durationbool, optional
Set to True if the transformation does not change the
duration
of the clip.
Examples
In the following, new_clip is a 100-pixel-high clip whose video content scrolls from the top to the bottom of the frames of clip at 50 pixels per second.

>>> filter = lambda get_frame, t: get_frame(t)[int(50*t):int(50*t)+100, :]
>>> new_clip = clip.transform(filter, apply_to='mask')
-
property
w
¶ Returns the width of the video.
-
with_audio
(audioclip)¶ Attach an AudioClip to the VideoClip.
Returns a copy of the VideoClip instance, with the audio attribute set to
audio
, which must be an AudioClip instance.
-
with_duration
(duration, change_end=True)¶ Returns a copy of the clip, with the duration attribute set to duration, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.

If change_end is False, the start attribute of the clip is adjusted instead, based on the new duration and the preset end of the clip.
- Parameters:
- durationfloat
New duration attribute value for the clip.
- change_endbool, optional
If
True
, theend
attribute value of the clip will be adjusted accordingly to the new duration usingclip.start + duration
.
-
with_end
(t)¶ Returns a copy of the clip, with the
end
attribute set tot
, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.- Parameters:
- tfloat or tuple or str
New
end
attribute value for the clip.
-
with_fps
(fps, change_duration=False)¶ Returns a copy of the clip with a new default fps for functions like write_videofile, iter_frames, etc.
- Parameters:
- fpsint
New
fps
attribute value for the clip.- change_durationbool, optional
If
change_duration=True
, then the video speed will change to match the new fps (conserving all frames 1:1). For example, if the fps is halved in this mode, the duration will be doubled.
-
with_is_mask
(is_mask)¶ Sets whether the clip is a mask or not.
- Parameters:
- is_maskbool
New
is_mask
attribute value for the clip.
-
with_layer
(layer)¶ Set the clip’s layer in compositions. Clips with a greater
layer
attribute will be displayed on top of others.Note: Only has effect when the clip is used in a CompositeVideoClip.
-
with_make_frame
(mf)¶ Change the clip’s
get_frame
.Returns a copy of the VideoClip instance, with the make_frame attribute set to mf.
-
with_mask
(mask)¶ Set the clip’s mask.
Returns a copy of the VideoClip with the mask attribute set to
mask
, which must be a greyscale (values in 0-1) VideoClip.
-
with_memoize
(memoize)¶ Sets whether the clip should keep the last frame read in memory.
- Parameters:
- memoizebool
Indicates if the clip should keep the last frame read in memory.
-
with_opacity
(opacity)¶ Set the opacity/transparency level of the clip.
Returns a semi-transparent copy of the clip where the mask is multiplied by opacity (any float, normally between 0 and 1).
-
with_position
(pos, relative=False)¶ Set the clip’s position in compositions.
Sets the position that the clip will have when included in compositions. The argument
pos
can be either a couple(x,y)
or a functiont-> (x,y)
. x and y mark the location of the top left corner of the clip, and can be of several types.Examples
>>> clip.with_position((45, 150))  # x=45, y=150
>>>
>>> # clip horizontally centered, at the top of the picture
>>> clip.with_position(("center", "top"))
>>>
>>> # clip at 40% of the width, 70% of the height:
>>> clip.with_position((0.4, 0.7), relative=True)
>>>
>>> # clip's position is horizontally centered, and moving up!
>>> clip.with_position(lambda t: ('center', 50 + t))
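The relative=True case can be sketched as a small coordinate mapping (`resolve_position` is a hypothetical helper; the real method also accepts keywords like 'center' and time-dependent functions):

```python
def resolve_position(pos, bg_size, relative=False):
    """Sketch: map a numeric (x, y) position to pixel coordinates of
    the top-left corner. With relative=True, x and y are fractions of
    the background's width and height."""
    x, y = pos
    if relative:
        bg_w, bg_h = bg_size
        return (x * bg_w, y * bg_h)
    return (x, y)

# 40% of the width, 70% of the height of a 1000x500 composition:
print(resolve_position((0.4, 0.7), (1000, 500), relative=True))  # close to (400.0, 350.0)
```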
-
with_start
(t, change_end=True)¶ Returns a copy of the clip, with the
start
attribute set tot
, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.These changes are also applied to the
audio
andmask
clips of the current clip, if they exist.- Parameters:
- tfloat or tuple or str
New
start
attribute value for the clip.- change_endbool optional
Indicates if the
end
attribute value must be changed accordingly, if possible. Ifchange_end=True
and the clip has aduration
attribute, theend
attribute of the clip will be updated tostart + duration
. Ifchange_end=False
and the clip has aend
attribute, theduration
attribute of the clip will be updated toend - start
.
-
without_audio
()¶ Remove the clip’s audio.
Return a copy of the clip with audio set to None.
-
write_gif
(filename, fps=None, program='imageio', opt='nq', fuzz=1, loop=0, dispose=False, colors=None, tempfiles=False, logger='bar', pixel_format=None)¶ Write the VideoClip to a GIF file.
Converts a VideoClip into an animated GIF using imageio (FreeImage), ImageMagick, or ffmpeg.
- Parameters:
- filename
Name of the resulting gif file, as a string or a path-like object.
- fps
Number of frames per second (see note below). If it isn’t provided, then the function will look for the clip’s
fps
attribute (VideoFileClip, for instance, have one).- program
Software to use for the conversion, either ‘imageio’ (this will use the library FreeImage through ImageIO), or ‘ImageMagick’, or ‘ffmpeg’.
- opt
Optimization to apply. If program=’imageio’, opt must be either ‘wu’ (Wu) or ‘nq’ (Neuquant). If program=’ImageMagick’, either ‘optimizeplus’ or ‘OptimizeTransparency’.
- fuzz
(ImageMagick only) Compresses the GIF by considering that the colors that are less than fuzz% different are in fact the same.
- tempfiles
Writes every frame to a file instead of keeping them all in RAM. Useful on computers with little RAM. Can only be used with ‘ImageMagick’ or ‘ffmpeg’.
- progress_bar
If True, displays a progress bar
- pixel_format
Pixel format for the output gif file. If not specified, ‘rgb24’ will be used as the default format, unless clip.mask exists, in which case ‘rgba’ will be used. This option is only accepted when program=’ffmpeg’ or tempfiles=True.
Notes
The gif will play the clip in real time (you can only change the frame rate). If you want the gif to play slower than the clip, slow the clip down first:

>>> # slow down clip 50% and make it a gif
>>> myClip.multiply_speed(0.5).write_gif('myClip.gif')
-
write_images_sequence
(name_format, fps=None, with_mask=True, logger='bar')¶ Writes the videoclip to a sequence of image files.
- Parameters:
- name_format
A filename specifying the numbering format and extension of the pictures. For instance “frame%03d.png” for filenames indexed with 3 digits and PNG format. Also possible: “some_folder/frame%04d.jpeg”, etc.
- fps
Number of frames per second to consider when writing the clip. If not specified, the clip’s
fps
attribute will be used if it has one.- with_mask
Will save the clip’s mask (if any) as an alpha channel (PNGs only).
- logger
Either
"bar"
for progress bar orNone
or any Proglog logger.
- Returns:
- names_list
A list of all the files generated.
Notes
The resulting image sequence can be read using e.g. the class
ImageSequenceClip
.
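The printf-style numbering in name_format can be sketched with a small helper (`sequence_filenames` is a hypothetical name, not MoviePy API):

```python
def sequence_filenames(name_format, duration, fps):
    """Sketch: one filename per frame, produced by applying the
    printf-style name_format to each frame index at the given fps."""
    n_frames = int(duration * fps)
    return [name_format % i for i in range(n_frames)]

print(sequence_filenames("frame%03d.png", 1, 3))
# ['frame000.png', 'frame001.png', 'frame002.png']
```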
-
write_videofile
(filename, fps=None, codec=None, bitrate=None, audio=True, audio_fps=44100, preset='medium', audio_nbytes=4, audio_codec=None, audio_bitrate=None, audio_bufsize=2000, temp_audiofile=None, temp_audiofile_path='', remove_temp=True, write_logfile=False, threads=None, ffmpeg_params=None, logger='bar', pixel_format=None)¶ Write the clip to a videofile.
- Parameters:
- filename
Name of the video file to write in, as a string or a path-like object. The extension must correspond to the “codec” used (see below), or simply be ‘.avi’ (which will work with any codec).
- fps
Number of frames per second in the resulting video file. If None is provided, and the clip has an fps attribute, this fps will be used.
- codec
Codec to use for image encoding. Can be any codec supported by ffmpeg. If the filename has extension ‘.mp4’, ‘.ogv’, or ‘.webm’, the codec will be set accordingly, but you can still override it if you don’t like the default. For other extensions, the output filename must be set accordingly.
Some examples of codecs are:
- ‘libx264’ (default codec for file extension .mp4) makes well-compressed videos (quality tunable using ‘bitrate’).
- ‘mpeg4’ (other codec for extension .mp4) can be an alternative to ‘libx264’, and produces higher quality videos by default.
- ‘rawvideo’ (use file extension .avi) will produce a video of perfect quality, of possibly very huge size.
- ‘png’ (use file extension .avi) will produce a video of perfect quality, of smaller size than with ‘rawvideo’.
- ‘libvorbis’ (use file extension .ogv) is a nice video format, completely free/open source. However, not everyone has the codecs installed by default on their machine.
- ‘libvpx’ (use file extension .webm) is a tiny video format well suited for web videos (with HTML5). Open source.
- audio
Either
True
,False
, or a file name. IfTrue
and the clip has an audio clip attached, this audio clip will be incorporated as a soundtrack in the movie. Ifaudio
is the name of an audio file, this audio file will be incorporated as a soundtrack in the movie.- audio_fps
frame rate to use when generating the sound.
- temp_audiofile
the name of the temporary audiofile, as a string or path-like object, to be created and then used to write the complete video, if any.
- temp_audiofile_path
the location that the temporary audiofile is placed, as a string or path-like object. Defaults to the current working directory.
- audio_codec
Which audio codec should be used. Examples are ‘libmp3lame’ for ‘.mp3’, ‘libvorbis’ for ‘ogg’, ‘libfdk_aac’ for ‘m4a’, ‘pcm_s16le’ for 16-bit wav and ‘pcm_s32le’ for 32-bit wav. Default is ‘libmp3lame’, unless the video extension is ‘ogv’ or ‘webm’, in which case the default is ‘libvorbis’.
- audio_bitrate
Audio bitrate, given as a string like ‘50k’, ‘500k’, ‘3000k’. Will determine the size/quality of audio in the output file. Note that it is mainly an indicative goal; the bitrate won’t necessarily be attained in the final file.
- preset
Sets the time that FFMPEG will spend optimizing the compression. Choices are: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo. Note that this does not impact the quality of the video, only the size of the video file. So choose ultrafast when you are in a hurry and file size does not matter.
- threads
Number of threads to use for ffmpeg. Can speed up the writing of the video on multicore computers.
- ffmpeg_params
Any additional ffmpeg parameters you would like to pass, as a list of terms, like [‘-option1’, ‘value1’, ‘-option2’, ‘value2’].
- write_logfile
If true, will write log files for the audio and the video. These will be files ending with ‘.log’ with the name of the output file in them.
- logger
Either
"bar"
for progress bar orNone
or any Proglog logger.- pixel_format
Pixel format for the output video file.
Examples
>>> from moviepy import VideoFileClip
>>> clip = VideoFileClip("myvideo.mp4").subclip(100, 120)
>>> clip.write_videofile("my_new_video.mp4")
>>> clip.close()