Today's post is a little geeky. Sorry about that.
I've been working a lot more with video techniques lately. Blender is the tool I'm finding fascinating, because it has so much raw power and potential if one can overcome the learning curve. I'm getting there piece by piece. However, the inclusion of Python scripting within Blender means that as I develop techniques and reusable scenes, I may be able to make them more user-friendly.
For example, I have a picture in my head of an animated template that is intended to frame a piece of self-shot video so it looks a little more artistic. It seems reasonable that I should be able to set up a blend file that will prompt the user for the location of their video, calculate its size, and then render a version that has it in the animated frame. That could even be done through a web service of some sort, so that the user wouldn't need to have Blender installed locally or even know what it is.
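Just to make the idea concrete, that kind of headless run could be driven from the shell. This is only a sketch, and the .blend and .py names are placeholders I made up; the Python script is where the prompting and sizing logic would live:

blender --background frame_template.blend \
    --python insert_clip.py \
    --render-output //out/frame_ \
    --render-anim

That renders the animation without ever opening the Blender GUI, which is exactly what a web service would need.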
As my mind headed in that direction, I started thinking of other ways of automating video production. I mean, video elements should be able to be assembled as easily as a slide deck. In some ways this is already so: there are various editors that will let you pull together a few videos and drop little crossfade widgets in between them. That's not too hard to do, but I wanted to consider ways to automate the process even more. So, this is a work in progress, but I thought I'd share what I've learned. Maybe someone out there will have even better suggestions.
Right tool
My ultimate goal with this is to have something that could run on a server somewhere. I envision an application where I point to various video elements (perhaps with time codes to cut out a specific section) and define the order that I want. The segments are then automatically rendered into a final video. I want to dissolve between some of the scenes to make it smooth. This means that it needs to be something scriptable that can run without a GUI. Now, many applications, including Blender, can run in a "quiet" mode. It might even be that I could do what I want in Blender, but I wanted to start with something more basic.
My first stop was ffmpeg and avconv. (They are forks of the same code; Ubuntu distributes avconv, so it's what I use.) I use avconv constantly to manipulate video from one format to another and to do various scaling and other transformations. It works really well and has made it easy for me to provide a rendered video in a variety of formats to suit different tastes and needs. The simplest approach seemed to be to convert each video to MPEG and then simply concatenate the files. It went something like this:
shopt -s nullglob   # skip patterns that match nothing
X=0
mkdir -p out
# convert each source clip to a uniformly sized MPEG part
for FILE in *.mov *.mp4
do
    avconv -y -i "$FILE" -s hd480 -q 1 "out/part$X.mpg"
    X=$((X + 1))
done
# MPEG streams can simply be concatenated
cat out/part*.mpg > out/final.mpg
The -s hd480 resizes all of the videos to the same dimensions, and the -q 1 parameter maintains high quality during the conversion. Then, if I wanted to retain the original format, I would convert it back:
avconv -i final.mpg -q 1 final.mov
This is tolerable, I suppose, but I really miss the crossfading, and I'm not a fan of all the converting from format to format. It seems ripe for glitches.
melt
Then I found melt, the command-line tool of MLT, the Media Lovin' Toolkit. It is included in the Ubuntu repositories and is the engine behind Kdenlive and other applications.
It has some facilities to allow for exactly what I want to do. Check out this code:
melt part0.mp4 in=0 out=220 \
part1.mov -mix 25 -mixer luma \
part2.mov -mix 25 -mixer luma \
part3.mov -mix 25 -mixer luma \
-consumer avformat:out.mp4 f=avi acodec=libmp3lame \
vcodec=mpeg4 s=hd480 b=6000k
The in=0 out=220 identifies the frames where I want the video to come in and out; here I'm only taking the first 220 frames of the clip. -mix 25 -mixer luma does the magic to create a 25-frame crossfade between the clips. The -consumer parameters cause the video to be rendered to a file. Without them it would simply play on the screen, or it could be directed to a live stream (meaning that you could actually assemble video for a live play rather than having to render it out). Very interesting stuff!
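Incidentally, if you drop the -consumer (or name the SDL consumer explicitly), the mix just plays in a preview window, which makes it easy to tune the in and out points before committing to a long render. Something along these lines, with placeholder clip names:

melt clip0.mp4 in=0 out=220 \
    clip1.mov -mix 25 -mixer luma \
    -consumer sdl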
Here is a video of a mix I did from the command line using videos I took from the Internet Archive.
Here is the exact code I used to generate it:
melt video_title.png out=75 \
cart1.mp4 in=1225 out=1723 -mix 25 -mixer luma -mixer mix:-1 \
cart2.mp4 in=3100 out=3900 -mix 25 -mixer luma \
cart3.mp4 in=5254 out=5827 -mix 25 -mixer luma \
cart3.mp4 in=9310 -mix 25 -mixer luma \
-consumer avformat:out.mp4 f=mp4 acodec=libmp3lame ab=128k ar=44100 \
vcodec=mpeg4 minrate=0 b=3000k s=480x480 mbd=2 trellis=1 mv4=1 \
subq=7 qmin=10 qcomp=0.6 qdiff=4 qmax=51
How did I select the parameters to render the final output? I borrowed them from the Kdenlive rendering profiles, since melt is used as its backend. It's a great starting place while you learn about all of the mysterious little tweaks.
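One more trick that has helped: melt can describe its own services. If I'm reading the documentation right, queries like these list the available consumers and the parameters the avformat consumer accepts:

melt -query "consumers"
melt -query "consumer=avformat"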
I'll keep you posted as I do more with this.