DIFM (Do It For Me) video tooling - some experiments
cmw.osdude
Today's post is a little geeky. Sorry about that.
I've been working a lot more with video techniques lately. Blender is the tool I'm finding fascinating because it has so much raw power and potential, if one can overcome the learning curve. I'm getting there piece by piece. The inclusion of Python scripting within Blender might mean that as I develop techniques and reusable scenes, I can make them more user-friendly.
For example, I have a picture in my head of an animated template intended to frame a piece of self-shot video so it looks a little more artistic. It seems reasonable that I should be able to set up a blend file that will prompt the user for the location of their video, calculate its size and then render a version that has it in the animated frame. That could even be done through a web service of some sort, so that the user wouldn't need to have Blender installed locally or even know what it is.
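Blender does support this kind of non-interactive use from the command line. A sketch of how the invocation might look (the blend file, script name and video path here are placeholders, not something I have built yet):

```shell
# Run Blender headless: load the frame template, hand the user's
# video path to a Python script, then render the animation.
blender --background frame_template.blend \
        --python insert_video.py \
        -- /path/to/user_video.mp4
```

Everything after the bare `--` is passed through to the Python script, which is where the video would be loaded and sized to fit the template.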
As my mind was headed in that direction I started thinking of other ways of automating video production. I mean, video elements should be able to be assembled together as easily as putting together a slide deck. In some ways this is so. There are various editors that will let you pull together a few videos and put little crossfade widgets in between them. That's not too hard to do, but I wanted to consider ways that I might be able to automate the process even more. So, this is a work in progress, but I thought I'd share what I'd learned. Maybe someone out there will have some even better suggestions.
My ultimate goal with this is to have something that could run on a server somewhere. I envision an application where I point to various video elements (perhaps with time codes to cut out a specific section) and define the order that I want. The segments are then automatically rendered into a final video. I want to dissolve between some of the scenes to make it smooth. This means that it needs to be something scriptable that can run without a GUI. Now, many applications, including Blender, can run in a "quiet" mode. It might even be that I could do what I want in Blender, but I wanted to start with something more basic.
My first stop was ffmpeg and avconv. (They are forks of the same code. Ubuntu distributes avconv, so it's what I use.) I use avconv constantly to convert video from one format to another and to do various scaling and other transformations. It works really well and has made it easy for me to provide a rendered video in a variety of formats to suit different tastes and needs. The simplest approach seemed to be to convert the videos to MPEG and then simply concatenate the files. It went something like this:
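A reconstruction of that workflow (the input file names are placeholders). The trick is that MPEG program streams can be joined by simple concatenation, which is why the intermediate conversion is needed:

```shell
# Convert each clip to the same size and format, keeping quality high
avconv -i clip1.mov -s hd480 -q 1 part1.mpg
avconv -i clip2.mov -s hd480 -q 1 part2.mpg

# MPEG program streams can simply be concatenated
cat part1.mpg part2.mpg > final.mpg
```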
The -s hd480 option resized all of the videos to the same dimensions, and the -q 1 parameter maintains high quality on conversion. Then, if I wanted to retain the original format, I would convert it back:
avconv -i final.mpg -q 1 final.mov
This is tolerable, I suppose, but I really miss the crossfading, and I'm not a fan of all the converting from format to format. It seems ripe for glitches.
My next stop was melt, the command-line renderer from the MLT framework. It has facilities for exactly what I want to do. Check out this code:
melt part0.mp4 in=0 out=220 \
The in=0 out=220 identifies the frames where I want the video to come in and out; I'm only going to take the first 220 frames of the video. -mix 25 -mixer luma does the magic to create a fade. The -consumer parameter causes the video to be rendered to a file. Without it, the video would simply play on the screen, or it could be directed to a live stream (meaning that you could actually assemble video for a live broadcast rather than having to render it out). Very interesting stuff!
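Putting those pieces together, a complete command along these lines dissolves between two clips and renders the result (the second clip and the output name here are my own placeholders):

```shell
# Take the first 220 frames of part0.mp4, dissolve into part1.mp4
# over 25 frames, and render the combined result to a file
melt part0.mp4 in=0 out=220 \
     part1.mp4 -mix 25 -mixer luma \
     -consumer avformat:combined.mp4
```

More clips can be chained the same way, repeating -mix 25 -mixer luma after each one, which is essentially the automated assembly I described above.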
Here is the exact code I used to generate it:
melt video_title.png out=75 \
How did I select the parameters to render the final output? I borrowed them from the Kdenlive video editor, which uses MLT under the hood.
I'll keep you posted as I do more with this.