 Bitter Coder - Splicer News Feed 
Friday, June 13, 2008  |  From Bitter Coder - Splicer


A new version of Splicer (the little video/audio composition library that leverages
DirectShow which I started a few years ago) is now available on CodePlex. This
marks a milestone in stability, and probably the main "feature" of this release is
64-bit support, something that's been bugging me for ages as I could only work on the
project in a VM!

A quick list of changes since the last release:

  • Now uses DirectShow.Net 2.0 (thanks to felix, a fellow NZ'r).

  • RenderProgress event.

  • Renderers are disposable.

  • Support for 64bit operating systems.

  • Vista fixes/support.

  • Additional samples (e.g. SampleTimeWatermarkParticipant, and a few others).

  • Tests updated for NUnit 2.4.7.

  • Solution upgraded to VS2008.


With this library and a little imagination you can:

  • Encode video or audio suitable for use on a website.

  • Create slide shows from images, videos and audio.

  • Apply effects and transitions to audio and video.

  • Grab System.Drawing.Image clips from a video at certain times.

  • Modify individual video frames during encoding via standard C# System.Drawing code.

  • Add new soundtracks to existing video clips.

  • Watermark videos.

  • Build a video editing suite, if you were so inclined.

Monday, October 02, 2006  |  From Bitter Coder - Splicer

Second Alpha

I released the second alpha of the Splicer library last night… you can grab it from the Splicer CodePlex site.

Key changes are:
  • It’s been FxCop’d – which in turn has seen some naming issues, misspellings, security issues, incorrect disposal implementations etc being fixed.
  • Implemented some features:
    • Support for adding System.Drawing.Image’s to a track (using overloads of the AddImage(…) methods).
    • Support for shadow copying input files – this will take a copy of a clip file, and use that in the ITimeline, disposing of the copy when the clip is disposed – this is especially useful when dealing with source filters which don’t like reading the same file simultaneously.
    • AbstractProgressParticipant class which can be used to generate your own progress participants, 0 or more participants can be registered for each group.  This allows you to do things like perform screen captures on the output of the timeline.
    • Windows media renderer will now notify you when the selected rendering profile is unsuitable for your timeline.
  • Fixed some bugs
    • The clock is now correctly disabled for the filter graph when rendering to a file.  This will speed up rendering in many scenarios.
    • When supplying an audio encoder for either WAV or AVI output, the format settings for the encoder were not being applied properly (was using defaults, instead of setting the number of channels, khz & kbps).

Though there are also plenty more changes, I doubt anyone has actually been using this library so far (as it's really only useful in a rather select set of circumstances). However, just to offer some vague hope of encouragement, I’ll include some code samples to get you started, as there is no documentation short of the 150-odd unit tests included with the project... and won't be for some time.

Example 1

First off, let’s imagine you have some web 2.0 website and you encourage people to upload audio comments instead of text, yet you don’t know what format they’re going to provide, and you want to keep things standard (say by outputting a certain Windows Media audio format) in a way that eases pressure on your web server.

using (ITimeline timeline = new DefaultTimeline())
{
    IGroup audioGroup = timeline.AddAudioGroup();

    ITrack rootTrack = audioGroup.AddTrack();
    rootTrack.AddAudio("comment.mp3"); // placeholder file name; any format you have a codec for works

    using (
        WindowsMediaRenderer renderer =
            new WindowsMediaRenderer(timeline, "output.wma", WindowsMediaProfiles.LowQualityAudio))
    {
        renderer.Render();
    }
}
Here we are creating a Timeline, which is a container for all the audio and video compositing we are doing. Next we create a group for our audio; a group is a top-level composition. We then add a track to our audio group, which is a container for the audio clip we wish to add (in this case it’s an mp3, but it could be any audio format that you have a codec installed for).

Once your Timeline is ready to be rendered, we create a renderer; in this case it’s a Windows Media renderer, which takes your timeline, the name of the output file and the Windows Media profile we wish to use. Profiles are XML strings containing the settings for the Windows Media encoder. A couple of bundled ones are included, though you would probably want to create your own using the “Windows Media Profile Editor” which is included with Windows Media Encoder 9.
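Since profiles are plain XML, it can help to see the rough shape of one. The following is only an illustrative sketch; the element names follow the WME 9 profile format, but the attribute values here are abbreviated placeholders, and real profiles should be generated with the profile editor:

```xml
<!-- Illustrative sketch only; attribute values are placeholders,
     not a working profile -->
<profile version="589824" storageformat="1"
         name="Example Low Quality Audio"
         description="Sketch; generate real profiles with the Windows Media Profile Editor">
  <streamconfig majortype="{73647561-0000-0010-8000-00AA00389B71}"
                streamnumber="1" streamname="Audio Stream"
                bitrate="32000">
    <!-- codec-specific configuration emitted by the profile editor goes here -->
  </streamconfig>
</profile>
```

The string you pass as the third argument to WindowsMediaRenderer is simply a document of this shape.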

Last of all we invoke the Renderer. Rendering can generally take a little while, depending on the size of the content and the complexity of the encoding process, so I would suggest invoking the Renderer asynchronously (all Renderers support the BeginRender(…) … EndRender(…) methods, which should make this trivial).  Our examples will use the blocking Render() method, just because it's more concise.
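As an aside, the Begin/End pair follows the standard .NET asynchronous (IAsyncResult) calling pattern. The real renderers need DirectShow installed to run, so the sketch below uses a hypothetical stand-in renderer purely to show the calling shape; only the BeginRender/EndRender method names are taken from the library itself:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for a Splicer renderer: the real ones
// (e.g. WindowsMediaRenderer) expose the same BeginRender/EndRender
// pair, but need DirectShow to actually do any work.
class FakeRenderer
{
    private readonly Action _render;

    public FakeRenderer(Action render) { _render = render; }

    public IAsyncResult BeginRender(AsyncCallback callback, object state)
    {
        Task task = Task.Run(_render); // Task implements IAsyncResult
        if (callback != null) task.ContinueWith(t => callback(t));
        return task;
    }

    public void EndRender(IAsyncResult result)
    {
        ((Task)result).Wait(); // blocks until done, rethrows any render error
    }
}

class Program
{
    static void Main()
    {
        var renderer = new FakeRenderer(() => Thread.Sleep(100) /* pretend to encode */);

        // Kick off the render without blocking the calling thread...
        IAsyncResult pending = renderer.BeginRender(null, null);

        // ...do other useful work here, then block only when the output is needed.
        renderer.EndRender(pending);
        Console.WriteLine("render complete");
    }
}
```

With the real library you would swap FakeRenderer for one of the Splicer renderers and keep the same Begin/End call sites.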

Example 2

So that's audio covered; what about video?

Much the same:

using (ITimeline timeline = new DefaultTimeline())
{
    timeline.AddVideoGroup(24, 320, 240).AddTrack();
    timeline.AddAudioGroup().AddTrack();

    timeline.AddVideoWithAudio("sourceClip.wmv"); // placeholder file name

    using (
        WindowsMediaRenderer renderer =
            new WindowsMediaRenderer(timeline, "output.wmv", WindowsMediaProfiles.LowQualityVideo))
    {
        renderer.Render();
    }
}
This time we create two groups, and two tracks, one of each for the audio from our source file, and another of each for the video.

Note we’ve specified the format of the video a little, so it will be 24 bits per pixel, and 320 wide by 240 high.
In this case we use a helpful method on the timeline itself, AddVideoWithAudio, which adds a video clip to the video group and an audio clip to the audio group at the same time. Again we render to Windows Media format, but this time we use a profile which has settings for both audio and video.

The bits per pixel becomes important when using some transitions or effects, as you need an alpha channel for them to work; this is generally achieved by just bumping the BPP from 24 to 32.

Example 3

Right, moving on from here, let’s cover some more interesting ideas. First off, let’s build a slide show video…

using (ITimeline timeline = new DefaultTimeline(25))
{
    IGroup group = timeline.AddVideoGroup(32, 160, 100);

    ITrack videoTrack = group.AddTrack();

    IClip clip1 = videoTrack.AddImage("image1.jpg", 0, 2);
    IClip clip2 = videoTrack.AddImage("image2.jpg", 0, 2);
    IClip clip3 = videoTrack.AddImage("image3.jpg", 0, 2);
    IClip clip4 = videoTrack.AddImage("image4.jpg", 0, 2);

    double halfDuration = 0.5;

    group.AddTransition(clip2.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip2.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    group.AddTransition(clip3.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip3.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    group.AddTransition(clip4.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip4.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    ITrack audioTrack = timeline.AddAudioGroup().AddTrack();

    IClip audio =
        audioTrack.AddAudio("soundtrack.wav", 0, videoTrack.Duration);

    audioTrack.AddEffect(0, audio.Duration,
                         StandardEffects.CreateAudioEnvelope(1.0, 1.0, 1.0, audio.Duration));

    using (
        WindowsMediaRenderer renderer =
            new WindowsMediaRenderer(timeline, "output.wmv", WindowsMediaProfiles.HighQualityVideo))
    {
        renderer.Render();
    }
}

In this case we are producing a slideshow. Note the 25 in the constructor of the timeline: this sets the frames per second, in this case 25fps. The renderer has the final say in the “rendered” frames per second, based on its settings, but some renderers will use the frame rate they receive, such as the window renderer (which renders the video into a window on-screen).

The slideshow fades between 4 still images, each lasting 2 seconds; we add in a couple of transitions, fading out of one clip and into another. Last of all we use an AudioMixer effect on the audio track to create an audio envelope, which means it will take one second for the audio to fade in from 0.0 volume to 1.0 volume (0 to 100%), play until one second before the end of the clip, and then fade out from 100% volume back to 0%.

Example 4

Last of all, you can capture rendered frames from a video. This might be handy for display on a web page, with links to the various quality formats beneath. You can grab these frames during encoding, or you can render to “null”, which allows you to grab frames but won’t produce any output video or audio file… Let’s take a look at the second idea:

using (DefaultTimeline timeline = new DefaultTimeline())
{
    timeline.AddVideoGroup(24, 320, 240).AddTrack(); // we want 320x240, 24bpp sized images
    timeline.AddVideo("transitions.wmv"); // 8 second video clip

    ImagesToDiskParticipant participant =
        new ImagesToDiskParticipant(24, 320, 240, Environment.CurrentDirectory, 1, 2, 3, 4, 5, 6, 7);

    using (NullRenderer renderer = new NullRenderer(timeline, null, new ICallbackParticipant[] { participant }))
    {
        renderer.Render();
    }
}

Here we add a video group (we don’t need any audio, as we’re not capturing sound samples) and then create an ImagesToDiskParticipant (this is just an example class; you would probably want to implement your own, derived from the AbstractProgressParticipant class).  In this case the format for the video group is what you end up with. Our ImagesToDiskParticipant implements a simple queue where it picks out the nearest frame to each of the specified times… so in this case transitions.wmv is an 8 second video clip, and we will take frame "snapshots" at approximately 1, 2, 3, 4, 5, 6 and 7 seconds.

To make use of our participant we use one of the overloads of NullRenderer, and supply it a single-element array containing the image to disk participant.

The end result is 7 JPEG images on disk (frame0.jpg through frame6.jpg).

Hopefully this has given you some ideas about how Splicer can be used for dealing with audio and video. Next time I do an alpha drop I might include some samples which use different container formats (WAV & AVI)... or maybe I'll use Splicer as an example of just how not to write an API you wish to expose to IronPython, due to its bloated number of overloads.

Wednesday, September 27, 2006  |  From Bitter Coder - Splicer

Over the past couple of days I've been working on using DirectShow and DES (DirectShow Editing Services) for re-encoding mobile video content suitable for posting on the web, with the goal of including video editing in our unified "media engine" which the Syzmk line of products makes use of. As an off-shoot of that work, I decided over the weekend to refactor the rather ugly code we had for working with DES into something a little less ugly that hid most of the details away. Say hello to:

(sorry, couldn't resist the lame web 2.0 logo ;o)

A simple little library for doing most of the useful things in DES; the project is hosted here up on CodePlex.  It's currently in an alpha state, at version - and should get feature drops every week or so until I think it can go into beta.

To give an idea of what I'm up to: it's basically just wrapping up the various DES components and removing a lot of the unpleasantness (i.e. you get to deal with human-friendly units of time like seconds), and separating the concerns of the timeline from rendering that timeline into something useful like an audio or video container (currently that's just WAV, AVI & Windows Media formats).  It's fully functional, but the interfaces aren't very stable at this point, as I'm likely to refactor them mercilessly till I get it working how I want; so if you build something with the library now, it's likely to break in the future (but probably not in a major way, as it is after all mirroring the functionality available in DES).   The added bonus of developing a set of rich wrappers for a lot of this stuff is that I can mock away the requirement on DES altogether, which is always a bonus considering how long the unit tests currently take to run for the Syzmk RMP product.

When I get a chance, I'll put up some code examples to demonstrate how you might use it... At any rate, it'll be going through some dog-fooding in the next month or so while I integrate it with the Syzmk product, so you'll probably hear a little more about it in the future.


Last edited Dec 7, 2006 at 11:16 PM by codeplexadmin, version 1

