Automation in Worship by Kevin Poole

For our workflow, we use two types of timecode simultaneously. First, DP is very good at handling MIDI. DP allows multiple tracks of MIDI, much like audio tracks, and they can be assigned to different outputs of an interface (see diagram below). Once I could get MIDI out, I could then get MIDI in to other devices, either natively or through an interface. I won't mention brands of gear because most gear has this ability, and if not, there is often a workaround.

I set DP to output MIDI Time Code (MTC) and connected my MIDI interface to my lighting desk. I also connected MIDI Thru to my audio console. Some audio consoles can chase MTC, but for me, all I needed was MIDI note values to recall presets on my audio desk. So now, as DP tracks along in time, the MTC drives a timeline in my lighting console which is programmed with cues, and my audio desk listens for MIDI notes telling it to recall specific presets. My lighting is now automated along with my mix/mic cues.
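DP and the lighting desk handle all of the MTC encoding themselves, but it helps to see what is actually traveling down that MIDI cable. Purely as a conceptual sketch (the function name is mine, the byte layout is from the MIDI spec), here is how one SMPTE position gets spelled out as the eight quarter-frame messages MTC sends:

```python
def mtc_quarter_frames(hours, minutes, seconds, frames, rate=3):
    """Build the 8 MIDI quarter-frame messages (status byte 0xF1) that
    spell out one SMPTE position. rate: 0=24, 1=25, 2=29.97df, 3=30fps."""
    pieces = [
        frames & 0x0F,          # piece 0: frame number, low nibble
        (frames >> 4) & 0x01,   # piece 1: frame number, high bit
        seconds & 0x0F,         # piece 2: seconds, low nibble
        (seconds >> 4) & 0x03,  # piece 3: seconds, high bits
        minutes & 0x0F,         # piece 4: minutes, low nibble
        (minutes >> 4) & 0x03,  # piece 5: minutes, high bits
        hours & 0x0F,           # piece 6: hours, low nibble
        ((rate & 0x03) << 1) | ((hours >> 4) & 0x01),  # piece 7: rate + hour high bit
    ]
    # Each message is (status, data); the data byte's high nibble says
    # which piece this is, the low nibble carries the value.
    return [(0xF1, (i << 4) | p) for i, p in enumerate(pieces)]
```

A receiver (the lighting console's timeline, in our case) reassembles these eight messages into a running timecode position, which is why everything stays locked rather than just being triggered once.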

Now let's talk about video. This is a bit more complicated, yet in some ways easier. Video playback runs at a specified frame rate, a set number of frames per second. In our case, we sync video playback at 30fps. DP lets you set this rate within the app, so if 29.97, 24, or 60fps is your workflow, there are ways to do that as well. Finding a video playback device that listens to MTC is difficult; the video world seems to have standardized on SMPTE as its timecode of choice for this kind of workflow. I'm an audio guy first and foremost, so if I butcher anything here, please give me grace.
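The relationship between a frame rate and a timecode position is just arithmetic. As a small illustration (not anything DP exposes, just the math), this converts an absolute frame count into the familiar HH:MM:SS:FF display at a whole-number rate like our 30fps; 29.97 drop-frame needs extra correction logic not shown here:

```python
FPS = 30  # our playback rate; swap in 24, 25, 60, etc. for other workflows

def frames_to_timecode(total_frames, fps=FPS):
    """Convert an absolute frame count to HH:MM:SS:FF (non-drop-frame only)."""
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"
```

So at 30fps, frame 108,000 is exactly one hour in, which is why it matters that every device in the chain agrees on the rate.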

DP is also capable of outputting SMPTE timecode (LTC) as an audio signal. As an audio guy, I found this very comforting. It was simple to route that audio through an XLR cable to a video playback box with the capability of inputting and chasing SMPTE.

In DP, there is a feature called "Chunks." Our workflow uses this feature as discrete multitrack sessions, each labeled as one song. I can then set each chunk to its own specified timecode hour. I set our video playback box to play the appropriate video at the timecode hour tied to that song in DP. Now, when I press play in DP, my audio desk fires the appropriate preset for the song, lighting starts its thing, and video playback starts. All three disciplines, one play button.
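The chunk-to-hour scheme is really just a lookup table. As a sketch with hypothetical song names (DP and the playback box hold this mapping in their own settings; nothing here is from their actual interfaces), the idea looks like this:

```python
# Hypothetical set list: each song's chunk is assigned its own timecode hour,
# and the playback box is told to cue the matching video at that hour.
SONG_HOURS = {
    "Opener": 1,
    "Worship Set": 2,
    "Closer": 3,
}

def chunk_start_timecode(song):
    """Return the SMPTE start position for a song's chunk: its hour, zeroed out."""
    hour = SONG_HOURS[song]
    return f"{hour:02d}:00:00:00"
```

Because each song lives in its own hour, the playback box can tell unambiguously which video to roll the instant timecode starts flowing.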

In this workflow, the only position still required to make the look happen was a TD switching video. While there are ways to automate points of a switcher, we have not taken it that far yet. GPIs are limited in what they can do and can be complicated to wire. This may be a next stage for us. I'm not saying we would automate every camera dissolve, but perhaps we could automate a cut when video playback starts so the first frame of the video is not missed. We haven't yet seen a need for this.

There are other ways to fire video playback, such as via a MIDI note, and some would say that is an appropriate method in this kind of workflow. To my mind, it has one fatal flaw: simply starting a video does not ensure frame-accurate alignment or constant playback speed. If the video playback jitters, or the audio slows, the lip sync or the alignment of video transitions will drift, and that is perceived as a failure, or at least as less than excellent. Purchasing a video playback device capable of SMPTE chase costs more money, agreed. But our goal is not to create distractions but to eliminate them, and the extra cost seemed a fair trade for the security of avoiding them. We purchased the more expensive device in this case.
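To put a rough number on why a fire-and-forget trigger worries me: a free-running player that is off by even a tiny percentage keeps drifting for the length of the song, while a chasing player is continually corrected. The figures below are illustrative assumptions, not measurements from any particular device:

```python
def drift_ms(song_seconds, speed_error_pct):
    """Accumulated audio/video offset, in milliseconds, after song_seconds
    of playback on a player whose clock runs speed_error_pct fast or slow
    and is never corrected (i.e., it is NOT chasing timecode)."""
    return song_seconds * (speed_error_pct / 100.0) * 1000.0
```

A hypothetical 0.05% clock error over a four-minute song works out to 120 ms of drift, which is well past the point where lip sync reads as "off" to the room.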

There is one more fun feature of this workflow. As an audio engineer, one task I prefer not to do is cueing and firing playback tracks. There is always that awkward moment when a track fires late, you miss a cue, or the device freezes. While DP is remarkably stable in starting and stopping playback, it also allows remote control of the transport via an iPad app.

I have placed an iPad running DP Control on a stand near our director. I have set DP to cue up the following chunk at the end of each song and wait for the play command. My director then presses one button over the course of the service/production: play. The director does not need to know the ins and outs of DP, just when he wants the next element to start.

This workflow will not happen overnight. There are many moving parts, and many pieces of gear must work together for the end product to come across as seamless. Once the foundation is set, however, the workflow opens up all kinds of possibilities. Remember, MIDI can control many devices, and DP can have many autonomous MIDI tracks. I think I once saw a MIDI-controlled coffee pot…maybe that was DMX, but I could still automate it through lighting!
