PTZ - pan/tilt/zoom camera, that much I understood. The rest? Uh… can I get an ELI5 please?
Even though I’m clearly not in the target demographic, I’m eager to learn more…
Edit: ok, clicked through to GitHub, now I (kinda) got what it’s for :)
In this case, there are multiple points of interest on the stage which are sometimes used, and sometimes not. When an area of the stage is unused, the microphone(s) at that location are manually muted to eliminate unwanted noise. The remaining unmuted microphone is at a location of interest, which is also the logical thing for a motorized camera to point toward and zoom onto at that moment.
This project uses the muted/unmuted states of microphones as a cue for camera movement, although it takes some upfront work to set up. It could also cause trouble for looser or more improvisational shows, where such rigidity might actually get in the way.
[1]: https://www.behringer.com/series.html?category=R-BEHRINGER-X...
This allows them to be programmed like general-purpose computers.
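To make that concrete: here is a minimal sketch of the mute-tracking idea, assuming a Behringer X32-class console (which speaks OSC on UDP port 10023, with /xremote as the state-change subscription and /ch/NN/mix/on as the mute flag) and python-osc for message parsing. The channel-to-area mapping and the recall_preset() stub are made up for illustration:

    import socket
    from pythonosc.osc_message import OscMessage
    from pythonosc.osc_message_builder import OscMessageBuilder

    CONSOLE = ("192.168.1.50", 10023)   # X32 listens for OSC on UDP 10023

    # hypothetical mapping: a channel's mute address -> the stage area it covers
    AREA_PRESETS = {"/ch/01/mix/on": "pulpit", "/ch/05/mix/on": "band"}

    def recall_preset(name):
        print("would recall PTZ preset:", name)   # stub: drive your camera here

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 0))   # the console replies to whatever port we sent from

    # /xremote subscribes us to state changes for ~10 s; a real client
    # resends it periodically to keep the subscription alive
    sock.sendto(OscMessageBuilder("/xremote").build().dgram, CONSOLE)

    while True:
        data, _ = sock.recvfrom(4096)
        msg = OscMessage(data)
        # /ch/NN/mix/on carries 1 when the channel is unmuted, 0 when muted
        if msg.address in AREA_PRESETS and msg.params and msg.params[0] == 1:
            recall_preset(AREA_PRESETS[msg.address])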
I have started to mess around with Chataigne; it seems promising, but so far I'm getting stuck at trivial steps (matching a regexp, converting a "contains" result to a bool, etc.). We'll see how I progress with support :)
But that guy is clearly brilliant, just like Chataigne.
Thanks for mentioning it!
Interestingly, both seem to be projects from French developers, and they look very similar.
I’ve been playing with hooking up a MIDI controller to my OBSBot Tail Air PTZ camera and OBS.
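For what it's worth, the MIDI side of that glue can be tiny. A sketch with mido; the CC numbers and the drive_camera() stub are assumptions, since the Tail Air itself is driven over its own interfaces:

    import mido

    CC_PAN, CC_TILT, CC_ZOOM = 1, 2, 3      # whatever CCs your controller sends

    def drive_camera(axis, value):
        print(axis, value)                   # stub: forward to the camera here

    with mido.open_input() as port:          # opens the default MIDI input
        for msg in port:
            if msg.type != "control_change":
                continue
            if msg.control == CC_PAN:
                drive_camera("pan", msg.value)
            elif msg.control == CC_TILT:
                drive_camera("tilt", msg.value)
            elif msg.control == CC_ZOOM:
                drive_camera("zoom", msg.value)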
The config, filters, and triggers look similar to my prototypes.
I’ve been wondering if there’s any sort of prior art or standards here from other domains like lighting consoles or workflow systems.
If I understand your prior-art question, I think there are a few:

- the OSC protocol, used by audio gear and some other studio equipment
- in the lighting world, I hear DMX is king
- OBS has a very extensive WebSocket API
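On the OBS point: the WebSocket API (obs-websocket 5.x) is easy to drive even without a client library. A minimal sketch that switches the program scene, assuming the default port 4455 and authentication disabled (with a password set you would also have to answer the auth challenge in the Identify step):

    import asyncio, json, websockets

    async def set_scene(name):
        async with websockets.connect("ws://localhost:4455") as ws:
            await ws.recv()                                    # Hello (op 0)
            await ws.send(json.dumps({"op": 1, "d": {"rpcVersion": 1}}))
            await ws.recv()                                    # Identified (op 2)
            await ws.send(json.dumps({"op": 6, "d": {
                "requestType": "SetCurrentProgramScene",
                "requestId": "scene-1",
                "requestData": {"sceneName": name},
            }}))
            print(await ws.recv())                             # RequestResponse (op 7)

    asyncio.run(set_scene("Stage"))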
For PTZ cameras I'm not sure; our old PTZ camera needs HTTP GETs to weird URLs (RPC-style).
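Purely as an illustration of that style (the CGI path below follows the PTZOptics-flavored convention and is an assumption, not any particular camera's real API):

    import requests

    CAM = "http://192.168.1.60"   # hypothetical camera address

    def recall_preset(n):
        # e.g. GET /cgi-bin/ptzctrl.cgi?ptzcmd&poscall&<n> recalls stored preset n
        requests.get(f"{CAM}/cgi-bin/ptzctrl.cgi?ptzcmd&poscall&{n}", timeout=2)

    recall_preset(1)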
Thanks again for starring!
Just spitballing.
Also, general audio covers roughly 20 Hz to 20 kHz, so you don't really have the headroom to superimpose (or subimpose?) another signal frequency-wise; it would be audible. (Unless you sacrifice some part of the original signal with a high/low cut.)
Although most audio gear does run at a 48 kHz sampling rate, so you could do time-multiplexing if you were really desperate; but then all signals would need processing before becoming useful/noise-free.
As for the cable sheath: of course you can do anything with custom circuits, but ordinary audio gear won't help you there as far as I know.
If you're asking because controlling the camera with mute status felt out of place to you: no, it's not wasting any channels. As someone else already explained very precisely above, you don't want live but unused microphones in your setup, due to feedback, extra noise, etc.
So when the speaker is speaking, the band is muted; when the band is playing, the speaker is muted. This can be used (in our case; not necessarily for everyone) to track where the event is happening, so the camera can turn to the right position based on it. In our use case, this eliminated the extra step of someone managing the camera by hand.
By the way, many consoles have extra buttons/knobs that are assignable to arbitrary functions, so through OSC I could query their states too and set up camera movements from those as well if I wanted to.
By the way, we do have a custom keyboard as well, which is basically a separation-of-concerns/simplified interface for OBS, since OBS itself is intimidating for most of our crew. But they can handle 10 buttons with nice labels.
For example, they have one button that toggles between the camera view and the projector view.
Also, I scripted OBS to manage camera positions via the scenes "Pulpit", "Stage", "Pulpit wide", and "Sitting", so they can manually override the automation the 1% of the time they need to; 99% of the time the automation is enough.
I have also scripted OBS to manage the stream lifecycle based on which scene is active. So:

- they can bring up the "Starting soon" scene, which starts the stream but not the recording
- they can bring up the "Break" scene, which pauses(!) the recording but not the stream
- they can bring up the "Finished" scene, which stops everything with a nice fade-out and a "thank you for joining" text, etc.
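Both of those behaviors fit in one small OBS Python script hung off the frontend scene-changed event. A sketch assuming the scene names above; camera_goto() is a hypothetical stub for whatever protocol the camera speaks, while the obspython frontend calls are the standard scripting API:

    import obspython as obs

    # hypothetical mapping from scene name to a PTZ preset number
    SCENE_PRESETS = {"Pulpit": 1, "Stage": 2, "Pulpit wide": 3, "Sitting": 4}

    def camera_goto(preset):
        pass  # stub: send the preset recall to the camera (HTTP/OSC/VISCA/...)

    def on_event(event):
        if event != obs.OBS_FRONTEND_EVENT_SCENE_CHANGED:
            return
        scene = obs.obs_frontend_get_current_scene()
        name = obs.obs_source_get_name(scene)
        obs.obs_source_release(scene)

        if name in SCENE_PRESETS:
            camera_goto(SCENE_PRESETS[name])           # point the camera

        if name == "Starting soon":
            if not obs.obs_frontend_streaming_active():
                obs.obs_frontend_streaming_start()     # stream, no recording yet
        elif name == "Break":
            obs.obs_frontend_recording_pause(True)     # pause recording, keep streaming
        elif name == "Finished":
            obs.obs_frontend_recording_stop()
            obs.obs_frontend_streaming_stop()          # the fade-out/thanks live in the scene itself

    def script_load(settings):
        obs.obs_frontend_add_event_callback(on_event)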
So a control pad can work together with this tool.
I've used it a lot for its originally designed use case (sending parameter updates between controllers and music synths), but also for a bunch of other things (sending tracking information from a Python computer-vision script to a Unity scene).
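The CV-to-Unity case is only a few lines on the Python side. The address and port here are made up, and the Unity side would need an OSC receiver (e.g. something like extOSC) listening for them:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)   # Unity listening locally

    def send_position(x, y):
        client.send_message("/tracking/pos", [x, y])

    send_position(0.42, 0.77)   # e.g. a normalized blob centroid from OpenCV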