April, 2005
Bill May
Dave Mackie
User Contributions by:
Dave Baker
Cesar Gonzalez
MP4LIVE
Changes in Version 1.3
Changes in Version 1.1
Hardware Requirements
Software Requirements
Warnings
Tips
Building and Installing
Using mp4live
Text and ISMA Href streams
Use with QT 6.0 and Real One
Network Transmission
Use with Darwin Streaming Server
Sharing Capture Cards
Command Line Options
Known Issues
Unknowns
Links
Configuration Variables
MP4LIVE is a Linux audio/video capture utility that can capture and encode audio and video in real time. The results can be written to an .mp4 file, transmitted onto the network via unicast or multicast, or both simultaneously! The audio is encoded with MP3 or AAC, and the video with MPEG-4 Simple Profile.
Please use the MPEG4IP SourceForge site to report problems, suggest enhancements, ask questions, etc. The URL is http://www.sourceforge.net/projects/mpeg4ip
Do not contact us via email.
Mp4live has been rewritten to handle multiple streams from a single audio and video source, as well as being able to generate ISMA href and text streams.
Added H.264 encoding through x264. Added B-frame encoding for H.264 and MPEG-4 through ffmpeg.
lame and faac are no longer required; a G.711 encoder has been added that will be the default.
If faac is installed, faac encoding will be the default encoding.
The default MPEG-4 encoder is no longer included with the package.
Added an option to create a new file name based on the existing name when recording files.
Added mpeg2 video and mpeg1 layer 2 audio encoding with ffmpeg.
Added XVID 1.0 API code
Added decimate and deinterlace filters
Note: most of this is outdated; I now develop on a dual 2 GHz Pentium IV machine, or a 3 GHz machine with at least 512 MB of RAM.
A Pentium III class machine of at least 500 MHz (a Pentium IV class machine at 2 GHz is very nice).
Note that systems vary quite a bit in their video capture abilities. For instance, I have a name-brand 750 MHz PIII that drops frames when pushed to CIF sizes at greater than 15 fps, but a no-name clone with an 800 MHz PIII that can encode CIF at 24 fps without a problem.
RAM is not typically an issue. I develop on machines with 128 MB, and I believe much smaller configurations would work fine; the real issue is CPU speed. We recommend at least 256 MB.
A sound card with an OSS-compatible driver and capture ability, preferably OSS version 3.8.2 or later (which has the SNDCTL_DSP_GETERROR define), with a driver that accurately supports the SNDCTL_DSP_GETISPACE ioctl.
A video device with a video4linux (v4l) compatible driver and memory mapped capture ability. We also support (and recommend) the video4linux2 (v4l2) driver.
Known to work are:
Note on multi-processor machines (SMP): mp4live is multithreaded at a coarse level. Specifically, the video encoder, audio encoder, file recorder, network transmitter, and user interface each have their own thread. Unfortunately for owners of multiprocessor machines, the video encoder thread dominates the computational requirements, so one CPU will be very busy while the others are lightly to moderately loaded. For those looking for a project, a multi-threaded video encoder would certainly provide an interesting challenge.
Again - a bit out-dated; if you're doing multiple encodings, this is no longer true.
Linux with a 2.4 or later kernel.
We recommend a kernel with V4L2 built in; see the instructions on building it yourself. The 2.6 kernel should have the correct V4L2 interface.
Drivers for sound and video devices
bttv 0.7 or later video capture driver
(Included with RedHat 7.1 and later)
(0.9 required for v4l2)
qce webcam driver
Please see the MPEG4IP README regarding legal issues, and the list of open source packages that are redistributed with this code.
This is a LINUX program! Do not even think about trying to get this to run on Windows! Even moving it to other UNIX systems would require some re-programming since the sound and video capture interfaces are Linux specific.
We no longer include a native MPEG-4 encoder with the package; you will have to download ffmpeg or xvid if you want to encode MPEG-4. You will also want a good audio encoder, such as faac, lame or twolame. See the main README for more information.
By far the easiest route is to use a Linux distribution that already has a 2.6 kernel and the bttv driver, and the associated i2c module built into it.
See the instructions on how to build your own kernel with V4L2 included.
I've had many headaches with sound cards under Linux. Before you start using mp4live, please make sure you're able to playback and record with your sound card!
You should definitely increase the number of capture buffers for the bttv
driver. This reduces the chance of dropping video frames due to transient
delays in the system. By default bttv uses 2 buffers. You can increase this
by editing /etc/modules.conf and adding the line
"add options bttv gbuffers=32"
at the end of the file. The value 32 is my recommendation but you can
experiment with other values if you are so inclined.
Note: with v4l2, we're not sure if this is required any more.
I suggest you disable any fancy, computationally intensive screensavers when using mp4live to capture long programs. Along the same lines, don't run any programs that make large resource demands (CPU, bus, disk, network) while mp4live is running.
If you're capturing large video image sizes, then you may be able to boost the encoded video frame rate by disabling video preview. In general, once you've got the system working, disabling preview is a good idea.
The AAC audio encoder is somewhat slower than the MP3 audio encoder so you may see lower video frame rates and greater sensitivity to CPU load if you are using AAC. (2004 note - not so much any more).
Linux supports the POSIX soft real-time extensions and mp4live will attempt to use these to give it priority over non-real time processes. Typically these calls can only be made by processes with root privileges, so you may want to run mp4live as root for this reason.
If you have the latest version of OSS, you have a chance of detecting audio overruns. That, in combination with the latest version and a fast machine, will give fairly good lip synchronization up to about the one-hour mark when running at 90% CPU with V4L. With V4L2, we've had good audio and video sync out beyond 500 hours in our lab.
Contributed by Dave Baker.
The bt878 cards (WinTV PCI and family) are not entirely straightforward to capture audio from, and there are a couple of different ways:
The WinTV PCI in particular has an on-board audio device that can be used to capture audio. Simply load the btaudio module (or snd-bt87x for ALSA). Beware that if you have a sound card, the WinTV's device will probably have been allocated /dev/dsp1, so you will have to set mp4live to capture from this. It will also have /dev/dsp2, as it registers both an analog and a digital device - see your /var/log/messages file. Try the digital one first. Not all bt878 boards have this audio device, and not all of them have it wired the same way, so this may or may not work. Note especially that the WinTV PCI (and possibly others) can capture from the tuner this way but will not capture audio from the 3.5mm jack input on the card.
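For example, assuming the on-board device shows up as /dev/dsp1 on your system (device names vary; check /var/log/messages), a minimal sketch of the setup would be:

modprobe btaudio

and then, in ~/.mp4live_rc:

audioDevice=/dev/dsp1

ALSA users would load snd-bt87x instead.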
The WinTV cards have two 3.5mm jacks on the back - one input and one output. As mentioned above, the WinTV PCI cannot use the input to record sound itself (other cards may be able to). The idea is that when the card is capturing video from the tuner, the card feeds the tuner's audio out of the output jack, but when it is capturing from the composite or S-Video connectors, it connects the input and output together. Hence, if you'll only ever be capturing from composite and/or S-Video, you may as well connect your audio source directly into the line-in jack on your sound card. Either way, if you want to capture audio that isn't coming from the tuner, you'll need to use a separate sound card.
It is also worth noting that the quality of the sound card used will have a big effect on the lip sync of your captured video. Cheaper sound chips, like the ones commonly found on mainboards as on-board audio, can have very poor clocks, which causes the audio and video to drift out of sync, often producing a noticeable lag after only five or ten minutes. Use a proper sound card!
Editor's note - mp4live should compensate for this correctly, provided the V4L2 video interface is used and the sound card correctly returns the value from the SNDCTL_DSP_GETISPACE ioctl.
See the MPEG4IP README for general notes about the build environment.
Assuming you've already done a build at the top level of mpeg4ip, and you're on a Linux system, then mp4live should be built and waiting for you in this directory. If you've done a top-level 'make install', then mp4live will be installed into '/usr/local/bin'. Of course, you can also issue 'make' and 'make install' from this directory as well.
Typically, there is no need for command line options to mp4live. You can just type 'mp4live' and you'll be up and running.
Global configuration settings are stored in your home directory in ~/.mp4live_rc. This file is read when mp4live is started.
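As a point of reference, a minimal ~/.mp4live_rc need only contain the settings you want to change from their defaults; the variable names come from the Configuration Variables tables at the end of this document, and the values below are purely illustrative:

duration=30
durationUnits=60
audioDevice=/dev/dsp
videoDevice=/dev/video0
recordEnable=1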
New in mp4live 1.3 are the concepts of profiles and streams. A profile names a set of audio, video, or text parameters. Different profiles can be created, and can be selected at any time mp4live is run. For example, you could create a video profile that uses MPEG-4 with ffmpeg at 320x240 and 500 kbps, and another that uses MPEG-4 with xvid at 352x288 and 750 kbps. You can then switch between these two profiles without changing each setting individually.
Profiles are stored in a user-settable directory (by default ~/.mp4live_d), with Audio, Video and Text sub-directories. Profiles can be created with mp4live (use the drop-down menu and select "Add"), or changed (select the profile, then select "Customize"). It is recommended that the profile directory not change; for multiple instances of mp4live, use separate stream directories.
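Roughly, the resulting on-disk layout looks like this (each profile or stream is an individual file of name=value settings; the names shown are examples):

~/.mp4live_rc            global settings
~/.mp4live_d/Audio/      audio profiles
~/.mp4live_d/Video/      video profiles
~/.mp4live_d/Text/       text profiles
~/.mp4live_d/Stream/     stream definitions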
A stream is a grouping of a single audio, video, and/or text profile. A stream can be transmitted, recorded, or both.
Mp4live can support as many streams as your CPU processing power can support. Each stream has the same source, but can have different destination parameters.
Mp4live is smart about creating the streams: for any given audio, video or text profile, the encoding specified by that profile is done only once, no matter how many streams include that profile.
mp4live will capture video at the largest size specified in the profiles, and will capture audio at the highest sample rate. Video for other profiles is resized; audio is resampled and converted from stereo to mono, if needed.
By creating multiple streams, different combinations of audio and video profiles can be used and saved. For example, one stream might pair a high-bitrate video profile with an AAC audio profile for recording, while another reuses the same audio profile with a low-bitrate video profile for network transmission.
The default settings for mp4live are to record 1 minute of audio and video to an mp4 file, ./capture.mp4. The first time you use the program, it's a good idea to just hit the Start button and see what happens. If all goes well, one minute later you have a playable/streamable mp4 file. If you don't get this, then it's time to review this README, and if that doesn't help, fire off a message on the MPEG4IP SourceForge discussion group.
Assuming things are working you can now use the various controls to adjust things like the video size and frame rate, the audio sampling rate, the encoded bitrates, etc. The UI is hopefully self-explanatory. If not, let us know what's confusing and we'll look at fixing that. (I'm a big believer that if you need to read a document to use a UI, then the UI is broken and should be fixed. Of course, as I've re-learned many times, what is obvious and natural to me, isn't always to other people.)
If you're capturing video that uses "widescreen" or "letterbox" format, it's a big win to change the "Aspect Ratio" in the Video Settings. This will cause the video to be automatically cropped so you don't waste precious CPU time encoding the empty black bars at the top and bottom of the screen.
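For instance, for a 2.35:1 letterboxed source, the corresponding video profile setting (presumably videoCropAspectRatio, listed in the Video Profile Settings table below) would be something like:

videoCropAspectRatio=2.35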
The capture cards will always try to capture at a frame rate based on the video signal setting (either NTSC or PAL).
The default is to assume that the video driver will capture close to the correct frame rate: 29.97 fps for NTSC, 25 fps for PAL. If you don't think this is working quite right, try adding "videoTimestampCache=0" to the .mp4live_rc file before you start. (This may be the case with USB web cams, but most likely not with regular capture cards.) Note: this applies to V4L users only.
A user contributed some simple video filters. We have a decimate filter (which captures video at 2 times the resolution, and scales it back down), and a deinterlace filter (which will do a linear blend style of deinterlace on the Y plane before encoding).
The decimate filter is used on the source, while the deinterlace filter is done on a per-profile basis.
These filters do take up some processing power, so be careful.
Along with audio and video, mp4live allows you to transmit plain text or href streams, as defined in ISMA's 2.0 specification.
There are currently three sources of entries for these streams:
ISMA Href's can be used to either automatically open a URL (also known as auto dispatch), or open the target URL when clicked.
Hrefs have the following format:
[A]<entry url>[T<target element>][E<embedded parameters element>][M<>]
The < and > are mandatory around the entry URL. The remaining elements are defined as:
mp4live will automatically generate the A< and > if a < is not the first character entered. So if http://www.cisco.com is entered, A<http://www.cisco.com> will be generated.
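As a purely hypothetical illustration, an auto-dispatched link with a target element might be entered as:

A<http://www.example.com/stats.html>T<myframe>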
It is possible to create content for QT 6.0 and Real One with the Envivio plug-in. You must set the audio encoding to AAC for both of these.
Quicktime does not seem to like a 7350 Hz sampling frequency; do not use it if you want interoperability with Quicktime.
If you want to stream (using the instructions below), you must have Envivio version 1.2. Download Real One, then download Envivio TV version 1.2 afterwards. The Envivio plugin downloaded with Real One will not play multiple AAC frames in an RTP packet, so your sound will appear to stutter.
If you want to stream mpeg2 to the QT player (with the mpeg2 add-on), you will need to broadcast MPEG2 video and MPEG layer 2 audio. For both of these, ffmpeg is required. Make sure to use the rtpUseMp3RtpPayload14=1 configuration setting.
September, 2005 note: it appears that Quicktime 7.0 will allow streaming of various codecs; I was able to get mpeg-4 video in conjunction with G.711. You can try various codecs yourself.
To use mp4live to transmit live audio/video to the network, follow these directions:
Select the Transmit check box in the Stream Information section of the main window.
By default, mp4live will generate multicast addresses so that there are no overlaps. The "Edit->Generate Addresses" menu item will pick new, random addresses. Each stream that shares a common profile will transmit on the same address:port.
If you wish to change this, select the 'Set Address' button for the stream you wish to change. A dialog will appear that allows a choice between generated addresses and a fixed address. To transmit to a unicast address, use the Fixed Address setting.
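As a sketch, the equivalent entries in a stream file (variable names from the Stream Configuration table below; the unicast address is only an example) would look something like:

videoAddrFixed=1
videoDestAddress=192.168.1.50
videoDestPort=20000
audioAddrFixed=1
audioDestAddress=192.168.1.50
audioDestPort=20002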
When you press mp4live's 'Start' button, media transmission to the network will begin. A small text file with the extension .sdp will also be created that describes the media transmission for the player. The player needs the .sdp file to be able to tune into the media streams. The sdp files are regenerated pretty much any time a configuration item is changed; they can also be generated by using the "Edit->Generate SDP Files" menu item, or by using the command line argument.
SDP file names will by default contain the stream name, with spaces converted to underscores.
The most convenient way to distribute the .sdp file is to have mp4live write it to a directory that is accessible from a web server (httpd) that is running on the same machine as mp4live. This allows the client to be started with the HTTP URL of the sdp file, and it will download the .sdp file via http and then use the information in the .sdp file to tune into the network transmission. E.g.:
gmp4player http://myserver/myprogram.sdp
For Real One, use the Open command with http://myserver/myprogram.sdp. For QT6.0, use Open URL.
You can of course distribute the .sdp file in a number of other ways, say ftp, or email. You would then start the player with the local file name of the sdp file, e.g.:
gmp4player myprogram.sdp
If you would like to use mp4live in conjunction with the Darwin Streaming Server (DSS), that is easy to do. You can have mp4live both record and transmit live media streams. When you record the .mp4 files, just ensure that they are written to the media directory that is accessible via the Darwin Streaming Server, typically /usr/local/movies. Once the recording is complete, it will be available for on-demand playback.
For example:
gmp4player rtsp://DSS/mymovie.mp4
The Darwin Streaming Server can also be configured to act as a relay agent for the mp4live media streams. Copy the .sdp file generated by mp4live to the media directory of the Darwin Streaming Server (e.g. /usr/local/movies). Players can now request the .sdp file from DSS, which will cause DSS to act as a relay between mp4live and the player.
For example:
gmp4player rtsp://DSS/mymovie.sdp
If you're having problems where gmp4player stops after 2 minutes when relaying through a Darwin Streaming Server, add the line rtpNoBRR0=1 to your .mp4live_rc. Darwin expects RTCP messages from gmp4player, and the b=RR:0 statement in the SDP stops gmp4player from sending them.
If you have another program that wants to simultaneously process the raw audio and/or video from the capture cards, there is typically a problem in that many drivers only support one reader at a time. To address this issue, mp4live can be configured to write the raw audio and/or video that it reads from the capture cards to a named pipe (fifo). A named pipe looks like a file, but the data only exists in memory and never goes to disk. This is an efficient way to have the two applications share the media data.
To configure this feature, add the following to ~/.mp4live_rc (or whatever configuration file you want to use), changing "/dir" to some directory where you want the named pipes to exist:
rawEnable=1
rawAudioFile=/dir/audio_pipe
rawAudioUseFifo=1
rawVideoFile=/dir/video_pipe
rawVideoUseFifo=1
The audio format is 16 bit PCM, the video format is YUV12 (planar 4:2:0 YUV).
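As an illustration only, a reasonably recent ffmpeg should be able to read the raw video pipe directly; the frame size and rate must match your mp4live capture settings (the values here are examples):

ffmpeg -f rawvideo -pix_fmt yuv420p -s 320x240 -r 25 -i /dir/video_pipe preview.avi

The audio pipe can be read the same way as signed 16-bit PCM (for example, with -f s16le -ar 44100 -ac 2).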
There are currently five command line options to mp4live:
--file <config-file>
--automatic
--headless
--sdp
--config-vars
--file <config-file>
allows specification of the mp4live configuration file
to something other than ~/.mp4live_rc. Perhaps you have several frequently
used configurations. You can save the configuration settings to different
files, and then use this option to choose among them.
--automatic
causes mp4live to act as if the Start button was pressed
immediately upon startup. The program will do whatever the current
configuration instructs it to do. This option can be used in conjunction
with the 'cron' utility to do scheduled recording and/or transmission.
--headless
causes mp4live to behave in the --automatic
mode AND not display any user interface.
--sdp
causes mp4live to just generate the sdp file based on its configuration file and then exit.
--config-vars
will cause mp4live to display the list of all
possible configuration variables and exit.
In addition, any of the configuration variables can be overridden by using --<variable>=<value>. Make sure that the case of the variable matches the case shown in the --config-vars display.
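Putting these together, a hypothetical crontab entry for an unattended two-hour recording every night at 23:00 (the file paths are examples) might look like:

0 23 * * * /usr/local/bin/mp4live --headless --file /home/me/nightly_rc --duration=2 --durationUnits=3600

and the matching SDP file could be generated ahead of time with:

/usr/local/bin/mp4live --file /home/me/nightly_rc --sdp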
Using a system with a PCI instead of an AGP video display card can cause video "tearing" with CIF or larger size images. I.e. the PCI bus quickly gets swamped moving raw video from the video capture card to the CPU, and then from the CPU to the video display card. Having the AGP bus for the CPU to video display card transfer alleviates this problem. If someone is interested one could experiment with the video overlay capabilities of the Bt8x8 to bypass this problem, but it would require some rework of our code with respect to the video preview function.
It took me a while to figure this out, so perhaps I can save some of you some time. If you use the Hauppauge WinTV Go card, you need to connect the mini-jack on the card to the line-in input on your sound card in order to get the audio signal from the TV tuner.
Note there is currently no support for DV/mini-DV camcorders via FireWire. You can of course still use these via their composite or S-Video outputs.
More recent versions of mp4live add streaming hint tracks as a post-processing step (i.e. after the recording is finished). For long duration recording (1 hour or greater), this step can take a minute or two. I'm hoping to enhance the UI to provide user feedback while this is taking place, but for now the application gets very unresponsive during this period. If this is a big problem for you, there is a configuration option to disable the hinting process, "recordMp4HintTracks=0". The mp4 file can always be hinted later with the mp4creator utility.
The audio and video should be in sync if you're using the latest tools (V4L2 and the latest OSS driver). If you're not, you will have problems in the long term (usually after an hour or so).
The current audio/video synchronization algorithm in the MP4 File Recorder
starts by dropping video frames until an audio frame is loaded. It then
drops subsequent video frames until the next I video frame is loaded.
This I frame is stretched to the beginning of the first audio frame to
synchronize the video and audio.
Because of this, the first video frame is displayed for a longer duration
before the video starts rolling. This duration is usually of the order of
4 or 5 video frame times and generally unnoticeable.
We've done our best to try to start the audio first, but since the video
is already running in the preview (if turned on), the file recorder receives
a bunch of video frames before the first audio frame propagates to the
file recorder. Given this, the above algorithm seems to be a reasonable
solution.
Sometimes, you may experience a crash while changing the video parameters such as height/width or aspect ratio. If this occurs, change the parameter in your .mp4live_rc file, and restart mp4live, or just pass the videoRawWidth and videoRawHeight as command line parameters.
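For example (the sizes are illustrative):

mp4live --videoRawWidth=352 --videoRawHeight=288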
H.261 recording does not work
I've only used mp4live with two video capture cards: the Viewcast Osprey 100 and the Hauppauge WinTV Go. There are many other Bt8x8-based capture cards listed in the bttv driver documentation. Odds are you're using one of these ;-) Reports from initial users suggest, though, that the bttv driver handles the wide variety of cards gracefully, and mp4live doesn't have card-specific issues.
If you do have problems with mp4live, my first suggestion is that you download the latest version of the xawtv package, and try it with your capture card. If it works and mp4live doesn't then I'd be glad to hear from you. If xawtv doesn't work with your capture card, then I can't help you. Something is wrong with your capture card/system/bttv driver/kernel combination. I don't have the capability or inclination to debug that for you!
MPEG4IP | http://www.mpeg4ip.net/
bttv driver | http://bytesex.org/bttv/
qce driver | http://www.sourceforge.net/projects/qce-ga
xawtv | http://bytesex.org/xawtv/
Xvid | http://www.xvid.org/
LAME | http://www.sourceforge.net/projects/lame |
TWOLAME | http://www.twolame.org |
FAAC | http://www.audiocoding.com |
FFMPEG | http://ffmpeg.sourceforge.net |
Global configuration variables are stored in the ~/.mp4live_rc file.
Application Level

Name | Type | Default | Does
---|---|---|---
useRealTimeScheduler | bool | true | Attempt to use real-time features of the OS; probably only succeeds as root | 
duration | int | 1 | duration in durationUnits | |
durationUnits | int | 60 | Number of seconds per duration unit (1, 60, 3600, 86400) | |
debug | bool | false | Enable debug output | |
signalHalt | string | sighup | Signal used to stop when running without a GUI | 
Global Audio Settings
audioSourceType | string | OSS | Audio Source Type (only OSS for now) | |
audioDevice | string | /dev/dsp | Audio Device to use | |
audioMixer | string | /dev/mixer | Audio Mixer to use | |
audioInput | string | mix | Audio Input Type to use | |
audioOssUseSmallFrags | bool | true | Enable small fragment sizes in OSS | 
audioOssFragments | int | 128 | Number of fragments | |
audioOssFragSize | int | 8 | Size of fragments | |
Global Video Settings
videoSourceType | string | V4L | Video Source to use (V4L is for both V4L and V4L2) | |
videoDevice | string | /dev/video0 | Video Device to use | |
videoInput | int | 1 | Video Input to use (index from V4L) | |
videoSignal | int | 1 | 0 - PAL, 1 - NTSC, 2 - SECAM | 
videoTuner | int | -1 | Which tuner to use - usually 0 | |
videoChannelListIndex | int | 0 | Which channel list to use for the tuner; see video_util_tv.cpp | 
videoChannelIndex | int | 1 | 0 based index for channel in above list | |
videoPreview | bool | true | Show Video Preview in Gui | |
videoRawPreview | bool | false | Show Raw Video Preview in Gui | |
videoEncodedPreview | bool | true | Show Encoded Video Preview in Gui | |
videoBrightness | int | 50 | Brightness level (0 to 100) | |
videoHue | int | 50 | Hue level (0 to 100) | |
videoColor | int | 50 | Color level (0 to 100) | |
videoContrast | int | 50 | Contrast level (0 to 100) | |
videoTimestampCache | bool | true | Calculate timestamps, rather than read with timestamp (V4L only) | |
videoFilter | string | none | Video filter to use (none, "deinterlace - blend") | |
Global Recording Options
recordEnable | bool | true | True to record | |
recordRawInMp4 | bool | true | True to record raw audio and video in MP4 File | |
recordRawMp4Audio | bool | false | True to record raw audio (PCM at encode frequency) | |
recordRawMp4Video | bool | false | True to record raw video (YUV at height/width) | |
recordMp4HintTracks | bool | true | Record hint tracks when recording completed | |
recordMp4Optimize | bool | false | Optimize mp4 file when recording completed | |
recordMp4FileStatus | int | 1 | What happens to the file when restarted: 0 - append, 1 - overwrite, 2 - create a new file with timestamp | 
rawEnable | bool | 0 | Output raw audio/video to file | 
rawAudioUseFifo | bool | 0 | Output to pipe (see Sharing Capture Cards) | |
rawAudioFile | String | capture.yuv | File to store raw PCM | |
rawVideoUseFifo | bool | 0 | Output to pipe (see Sharing Capture Cards) | |
rawVideoFile | String | capture.yuv | File to store raw YUV | |
Global Transmission (RTP) Options
rtpPayloadSize | int | 1460 | max bytes of audio or video per packet | |
rtpMulticastTtl | int | 15 | Multicast TTL | 
rtpDisableTimestampOffset | bool | false | If true, start RTP timestamps at 0; if false, start at a random offset | 
rtpUseSingleSourceMulticast | bool | false | Use SSM multicast | |
rtpNoBRR0 | bool | false | If true, do not include b=RR:0 in SDP |
Stream configuration variables are stored in an individual file per stream in the .mp4live_d/Stream directory.
Stream Configuration

Name | Type | Default | Does
---|---|---|---
name | string | none | Mandatory Name | |
audioEnabled | bool | true | True if audio is enabled | |
videoEnabled | bool | true | Enable Video | |
textEnabled | bool | true | Enable Text | |
recordEnabled | bool | false | True to record | |
recordFile | string | <stream name>.mp4 | MP4 Filename to create | |
transmitEnabled | bool | true | True to transmit over the network | |
sdpFile | string | <stream name>.sdp | Where to store sdp file describing session | |
audioAddrFixed | bool | false | true to fix address; false autogenerates | |
audioDestAddress | string | 224.1.2.3 | Audio Stream destination address | |
audioDestPort | int | 20002 | Audio Stream destination port | |
videoAddrFixed | bool | false | true to fix address; false autogenerates | |
videoDestAddress | string | 224.1.2.3 | Video Stream destination address | |
videoDestPort | int | 20000 | Video Stream destination port | |
textAddrFixed | bool | false | true to fix address; false autogenerates | |
textDestAddress | string | 224.1.2.3 | Text Stream destination address | |
textDestPort | int | 20004 | Text Stream destination port |
Audio profile configuration variables are stored in an individual file per stream in the .mp4live_d/Audio directory.
Audio Profile Settings

Name | Type | Default | Does
---|---|---|---
name | string | none | Mandatory Name | |
audioChannels | int | 2 | Number of Encoded Audio Channels (1 or 2) | |
audioSampleRate | int | 44100 | Audio Frequency Sample Rate | |
audioBitRateBps | int | 128000 | Encoded Audio Bit Rate | |
audioEncoding | string | MP3 | Audio Encoding to use | |
audioEncoder | string | LAME | Audio Encoder to use | |
rtpUseMp3RtpPayload14 | bool | false | If true, use RTP payload 14 and a 90000 timescale; if false, use a dynamic payload and frequency timescale | 
rtpMaxFramesPerPacket | int | 0 | if non-zero, set maximum number of frames per packet |
Video profile configuration variables are stored in an individual file per stream in the .mp4live_d/Video directory.
Video Profile Settings

Name | Type | Default | Does
---|---|---|---
name | string | none | Mandatory Name | |
videoEncoder | string | xvid | Video Encoder Type | |
videoEncoding | string | MPEG4 | Video Encoding Type | |
videoWidth | int | 320 | Width of output frame in pixels | |
videoHeight | int | 240 | Height of output frame in pixels | |
videoCropAspectRatio | float | 1.33 | Aspect ratio | |
videoFrameRate | float | 29.97 | Frame Rate | |
videoKeyFrameInterval | float | 2.0 | Number of Seconds between Key Frames | |
videoBitRate | int | 500 | Encoded Video Bit Rate in 1000 bits per second | |
videoForceProfileId | bool | false | True to force MPEG4 Video Profile to videoProfileId | |
videoProfileId | int | 3 (SP@L3) | MPEG4 Video Profile when forcing | |
videoH261Quality | int | 10 | Starting H.261 Video Quality | |
videoH261QualityAdjFrames | int | 8 | Number of frames to adjust H.261 Quality over | |
videoCaptureBuffersCount | int | 16 | Number of capture buffers to request (V4L2 only) | |
videoFilter | string | none | Video filter to use (none, "deinterlace - blend") | |
videoUseBFrames | bool | 0 | Encode using B-frames (ffmpeg and x264 encoders only) | 
videoBFrameNum | int | 2 | Number of B-frames (ffmpeg and x264 encoders only) | 
MPEG-4 Video Options
videoMpeg4ParWidth | int | 0 | MPEG-4 pixel aspect ratio (PAR) width value | 
videoMpeg4ParHeight | int | 0 | MPEG-4 pixel aspect ratio (PAR) height value | 
XVID 1.0 Video Options
xvid10VideoQuality | int | 6 | Video Quality (0 to 6) | 
xvid10UseGMC | bool | false | Use GMC (do not use with QuickTime clients) | 
xvid10UseQpel | bool | false | Use Quarter Pel (do not use with QuickTime clients) | 
xvid10UseLumimask | bool | false | Use XVID lumimask filter plugin | |
xvid10UseInterlace | bool | false | Use XVID interlace plugin |
Text profile configuration variables are stored in an individual file per stream in the .mp4live_d/Text directory.
Text Profile Settings

Name | Type | Default | Does
---|---|---|---
name | string | none | Mandatory Name | |
textEncoding | string | href | Text encoding to use | |
textRepeatTime | float | 1.0 | time in seconds to repeat last transmission | |
hrefMakeAutomatic | bool | true | URLs without ISMA href markings are made automatic (auto dispatch) | 
Dave Mackie
Bill May
Cisco Systems, Inc.