Commit 5f67f5db authored by Marvin Scholz's avatar Marvin Scholz

Add Ices 2.0.3 docs

Just the plain copy of the docs from the repo for now, no
fancy markdown docs.
parent 78e1f116
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>IceS v2.0 Documentation</title>
<link rel="STYLESHEET" type="text/css" href="style.css">
</head>
<body>
<div class=boxtest>
<h1>Basic setup</h1>
<table width="100%"><tr><td bgcolor="#007B79" height="10" align="center"></td></tr></table>
<br>
<h2>What does IceS require?</h2>
<p>
IceS v2 is not a graphical application; its purpose is to take whatever
audio it is given and turn it into a stream for feeding to the Icecast streaming
server. It does, however, require the following:
</p>
<ul>
<li>libogg available at <a HREF="http://www.vorbis.com">www.vorbis.com</a>
<li>libvorbis available at <a HREF="http://www.vorbis.com">www.vorbis.com</a>
<li>libxml2 available at <a HREF="http://xmlsoft.org">xmlsoft</a>
<li>libshout 2 available at <a HREF="http://www.icecast.org">The Icecast site</a>
</ul>
<p>
Please note that in many cases pre-built packages are split into two:
a run-time package, typically consisting of the actual run-time library,
and a development package, consisting of the support files needed to
compile and link the application.
</p>
<p>
The following are optional:
</p>
<ul>
<li>ALSA driver and libs available at <a HREF="http://www.alsa-project.org">The ALSA site</a>
<li>RoarAudio driver and libs available at <a HREF="http://roaraudio.keep-cool.org/">The RoarAudio site</a>
</ul>
<br>
<h2>What does IceS do?</h2>
<p>
IceS reads audio data from an input and sends it to one or
more files or icecast servers. Before it's actually sent out, some
processing may be performed, typically resampling and/or downmixing to
produce streams suited to various bandwidth requirements.
</p>
<p>
The streams produced are Ogg Vorbis streams, so while Icecast 2 is
capable of serving these streams, other streaming servers may not be.
</p>
<br>
<h2>What input can IceS handle?</h2>
<p>
Several inputs currently exist, but some may depend on particular
platforms or on whether certain drivers or libraries are available.
</p>
<ul>
<li>OSS - Open Sound System. Typically used on Linux-based systems to
get live input from a soundcard.
<li>ALSA - Advanced Linux Sound Architecture. Like OSS but with various
improvements for Linux-based systems.
<li>stdinpcm - Uses standard input to receive PCM audio.
<li>playlist - Uses a playlist to read audio files for processing.
<li>sun - Like OSS, but for Sun Solaris; also works on OpenBSD.
<li>roar - RoarAudio, a modern multi-OS sound system.
</ul>
<h2>How do you start IceS?</h2>
<p>
The configuration of IceS is done via an XML-based config file, which
you supply as an argument to the program at invocation time. For example:
</p>
<pre>
ices /etc/ices.xml
</pre>
</div>
</body>
</html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>IceS v2.0 Documentation</title>
<link rel="STYLESHEET" type="text/css" href="style.css">
</head>
<body>
<div class=boxtest>
<h1>Config File</h1>
<table width="100%"><tr><td bgcolor="#007B79" height="10" align="center"></td></tr></table>
<p>The ices 2 configuration file is in XML format, which is described in detail below.
There are some sample XML files provided in the distribution under the conf directory
which show the main ways ices is used.</p>
<p>Live audio streaming takes audio from the soundcard, which can be captured from, say, the
Mic, Line-In, CD or a combination of these.</p>
<ul>
<li>ices-oss.xml</li>
<li>ices-alsa.xml</li>
<li>ices-roar.xml</li>
</ul>
<p>Playlist audio streaming, takes pre-encoded Ogg Vorbis files and either sends them
as-is or re-encodes them to different settings</p>
<ul>
<li>ices-playlist.xml</li>
</ul>
<h2>General layout</h2>
<pre>
&lt;?xml version="1.0"?&gt;
&lt;ices&gt;
general settings
stream section
&lt;/ices&gt;
</pre>
<h2>General settings</h2>
<p>These apply to IceS as a whole. The example below is a useful starting point:</p>
<pre>
&lt;background&gt;0&lt;/background&gt;
&lt;logpath&gt;/var/log/ices&lt;/logpath&gt;
&lt;logfile&gt;ices.log&lt;/logfile&gt;
&lt;logsize&gt;2048&lt;/logsize&gt;
&lt;loglevel&gt;3&lt;/loglevel&gt;
&lt;consolelog&gt;0&lt;/consolelog&gt;
&lt;pidfile&gt;/var/log/ices/ices.pid&lt;/pidfile&gt;
</pre>
<h4>background</h4>
<div class=indentedbox>
Set to 1 if you want IceS to put itself into the background.
</div>
<h4>logpath</h4>
<div class=indentedbox>
A directory that can be written to by the user that IceS runs as. This
can be anywhere you want, but as log files are created, write access to the
stated directory must be given.
</div>
<h4>logfile</h4>
<div class=indentedbox>
The name of the logfile created. On log re-opening the existing logfile is
renamed to &lt;logfile&gt;.1
</div>
<h4>logsize</h4>
<div class=indentedbox>
When the log file reaches this size (in kilobytes) the log file will
be cycled (the default is 2048, i.e. 2 MB).
</div>
<h4>loglevel</h4>
<div class=indentedbox>
A number that represents the amount of logging performed.
<ul>
<li>1 - Only error messages are logged
<li>2 - The above and warning messages are logged
<li>3 - The above and information messages are logged
<li>4 - The above and debug messages are logged
</ul>
</div>
<h4>consolelog</h4>
<div class=indentedbox>
A value of 1 will cause the log messages to appear on the console instead
of in the log files. Setting this to 1 is generally discouraged, as logs are
cycled and writing to the screen can cause stalls in the application, which
is a problem for timing-critical applications.
</div>
<h4>pidfile</h4>
<div class=indentedbox>
State a filename with path to be created at start time. This file will
then contain a single number which represents the process id of the
running IceS. This process id can then be used to signal the application
of certain events.
</div>
<h2>Stream section</h2>
<p>This describes how the input and outgoing streams are configured.</p>
<pre>
&lt;stream&gt;
Metadata
Input
Instance
&lt;/stream&gt;
</pre>
<h3>Metadata</h3>
<pre>
&lt;metadata&gt;
&lt;name&gt;My Stream&lt;/name&gt;
&lt;genre&gt;Rock&lt;/genre&gt;
&lt;description&gt;A short description of your stream&lt;/description&gt;
&lt;url&gt;http://mysite.org&lt;/url&gt;
&lt;/metadata&gt;
</pre>
<p>
This section describes what metadata information is passed in the headers
at connection time to icecast. This applies to each instance defined within
the stream section but may be overridden by a per-instance &lt;metadata&gt;
section.
</p>
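<p>
For example, an instance could carry its own metadata block to override the
stream-wide one. The following is just a sketch with placeholder values:
</p>
<pre>
&lt;instance&gt;
&lt;metadata&gt;
&lt;name&gt;My Stream (low bandwidth)&lt;/name&gt;
&lt;/metadata&gt;
&lt;!-- other instance settings --&gt;
&lt;/instance&gt;
</pre>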
<h3>Input</h3>
<p>
This section deals with getting the audio data into IceS. There are a few
ways that can happen. Typically it's either from a playlist or via a soundcard.
</p>
<p>
The layout is consistent between the different input modules. Within the
input section a module tag is needed to identify the module in question. The
rest are made up of param tags specific to the module. There can be several
param tags supplied to a module. Details of the module parameters are
shown later.
</p>
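<p>
As a sketch of this layout, an OSS input section might look like the
following; the module and parameter names shown here are those of the OSS
module covered in the input modules page:
</p>
<pre>
&lt;input&gt;
&lt;module&gt;oss&lt;/module&gt;
&lt;param name="rate"&gt;44100&lt;/param&gt;
&lt;param name="channels"&gt;2&lt;/param&gt;
&lt;param name="device"&gt;/dev/dsp&lt;/param&gt;
&lt;/input&gt;
</pre>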
<h3>Instance</h3>
<p>
Multiple instances can be defined to allow for multiple encodings; this
is useful for encoding the same input to multiple bitrates. Each instance
defines a particular set of actions that occur on the passed-in audio. Any
modifications to the input data are isolated to the instance.
</p>
<pre>
&lt;instance&gt;
hostname
port
password
mount
tls-mode
ca-directory
ca-file
allowed-ciphers
client-certificate
yp
resample
downmix
savefile
encode
&lt;/instance&gt;
</pre>
<h4>hostname</h4>
<div class=indentedbox>
State the hostname of the icecast server to contact; this can be a name or IP
address, and can be IPv4 or IPv6 on systems that support IPv6. The default
is localhost.
</div>
<h4>port</h4>
<div class=indentedbox>
State the port to connect to; this will be the port icecast is listening on,
typically 8000, but it can be any port.
</div>
<h4>password</h4>
<div class=indentedbox>
For providing a stream, a username/password has to be provided, and must
match what icecast expects.
</div>
<h4>username</h4>
<div class=indentedbox>
For providing a stream, a username/password has to be provided, and must
match what icecast expects. Default is "source".
</div>
<h4>mount</h4>
<div class=indentedbox>
Mountpoints are used to identify a particular stream on an icecast server;
they must begin with / and, for the sake of certain listening clients, should
end with the .ogg extension.
</div>
<h4>tls-mode</h4>
<div class=indentedbox>
Controls whether an encrypted and authenticated connection to the Icecast
server should be used. Supported modes are "auto" (the default), in which
TLS is used if supported by the server; "auto_no_plain", in which TLS is forced
but the modes supported by the server are still autodetected; "RFC2818", which uses
TLS sockets; and "RFC2817", which uses the HTTP "Upgrade:" mechanism.
<b>This requires TLS support compiled into libshout.</b>
</div>
<h4>ca-directory</h4>
<div class=indentedbox>
This sets the directory holding certificates of the certification authorities used
to verify the server. The default is to use the operating system's defaults.
<b>This requires TLS support compiled into libshout.</b>
</div>
<h4>ca-file</h4>
<div class=indentedbox>
In contrast to the ca-directory option, this sets a single file with
the certificate(s) used to verify the server. You can also use this option to pass IceS
the certificate if you are using a self-signed one on the server.
The default is to use the operating system's defaults.
<b>This requires TLS support compiled into libshout.</b>
</div>
<h4>allowed-ciphers</h4>
<div class=indentedbox>
For encrypted and authenticated connections this controls which ciphers may be used.
Do not set this option unless you really know what you are doing; it can easily
break the setup and its security.
<b>This requires TLS support compiled into libshout.</b>
</div>
<h4>client-certificate</h4>
<div class=indentedbox>
On a TLS connection the client can authenticate to the server using a certificate.
This option sets such a certificate: a file in PEM format
containing both the certificate and the private key.
<b>This requires TLS support compiled into libshout.</b>
</div>
<h4>yp</h4>
<div class=indentedbox>
By default streams will not be advertised on a YP server unless this is set,
and only then if the icecast server is configured to talk to YP servers.
</div>
<h4>reconnectdelay</h4>
<div class=indentedbox>
If the connection to the server is lost, ices2 will wait before reconnecting.
This setting controls how long ices2 waits, in seconds.
</div>
<h4>reconnectattempts</h4>
<div class=indentedbox>
This setting controls the number of reconnect attempts ices2 will make before considering
the server to be down and stopping the instance.
</div>
<h4>retry-initial</h4>
<div class=indentedbox>
This setting controls whether being unable to connect to the server at startup is considered a fatal error.
The default is to consider it fatal and quit, which makes debugging easier.
</div>
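<p>
Putting the connection settings together, a minimal instance might look like
this; the hostname, password and mount shown are placeholders:
</p>
<pre>
&lt;instance&gt;
&lt;hostname&gt;localhost&lt;/hostname&gt;
&lt;port&gt;8000&lt;/port&gt;
&lt;password&gt;hackme&lt;/password&gt;
&lt;mount&gt;/stream.ogg&lt;/mount&gt;
&lt;/instance&gt;
</pre>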
<h4>Resample</h4>
<pre>
&lt;resample&gt;
&lt;in-rate&gt;44100&lt;/in-rate&gt;
&lt;out-rate&gt;22050&lt;/out-rate&gt;
&lt;/resample&gt;
</pre>
<div class=indentedbox>
<p>
When encoding or re-encoding, there is a point where you take PCM audio
and encode to Ogg Vorbis. In some situations a particular encoded stream may
require a lower samplerate to achieve a lower bitrate. The resample will
modify the audio data before it enters the encoder, but does not affect
other instances.
</p>
<p>
The most common values used are 48000, 44100, 22050 and 11025. Resampling is
really only used to go to a lower samplerate; going to a higher rate
serves no purpose within IceS.
</p>
</div>
<h4>Downmix</h4>
<pre>
&lt;downmix&gt;1&lt;/downmix&gt;
</pre>
<div class=indentedbox>
Some streams need to reduce the bitrate further by reducing the number of channels
used to just 1. Converting stereo to mono is fairly common; when this is set
to 1, only one channel is encoded. Like resample, this only affects
the instance it's enabled in.
</div>
<h4>Savefile</h4>
<pre>
&lt;savefile&gt;/home/ices/dump/stream1.ogg&lt;/savefile&gt;
</pre>
<div class=indentedbox>
Sometimes you want to save the transmitted stream to disk. This can be useful
for live recordings.
</div>
<h4>encode</h4>
<pre>
&lt;encode&gt;
&lt;quality&gt;0&lt;/quality&gt;
&lt;nominal-bitrate&gt;65536&lt;/nominal-bitrate&gt;
&lt;maximum-bitrate&gt;131072&lt;/maximum-bitrate&gt;
&lt;minimum-bitrate&gt;-1&lt;/minimum-bitrate&gt;
&lt;managed&gt;0&lt;/managed&gt;
&lt;samplerate&gt;22050&lt;/samplerate&gt;
&lt;channels&gt;1&lt;/channels&gt;
&lt;flush-samples&gt;11000&lt;/flush-samples&gt;
&lt;/encode&gt;
</pre>
<p>
Remove this section if you don't want your files re-encoded when
using the playlist or RoarAudio input modules.
</p>
<h4>quality</h4>
<div class=indentedbox>
State a quality measure for the encoder. The range goes from -1 to 10, where -1
gives the lowest bitrate (the default is 3); decimals can also be stated, so
for example 1.5 is valid. The actual bitrate used will depend on the tuning in
the vorbis libs, the samplerate, the channels and the audio to encode. A quality of 0
at 44100Hz and 2 channels is typically around the 64kbps mark.
</div>
<h4>nominal-bitrate</h4>
<div class=indentedbox>
State a bitrate that the encoder should try to keep to. This can be used as an
alternative to selecting quality.
</div>
<h4>managed</h4>
<div class=indentedbox>
State 1 to enable full bitrate management in the encoder. This is used with
nominal-bitrate, maximum-bitrate and minimum-bitrate to produce a stream with
more strict bitrate requirements. Enabling this currently leads to higher CPU
usage.
</div>
<h4>maximum-bitrate</h4>
<div class=indentedbox>
State bitrate in bits per second to limit max bandwidth used on a stream. Only
applies if managed is enabled.
</div>
<h4>minimum-bitrate</h4>
<div class=indentedbox>
State bitrate in bits per second to limit minimum bandwidth used on a stream.
Only applies if managed is enabled, this option has very little use so
should not really be needed.
</div>
<h4>samplerate</h4>
<div class=indentedbox>
State the samplerate used for the encoding; this should be either the same as
the input or the result of the resample. Getting the samplerate wrong will
cause the audio to be misrepresented and therefore sound like it's running
too fast or too slow.
</div>
<h4>channels</h4>
<div class=indentedbox>
State the number of channels to use in the encoding. This will either be the
number of channels from the input or 1 if downmix is enabled.
</div>
<h4>flush-samples</h4>
<div class=indentedbox>
This is the trigger level at which Ogg pages are created for sending to the server.
Depending on the bitrate and compression achieved a single Ogg page can contain many
seconds of audio which may not be wanted as that can trigger timeouts.
<p>Setting this to the same value as the encode samplerate will mean that a page per
second is sent, if a value that is half of the encoded samplerate is specified then
2 Ogg pages per second are sent.</p>
</div>
</div>
</body>
</html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>IceS v2.0 Documentation</title>
<link rel="STYLESHEET" type="text/css" href="style.css">
</head>
<body>
<div class=boxtest>
<h1>Frequently Asked Questions</h1>
<table width="100%"><tr><td bgcolor="#007B79" height="10" align="center"></td></tr></table>
<p>
This section covers questions people have asked that were not answered
elsewhere in the documentation.
</p>
<h4>Can ices play mp3 files?</h4>
<div class=indentedbox>
<p>
No, there hasn't been much interest in handling MP3 with ices 2. The older
0.x version of ices may be of interest in such cases. If you really want to encode a Vorbis stream from
non-vorbis files, you can play them with an external application, e.g. xmms, and use
ices 2 to capture from the soundcard, but be aware that any conversion from one lossy format
to another loses quality, so make sure the original material is high quality.
</p>
</div>
<h4>How do I encode from Line-In?</h4>
<div class=indentedbox>
<p>
IceS will read from the DSP on the soundcard, but where that gets the
audio from depends on the mixer settings. Use a utility like aumix/alsamixer to
see the settings and change the capture or recording device. Usually the
default is the Mic.
</p>
</div>
<h4>When I start ices 2 it seems to get stuck not doing anything</h4>
<div class=indentedbox>
If you are using live input, check to see if something else is holding the
recording device (typically /dev/dsp) open. A good candidate is esd. What
happens is that the driver only allows one application to have the device
open at any one time, a second attempt will just block.
<p>Some OSS drivers allow multiple opens but on ALSA you can configure a virtual
device in asound.conf, type dsnoop/dmix, which allows access for multiple apps.</p>
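<p>A minimal asound.conf sketch for such a shared capture device might look like
the following; the PCM name, ipc_key and card numbers are illustrative
assumptions, not values required by ices:</p>
<pre>
# illustrative dsnoop device letting several apps capture from card 0
pcm.sharedcapture {
    type dsnoop
    ipc_key 1024
    slave { pcm "hw:0,0" }
}
</pre>
<p>ices might then read from this by setting the alsa module's device
param to "sharedcapture".</p>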
</div>
<h4>Ices reports a message about failing to set samplerate on live input</h4>
<div class=indentedbox>
Some hardware/drivers are limited in the settings they support. Sometimes
they only support one samplerate like 48khz. You have to experiment if the
documentation for the device is not specific.
</div>
<h4>Can I do crossfading with the playlist?</h4>
<div class=indentedbox>
Ices does not do much in the way of manipulating the audio itself; resampling
and downmixing are available as they have a direct effect on encoding an
outgoing stream. Ices can still be used in conjunction with other
applications such as xmms by making ices read from, say, the dsp (e.g.
oss, alsa etc.), so that anything played by the other application
is encoded and sent to icecast.
</div>
<h4>My player connects but I don't hear anything</h4>
<div class=indentedbox>
If you are getting data through to the player, and settings like the
samplerate and channels show correctly, then it is probably the mixer settings
which are set incorrectly, so ices can only read silence. A common example:
ALSA may have many levels in the mixer, and by default they are all muted.
</div>
<h4>My player seems unable to play the stream</h4>
<div class=indentedbox>
If the stream appears to be getting to the player, then the problem is likely
how the player is handling it. The usual causes are:
<ul>
<li>Missing ".ogg" extension. Neither ices nor icecast cares about the
extension, however some apps use the extension to determine which plugin to
use.</li>
<li>Missing Ogg Vorbis plugin. The winamp lite versions were known for
this.</li>
<li>Are you running Winamp 3? This is a discontinued product and had problems
with the vorbis plugin; either use the later v2.9 series or v5.</li>
</ul>
</div>
<h4>The sound quality is poor</h4>
<div class=indentedbox>
<p>
Ogg Vorbis is a lossy compression technology, so quality of the sound is
reduced, however with live input the source of audio can be poor depending
on the soundcard and the system it's in. As an initial test, just record
a wav file from the DSP (using e.g. rec, arecord, etc.) and listen to the
quality of the audio recorded. If the source audio is poor then encoding
it to Ogg Vorbis is not going to improve it.
</p>
<p>
The reasons for poor audio from the DSP can be difficult to resolve; search
for information on audio quality. It could be driver-related or maybe
interference from some other device.
</p>
<p>
Here are some links where further information can be found:
</p>
<a href="http://www.djcj.org/LAU/guide/index.php">http://www.djcj.org/LAU/guide/index.php</a><BR>
<a href="http://www.linuxdj.com/audio/lad/index.php3">http://www.linuxdj.com/audio/lad/index.php3</a>
</div>
</div>
</body>
</html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>IceS v2.0 Documentation</title>
<link rel="STYLESHEET" type="text/css" href="style.css">
</head>
<body>
<div class=boxtest>
<h1>IceS v2.0 Documentation<br><br>Table of Contents</h1>
<table width="100%"><tr><td bgcolor="#007B79" height="10" align="center"></td></tr></table>
<ul>
<li><a href="intro.html">Introduction</a>
<li><a href="basic.html">Basic Setup</a>
<li><a href="config.html">Config File</a>
<li><a href="inputs.html">Input modules</a>
<li><a href="faq.html">FAQ</a>
</ul>
</div>
</body>
</html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>IceS v2.0 Documentation</title>
<link rel="STYLESHEET" type="text/css" href="style.css">
</head>
<body>
<div class=boxtest>
<h1>Available input modules</h1>
<table width="100%"><tr><td bgcolor="#007B79" height="10" align="center"></td></tr></table>
<p>
Several input modules are available, depending on the platform, drivers
and libraries available. The general layout is defined as
</p>
<pre>
&lt;input&gt;
&lt;module&gt;module name&lt;/module&gt;
&lt;param name="name1"&gt;value&lt;/param&gt;
&lt;param name="name2"&gt;value&lt;/param&gt;
&lt;param name="name3"&gt;value&lt;/param&gt;
&lt;/input&gt;
</pre>
<p>
For live input you may want to look into various resources on the web for
information on sound input. You may find that ALSA for instance supports
a particular soundcard better than the Open Sound System.
</p>
<h2>Open Sound</h2>
<pre>
&lt;module&gt;oss&lt;/module&gt;
&lt;param name="rate"&gt;44100&lt;/param&gt;
&lt;param name="channels"&gt;2&lt;/param&gt;
&lt;param name="device"&gt;/dev/dsp&lt;/param&gt;
&lt;param name="metadata"&gt;1&lt;/param&gt;
&lt;param name="metadatafilename"&gt;/home/ices/metadata&lt;/param&gt;
</pre>
<p>
This module reads live input from the Open Sound System drivers,
often found on Linux systems but also available on others. It reads audio
from the DSP device in the format specified by the parameters provided.
</p>
<p>
The following can be used to configure the module
</p>
<h4>rate</h4>
<div class=indentedbox>
The value is in hertz; 44100 is the samplerate used on CDs, but some drivers
may prefer 48000 (DAT), or you may want to use something lower.
</div>
<h4>channels</h4>
<div class=indentedbox>
The number of channels to record. This is typically 2 for stereo or 1 for mono.
</div>
<h4>device</h4>
<div class=indentedbox>
The device to read the audio samples from. It's typically /dev/dsp, but
there may be more than one card installed.
</div>
<h4>metadata</h4>
<div class=indentedbox>
Check for metadata arriving; if any is present then the data is marked
for an update. The metadata is in the form of tag=value, and while Ogg
Vorbis can handle any supplied tags, most players will only do anything
with artist and title.
</div>
<h4>metadatafilename</h4>
<div class=indentedbox>
<p>
The name of the file to open and read the metadata tags from; if this
parameter is missing, standard input is read. Using a file is often the better
approach. When using file access, the procedure is usually to populate
the file contents and then send a SIGUSR1 to the IceS process.
</p>
<p>
The format of the file itself is simple: one comment per line. Below is a
trivial example of the file; other tags can be used, but players
tend to only look for artist and title for display. The data must be in
UTF-8 (this is not checked by ices, however).
</p>
</div>
<pre>
artist=Queen
title=We Will Rock You
</pre>
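<p>
Assuming the metadatafilename and pidfile paths used in the examples in this
documentation, the update procedure might look like:
</p>
<pre>
echo "artist=Queen" &gt; /home/ices/metadata
echo "title=We Will Rock You" &gt;&gt; /home/ices/metadata
kill -USR1 `cat /var/log/ices/ices.pid`
</pre>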
<h2>ALSA</h2>
<p>
The Advanced Linux Sound Architecture (ALSA) is a completely different
sound system on Linux, but it provides OSS compatibility, so the OSS driver
should work with it as well. To use ALSA natively, a separate module is used:
</p>
<pre>
&lt;module&gt;alsa&lt;/module&gt;
&lt;param name="rate"&gt;44100&lt;/param&gt;
&lt;param name="channels"&gt;2&lt;/param&gt;
&lt;param name="device"&gt;hw:0,0&lt;/param&gt;
&lt;param name="periods"&gt;2&lt;/param&gt;
&lt;param name="buffer-time"&gt;500&lt;/param&gt;