AT&T :
http://www.naturalvoices.com/demos/index.html
- A nice one, though with less choice (male/female). The voices sound pretty natural.
Bell Labs :
http://www.bell-labs.com/project/tts/voices.html
- The samples from this site sound a bit robotic, but still better than your standard SoundBlaster text-to-speech.
class voicesample
{
name = "voicesample";
sound[] = {"voicesample.ogg", db-40, 1.0};
titles[] =
{
0, $STRM_Voice
};
};
class voicesample2
{
name = "voicesample2";
sound[] = {"voicesample2.ogg", db-20, 1.0};
titles[] =
{
0, $STRM_Voice2
};
};
};
With this, you just declared two new sounds for OFP. First, in sounds[], you listed which sounds were going to
be declared. Then you created a new sound class for each of those names: class "voicesample" and class
"voicesample2".
Then you will see a line called :
name = "voicesample";
This is the name you will find in the "voice" section of the trigger, after you press the effects button.
The following line declares which sample will be used, in this case "voicesample.ogg" :
sound[] = {"voicesample.ogg", db-40, 1.0};
The db-40 sets the volume of the sound.
The 1.0 sets the pitch of the sound.
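As a sketch of how those two numbers interact, here is a hypothetical variant class (the class name "voicesample_loud" is my own, not from the mission) that plays the same sample louder and at a slightly higher pitch:

```
class voicesample_loud
{
	name = "voicesample_loud";
	// same sample, but louder (db-10 instead of db-40) and higher pitched (1.2 instead of 1.0)
	sound[] = {"voicesample.ogg", db-10, 1.2};
	titles[] =
	{
	0, $STRM_Voice
	};
};
```

Remember that a class like this would also have to be listed in the sounds[] array at the top of CfgSounds, or OFP will not see it.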
There is also a titles line :
titles[] =
{
0, $STRM_Voice
};
This defines what the subtitle will say when the sound is played. So if you hear voicesample.ogg and you
want to see "war is boring, anyone else bored ?" on your screen, this is where it is set. The important piece here is :
$STRM_Voice
This is called a string reference, and now it all comes together : it points to an entry in a file called "stringtable.csv".
So at this stage you have done your work in your description.ext and you can save it.
Step 8 : Stringtable.CSV
Stringtable.csv is, just like description.ext, a plain text document, and Notepad should open it without
problems. Since I am using samples from a real OFP mission, here is the stringtable.csv as BIS uses it :
LANGUAGE,English,French,Italian,Spanish,German,Comment
STRM_voice,War's boring. Anyone else bored?,"La guerre, c'est ennuyeux. Y'en a-t-il d'autres qui s'ennuient ?",Che noia la guerra. C'è nessun altro annoiato?,Vaya aburrimiento de guerra. ¿Alguien más está aburrido?,Krieg ist langweilig. Langweilt sich noch jemand?,Kozlowski
STRM_voice2,"Heads up, guys. Here comes Berghof.",Attention les gars ! Voilà Berghof.,"Attenzione, gente. Arriva, Berghof.",Atención. Aquí viene Berghof.,"Haltung, Jungs. Hier kommt Berghof.",Bormioli
Here STRM_voice comes back, but without the '$' sign in front of it. After that, all the different languages
follow (if you have a language-specific version of OFP). Every language is separated by a comma ( , ).
If you only want English, look at the following example by LustyPooh, so you don't have to spend weeks in
your foreign dictionaries :)
This is how LustyPooh did it :
LANGUAGE,English,Comment
STRM_message,Welcome\n\nCheck out all the scripts, Comment
The \n symbol defines a 'return' (end of line) and starts a new one. It works much like the BR tag in
HTML.
Also note that I added a ,Comment after the line; I noticed that if I didn't, lines sometimes got scrambled.
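One detail worth pointing out in the BIS sample above: any entry that itself contains a comma is wrapped in double quotes, so the comma is not mistaken for a column separator. A minimal English-only sketch of both voice entries, following that convention, would look like this:

```
LANGUAGE,English,Comment
STRM_voice,War's boring. Anyone else bored?,Kozlowski
STRM_voice2,"Heads up, guys. Here comes Berghof.",Bormioli
```

The second entry needs the quotes because of the comma after "Heads up"; the first one does not.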
This concludes all you need to know about the stringtable.csv file. As said, all can be reviewed in the
example mission, so check it out.
Step 9 : Radio Chat and Music
I will now simply show the proper format for a radio class and a music class. They work the same way as
CfgSounds, so it shouldn't be a problem to see what is going on.
class CfgRadio
{
sounds[] =
{ radiosample, radiosample2 };
class radiosample
{
name = "radiosample";
sound[] = {"radiosample.ogg", db-40, 1.0};
title = $STRM_radiosample;
};
class radiosample2
{
name = "radiosample2";
sound[] = {"radiosample2.ogg", db-40, 1.0};
title = $STRM_radiosample2;
};
};
class CfgMusic
{
tracks[]={mymusic,mymusic2};
class mymusic
{
name = "tracknumber1";
sound[] = {"\music\mymusic.ogg", db+10, 1.0};
};
class mymusic2
{
name = "tracknumber2";
sound[] = {"\music\mymusic2.ogg", db+10, 1.0};
};
};
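Once these classes are defined, scripts and triggers refer to them by their class name (not by the name = display string). As a quick sketch, assuming a unit named unitname has been placed in the editor:

```
; play the first music track defined in CfgMusic
playMusic "mymusic"

; let unitname transmit a CfgRadio message to everyone on his side
unitname sideRadio "radiosample"
```

The commands themselves are covered in Step 10.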
Step 9.5 : Additional classes, advanced sound.
There are a few more classes; the first I will cover here is the CfgSFX class. Keep in mind that I assume
you already know the deal with the voice, radio and music classes, because the following is a bit harder.
CfgSFX is the sound class that is referred to in the trigger effects box, under "trigger". The beauty of this
class is that it repeats sound, and can repeat a number of sounds. Very useful if you want to have, for
instance, ambient bird or animal sounds. The sounds can play randomly. What it does not do is mix sounds.
For that you have to use a sound editor, and then use the CfgEnvSounds class.
class CfgSFX
{
sounds[] = { examplesound };
class examplesound
{
name = "example sound";
sounds[]={sound1,sound2};
sound1[]={"soundname.ogg", db-0,1,0.3,5,1,10};
sound2[]={"soundname2.ogg", db-0,1,0.3,5,1,10};
empty[]= {, , , , 1 , 5, 20};
};
};
The line sounds[]={sound1,sound2}; defines which sounds are in your loop. In this case sound1 and
sound2, which are defined in the lines below. Let's take a look at the hard part, this line :
sound1[]={"soundname.ogg", db-0, 1, 0.3, 5, 1, 10};
What are these numbers ?
- "soundname.ogg" : the sample to play
- db-0 : the volume of the sound
- 1 : the pitch of the sample
- 0.3 : the chance this sound will play. If the chance is 1, this will be the only sound that plays
- 5 : I really don't know what this setting does
- 1 : the minimum time it waits before moving on (randomly)
- 10 : the maximum time it waits before moving on (randomly)
And then you get this line :
empty[]= {, , , , 1 , 5, 20};
I have absolutely no idea what this does as of yet. I did notice that removing it crashes OFP, so leave it in.
Perhaps it is some sort of playing order, or it defines the end of this sound class ? I don't know....
----
Ok, the next class is CfgEnvSounds.
class CfgEnvSounds
{
sounds[]={mortarambient,rain};
class mortarambient
{
name="mortarambient";
sound[]={"mortarambient.ogg",db-0,0,1};
soundNight[]={"mgunambient2.ogg",db-0,0,1};
};
class rain
{
name="rain";
sound[]={"rainyday.ogg",db-0,0,1};
soundNight[]={"rainynight.ogg",db-0,0,1};
};
};
This class is very useful, because with it you define the sounds you hear around you. The nice thing is you
can create a daytime sound and a nighttime sound. This class makes sounds that are all around you, like
wind or rain (without any fixed position). You can use stereo OGG files, but I only succeeded in
playing 22050 Hz stereo sounds successfully (higher rates cut the sample off). The OGG is looped
automatically.
I don't think I have to explain further how to get it working, because the class pretty much speaks for itself
once you are used to the CfgSounds and CfgRadio classes.
Step 10 : The mission editor
In this part we will take a look at the sound commands and how to use them. I will add some comments, but
for detailed descriptions download the "command reference", a document with all commands and a
detailed description of how they work. Get it at the editing site.
EnableRadio true/false - turns the ability to receive radio messages on or off
FadeMusic - the ability to change the volume of music
FadeSound - the ability to change the volume of sound
PlayMusic - plays a music file
PlaySound - plays a sound
GroupRadio - only members of your group receive what you transmit
VehicleRadio - all units in your vehicle receive what you transmit
SideRadio - all units on your side receive what you transmit
GlobalRadio - all units in the game receive what you transmit
Say - allows a unit to say something
SetMimic - This is usefull for cinematics, it sets the expression on the face of a unit.
SetIdentity - Gives a unit a name like "James Kowzolowski" that shows up in group messages. It is NOT
necessary to define CfgIdentities in the description.ext, but it is possible.
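As a rough sketch of how several of these commands look in practice (the unit name soldier1 is hypothetical; each line could go in a trigger's On Activation field or a script, and the class names are the ones declared earlier in this tutorial):

```
; play a CfgSounds class and a CfgMusic track, by class name
playSound "voicesample"
playMusic "mymusic"

; fade the music down to 20% volume over 3 seconds
3 fadeMusic 0.2

; soldier1 transmits a CfgRadio message to everyone on his side
soldier1 sideRadio "radiosample"

; turn off incoming radio messages
enableRadio false
```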
Most commands will work straight from the editor. Init fields usually don't work for making a unit speak, but
waypoints work great. Also note that you can let objects use the say command.
I have had one case where a script refused to play my voice sample, but this could be due to the complexity
of the script. If you use simple scripts, everything works fine.
An example of a simple script :
unitname say "voicesample"
~4.3
commandunit say "voicesample2"
and so on.
I have taken a good look at how BIS did it, and they use either simple scripts similar to the example above
or waypoints to talk. There is a waypoint mode called "talk", but I haven't been able to see a link with any of
the commands, or any advantage to using the talk waypoint. Does anyone know what it does ?
Step 11 : The LIP file (sound/mouth synchronised speech)
BIS released a utility called "wav2lip" which you can use to make a soldier move his lips while a sound
plays with the "say" command. This utility is also available at the editing site.
To use it, simply drag and drop your wave file onto wav2lip; the result will be a .lip file, which you save in
your SOUND directory. Any unit that uses the "say" command will then move his lips to the voice sample.
You need no extra programming, just that file. Make sure that :
-The LIP file is in your SOUND directory
-The LIP file has the exact same name as your sample, only the extension should be different.
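To illustrate those two rules (the mission folder name and file names here are hypothetical), a mission set up for lip-synced speech might be laid out like this:

```
mymission.intro\
    description.ext
    stringtable.csv
    sound\
        voicesample.ogg
        voicesample.lip    <- same name as the sample, only the extension differs
```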
Step 12 : The End.
This tutorial should have covered all you need to know about adding sound. If you are still unsure about
things, download the example mission.
get the Example Mission here.
Good luck and have fun !