I have always been somewhat terrified of adding sound to my games. I even considered making the game without sound and ran a poll about it. The results were approximately 60:40 in favor of sound. Bottom line: your games should have sound.
While working with sound, you obviously have to make sure that it plays together with the main loop and stays correctly synced. For this, threads will probably be your first thought. However, you can go without them as well, though that has some disadvantages (mentioned later).
The first important question is: "What library should I use?" There are many libraries around and only some of them are free. I was also looking for a universal library that runs on desktop and on mobile devices as well. After some googling, I found OpenAL. It's supported by iOS and by desktop (Windows, Linux, Mac) as well. OpenAL is a C library with an API similar to the one used by OpenGL: in OpenGL, all functions start with a gl prefix; in OpenAL, the prefix is al. If you read other articles, you may come across the "alut" library. That is something similar to "glut", but I am not going to use it.
For OpenAL on Windows, you have to use a forked version of the library. OpenAL was created by Creative. By now it is an outdated library, no longer updated by Creative (the latest API version is 1.1 from 2005). Luckily, there is OpenAL Soft (a fork of the original OpenAL) that uses the same API as the original. You can find the source and Windows precompiled libraries here.
On Apple devices running iOS, the situation is far better. OpenAL is directly supported by Apple; you don't need to install anything, just add references to Apple's libraries to your project. See the Apple manual.
One of OpenAL's biggest disadvantages is that there is no direct support for Android. Android uses OpenSL ES. With a little digging, you can find "ports" of OpenAL for Android. What they do is map OpenAL function calls to OpenSL ES calls, so they are basically wrappers. One of them can be found here (GitHub). It uses the previously mentioned OpenAL Soft, only built with different flags. However, I have never tested it, so I don't know if and how it works.
After selecting a library, you have to choose the sound formats you want to play. The popular MP3 is not the best choice: the decoder is a little messy and there are patents lying around. OGG is a better choice. The decoder is easy to use and open, and OGG files are often smaller than MP3s with the same settings. It is also a good decision to support uncompressed WAV.
Sound engine design
Let's start with the sound engine design and what exactly you need to get it working.
As I mentioned before, you will need the OpenAL library. OpenAL is a C library, and I want to add an object-oriented wrapper for easier manipulation. I have used C++, but a similar design can be used in other languages as well (of course, you will need an OpenAL bridge from C to your language).
Apart from OpenAL, you will also need thread support. I have used the pthread library (Windows version). If you are targeting C++11, you can also go with native thread support.
For OGG decompression, you will need the OGG Vorbis library (download the libogg and libvorbis parts).
WAV files aren't used very often, mostly for debugging, but it's good to support that format too. Simple WAV decoding is easy to write from scratch, so I did that instead of using a 3rd party library.
My design consists of two basic classes, one interface (pure virtual class), and one class for every supported audio format (OGG, WAV…). A minimal sketch of the interface follows the list.
- SoundManager – the main class, using the singleton pattern. A singleton is a good choice here, since you will probably have only one initialized OpenAL instance. This class is used for controlling and updating all sounds. References to all SoundObjects are held there.
- SoundObject – our main sound class, which is accessible from outside and has methods such as Play, Pause, Rewind, Update…
- ISoundFileWrapper – interface (pure virtual class) for the different file formats, declaring methods for decompression, filling buffers, etc.
- Wrapper_OGG – class that implements ISoundFileWrapper for decompression of OGG files
- Wrapper_WAV – class that implements ISoundFileWrapper for decompression of WAV files
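To make the design clearer, here is a minimal sketch of what ISoundFileWrapper could look like. It only lists the methods used later in this article; the exact names and signatures in the attached source may differ slightly.

#include <vector>

//sketch of the decoder interface - one implementation per supported format
class ISoundFileWrapper
{
public:
    virtual ~ISoundFileWrapper() = default;

    //decompress the next chunk of audio data into buffer;
    //if inLoop is set, restart from the beginning when the end of the file is reached
    virtual void DecompressStream(std::vector<char> & buffer, bool inLoop) = 0;

    //rewind the stream to the beginning of the audio data
    virtual void ResetStream() = 0;

    //current position in the file stream, in seconds
    virtual float GetTime() = 0;

    //total duration of the sound, in seconds
    virtual float GetTotalTime() = 0;
};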
OpenAL Initialization
The code described in this section can be found in the SoundManager class. The full source with headers is in the article attachment. We start with a code snippet for OpenAL initialization.
alGetError();

ALCdevice * deviceAL = alcOpenDevice(NULL);

if (deviceAL == NULL)
{
    LogError("Failed to init OpenAL device.");
    return;
}

ALCcontext * contextAL = alcCreateContext(deviceAL, NULL);
AL_CHECK( alcMakeContextCurrent(contextAL) );
Once initialized, we won't need the device and context variables any more, except in the destruction phase. OpenAL holds its initialized state internally.
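For completeness, this is roughly what the matching shutdown looks like, assuming deviceAL and contextAL are kept around (for example as class members); the destructor in the attached SoundManager may differ.

//detach the context from the current thread before destroying it
alcMakeContextCurrent(NULL);
alcDestroyContext(contextAL);
alcCloseDevice(deviceAL);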
You may have noticed AL_CHECK around the alcMakeContextCurrent call in the initialization snippet. This is a macro I use to check for OpenAL errors in debug mode. You can see its code in the following snippet:
const char * GetOpenALErrorString(int errID)
{
    if (errID == AL_NO_ERROR) return "";
    if (errID == AL_INVALID_NAME) return "Invalid name";
    if (errID == AL_INVALID_ENUM) return "Invalid enum";
    if (errID == AL_INVALID_VALUE) return "Invalid value";
    if (errID == AL_INVALID_OPERATION) return "Invalid operation";
    if (errID == AL_OUT_OF_MEMORY) return "Out of memory like!";

    return "Don't know";
}
inline void CheckOpenALError(const char* stmt, const char* fname, int line)
{
    ALenum err = alGetError();
    if (err != AL_NO_ERROR)
    {
        LogError("OpenAL error %08x, (%s) at %s:%i - for %s", err, GetOpenALErrorString(err), fname, line, stmt);
    }
}
#ifndef AL_CHECK
    #ifdef _DEBUG
        #define AL_CHECK(stmt) do { \
            stmt; \
            CheckOpenALError(#stmt, __FILE__, __LINE__); \
        } while (0)
    #else
        #define AL_CHECK(stmt) stmt
    #endif
#endif
I am using this same macro for every OpenAL call everywhere in my code.
The next things you need to initialize are sources and buffers. You could create them later, when they are really needed; I create some of them up front, and more can always be added later if needed.
Buffers are what you probably think they are: they hold the uncompressed data that OpenAL plays. A source is basically a sound being played; it reads data from the buffers associated with it. There are limits on the number of buffers and sources, and the exact values depend on your system. I have chosen to pregenerate 512 buffers and 16 sources (which means I can play 16 sounds at once).
for (int i = 0; i < 512; i++)
{
    SoundBuffer buffer;
    AL_CHECK( alGenBuffers((ALuint)1, &buffer.refID) );
    this->buffers.push_back(buffer);
}

for (int i = 0; i < 16; i++)
{
    SoundSource source;
    AL_CHECK( alGenSources((ALuint)1, &source.refID) );
    this->sources.push_back(source);
}
You may notice that the alGen* functions take a pointer to an unsigned int as their second parameter; this receives the id of the created buffer or source. I have wrapped this id into a simple struct together with a boolean indicating whether the resource is free or used by a sound.
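The structs can look something like this (a sketch; apart from refID, which is used throughout the snippets, the member names are assumptions and the attached header may declare them differently).

struct SoundBuffer
{
    ALuint refID;  //id returned by alGenBuffers
    bool free;     //is this buffer currently unused?
};

struct SoundSource
{
    ALuint refID;  //id returned by alGenSources
    bool free;     //is this source currently unused?
};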
I keep a list of all sources and buffers. Apart from this list, I have a second one that holds only the resources that are free (not connected to any sound).
for (uint32 i = 0; i < this->buffers.size(); i++)
{
    this->freeBuffers.push_back(&this->buffers[i]);
}

for (uint32 i = 0; i < this->sources.size(); i++)
{
    this->freeSources.push_back(&this->sources[i]);
}
If you are using threads, you will need to initialize them as well. The code for this can be found in the source attached to this article; a minimal sketch using C++11 threads is shown below.
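The attachment uses pthreads, but as mentioned earlier you can also use native C++11 threads. This is a rough sketch of such an update thread; UpdateAllSounds is a hypothetical method that would call Update on every playing SoundObject.

#include <thread>
#include <atomic>
#include <chrono>

std::atomic<bool> soundThreadRunning(true);

//the update thread keeps all playing sounds fed with fresh buffer data
std::thread soundThread([]() {
    while (soundThreadRunning)
    {
        //hypothetical helper - iterates all playing sounds and calls SoundObject::Update
        SoundManager::GetInstance()->UpdateAllSounds();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
});

//on shutdown:
//soundThreadRunning = false;
//soundThread.join();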
Now, you have prepared all you need to start adding some sounds to your engine.
Sound playback logic
Before we get into the details and code, it is important to understand how sounds are managed and played. There are two approaches to playing sounds.
In the first one, you load the whole sound into a single buffer and just play it (a sketch of this is below). It's an easy and fast way to hear something. As usual with simple solutions, there is a problem: the uncompressed data is far bigger than the compressed file. Imagine you have more than one sound; the size of all the buffers can easily exceed your free memory. What now?
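For reference, a minimal sketch of this whole-sound-in-one-buffer approach, assuming pcmData already holds the fully decoded sound (the format and frequency values are examples):

ALuint buffer;
ALuint source;
AL_CHECK( alGenBuffers(1, &buffer) );
AL_CHECK( alGenSources(1, &source) );

//upload the entire decoded sound (here: 16-bit stereo at 44100 Hz) into a single buffer
AL_CHECK( alBufferData(buffer, AL_FORMAT_STEREO16, &pcmData[0],
                       static_cast<ALsizei>(pcmData.size()), 44100) );

//bind the buffer directly to the source and play it
AL_CHECK( alSourcei(source, AL_BUFFER, buffer) );
AL_CHECK( alSourcePlay(source) );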
Luckily, there is a second approach: load only a small portion of the file into a single buffer, play it, then load another portion. Sounds good, right? Well, not quite. If you do it with a single buffer, you may hear pauses at the end of each buffer's playback, just before the buffer is refilled and played again. We solve this by having more than one buffer filled at a time. Fill several buffers (I am using three), play the first one, and once its content has been played, play the second one immediately and, at the "same" time, fill the finished buffer with new data. We cycle this until we reach the end of the sound.
The number of buffers used may vary depending on your needs. If your sound engine is updated from a separate thread, the count is not much of a problem; you can choose almost any number of buffers and it will be fine. However, if the update runs in your main engine loop (no threads involved), a low buffer count can cause problems. Why? Imagine you have a Windows application and you drag the window around the desktop. On Windows (I have not tested other systems), this causes the main thread to be suspended. The sound will keep playing (because OpenAL itself has its own thread for playback), but only as long as there are playable buffers in the queue. If you exhaust all of them, the sound stops, because your main thread is blocked and the buffers are no longer being refilled.
Each buffer has its byte size (we will set the size during sound creation, see the next section). To compute the duration of the sound in a buffer, you can use this equation:
duration = BUFFER_SIZE / (sound.frequency * sound.channels * sound.bitsPerChannel / 8)   (eq. 1)
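As a small sanity check, this is the same calculation in code; the helper and the example numbers are mine, not part of the attached source.

//duration (in seconds) of bufferSize bytes of uncompressed audio
float GetBufferDuration(int bufferSize, int frequency, int channels, int bitsPerChannel)
{
    int bytesPerSecond = frequency * channels * (bitsPerChannel / 8);
    return static_cast<float>(bufferSize) / static_cast<float>(bytesPerSecond);
}

//example: 65536 bytes of 16-bit stereo at 44100 Hz is roughly 0.37 s
//float d = GetBufferDuration(65536, 44100, 2, 16);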
Note: If you want to calculate the current playback time, you have to take the buffers into account, and it's not that straightforward. We will take a look at this in one of the later sections.
OK, enough theory, let's see some real code. All the interesting stuff can be found in the SoundObject class. This class is responsible for managing a single sound (play, update, pause, stop, rewind, etc.).
Creating sound
Before we can play anything, we need to initialize the sound. For now, I will skip the decompression part and just use the ISoundFileWrapper interface methods without going into their internals.
First of all, we obtain free buffers from our SoundManager (notice that we are using a singleton call on SoundManager to get its instance). We need to get as many free buffers as we want to have preloaded. Those free buffers are put into the array in our sound object.
#define PRELOAD_BUFFERS_COUNT 3

....

for (uint32 i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    SoundBuffer * buf = SoundManager::GetInstance()->GetFreeBuffer();
    if (buf == NULL)
    {
        MyUtils::Logger::LogWarning("Not enought free sound-buffers");
        continue;
    }
    this->buffers[i] = buf;
}
We need to get the sound info from our file (or memory, depending on where your sound is stored). That information must include at least:
struct SoundInfo
{
    int freqency;        //sound frequency (eg. 44100 Hz)
    int channels;        //number of channels (eg. Stereo = 2)
    int bitrate;         //sound bitrate
    int bitsPerChannel;  //number of bits per channel (eg. 16 for 2 channel stereo)
};
As the next step, we fill those buffers with initial data. We could do this later as well, but it must always happen before we start playing the sound.
Now, do you remember how we generated buffers in the initialization section? They had no size set. That changes now.
We decompress data from the input file/memory using the ISoundFileWrapper interface methods. The single-buffer size is passed to the constructor and used in the DecompressStream method.
The loop flag in the settings enables or disables continuous playback. If looping is enabled, then once the end of the file is reached, the rest of the buffer is filled with content from the file reset back to its initial position.
bool SoundObject::PreloadBuffer(int bufferID)
{
    std::vector<char> decompressBuffer;
    this->soundFileWrapper->DecompressStream(decompressBuffer, this->settings.loop);

    if (decompressBuffer.size() == 0)
    {
        //nothing more to read
        return false;
    }

    //now we fill loaded data to our buffer
    AL_CHECK( alBufferData(bufferID, this->sound.format, &decompressBuffer[0],
                           static_cast<ALsizei>(decompressBuffer.size()), this->sound.freqency) );

    return true;
}
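The initial fill is then just a matter of calling PreloadBuffer for every buffer acquired earlier. A sketch of that loop (the attached source may do this directly inside the sound creation code):

for (uint32 i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    if (this->buffers[i] == NULL)
    {
        continue; //we did not get a free buffer for this slot
    }

    //fill the buffer with the first chunks of decompressed data
    this->PreloadBuffer(this->buffers[i]->refID);
}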
Playing the sound
Once we have prepared everything, we can finally play our sound.
Each sound has three states - PLAYING, PAUSED, and STOPPED. In the STOPPED state, the sound is reset to its default configuration; the next time we play it, it will start from the beginning.
Before we can actually play the sound, we need to obtain a free source from our SoundManager.
this->source = SoundManager::GetInstance()->GetFreeSource();
If there is no free source, we can't play the sound. It is important to release the source from the sound once the sound has stopped or finished playing. Do not release the source from a paused sound, or you will lose its progress and settings.
Next, we set some additional properties on the source. We need to do this every time the source is bound to a sound, because a single source can later be attached to a different sound after it has been released, and that sound can have different settings.
I am using these properties, but you can set others as well. For the complete list of possibilities, see the OpenAL guide (page 8).
AL_CHECK( alSourcef(this->source->refID, AL_PITCH, this->settings.pitch) );
AL_CHECK( alSourcef(this->source->refID, AL_GAIN, this->settings.gain) );
AL_CHECK( alSource3f(this->source->refID, AL_POSITION, this->settings.pos.X, this->settings.pos.Y, this->settings.pos.Z) );
AL_CHECK( alSource3f(this->source->refID, AL_VELOCITY, this->settings.velocity.X, this->settings.velocity.Y, this->settings.velocity.Z) );
There is one important thing: we have to set AL_LOOPING to false. If we set this flag to true, we would end up looping a single buffer. Since we are using multiple buffers, we manage looping ourselves.
AL_CHECK( alSourcei(this->source->refID, AL_LOOPING, false) );
Before we actually start playback, the buffers need to be added to the source's buffer queue. This queue is processed and played during playback.
this->remainBuffers = 0;
for (int i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    if (this->buffers[i] == NULL)
    {
        continue; //buffer not used, do not add it to the queue
    }

    AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &this->buffers[i]->refID) );
    this->remainBuffers++;
}
Finally, we can start the sound playback:
AL_CHECK( alSourcePlay(this->source->refID) );
this->state = PLAYING;
For now, our sound should be playing and we should hear something (if not, there may be a problem :-)). If we do nothing more, the sound will stop after a while, depending on the size of our buffers. We can calculate the length of this playback with the equation given earlier, multiplied by the buffer count.
To ensure continuous playback, we have to update our buffers manually; OpenAL won't do this for us automatically. This is where the thread or the main engine loop comes into play. The update code is called from the separate thread, or from the main engine loop on every iteration. This is probably one of the most important parts of the code.
void SoundObject::Update()
{
    if (this->state != PLAYING)
    {
        //sound is not playing (PAUSED / STOPPED) - do not update
        return;
    }

    int buffersProcessed = 0;
    AL_CHECK( alGetSourcei(this->source->refID, AL_BUFFERS_PROCESSED, &buffersProcessed) );

    // check to see if we have a buffer to deQ
    if (buffersProcessed > 0)
    {
        if (buffersProcessed > 1)
        {
            //we have processed more than 1 buffer since last call of Update method
            //we should probably reload more buffers than just the one - not supported yet
            MyUtils::Logger::LogInfo("Processed more than 1 buffer since last Update");
        }

        // remove the buffer from the source
        uint32 bufferID;
        AL_CHECK( alSourceUnqueueBuffers(this->source->refID, 1, &bufferID) );

        // fill the buffer up and reQ!
        // if we cant fill it up then we are finished
        // in which case we dont need to re-Q
        // return NO if we dont have more buffers to Q

        if (this->state == STOPPED)
        {
            //put it back - sound is not playing anymore
            AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &bufferID) );
            return;
        }

        //call method to load data to buffer
        //see method in section "Creating sound"
        if (this->PreloadBuffer(bufferID) == false)
        {
            this->remainBuffers--;
        }

        //put the newly filled buffer back (at the end of the queue)
        AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &bufferID) );
    }

    if (this->remainBuffers <= 0)
    {
        //no more buffers remain - stop sound automatically
        this->Stop();
    }
}
The last thing that needs to be covered is stopping sound playback. When the sound is stopped, we need to release its source and reset everything to the default configuration (preload the buffers with the beginning of the sound data again).
I had a problem here. If I just removed the buffers from the source's queue, refilled them, and put them back into the queue on the next playback, there was an annoying glitch at the beginning of the sound. I solved this by releasing the buffers from the sound and acquiring them again.
AL_CHECK( alSourceStop(this->source->refID) );

//Remove buffers from queue
for (int i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    if (this->buffers[i] == NULL)
    {
        continue;
    }
    AL_CHECK( alSourceUnqueueBuffers(this->source->refID, 1, &this->buffers[i]->refID) );
}

//Free the source
SoundManager::GetInstance()->FreeSource(this->source);

this->soundFileWrapper->ResetStream();

//solving the "glitch" in the sound - release buffers and acquire them again
for (uint32 i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    SoundManager::GetInstance()->FreeBuffer(this->buffers[i]);

    SoundBuffer * buf = SoundManager::GetInstance()->GetFreeBuffer();
    if (buf == NULL)
    {
        MyUtils::Logger::LogWarning("Not enought free sound-buffers");
        continue;
    }
    this->buffers[i] = buf;
}

//Preload data again
...
Inside the SoundManager::GetInstance()->FreeBuffer method, I am deleting and regenerating the buffer to avoid a glitch in the sound. Maybe it's not a correct solution, but it was the only one that worked for me.
AL_CHECK( alDeleteBuffers(1, &buffer->refID) );
AL_CHECK( alGenBuffers(1, &buffer->refID) );
Additional sound info
During playback we often need additional information. The most important is probably the current playback time. OpenAL doesn't offer a solution for this (at least not directly, and not with more than one buffer in play).
We have to calculate the time ourselves. For this, we need information from OpenAL and from the file. Since we are using buffers, this is a little problematic: the position in the file doesn't correspond directly to the currently playing sound. The "file time" is not in sync with the "playback time". A second problem is caused by looping: at some point the "file time" is back at the beginning (eg. 00:00), while the playback time is near the end of the sound (eg. 16:20 out of a total length of 16:30).
We have to take all of this into account. First of all, we need the duration of the remaining buffers (the sum of all the buffers that haven't been played yet). From the sound file, we get the current time (for an uncompressed sound, it is a pointer into the file indicating the current position). This time is, however, not correct: it includes all preloaded buffers, even the ones that haven't been played yet (and that is our problem). We subtract the buffered time from the file time. That gives us the "correct" time, at least in most cases.
As always, there are some "special cases" (often called problems, or other less polite words) that can cause headaches. I have already mentioned one of them: a looping sound. If you are playing a sound in a loop, you may be listening to a buffer that contains data from the end of the file while the file pointer is already back at the beginning. This gives you a negative time. You can solve it by taking the duration of the entire sound and subtracting the absolute value of the negative time from it.
A second headache can be caused by a file that is not looping, or is short enough to be kept entirely in the buffers. In this case, you take the duration of the entire sound and subtract the prebuffered time from it.
The time we have calculated so far is not the final one yet. It is the time at which the currently playing buffer starts.
To get the current time, we have to add the playback offset within the current buffer. For this, we use OpenAL to get the buffer offset. Note that if we weren't using multiple buffers and had the whole sound in one big buffer, this offset alone would give us the correct playback time and no other tricks would be needed.
As always, you can review the code snippet below to get a better understanding (or in case my attempt to explain the problem wasn't entirely clear :-)). The total duration of the sound is obtained from the opened sound file via an ISoundFileWrapper interface method.
//Get duration of remaining buffer
float preBufferTime = this->GetBufferedTime(this->remainBuffers);

//get current time of file stream
//this stream is "in future" because of buffered data
//duration of buffer MUST be removed from time
float time = this->soundFileWrapper->GetTime() - preBufferTime;

if (this->remainBuffers < PRELOAD_BUFFERS_COUNT)
{
    //file has already been read all
    //we are currently "playing" sound from cache only
    //and there is no loop active
    time = this->soundFileWrapper->GetTotalTime() - preBufferTime;
}

if (time < 0)
{
    //file has already been read all
    //we are currently "playing" sound from last loop cycle
    //but file stream is already in next loop
    //because of the cache delay
    //Sign of "+" => "- abs(time)" rewritten to "+ time"
    time = this->soundFileWrapper->GetTotalTime() + time;
}

//add current buffer play time to time from file stream
float result;
AL_CHECK( alGetSourcef(this->source->refID, AL_SEC_OFFSET, &result) );

time += result; //time in seconds
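The snippet calls GetBufferedTime, which is not shown above. A minimal sketch of one possible implementation, using the OpenAL buffer attributes and equation 1; it assumes the first remainBuffers entries are the ones still queued, while the attached source may track this differently.

float SoundObject::GetBufferedTime(int bufferCount)
{
    float time = 0.0f;
    for (int i = 0; i < bufferCount; i++)
    {
        if (this->buffers[i] == NULL)
        {
            continue;
        }

        //ask OpenAL for the properties of the data stored in this buffer
        ALint size = 0, freq = 0, channels = 0, bits = 0;
        AL_CHECK( alGetBufferi(this->buffers[i]->refID, AL_SIZE, &size) );
        AL_CHECK( alGetBufferi(this->buffers[i]->refID, AL_FREQUENCY, &freq) );
        AL_CHECK( alGetBufferi(this->buffers[i]->refID, AL_CHANNELS, &channels) );
        AL_CHECK( alGetBufferi(this->buffers[i]->refID, AL_BITS, &bits) );

        //eq. 1: bytes / (bytes per second)
        time += static_cast<float>(size) / (freq * channels * (bits / 8));
    }
    return time;
}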
Sound file formats
Now seems like a good time to look at the actual sound files. I have used OGG and WAV. I have also added support for RAW data, which is basically WAV without headers. WAV and RAW data are helpful during debugging, or if you have an external decompressor that gives you uncompressed RAW data instead of compressed data.
OGG
Decompression of OGG files is straightforward with the Vorbis library; its functions provide all the needed functionality for you. You can find the whole code in the WrapperOGG class.
The most interesting part of this code is the main loop that fills the OpenAL buffers. We have an OGG_BUFFER_SIZE constant; I have used a size of 2048 bytes. Beware, this value is not the same as the OpenAL buffer size! It indicates how many bytes we read from the OGG file in a single call. Those reads are then appended to our OpenAL buffer. The size of our OpenAL buffer is stored in the variable minDecompressLengthAtOnce. If we reach or overflow (should not happen) this value, we stop reading and return.
minDecompressLengthAtOnce % OGG_BUFFER_SIZE must be 0!
Otherwise there would be a problem: we would read more data than the buffer can hold and our sound would skip some parts. Of course, we could move pointers "back" and read the missing data again, but why? The simple modulo test is enough and produces cleaner code. There is no need for crazy buffer sizes like 757 or 11243 bytes.
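In code, that constraint can be enforced with a simple compile-time check. The values here are illustrative: OGG_BUFFER_SIZE is the 2048 bytes mentioned above, but the multiplier for the OpenAL buffer size is just an example, not the value used in the attachment.

const int OGG_BUFFER_SIZE = 2048;                            //bytes read per ov_read call
const int minDecompressLengthAtOnce = 16 * OGG_BUFFER_SIZE;  //bytes per OpenAL buffer fill

//the OpenAL buffer size must be a whole multiple of the read size
static_assert(minDecompressLengthAtOnce % OGG_BUFFER_SIZE == 0,
              "OpenAL buffer size must be a multiple of OGG_BUFFER_SIZE");

With that in place, the main decoding loop looks like this: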
int endian = 0;   // 0 for Little-Endian, 1 for Big-Endian
int bitStream;
long bytes;

do
{
    do
    {
        // Read up to a buffer's worth of decoded sound data
        bytes = ov_read(this->ov, this->bufArray, OGG_BUFFER_SIZE, endian, 2, 1, &bitStream);

        if (bytes < 0)
        {
            MyUtils::Logger::LogError("OGG stream ov_read error - returned %i", bytes);
            continue;
        }

        // Append data to the end of buffer
        decompressBuffer.insert(decompressBuffer.end(), this->bufArray, this->bufArray + bytes);

        if (static_cast<int>(decompressBuffer.size()) >= this->minDecompressLengthAtOnce)
        {
            //buffer has been filled
            return;
        }
    } while (bytes > 0);

    if (inLoop)
    {
        //we are in a loop - we have reached the end of the file - go back to the beginning
        this->ResetStream();
    }

    if (this->minDecompressLengthAtOnce == INT_MAX)
    {
        //read entire file in a single call
        return;
    }

} while (inLoop);
WAV
Processing a WAV file yourself may seem pointless to many people ("I can just download a library"). In some ways they are correct. On the other hand, doing it yourself gives you a better understanding of how things work under the hood. In the future, you can use this knowledge to write streaming for any other kind of uncompressed data; the solution will be very similar to this one.
First, you have to calculate the duration of your sound, using the equation we have already seen.
duration = RAW_FILE_SIZE / (sound.frequency * sound.channels * sound.bitsPerChannel / 8)   (eq. 1)
RAW_FILE_SIZE = WAV_FILE_SIZE - WAV_HEADERS_SIZE
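The values in that equation come from the WAV header. A WAV file is a sequence of RIFF chunks; a sketch of the relevant structures is below. The generic chunk header matches the WAV_CHUNK used in the snippet that follows, while the "fmt " payload layout is the standard one; the attached source may declare these types slightly differently.

//generic RIFF chunk header: a 4-byte id followed by the payload size in bytes
struct WAV_CHUNK
{
    char id[4];   //eg. "fmt " or "data"
    uint32 size;  //size of the chunk payload
};

//payload of the "fmt " chunk - this is where frequency, channels and bits per sample live
struct WAV_FMT
{
    uint16 audioFormat;    //1 = uncompressed PCM
    uint16 channels;       //number of channels
    uint32 sampleRate;     //frequency (eg. 44100 Hz)
    uint32 byteRate;       //sampleRate * channels * bitsPerSample / 8
    uint16 blockAlign;
    uint16 bitsPerSample;  //bits per channel (eg. 16)
};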
In the code snippet below, you can see the same functionality as in the OGG sample. Again, we rely on the modulo constraint for the buffer size (WAV_BUFFER_SIZE this time; here it would actually be possible to avoid it, but why use a different approach?).
bool eof = false;
int curBufSize = 0;

do
{
    do
    {
        curBufSize = 0;
        while (curBufSize < WAV_BUFFER_SIZE)
        {
            uint64 remainToRead = WAV_BUFFER_SIZE - curBufSize;

            if (this->curChunk.size <= 0)
            {
                //need load chunk info
                this->ReadData(&this->curChunk, sizeof(WAV_CHUNK));
            }

            // Check for .WAV data chunk
            if (
                (this->curChunk.id[0] == 'd') && (this->curChunk.id[1] == 'a') &&
                (this->curChunk.id[2] == 't') && (this->curChunk.id[3] == 'a')
               )
            {
                //how many data can we read in current chunk
                uint64 readSize = std::min(this->curChunk.size, remainToRead);

                this->ReadData(this->bufArray + curBufSize, readSize);

                curBufSize += readSize;          //buffer filled from (0...curBufSize)
                this->curChunk.size -= readSize; //in current chunk, remain to read
            }
            else
            {
                //not a "data" chunk - advance stream
                this->Seek(this->curChunk.size, SEEK_POS::CURRENT);
            }

            if (this->t.processedSize >= this->t.fileSize)
            {
                eof = true;
                break;
            }
        }

        // Append to end of buffer
        decompressBuffer.insert(decompressBuffer.end(), this->bufArray, this->bufArray + curBufSize);

        if (static_cast<int>(decompressBuffer.size()) >= this->minProcesssLengthAtOnce)
        {
            return;
        }
    } while (!eof);

    if (inLoop)
    {
        this->ResetStream();
    }

    if (this->minProcesssLengthAtOnce == INT_MAX)
    {
        return;
    }

} while (inLoop);
Conclusion and Attached Code Info
The code in the attachment cannot be used directly (download, build, run and use). In my engine, I use a VFS (virtual file system) to handle file manipulation. I left it in the code because the code is built around it; removing it would have required changes I don't have time for :-)
In some places, you may find math structures and functions (eg. Vector3, Clamp) or utilities (the logging system). All of these are easy to understand from the function or structure name.
I am also using my own string implementation (MyStringAnsi), but again, the method names and usage are easy to understand from the code.
Even without knowing those files, you can study and use the code to learn some tricks. It is not difficult to update or rewrite the code to suit your needs. If you have any problems, you can leave a note in the article discussion, or contact me directly via email: info (at) perry.cz.
Article Update Log
19 Aug 2014: Initial release