Part 19 - Audio
April 11, 2019

In this part we will add sound effects and music. There is a huge disclaimer here: while I do have some background in music, I'm far from being capable of producing music for anything, games included. Still, I decided to venture into this mostly because I had some very specific requirements for the music itself. When we get to that topic I will mention them, and the reason will probably become clear.

Nevertheless, audio is one of the most important aspects of games. Even in a space shooter, if you decided to go for full physics simulation accuracy there wouldn't be any explosion or shooting sounds, with a very high chance of making the game quite boring! Notice that I'm not saying it will definitely be so. Maybe someone very good at game design can come up with something that is incredibly fun to play even without any sound!

For the sound effects, I add another disclaimer: I don't own any proper recording equipment, yet I produced my own sound effects mostly because I wanted to experiment with it. That said, if you directly use the assets from my GitHub repository, be aware that the quality may not be ideal for games.

In this project we will associate the sound effects with the selected theme, which will help create not only visual variety but some audio variety as well. The music, on the other hand, will be associated with the game modes. Once we actually implement those, the reasoning for this separation should become clear.

There is another very important aspect of the sound assets used in this project: with the exception of the music, all of them are mono. Although we won't use any positional sound, mono assets are recommended if we were ever to use any kind of sound spatialization.

Finally, I do recommend reading the documentation for some insight and more detailed explanation on some of the things we are going to do in this part.

The Sound System

Before even beginning to produce the assets, we have to prepare the project to receive them. With the sound system provided by the engine we can separate the sound files into what games commonly call "channels", such as voice, effects, music and so on. Later we can then provide game settings to change the volume of each one independently. A "channel" in Unreal Engine is called a Sound Class: we create a "master" sound class and then add each of the separate "channels" we want as children of it. We can easily change the volume of each sound class, however the means to do that are not exposed to blueprints, so we will add some functions to our blueprint library just for this task. We will need them so the UMG widgets can manipulate the volumes!

To begin preparing our project, let's add the sound classes. We want one master channel and three children: music, user interface and sound effects. In the audio directory, add four Sound Class assets, SC_Master, SC_UI, SC_SFX and SC_Music (the sound class is under the Sound category in the drop down menu). We have to "tell" SC_Master about its children, so open it for editing. This is a pretty simple editor, containing a Details panel as well as a blueprint graph panel. In the details we have the Child Classes property, which is an array. Add 3 new entries to this property and select, in each one, one of the other sound classes. The blueprint graph should be automatically updated to this (you will probably need to rearrange the nodes):

If you expand the Properties property, it's possible to set the default value for a few settings as well as change some flags. For the master sound class I only changed the default volume to 0.25. If you look at the flags, you have probably noticed one called Is UISound and another called Is Music. Select the SC_UI node and enable the Is UISound property. Then select the SC_Music node and enable the Is Music property.

Now that we have the necessary sound classes, we need a way to access the volume property from blueprints. As mentioned, we will add those functions to the blueprint function library, so declare them:

ColBPLibrary.h
// Retrieve the volume property from the specified sound class
UFUNCTION(BlueprintPure, Category = "Audio")
static float GetSoundClassVolume(class USoundClass* SoundClass);

// Change the volume property in the specified sound class
UFUNCTION(BlueprintCallable, Category = "Audio")
static void ChangeSoundClassVolume(class USoundClass* SoundClass, float Volume);

In order to implement those functions we will need a few includes (the code below lists the complete set of includes of the file):

ColBPLibrary.cpp
#include "ColBPLibrary.h"
#include "Engine.h"
#include "ColGameInstance.h"
#include "ColPlayerController.h"
#include "ThemeData.h"
#include "PaperSprite.h"
#include "slate/SlateBrushAsset.h"
#include "Brushes/SlateNoResource.h"
#include "Classes/Sound/SoundClass.h"
#include "Components/AudioComponent.h"
#include "AudioThread.h"
#include "ActiveSound.h"
#include "PlayField.h"

In both implementations we first have to check if the provided sound class is valid. The rest of the code should be really self explanatory! We are simply accessing Properties.Volume in the sound class object:

ColBPLibrary.cpp
float UColBPLibrary::GetSoundClassVolume(class USoundClass* SoundClass)
{
   // If the sound class is not valid, return 0.
   return (SoundClass ? SoundClass->Properties.Volume : 0.0f);
}

void UColBPLibrary::ChangeSoundClassVolume(class USoundClass* SoundClass, float Volume)
{
   if (SoundClass)
   {
      SoundClass->Properties.Volume = Volume;
   }
}

The project is now ready to receive the sound assets!

UI Sound

The idea is to have a "click" sound effect played at each button click and a different sound effect whenever the mouse hovers the button. Now you may be tempted to just edit the Style property within the text and icon buttons we have created and set the Pressed Sound and Hovered Sound. For the pressed sound it will work without any problems. However, when we use the gamepad/keyboard navigation we will not get any sound, while we would expect the hovered one. For that we will have to manually play the sound effect. Shortly I will show exactly what must be done. Anyway, the two sound effects I have recorded (and edited with Audacity):

(Embedded audio previews: button click and button hover sound effects.)

Once the files are imported into the project, we have to change the sound class they belong to, in this case SC_UI. Although it's possible to do so using the bulk editor, in it this specific property requires a path rather than the easy drop down menu you get when editing a single asset. Once that is done, we have to update the text and icon button widgets.

The first change is the sound that will be played when the button is pressed. All that must be done here is edit the stlSelected style variable. Remember that it's the style used when the button gains focus, which is the case whenever we activate it, be it through mouse click, touch or gamepad/keyboard. Within the style's default value, locate the Pressed Sound property and select the sound asset meant for this event.

Next, we want a sound effect to be played whenever the button is "selected", which happens when the mouse hovers over it or when we navigate using the keyboard/gamepad. As it turns out, in the Event On Added to Focus Path, after setting the button style we just play the sound effect using the Play Sound 2D node. This works for all of the described cases:

Still, if you have opted to add the is hovered -> true = enforce focus path, then the Play Sound 2D will be called twice, very close to one another, resulting in the sound effect being played twice. Unfortunately there isn't much that can be done about that, although the sound can be used as a cue that the menu did indeed receive some input.

Interestingly, those changes can be performed on all buttons, UIC_TextButton, UIC_IconButton and UIC_ToggleButton! Well, almost. In the UIC_ToggleButton, when changing the stlSelected style variable, we don't have a Pressed Sound property. Instead there are Checked Sound and Unchecked Sound. In both cases I have used the same asset, the one meant for the pressed event.

Piece Landing Sound

When the player piece lands, we want to play a sound effect. And then, when blocks finish relocating after those below them are removed, we play the same effect again. As mentioned at the beginning of this part, we allow some extra variety by specifying the sound asset through the theme data, which means we have to update the UThemeData class. Note that we are associating the sound effect with the theme rather than with the block itself. If you want a different sound effect per block, the property must be moved into the FBlockData struct.

ThemeData.h
// This is the sound effect that will be played when the player piece lands or blocks finish relocating
UPROPERTY(EditAnywhere, BlueprintReadWrite)
USoundWave* LandingBlockSound;

And we have to update the constructor to initialize this property:

ThemeData.h
UThemeData()
      : BlockSprite(nullptr)
      , BackgroundSprite(nullptr)
      , GridTileSet(nullptr)
      , LandingBlockSound(nullptr)
   {}

Now let's play the specified sound effect when the player piece lands. We are testing whether the player piece has landed from the StatePlaytime state function. There, right before firing the OnPlayerPieceLanded event, we can play the sound effect. The necessary tasks are: obtain the theme data, check that it's valid, then check that the sound effect is valid and, if so, play it using the PlaySound2D() function, which is part of the UGameplayStatics class. This is the code:

GameModeInGame.cpp
AGameModeInGame::StateFunctionProxy AGameModeInGame::StatePlaytime(float Seconds)
{
   ...
   if (mPlayerPiece.HasLanded())
   {
      ...
      // Cleanup the player piece's internal data
      mPlayerPiece.Clear();

      // Play the sound effect associated with the player piece landing
      if (UThemeData* theme = UColBPLibrary::GetGameTheme(this))
      {
         if (theme->LandingBlockSound)
         {
            UGameplayStatics::PlaySound2D(GetWorld(), theme->LandingBlockSound);
         }
      }
      ...
   }
   ...
}

While the process to play this sound effect when the blocks finish relocating is the same, the situation is a bit different here. Chances are high that multiple blocks will need relocation, and some blocks may need to fall a few more rows than others. If we play the sound effect only when the last block finishes relocating, we will probably miss some sounds because of the greater distances some blocks have to cover. On the other hand, if we play the sound effect for each block that finishes its movement, we will probably call PlaySound2D() multiple times in a single loop iteration. The strategy we can use here is very similar to the one used to detect whether we are ready to move on from StateRepositioning. That is, we set a flag that is false by default and, if at least one block finishes relocating, we change this flag to true. After iterating through all the repositioning blocks we check the flag and, if it's true, we play the sound effect. The code:

GameModeInGame.cpp
AGameModeInGame::StateFunctionProxy AGameModeInGame::StateRepositioning(float Seconds)
{
   bool finished = true;      // Assume all blocks have finished the repositioning
   bool play_sound = false;   // Assume no block has finished relocation

   for (FRepositioningBlock& rep_block : mRepositioningBlock)
   {
      if (!rep_block.RepositionFinished)
      {
         ...
         if (rep_block.RepositionFinished)
         {
            ...
            // We have to play the sound effect
            play_sound = true;
         }
      }
   }

   if (play_sound)
   {
      if (UThemeData* theme = UColBPLibrary::GetGameTheme(this))
      {
         if (theme->LandingBlockSound)
         {
            UGameplayStatics::PlaySound2D(GetWorld(), theme->LandingBlockSound);
         }
      }
   }
   ...
}

Now the project can be built and the next step is to import the new sound effect. For the specific theme we have been using, I chose this sound:

(Embedded audio preview: piece landing sound effect.)

Once the asset is added, we have to change its sound class to SC_SFX. After that, we update the theme data to use this new asset. Then, we get a sound effect every time a block touches the floor!

Block Shatter Sound

Now that we have a sound effect playing every time a block touches the floor, something feels missing when blocks are removed from the grid and the particle effect is being animated, probably even more than before! At least that's what happened to me. In any case, we want a new sound effect to be played, this time right before destroying the block, in other words roughly at the same time we spawn the particle effect.

First, we add the property to our theme data:

ThemeData.h
// Sound effect played when blocks are removed from the grid
UPROPERTY(EditAnywhere, BlueprintReadWrite)
USoundWave* RemovingBlockSound;

Of course, we initialize this with nullptr in the constructor; I don't think I need to show the code for that, right? Next, import the sound effect into the project. Again, it must be set to the SC_SFX sound class:

(Embedded audio preview: block removal sound effect.)

And then we have to play it. From the StateRemovingBlock state, we perform the same tasks as before: retrieve the theme data, check the sound effect property and play it. Not inside the loop where we call the OnBeingDestroyed() event, but before or after the loop itself, so the sound effect is played only once. The exact moment doesn't matter in this case, although, for the sake of this tutorial, the code I'm using plays the sound before the loop:

GameModeInGame.cpp
AGameModeInGame::StateFunctionProxy AGameModeInGame::StateRemovingBlock(float Seconds)
{
   ...
   if (alpha >= 1.0f)
   {
      if (UThemeData* theme = UColBPLibrary::GetGameTheme(this))
      {
         if (theme->RemovingBlockSound)
         {
            UGameplayStatics::PlaySound2D(GetWorld(), theme->RemovingBlockSound);
         }
      }
      ...
   }
   ...
}

Once the project is built, we can update the theme data to use the new sound effect. And that's it, we have some sound effects! Granted, those are not the prettiest ones ever produced and the quality is far from the desired one. If you want to follow this route of recording the effects yourself, good equipment is definitely necessary, not to mention a silent room! If that's not an option, it's better to buy sound packs or find free sounds with licenses that allow usage in your projects. In any case, the sound files will most likely have to be adapted in one way or another, mostly to cut unwanted "silence" or even to convert them from stereo to mono.

Music - Traditional Game Mode

In the traditional game mode, the speed at which the player piece falls increases as the game progresses. Why not also increase the tempo of the music? Unfortunately we don't have an easy means to do that. Instead we have to create the "same music" at various tempos and then switch between them throughout the game. In order to point to the correct track to be played we can subdivide our speed progress and think about some "key moments" at which we transition from one tempo to another. Shortly we will see this in more detail.

With all that in mind, I used LMMS to generate the audio track, aiming at roughly 1 minute of, err, "music" at tempo = 80. From there, I generated 10 wave files with the same composition, varying only the tempo between each generated file (60, 65, 70, 75, 80, 85, 90, 100 and 110). Once the files are imported into the project, all of them have to be set to the correct sound class, SC_Music, and sound group, Music. As a side note, I really wish Unreal Engine supported importing other formats besides WAV, like OGG and/or FLAC.

The first thing we need for this to work is a sound cue. In its editor we can set up a parameter that will be used to select which audio file we want to play. That said, create a new Sound Cue asset named SCue_MusicTraditional. Once the editor is opened, make sure the Volume Multiplier property is set to 1.0. You can select the SC_Music sound class if desired, but it's not strictly necessary since we will make sure the output is remapped to this class through a node in the blueprint.

Then, drag all of the audio tracks into this blueprint editor, which will automatically create a Wave Player node for each one with the correct wav file set in it. Still, the Looping property of each of the ten nodes must be enabled since, by default, it's set to false (it can be done all at once if all the nodes are selected). Next we set up a parameter that will be used to tell which track will be played. This is done by adding a Switch node and, with it selected, setting the Int Parameter Name property to AudioSpeed. Effectively, from the outside we will reference this parameter using this name, AudioSpeed. Still with the Switch node selected, click Add Input Pin until there are 10 numbered pins, ranging from 0 to 9, and then connect each audio track to its input pin. Make sure the order is correct (tempo60 ↔ 0, tempo65 ↔ 1 and so on). Then we add a Sound Class node, which will ensure the audio is remapped to the desired sound class. Once the node is added, select it, set the SC_Music sound class and finally connect it to the output of the blueprint:

When we begin the playback, we set the AudioSpeed parameter of this sound cue to 0, which will make it switch to the first audio track (the one connected to input pin 0). Later, when we have to transition to the next tempo, we change the AudioSpeed parameter to the next value. The value of this parameter will be calculated by a simple multiplication of the speed progress, SpeedProgress * 10. Yes, when the speed progress reaches 1.0 we will get 10 as a result, which is not a valid value for the parameter based on how we have set up the sound cue. Instead of creating yet another track, we just clamp the result of the multiplication, effectively making the last track (tempo 110) potentially play for a longer time. With this calculation we are essentially dividing our speed progress into 10 discrete "key tempos".
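Just to make the math concrete, here is a minimal C++ sketch of that calculation (the project performs it in blueprint; this function is purely illustrative):

static int32 SpeedProgressToAudioSpeed(float SpeedProgress)
{
   // Map the speed progress [0..1] to the ten Switch inputs of the sound cue,
   // clamping so a progress of 1.0 still selects track 9 (tempo 110)
   return FMath::Clamp(FMath::FloorToInt(SpeedProgress * 10.0f), 0, 9);
}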

But the question now is: how and when will we play back the sound cue and update the AudioSpeed parameter? It's quite simple, really. We add a new Audio component to the relevant game mode class and, within it, select the sound cue. Since we want some kind of music in all of the in-game modes, let's add the component to our AGameModeInGame C++ class. First we declare it in the private section:

GameModeInGame.h
// Allows selection of a music or sound cue asset that can then be used to play back audio during the game
UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "Audio", meta = (AllowPrivateAccess = "true"))
class UAudioComponent* AudioComponent;

Then we have to initialize this component within the constructor. Basically, we create it and then attach it to the root component of the class. The code should be self explanatory (below, some of the old code is shown just for reference):

GameModeInGame.cpp
AGameModeInGame::AGameModeInGame()
{
   PrimaryActorTick.bCanEverTick = true;

   static ConstructorHelpers::FClassFinder<AColPlayerController> bp_pc(TEXT("/Game/Blueprints/BP_PCInGameDefault.BP_PCInGameDefault_C"));
   PlayerControllerClass = bp_pc.Class;

   AudioComponent = CreateDefaultSubobject<UAudioComponent>("Audio");
   AudioComponent->AttachToComponent(RootComponent, FAttachmentTransformRules::KeepRelativeTransform);

   ...
}

Once the code is compiled, opening BP_GMTraditional should show an AudioComponent in the component list tab. Select it so we can set the necessary property, Sound, which should be changed to SCue_MusicTraditional. Another property we will change is Auto Activate, which should be disabled. For the moment this will not make much difference since we will immediately start the music playback, but later on we might need to delay the playback, meaning auto activation must be disabled. The way the sound cue blueprint was created requires us to set a parameter value, otherwise nothing will be played. So we have to update the BeginPlay event and, from it, use the Set Integer Parameter node pulled from the audio component:
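For reference, a rough C++ equivalent of that BeginPlay graph could look like the sketch below. This is illustrative only: the project does it in the blueprint subclass, not in AGameModeInGame.

void AGameModeInGame::BeginPlay()
{
   Super::BeginPlay();

   if (AudioComponent)
   {
      // Select the first track (the one wired to Switch input 0) and start playback
      AudioComponent->SetIntParameter(TEXT("AudioSpeed"), 0);
      AudioComponent->Play();
   }
}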

If you play the game now in the traditional game mode, there should be some music, however very slow and at a fixed speed. Granted, we haven't added any logic to update the audio track yet! We already know how to set the desired sound cue parameter and, roughly, how to calculate the parameter's value. The question now is when to update it. As it turns out, we only change the speed progress when the player piece lands, making that a perfect place to calculate the cue's parameter. With that in mind, we can implement the OnPlayerPieceLanded event in the blueprint class. As mentioned, we calculate the parameter's value by a simple multiplication, meaning the graph is pretty straightforward:

With this, the audio track is indeed changed during the game, however we have 3 problems:

  1. When we change to the next audio track, the music goes back to the beginning and it may be somewhat distracting.
  2. When the game is over, the music continues playing.
  3. Restarting the game will not reset the sound cue's parameter to 0.

Solving problem 1 is a bit involved and will require some detailed explanation. Problems 2 and 3 are very easy and we will deal with them after solving 1. Actually, there is a fourth "problem" with the graph: we are setting the cue parameter every time the player piece lands. This will be "fixed" soon as part of the solution to the other issues.

The audio component provides a Play node which we haven't used yet. This node has an interesting input, StartTime, which we can (and will) use to specify the position at which playback should begin. In order to do that, we have to know how much of the current track has been played before transitioning. If we select the audio component there will be a few events that can be handled, one of them called On Audio Playback Percent. It would be absolutely fantastic if this event were consistent! The problem is that it often does not work, returning 0 all the time, especially after reopening the project.

We need a different approach to retrieve how much of the track has been played, and for that we will need C++. The difference in this case is that we will get the amount of time that has elapsed rather than a supposed percent value. We will still calculate a percent value shortly, but from blueprint. There are a few reasons we will not compute the percent value in C++. The most important is that we need the duration of the track being played and, for some reason, the wrong value was returned from native code. The second reason relates to the fact that when we update the sound cue's parameter the elapsed time goes back to 0 no matter the position we specify in the Play node, which forces us to calculate an "offset" for the next audio track.

Let's first create the C++ function that will return the elapsed time, in our blueprint library:

ColBPLibrary.h
// Return the elapsed playback time of the specified audio component
UFUNCTION(BlueprintPure, Category = "Audio")
static float GetPlaybackTime(class UAudioComponent* AudioComponent);

To implement this function we first obtain the component's audio device with GetAudioDevice, which gives us the active sound through the FindActiveSound function. With the active sound we can access the PlaybackTime property. The caveat here is that FindActiveSound has to run on a separate thread, more specifically the audio one. For that we have FAudioThread::RunCommandOnAudioThread, which requires a function object, so we will create a lambda to make things easier. The code looks like this:

ColBPLibrary.cpp
float UColBPLibrary::GetPlaybackTime(class UAudioComponent* AudioComponent)
{
   float retval = 0.0f;

   if (AudioComponent)
   {
      if (FAudioDevice* device = AudioComponent->GetAudioDevice())
      {
         const uint64 component_id = AudioComponent->GetAudioComponentID();

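         // Note: if a dedicated audio thread is running, this command is queued and may only
         // execute after this function has returned (leaving retval at 0); otherwise it runs
         // immediately on the calling thread.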
         FAudioThread::RunCommandOnAudioThread([device, component_id, &retval]()
         {
            if (FActiveSound* active = device->FindActiveSound(component_id))
            {
               retval = active->PlaybackTime;
            }
         });
      }
   }

   return retval;
}

With this we obtain the playback elapsed time, in seconds. However we want a percent value so we can calculate the starting time of the next audio file using:

S = T_i × p

Where S is the start time, T_i the duration of the audio track to be played and p is the playback percent. In other words, we need T_i. Unfortunately we can't obtain this value from the sound cue (the audio component, that is), because the reported value will be from the audio track currently being played and we want the next one. Because of that we have to create an array variable of Sound Wave type named MusicList. After compiling the blueprint we can set the "default value" of this array, that is, populate it: add 10 elements and set each one to one of the audio tracks. After that, we can just Get the necessary element from the array, with the index being the exact same value we use to set the sound cue parameter (AudioSpeed), and then Get Duration.

We still have to calculate the elapsed percent. This is as easy as taking the elapsed time and dividing by the duration of the current audio.

Because the playback time goes back to 0 when we change the cue's parameter, we have to compensate by keeping track of how much has already elapsed and adding that to the calculated percent. For this we create a new float variable (remember, we are working on the BP_GMTraditional blueprint) named PercentOffset. Then we can just take elapsed percent + percent offset in order to calculate the percent value that will be used now, as well as the offset for the next track. But what if the percent goes beyond 100%? Since our audio tracks are looping we can just check whether the new percent value is bigger than 1.0 and, if so, subtract 1 from it.

Once all that calculation is done we can update the audio component parameter and then call its Play node specifying the starting time. To make things simpler we create a new function named ChangeMusicTempo. In this function we add a Sequence node and perform the mentioned tasks:
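For reference, here is a rough C++ sketch of what the ChangeMusicTempo blueprint function does. It is purely illustrative: in the project this logic lives in BP_GMTraditional, and AudioComponent, PlaybackIndex, PercentOffset and MusicList refer to the component and blueprint variables mentioned in the text.

void ChangeMusicTempo(int32 NewIndex)
{
   // Percent played of the current track, plus the offset accumulated from
   // previous tempo changes
   const float Elapsed = UColBPLibrary::GetPlaybackTime(AudioComponent);
   float Percent = (Elapsed / MusicList[PlaybackIndex]->GetDuration()) + PercentOffset;

   // The tracks are looping, so wrap anything above 100% back around
   if (Percent > 1.0f)
   {
      Percent -= 1.0f;
   }

   PercentOffset = Percent;
   PlaybackIndex = NewIndex;

   // Switch the cue to the new track and resume at the equivalent position
   AudioComponent->SetIntParameter(TEXT("AudioSpeed"), NewIndex);
   AudioComponent->Play(MusicList[NewIndex]->GetDuration() * Percent);
}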

Next we have to update the OnPlayerPieceLanded event handler. Instead of directly setting the audio component parameter, we check whether there is any need to change the audio file and, if so, call the ChangeMusicTempo function we have just created. The check itself basically compares the result of GetSpeedProgress * 10 with the current PlaybackIndex. If they are different, the playback index is behind the progress and we have to update the component parameter:
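In sketch form (again illustrative, reusing the clamped calculation shown earlier), the handler boils down to something like this:

const int32 Desired = FMath::Clamp(FMath::FloorToInt(GetSpeedProgress() * 10.0f), 0, 9);
if (Desired != PlaybackIndex)
{
   // Only switch tracks when the speed progress has moved into a new "key tempo"
   ChangeMusicTempo(Desired);
}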

Ok, now we have solved the problem of the "fixed music speed" and we are not constantly setting the audio component parameter anymore. Next we have to stop the music when the game is over and reset the playback index to 0 when restarting the game from the game over menu. Let's begin with stopping the music when the game is over. For that, we handle the OnGameOver event and call the audio component's Stop function node:

We still have one issue to fix, yet this last change introduced a new one while "hiding" the old one. The fact is that when we now restart the game from the game over menu, no music will be played at all until the speed progress requests a new audio track. And, of course, not only will the incorrect file be played, the position won't be right either. All of that happens because we have to reset the PlaybackIndex as well as the PercentOffset, making sure the audio component parameter is correctly set. We want those resets to occur every time the game starts and we have an event meant exactly for that, the Custom Game Init! That said, add it to the event graph, reset the internal data and then play the audio:
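Again in illustrative sketch form, the reset performed by the Custom Game Init graph amounts to:

// Hypothetical equivalent of the Custom Game Init handling in BP_GMTraditional
PlaybackIndex = 0;
PercentOffset = 0.0f;
AudioComponent->SetIntParameter(TEXT("AudioSpeed"), 0);
AudioComponent->Play(0.0f);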

Music - Timed Game Mode

For this game mode the strategy will be a little different. The thing is, each game has a time limit, pre-defined by user selection from a list we have arbitrarily set. This means that having music that increases its tempo with the speed progress will not work very well. The idea here is to "remove" instruments from the playback as the game progresses.

Basically, after the music was created in LMMS, I individually exported each of the tracks so they could be combined in the sound cue blueprint. Initially I had this "clever strategy" to save storage space by exporting just a tiny bit of each track, the part that is repeated over and over. That indeed saved quite a bit of space (10 MB vs 413 KB per instrument). The problem is that every time the audio component's parameter was changed, the synchronization between the instruments broke since their track durations were not equal. Of course that doesn't sound good and unfortunately I wasn't able to find a proper solution for it. The result is that now each instrument's track uses the entire duration of the music.

After the audio tracks are imported into the project, don't forget to set Sound Group = Music, Sound Class = SC_Music and Looping = True. After that we can work on the sound cue blueprint, so create a new Sound Cue asset named SCue_MusicTimed. In it we will also use the Sound Class node to ensure the sound is mapped to SC_Music. To combine the instruments we use Mixer nodes. A few of them will be needed, each one outputting into a Switch node, which handles our parameter (name it Level). The idea is to connect one of the audio tracks directly to input 0 of the switch while combining that same track with another, through a Mixer, into input 1. This goes on until 5 audio tracks are combined into input 4 of the switch node. Lastly, we combine all of that with the melodic track (in the case of the screenshot, the 6_Mallets asset):

With the sound cue in place we can edit the BP_GMTimed blueprint and change the Sound property of the audio component to the SCue_MusicTimed asset. Just like in the traditional game mode, we will use the sound cue parameter to control how the music is played. In this case the parameter is named Level and it needs to be 4 in order to play every single instrument (if using the same assets from the tutorial).

Since we already know we will need to stop the music when the game is over, let's first implement the OnGameOver event handler, binding it from the BeginPlay event:

Following logic similar to what we used in the traditional game mode, we will hold a variable that is directly used to set the audio component parameter. That said, create a new integer variable named SoundLevel. We want its value to be 4 every time the game starts, meaning we set it from the custom game init event handler. Of course, we then call the Set Integer Parameter and Play nodes on the audio component:

Then we have to work on the logic that changes the internal SoundLevel variable and passes it to the audio component's Level parameter. The idea is to subdivide the duration of the game into 5 "levels". Because we have 4 different game durations as "difficulty" options, we have to work with percent values so we can spread the instrument removal process evenly no matter the chosen duration. The idea is to follow something like this table:

Percent         Level
[0.0 .. 0.2)    0
[0.2 .. 0.4)    1
[0.4 .. 0.6)    2
[0.6 .. 0.8)    3
[0.8 .. 1.0]    4

What this means is that from percent 0 to anything below 0.2 we want the result to be level = 0, while from exactly 0.2 to anything below 0.4 the result should be level = 1. That's fairly simple, since all we have to do is take the percent value and multiply it by 5, rounding down. Now notice the last row of the table, where both ends of the range are inclusive. With the math (level = floor(percent * 5)), when the percent reaches exactly 1.0 we will get level = 5. Our sound cue blueprint will not break with that, but having a level with that value may make some minds (including mine) a bit uneasy. So, we will clamp the level computation.

But how about the percent? It's also very easy to compute: all we do is take the current value of the game timer and divide it by its initial value. To make the blueprint graph a bit easier to work with, we first create a new function named ChangeMusic. In it we compute the current level using level = floor(TimeCount / InitialTime * 5) and, if the result is different from the SoundLevel variable we created earlier, we update the stored value and change the Level parameter within the audio component. I have found that pausing, changing the parameter and then unpausing gave more consistent results. By consistent I mean no "glitches" when removing an instrument. Those will still occur and, to be very honest, I gave up on trying to find the perfect solution because every single one I tried gave terrible results! Nevertheless, the one shown here is at least acceptable (although sometimes still noticeable). Anyway, the blueprint for this new function looks like this:
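As with the traditional mode, here is a purely illustrative C++ sketch of what the ChangeMusic blueprint function does; TimeCount, InitialTime and SoundLevel stand for the corresponding blueprint variables, and the project implements this in BP_GMTimed rather than in code.

void ChangeMusic()
{
   // Map the time percent into 5 discrete levels, clamping so a percent of
   // exactly 1.0 still results in level 4
   const float Percent = TimeCount / InitialTime;
   const int32 NewLevel = FMath::Clamp(FMath::FloorToInt(Percent * 5.0f), 0, 4);

   if (NewLevel != SoundLevel)
   {
      SoundLevel = NewLevel;

      // Pausing around the parameter change gave more consistent results
      AudioComponent->SetPaused(true);
      AudioComponent->SetIntParameter(TEXT("Level"), SoundLevel);
      AudioComponent->SetPaused(false);
   }
}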

The last thing now is to update the tick event handler. From it we just add a call to this new function:


This was probably the shortest part of the entire tutorial. Granted, I couldn't say much about the asset creation process, mostly because I'm still learning it and shamelessly sharing the results within this tutorial (and the GitHub project). As I have mentioned throughout the text, the results I obtained are far from ideal, but at least they serve the main purpose of the tutorial: showing how to use the assets in the game. Another thing to keep in mind is that we have structured the sound usage in a way that lets us easily change the volume settings from a UMG widget. We will do so in the next part, where we focus on more polishing.
