This addon directory (keh_general) is meant to hold general use case scripts. In a way, those scripts can be considered "sub-addons". What is available:
This script was born from the desire to automate the encoding and decoding of variant data into packed low level byte arrays (PoolByteArray). Now, why all this trouble? The thing is that variables in GDScript take more bytes than we normally expect. Each one contains a 4 byte "header" indicating which type is actually stored within. When dealing with networks, this extra data may not be desirable.
So, basically, the main reason for this addon is to simplify as much as possible the task of stripping out the variant headers and ultimately reduce the required bandwidth when dealing with networked games.
The basic usage of this class is to instance it, then initialize the internal PoolByteArray with some initial data, either to be filled (encoding) or extracted from (decoding).
This addon's specific demo is found in the demos/general/edbuffer.tscn scene, which contains an attached script.
In order to begin using this, an object must be created:
```gdscript
func some_function() -> void:
    # some code
    var encdec: EncDecBuffer = EncDecBuffer.new()
    # Initialize with an empty buffer so it can be filled
    encdec.buffer = PoolByteArray()
```
From there, it becomes possible to fill the internal buffer using several of the provided functions, which will be described shortly. There is one thing to keep in mind, though. When the object is created, it performs some calculations in order to initialize internal values that are necessary to properly deal with the data. Ideally this information would be static, that is, initialized once and shared between all instances of this class (if you know C++ then you understand what I'm talking about here). Unfortunately that's not possible with GDScript, so when profiling your code, keep an eye on the instantiation of this class. If this becomes a problem, I suggest making this object a property of your script (instead of a function local) and reusing the internal buffer when necessary. In other words, avoid creating an instance of this class at every single loop iteration; otherwise it should be fine.
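As a sketch of that suggestion, the helper could be held as a script property and only its internal buffer reset before each use (the names `_encdec` and `build_packet()` are made up for illustration):

```gdscript
extends Node

# Create the helper once, when the script is instanced, instead of
# paying the initialization cost inside a loop or per-frame function.
var _encdec: EncDecBuffer = EncDecBuffer.new()

func build_packet(some_int: int, some_float: float) -> PoolByteArray:
    # Reuse the same instance; just reset its internal buffer.
    _encdec.buffer = PoolByteArray()
    _encdec.write_int(some_int)
    _encdec.write_float(some_float)
    return _encdec.buffer
```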
Following the previous snippet, the created object, encdec, is ready to be filled with data, which can be done through the various write_*() functions. The available options are:
As you can see, just a subset of the Godot types can be stored within the buffer, but to be honest, there is no real need to expand to other types. Nevertheless, calling one of those functions will basically append the relevant data to the internal encdec.buffer. There is another thing to note here: two of those functions correspond to types that are not exactly present in Godot, write_byte() and write_ushort(). In the how it works tab I talk a little more about those.
Anyway, to exemplify those functions, let's encode one value of each type into the object, assuming we have the relevant variables and a function named send_data() that receives a PoolByteArray as argument (be careful with the fact that byte arrays are passed by value, not by reference, meaning that they may become expensive when given as arguments):
```gdscript
func some_function() -> void:
    # some code, which may include the variables getting the desired values.
    encdec.write_bool(some_bool)
    encdec.write_int(some_int)
    encdec.write_float(some_float)
    encdec.write_vector2(some_vec2)
    encdec.write_rect2(some_rect2)
    encdec.write_vector3(some_vec3)
    encdec.write_quat(some_quat)
    encdec.write_color(some_color)
    encdec.write_uint(some_hash)
    encdec.write_byte(some_byte)
    encdec.write_ushort(some_16bitnum)
    # Send this encoded data somewhere else
    send_data(encdec.buffer)
```
See how simple it was to just pack the data into the buffer? In the demo project there is a comparison between the data usage of this encoding and that of the var2bytes() function storing the same values. For further testing, the four compression methods (FastLZ, Deflate, Zstd and GZip) provided by PoolByteArray were also used to compare the EncDecBuffer packing against the normal var2bytes() results.
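As a rough illustration of why the headers matter, a quick check like the one below can be done. The exact sizes depend on the Godot version; in Godot 3, var2bytes() of a small int typically produces 8 bytes (4 of header plus 4 of payload), while write_int() should append only the 4 payload bytes:

```gdscript
func compare_sizes() -> void:
    var encdec: EncDecBuffer = EncDecBuffer.new()
    encdec.buffer = PoolByteArray()
    # Raw encoding: only the 4 bytes of the integer itself.
    encdec.write_int(42)

    # Variant encoding: carries the 4 byte type header as well.
    var with_header: PoolByteArray = var2bytes(42)

    print("EncDecBuffer: %d bytes" % encdec.buffer.size())
    print("var2bytes:    %d bytes" % with_header.size())
```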
Now, how about obtaining those values back? That is done by the various read_*() functions, and there is one corresponding to each of the write_*() functions. When one of those functions is called, the internal reading index is incremented by the corresponding amount of bytes, meaning that if you call the reading functions in the exact same order as the writing, you get everything back, as shown in the following snippet:
```gdscript
func receive_data(data: PoolByteArray) -> void:
    # Create the encoder/decoder object
    var encdec: EncDecBuffer = EncDecBuffer.new()
    # Initialize its buffer with the received data
    encdec.buffer = data
    # Extract the values - they must be read in the same order they were encoded
    var some_bool: bool = encdec.read_bool()
    var some_int: int = encdec.read_int()
    var some_float: float = encdec.read_float()
    var some_vec2: Vector2 = encdec.read_vector2()
    var some_rect2: Rect2 = encdec.read_rect2()
    var some_vec3: Vector3 = encdec.read_vector3()
    var some_quat: Quat = encdec.read_quat()
    var some_color: Color = encdec.read_color()
    var some_hash: int = encdec.read_uint()
    var some_byte: int = encdec.read_byte()
    var some_16bitnum: int = encdec.read_ushort()
    # Do some stuff with those extracted values
```
There is also a set of rewrite_*() functions, which make it possible to overwrite specific bytes within the buffer. As a use case example, suppose we want to store an arbitrary number of objects. When reading back, we obviously need to know how many there are, so before packing the objects we first write this amount into the buffer. But what if we only know the actual number of objects after iterating through a list? One possible solution would be to iterate through the list once to count, write the object count, then iterate through the list again, storing the objects in the process. For a rather small number of objects this should not be a problem. Still, with the rewrite functionality, the solution becomes: obtain the byte "address" where the object count begins, write a "dummy amount" into the buffer, iterate through the objects storing them, and finally rewrite the object count at the obtained "address". The following snippet showcases this:
```gdscript
# ... some code
# Obtain the "address" of the object count
var count_address: int = encdec.get_current_size()
# Write the dummy object count
encdec.write_uint(0)
# Use this to hold the object count
var obj_count: int = 0
# Iterate through the objects
for obj in object_list:
    # ... some code
    # Assume must_store() performs some tests and returns true if the object is meant to be stored
    if (must_store(obj)):
        obj_count += 1
        # Store the object within the encdec
        # ...
# Now rewrite the object count
encdec.rewrite_uint(obj_count, count_address)
```
While this functionality is not used in the example code, the network addon does use it. Nevertheless, that's all! This addon is that simple to use!
When the range of a floating point number is known, it becomes possible to quantize it into integers using a smaller number of bits. This is a lossy compression, as it basically reduces the precision. On many occasions the introduced error is small enough to be acceptable.
As an example, colors (Color) store their values in four floating point components, but the values are always in the range [0..1]. Very small discrepancies in the color may very well be completely unnoticeable, and thus it may be acceptable to compress each component into a rather small number of bits. Another use case is the compression of rotation quaternions, with some extra techniques, which are described in the how it works tab.
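The underlying idea can be sketched like this. This is a simplified version for illustration only, not the addon's actual implementation, and the function names are made up:

```gdscript
# Simplified sketch of unit-float quantization (not the addon's actual code).
# Map [0..1] onto the integer range [0 .. 2^bits - 1], then back.
func my_quantize_unit_float(value: float, bits: int) -> int:
    var max_int: int = (1 << bits) - 1
    return int(round(clamp(value, 0.0, 1.0) * max_int))

func my_restore_unit_float(quantized: int, bits: int) -> float:
    var max_int: int = (1 << bits) - 1
    return float(quantized) / float(max_int)
```

The rounding step is where the precision is lost: every float within roughly half a quantization step of the stored value restores to the same result.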
The entire functionality given by the class is done through static functions, meaning that an instance of the class is not needed to use anything in this addon.
To quantize a floating point number in the range [0..1], the quantize_unit_float() function is provided. It requires two arguments: the value to be compressed and the number of bits, which must be between 1 and 32. Note that 32 bits is not exactly useful in this case, as it's the same amount of bits used by the original floating point number. An integer will be returned. Note that it will still be a full 32-bit integer plus the variant header. In this state it's not exactly useful, but the remaining bits can be safely discarded or used to hold additional quantized floats.
To restore the float (still in the [0..1] range) there is the restore_unit_float() function. It requires the quantized integer data and the number of bits used to compress the original float. The return value is a float that should be close to the original one.
In practice it may look like this:
```gdscript
# Quantize a float in range [0..1] using 10 bits
var quantized: int = Quantize.quantize_unit_float(0.45, 10)
# ... some code
# Restore the quantized float
var restored: float = Quantize.restore_unit_float(quantized, 10)
```
In this case, restored = 0.449658. It does incorporate an error, which becomes smaller as the number of bits is increased.
What about different ranges? For that, there is the quantize_float() function, which requires 4 arguments: the float to be compressed, the minimum and maximum values, respectively, and finally the number of bits. It will then return an integer value, much like quantize_unit_float().
To restore a float with an arbitrary range there is the restore_float() function, which also requires 4 arguments: the integer containing the quantized float, the minimum and maximum values, respectively, and finally the number of bits used to compress the float.
So, suppose we want to quantize a float that goes in the range [-1..1], using 16 bits this time:
```gdscript
# Quantize a float in range [-1..1] using 16 bits
var quantized: int = Quantize.quantize_float(-0.35, -1.0, 1.0, 16)
# ... some code
# Restore the quantized float
var restored: float = Quantize.restore_float(quantized, -1.0, 1.0, 16)
```
This should result in restored = -0.349996.
That is basically all for simple floating point quantization! And as mentioned before, those functions can be used to compress the components of rotation quaternions. The exact way this is done is explained in the how it works tab. The basic knowledge needed to understand the rest of the text in this tab is that the compression finds the largest component, drops it and compresses the remaining components (hence the name smallest three). Nevertheless, a few functions are provided in this script just to make things easier.
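Before looking at the provided functions, the smallest-three idea can be sketched conceptually. This is not the addon's implementation; it relies on two properties of unit quaternions: the dropped (largest) component can be reconstructed from the other three, and each remaining component is bounded by [-1/sqrt(2), 1/sqrt(2)]. Here the sign of the dropped component is handled by flipping the whole quaternion instead of storing a sig field:

```gdscript
# Conceptual sketch of the "smallest three" compression (not the addon's code).
const COMP_RANGE: float = 0.7071068   # 1 / sqrt(2)

func my_smallest_three(q: Quat, bits: int) -> Dictionary:
    # Find the component with the largest absolute value.
    var comps: Array = [q.x, q.y, q.z, q.w]
    var index: int = 0
    for i in range(1, 4):
        if abs(comps[i]) > abs(comps[index]):
            index = i
    # q and -q represent the same rotation, so flip signs to make
    # the dropped component positive; its sign then needs no storage.
    if comps[index] < 0.0:
        for i in range(4):
            comps[i] = -comps[i]
    # The remaining three components are guaranteed to be within
    # [-1/sqrt(2), 1/sqrt(2)], so quantize inside that range.
    var c: Array = []
    for i in range(4):
        if i != index:
            c.append(Quantize.quantize_float(comps[i], -COMP_RANGE, COMP_RANGE, bits))
    return { "c": c, "index": index }
```

Restoring works the other way around: dequantize the three stored components, then compute the dropped one as sqrt(1 - x² - y² - z²), which is always non-negative thanks to the sign flip.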
compress_rotation_quat() is a function provided to perform the rotation quaternion compression in a "general way". It requires two arguments, the quaternion itself and the number of bits per component. The return value is a Dictionary containing a few fields:
c: the remaining quantized components, basically the return value of quantize_float() for each of those components.
index: indicates which quaternion component was dropped (0 = x, 1 = y, 2 = z and 3 = w).
sig: while not entirely necessary, indicates the original sign of the dropped component (1 = positive, 0 = negative).
Then there is the restore_rotation_quat() function, which requires two arguments. The first one is a Dictionary in the exact same format as the one returned by compress_rotation_quat(). The second one is the number of bits used per component. It will then return the restored Quat.
As an example, suppose we have a rotation quaternion named rquat and want to compress it using 10 bits per component:
```gdscript
# Compress a rotation quaternion using 10 bits per component
var compressed: Dictionary = Quantize.compress_rotation_quat(rquat, 10)
# ... in here we could pack the components of the returned dictionary into a single integer
# Restore the quaternion
var restored: Quat = Quantize.restore_rotation_quat(compressed, 10)
```
That's the basic idea. However, the returned dictionary by itself is not very useful. Indeed, each of its fields is a full variant! Just the 3 components would be using 24 bytes (3 * 4 for the variant headers, plus 3 * 4 for the integers). This dictionary is meant to serve as intermediary data, and the compression only becomes useful when the result is packed into integers (as mentioned in the comment in the previous snippet).
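One possible way to do that packing is sketched below, assuming 9 bits per component, which together with the 2-bit index and the 1-bit sign fits in 30 bits of a single integer. The function names are made up; only the dictionary keys (c, index, sig) come from the addon:

```gdscript
# Sketch: pack the dictionary returned by compress_rotation_quat(rquat, 9)
# into a single integer (3 * 9 bits + 2 index bits + 1 sign bit = 30 bits).
func my_pack_quat(compressed: Dictionary) -> int:
    var packed: int = 0
    for comp in compressed.c:
        packed = (packed << 9) | (comp & 0x1FF)
    packed = (packed << 2) | (compressed.index & 0x3)
    packed = (packed << 1) | (compressed.sig & 0x1)
    return packed

func my_unpack_quat(packed: int) -> Dictionary:
    var sig: int = packed & 0x1
    var index: int = (packed >> 1) & 0x3
    var c: Array = [
        (packed >> 21) & 0x1FF,
        (packed >> 12) & 0x1FF,
        (packed >> 3) & 0x1FF,
    ]
    return { "c": c, "index": index, "sig": sig }
```

The unpacked dictionary can then be handed back to restore_rotation_quat(). This is essentially what the wrapper functions described next do for you.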
To facilitate things a little, there are 3 "wrappers" to compress rotation quaternions, using 9, 10 or 15 bits per component. In the first two cases the data can be packed into a single integer, which is the return value of those two functions. 15 bits per component requires more than the 32 bits of a single integer, so the function for this case returns a PoolIntArray containing two integers, one fully used and another that can have 16 of its bits discarded. The functions in question are compress_rquat_9bits(), compress_rquat_10bits() and compress_rquat_15bits(). In all cases the rotation quaternion is the sole required argument.
To restore those quaternions, 3 functions are provided: restore_rquat_9bits(), restore_rquat_10bits() and restore_rquat_15bits(). In the first two cases only a single integer is required as argument, which should match the return value of the corresponding compress function. The 15 bits case, however, requires two integers, which should match those returned in the array of the corresponding function.
Because dealing with the 9 and 10 bits cases is pretty straightforward, only the 15 bits case is shown in the following snippet. Regardless, suppose we have a rotation quaternion named rquat to be compressed using 15 bits per component:
```gdscript
# Compress a rotation quaternion using 15 bits per component
var compressed: PoolIntArray = Quantize.compress_rquat_15bits(rquat)
# In here, 16 bits of the second integer can be discarded
# Restore the quaternion
var restored: Quat = Quantize.restore_rquat_15bits(compressed[0], compressed[1])
```
That's basically how to use the rotation quaternion compression! Still there are two very important facts that must be kept in mind:
If you have also checked the EncDecBuffer script (encdecbuffer.gd), you will probably notice how these two scripts can complement each other rather well.