
Virtual File System + zlib integration

I’m currently trying to optimize resource loading before starting a full level design swing on my 3D shooter. The main problem is that disk file access is slow. This is true on virtually all platforms, including my two platforms of interest, iPhone and Windows. It comes down to a couple of factors: disk storage is typically abundant but slow, and it can be shared by several processes, which makes it a costly resource to access.

Data packing and partitioning

So in order to optimize resource loading, I made sure the disk is accessed in a managed and efficient way. Firstly, the game data is now packed into a single “data.fs” file instead of a free-for-all of loose files. Secondly, this data file is organized into “partitions”. The main idea is to be able to pre-load a bunch of files into RAM and access them later in a super fast way. But since RAM is itself a scarce resource, partitions must be carefully managed, loaded, and disposed of when not needed. In particular, partitions should not be too big, so they don’t bloat the RAM and take ages to load. Also, the first partition to be loaded must be small so that the game boots quickly, which gives an impression of speed. Here is a quick description of my partitions:

  • Boot Partition: data needed to boot the game (fonts, main menu graphics, ..), loaded in the foreground thread as soon as the game starts.
  • Common Partition: data common to all game levels (player data, HUD data, ..), loaded in the background as soon as the game starts.
  • Partition1: data for Level1, is loaded in the background when Level1 is requested 
  • Partition2: data for Level2, is loaded in the background when Level2 is requested 
  • Partition3: data for Level3, is loaded in the background when Level3 is requested
  • etc.
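The load/unload lifecycle above can be sketched as a small manager class. This is a minimal illustration, not the actual Shoot API; all names here (PartitionManager, Load, Unload, IsLoaded) are hypothetical:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: partitions are pulled into RAM on demand and
// freed once their level is done, so memory use stays bounded.
class PartitionManager
{
public:
    // In the real engine the bytes would be read from data.fs,
    // typically on a background thread.
    void Load(const std::string& name, const std::vector<char>& bytes)
    {
        m_Partitions[name] = bytes;
    }

    // Disposes of a partition that is no longer needed.
    void Unload(const std::string& name)
    {
        m_Partitions.erase(name);
    }

    bool IsLoaded(const std::string& name) const
    {
        return m_Partitions.count(name) != 0;
    }

private:
    std::map<std::string, std::vector<char>> m_Partitions;
};
```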

To facilitate development, the data files are accessed directly in Debug mode (the data.fs pack is not used). However, the data files need to be reorganized into sub-folders, each one representing a partition.

In Release mode, data.fs is used, so I have a build step that prepares it. I wrote a tool that reads the structure of the data folder and outputs the corresponding pack in a binary format. At the beginning of the pack there is a metadata header describing the files: the offset of each partition in the pack, and the offset of each file relative to its partition. After the header, the content of each partition is appended.
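The header described above could look something like this in memory. The field names and layout are illustrative guesses, not the actual data.fs binary format:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative in-memory view of the data.fs metadata header.
struct FileEntry
{
    std::string Path;    // original path of the packed file
    uint32_t    Offset;  // offset relative to the partition start
    uint32_t    Size;    // file size in bytes
};

struct PartitionEntry
{
    std::string            Name;    // e.g. "Boot", "Common", "Partition1"
    uint32_t               Offset;  // offset of the partition within data.fs
    std::vector<FileEntry> Files;   // files contained in this partition
};

// The header lists every partition; the partition contents follow it.
struct PackHeader
{
    std::vector<PartitionEntry> Partitions;
};
```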

Reading files from memory

The last step is to actually use this file system :). In Release mode, every time the engine requests file data, the request is transparently redirected to the file data in RAM (part of a previously loaded partition) instead of being read from disk. A file map, loaded at startup from the data.fs header, is used to resolve a file path to its offset in RAM.

This integration is completely transparent to engine code, so when libpng or tinyxml call shoot::File::Read (the Shoot equivalent of fread), they are given the corresponding data from RAM really quickly.
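The RAM-backed read path can be sketched like this. The names (MappedFile, MemoryFile) are illustrative, not the real shoot::File implementation; the point is that Read copies from a loaded partition's buffer instead of touching the disk:

```cpp
#include <algorithm>
#include <cstring>

// Points inside an already loaded partition's buffer, as resolved
// from the file map built out of the data.fs header.
struct MappedFile
{
    const char* Data;
    size_t      Size;
};

// Drop-in stand-in for an fread-style reader, backed by RAM.
class MemoryFile
{
public:
    explicit MemoryFile(const MappedFile& file) : m_File(file), m_Offset(0) {}

    // Copies up to 'size' bytes from the in-RAM file; returns bytes read.
    size_t Read(void* dest, size_t size)
    {
        size_t toRead = std::min(size, m_File.Size - m_Offset);
        std::memcpy(dest, m_File.Data + m_Offset, toRead);
        m_Offset += toRead;
        return toRead;
    }

private:
    MappedFile m_File;
    size_t     m_Offset;
};
```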

Compression using zlib

Since I have a lot of data in XML format (levels, resource descriptors, entity templates, ..), I decided to optimize things further and compress each partition using zlib. zlib was already there as part of the libpng integration, so I just had to use it. zlib is so easy to use; here is a quick example:

unsigned long compressedDataSize = (unsigned long)(originalDataSize * 1.1) + 12; // or use compressBound(originalDataSize)

compress(dataCompressed, &compressedDataSize, data, originalDataSize);

uncompress(data, &originalDataSize, dataCompressed, compressedDataSize);
