New Orbiter Beta Released (r.56, Jun 30 2016)
Bit of a gap since last beta - sorry for that. Anyway, here is a new one:
Change log:
- TileLoader: now loads multiple tiles per mutex lock to better utilise the tile loader thread. Should result in faster scene buildup.
- Bug fix: made GraphicsClient::TexturePath threadsafe (caused visual artefacts in planet surface rendering of D3D7 client)
- TileManager2: added support for loading tiles from compressed archive files
- API: added oapiDeflate and oapiInflate
- texpack: utility for packing the directory tree contents of a given layer into a compressed archive file
- Subsystem: added clbkConsumeBufferedKey and clbkConsumeDirectKey methods to allow subsystems to process their own key events. Updated DeltaGlider code accordingly.
- Scenarios: Fixed DeltaGlider/DG and DG-S scenario (moved the two vessels out of the woods)
- Reduced default ambient level from 5 to 2
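The subsystem key-handling change above can be pictured as a dispatch loop: the vessel offers each buffered key to its subsystems in turn until one consumes it. Here is a minimal sketch with simplified, hypothetical signatures (the real Orbiter callback takes a DWORD key code and key-state buffer; class and method shapes here are illustrative only):

```cpp
#include <memory>
#include <vector>

// Hypothetical, simplified stand-in for a subsystem base class.
// The real clbkConsumeBufferedKey has a richer signature.
struct Subsystem {
    virtual ~Subsystem() {}
    // Return true if this subsystem consumed the key event.
    virtual bool clbkConsumeBufferedKey(int key, bool down) { return false; }
};

struct GearSubsystem : Subsystem {
    bool deployed = false;
    bool clbkConsumeBufferedKey(int key, bool down) override {
        if (key == 'G' && down) { deployed = !deployed; return true; }
        return false;  // not our key: let other subsystems see it
    }
};

// Vessel-level dispatch: first consumer wins, so each subsystem can
// process its own key events, as in the updated DeltaGlider code.
bool DispatchBufferedKey(std::vector<std::unique_ptr<Subsystem>> &subsys,
                         int key, bool down) {
    for (auto &s : subsys)
        if (s->clbkConsumeBufferedKey(key, down)) return true;
    return false;
}
```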
New OVP commit to go with this beta: r.51.
The main new feature in this beta is added support for loading planetary tiles from a compressed archive. This was the last component I wanted to implement before a release.
I experimented with various options here, including an interface to a RAR decompressor kindly provided and tested by Doug. In the end, in the interest of decompression performance, I decided on a more homespun approach: instead of using a standard multi-file compression format, I compress the input files individually (using zlib) and concatenate the results into an archive file, after adding a table of contents for the tree structure.
The TOC is essentially a linked list similar to a FAT allocation table, except that it is a quadtree rather than a linear list. The advantage is fast searching through the tree (instead of having to go through a hash table with hundreds of thousands of file entries), and the TOC is compact enough to be kept in memory.
The disadvantage of this approach is less efficient compression, since each file is compressed individually rather than the input stream as a whole. But I think the tradeoff is worth it.
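The compress-individually-and-concatenate idea can be sketched as below. This is an illustration only: an identity "compressor" stands in for zlib deflate, and the record layout (a name-keyed map rather than the quadtree TOC) is my assumption, not the actual archive format:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative archive: each file is packed on its own, the blobs are
// concatenated, and a table of contents records where each blob lives.
// (Identity "compression" stands in for zlib deflate here.)
struct TocEntry { uint64_t offset; uint32_t size; };

struct Archive {
    std::map<std::string, TocEntry> toc;  // real Orbiter uses a quadtree TOC
    std::vector<uint8_t> blob;            // concatenated packed payloads

    void add(const std::string &name, const std::vector<uint8_t> &data) {
        std::vector<uint8_t> packed = data;      // deflate(data) in reality
        toc[name] = { blob.size(), (uint32_t)packed.size() };
        blob.insert(blob.end(), packed.begin(), packed.end());
    }
    std::vector<uint8_t> extract(const std::string &name) const {
        const TocEntry &e = toc.at(name);
        // inflate() would run on this slice in reality
        return std::vector<uint8_t>(blob.begin() + e.offset,
                                    blob.begin() + e.offset + e.size);
    }
};
```

Because each payload is self-contained, a single tile can be decompressed without touching the rest of the archive, which is what matters for load performance.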
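A hedged sketch of the quadtree TOC idea: each node carries its tile's data location plus four child links, so a tile at resolution level lvl is found by walking at most lvl links instead of hashing a full path. Node layout and field names are assumptions for illustration, not the actual on-disk format:

```cpp
#include <cstdint>
#include <vector>

// Illustrative quadtree TOC node: -1 marks a missing child.
// Field names are assumptions, not the actual .tree layout.
struct TreeNode {
    uint64_t offset = 0;               // position of compressed tile data
    uint32_t size   = 0;               // compressed size (0 = no data)
    int32_t  child[4] = {-1,-1,-1,-1}; // quadtree subdivision
};

// Descend from the root: at each level pick the quadrant containing the
// requested (ilat, ilng) tile. Cost is O(lvl), independent of the total
// number of entries in the archive.
int FindNode(const std::vector<TreeNode> &toc, int lvl, int ilat, int ilng) {
    int node = 0;                      // index of the root node
    for (int l = lvl - 1; l >= 0; --l) {
        int q = (((ilat >> l) & 1) << 1) | ((ilng >> l) & 1);
        node = toc[node].child[q];
        if (node < 0) return -1;       // tile not present in the archive
    }
    return node;
}
```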
I will update the texture download packages soon to provide these archive files instead of the individual tile directory trees, but for now you can create the archive files yourself (if you have already installed the planet texture packs for the previous betas):
I have included Utils\texpack.exe, which takes the current directory tree for a layer in a planet's texture directory and converts it into an archive file. Run "texpack -h" for a help page.
For example, the syntax for compressing Earth's surface layer is
Code:
cd utils
texpack ..\Textures\Earth Surf
This will produce the compressed archive in ..\Textures\Earth\Archive\Surf.tree, which is the correct location for Orbiter to pick it up.
Do the same for the other layers, for all planets supporting the new tile format. Be warned that the larger trees (Earth, Moon, Mars) can take a long time (several hours) to archive.
Note that it is not necessary to generate the archives to use this beta. The old format of individual files is still supported.
You can set Orbiter's behaviour in searching for tiles under Extra | Visualisation parameters | Planet rendering options | Tile sources.
- Load from tile cache only: Ignore the archives, and load from individual files as before
- Load from compressed archive only: Ignore the individual files, and load only from the archive files
- Try cache first, then archive: what it says. This option lets you substitute individual tiles without repacking the archive, but it comes with a performance penalty, since Orbiter may have to search both the cache and the archive.
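The three modes above reduce to a small fallback cascade, sketched here with stand-in loader callbacks (the enum and function names are mine for illustration, not the actual Orbiter implementation):

```cpp
#include <functional>

// Illustrative tile-source cascade; names are stand-ins, not Orbiter code.
enum class TileSource { CacheOnly, ArchiveOnly, CacheThenArchive };

// Each loader returns true on success (file found and decompressed).
bool LoadTile(TileSource mode,
              const std::function<bool()> &loadFromCache,
              const std::function<bool()> &loadFromArchive) {
    switch (mode) {
    case TileSource::CacheOnly:   return loadFromCache();
    case TileSource::ArchiveOnly: return loadFromArchive();
    case TileSource::CacheThenArchive:
        // Source of the performance penalty: a cache miss probes both.
        return loadFromCache() || loadFromArchive();
    }
    return false;
}
```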
Note that loading from archives currently works only for the inline client and the D3D7 client. (The D3D7 client has a similar option on its configuration page.) The D3D9 client will need Jarmo to add support before it can load the archives.
The D3D7 code in the new OVP commit demonstrates how to add support.
I would be interested in users' performance comparisons between the three loading options. Do you get similar loading times from the archive as from the individual files? Any other observations/problems with this feature?