Not really: first of all, showing surrounding terrain realistically means you need roughly constant *angular* resolution. If you have 30 m resolution at the edge of your 30 km tile, you need the same resolution for the surrounding 8 tiles. But for the next ring, you are already 3 times further from the center of your current tile than from its edge, so you can use 3 times lower resolution: instead of 8 x (3 x 3) = 72 tiles at 30 m, you can use 8 tiles at 90 m resolution. So you have 9 tiles at 30 m and 8 tiles at 90 m (17 tiles in total, 34 MB of data with 16-bit samples), and cover a distance of about 45 km + 90 km = 135 km. That would be enough coverage for flying at about 2000 m altitude (above the average tile altitude, not MSL).
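The tile and memory arithmetic above can be sketched in a few lines (all numbers are taken from the text; the variable names are mine):

```python
# Two-ring tile layout from the text: a 3x3 block of 30 m tiles
# (30 km each) surrounded by a ring of eight 90 m tiles (90 km each).
BYTES_PER_SAMPLE = 2      # 16-bit height samples
SAMPLES_PER_EDGE = 1000   # 30 km / 30 m -- and also 90 km / 90 m

def tile_bytes():
    """Memory for one square height tile; both resolutions have the
    same sample count, so every tile is the same size."""
    return SAMPLES_PER_EDGE ** 2 * BYTES_PER_SAMPLE

tiles = 9 + 8                              # inner block + outer ring
memory_mb = tiles * tile_bytes() / 1e6     # decimal megabytes
coverage_km = 1.5 * 30 + 90                # half the 3x3 block + one outer tile
print(tiles, memory_mb, coverage_km)       # 17 tiles, 34.0 MB, 135.0 km
```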
If you fly at 30 km altitude, you don't really need 30 m resolution for covering your position; 90 m would already be more than enough. This lets you cover 3 times the distance with the same number of loaded tiles: about 3 x 135 km = 405 km.
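A quick check of that scaling, pure arithmetic on the numbers above:

```python
# Same 17-tile layout as before, but with a 90 m base resolution all
# tile widths scale by 3x: 90 km inner tiles, 270 km outer tiles.
inner_km, outer_km = 90, 270
coverage_km = 1.5 * inner_km + outer_km   # half the 3x3 block + one outer tile
print(coverage_km)                        # 405.0, i.e. 3x the 135 km from before
```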
What you need is intelligent preloading: decide which tiles are most likely to be needed next. Those are the tiles along your current flight path, while tiles right behind you are no longer needed once out of sight. At higher speeds you also can't see much detail, so you don't need 30 m resolution around you either; you only need to be prepared to render it as soon as speed drops.
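One way to express that preloading idea is a priority score that favors near tiles ahead of the velocity vector and penalizes tiles behind you. This is a hypothetical heuristic of mine, not something specified in the text:

```python
import math

def tile_priority(tile_center, position, velocity):
    """Smaller score = load sooner. Tiles ahead of the travel
    direction score high priority, tiles directly behind score low."""
    dx = tile_center[0] - position[0]
    dy = tile_center[1] - position[1]
    dist = math.hypot(dx, dy) or 1.0
    speed = math.hypot(*velocity) or 1.0
    # Cosine of the angle between travel direction and tile direction:
    # +1 straight ahead, -1 directly behind.
    ahead = (dx * velocity[0] + dy * velocity[1]) / (dist * speed)
    return dist * (2.0 - ahead)

tiles = [(60, 0), (-60, 0), (0, 60)]      # ahead, behind, abeam (km)
tiles.sort(key=lambda t: tile_priority(t, (0, 0), (1, 0)))
print(tiles)   # the tile ahead sorts first, the one behind last
```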
(Of course, the 30 m resolution is only approximate - in reality you would use angle as the unit for covering a sphere's surface.)
If you have enough memory to hold all the height data, this algorithm of course runs at full speed. But you could still scale it down for weaker machines: reduce the number of tiles shown by using lower base visibility ranges, or use a lower minimum resolution for the tiles.
Now, you could also have the special case of flying with a telescopic sight... but then you would need fewer tiles, since your most likely line of travel produces a very narrow cone of high-resolution tiles.
Such a divide-and-conquer approach also makes better use of multiple cores. While I/O would still be limited by the single HDD your data sits on, you could compute the tile priorities faster with a tree-like structure, compared to a purely iterative approach, which would not allow good task stealing.
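The tree-like structure could look something like this: a quadtree node either stands for a whole coarse region or splits into four children, and sibling subtrees can be handled by different worker threads. A minimal sketch, with all names hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def collect_tiles(x, y, size, depth, pool=None):
    """Recursively split a square region into candidate tiles.
    At the top level, the four quadrants are scored by separate
    workers; deeper levels run sequentially inside each worker."""
    if depth == 0:
        return [(x, y, size)]          # leaf: one candidate tile
    half = size / 2
    children = [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]
    if pool is not None:
        futures = [pool.submit(collect_tiles, cx, cy, half, depth - 1)
                   for cx, cy in children]
        results = [f.result() for f in futures]
    else:
        results = [collect_tiles(cx, cy, half, depth - 1)
                   for cx, cy in children]
    return [tile for sub in results for tile in sub]

with ThreadPoolExecutor() as pool:
    tiles = collect_tiles(0, 0, 240, 2, pool)
print(len(tiles))   # 16 leaf regions from 2 levels of splitting
```

In a real implementation each node would carry a priority (from distance, direction of travel, and resolution), and independent subtrees are exactly what makes work stealing effective.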
I doubt you will fly at more than 3000 m/s below 30 km altitude. So at 30 m resolution you would need a new set of full tile data every 10 seconds - ten seconds for loading 12 tiles. If you reduce resolution with speed to 90 m, you would only need to load 4 tiles in the same period.
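The implied I/O throughput is modest, which is easy to check (2 MB per tile follows from the 1000x1000 16-bit samples mentioned earlier; the 12- and 4-tile counts are taken from the text):

```python
# At 3000 m/s you cross one 30 km tile every 10 seconds, and each
# 1000x1000 16-bit tile is 2 MB, so the required load rate is small.
TILE_MB = 1000 * 1000 * 2 / 1e6      # 2.0 MB per tile
speed_mps, tile_m = 3000, 30_000
seconds_per_tile = tile_m / speed_mps           # 10.0 s per crossing
full_res_rate = 12 * TILE_MB / seconds_per_tile # 12 tiles at 30 m
reduced_rate = 4 * TILE_MB / seconds_per_tile   # 4 tiles at 90 m
print(seconds_per_tile, full_res_rate, reduced_rate)  # 10.0 2.4 0.8 (MB/s)
```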
Also, you don't need a large overhead of lower-resolution tiles on your HD for this process; you can skip some of the lower resolutions. Beyond 270 m resolution (270 km tile width) and 810 m (810 km tile width) there won't be much demand on Earth or Venus (about ninety 810 km tiles would be needed to cover one hemisphere of Earth, or ten tiles at 2430 m resolution). Even fewer on the Moon or Mars.
And you could do even more optimization:
At low viewing angles, you only see each tile from the side and with low detail. You could replace the terrain in rendering with one of 8 simple rectangles (N, E, S, W, NE, SW, NW, SE) carrying prerendered textures. And this case can occur at pretty low distances already: if you take 10° as the criterion, you would get the chance for this reduction at:
Altitude (km) | Limit at 10° (km) | Limit at 5° (km)
            1 |               5.6 |             11.4
            2 |              11.2 |             22.8
            5 |                28 |               57
           10 |                56 |              114
           20 |               112 |              228
           30 |               168 |              342
           50 |               280 |              570
          100 |               560 |             1140
Or already at the next or second-next ring of tiles, depending on altitude and base resolution.
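The table is just trigonometry: a tile at ground distance d is seen from altitude h at an angle of atan(h/d) above the horizon, so the cutoff distance for a given angle is h/tan(angle). The table's factors (about 5.6 and 11.4 km per km of altitude) are slightly rounded versions of 1/tan(10°) and 1/tan(5°):

```python
import math

def impostor_limit_km(altitude_km, angle_deg):
    """Distance beyond which a tile is seen at less than angle_deg
    from above and could be replaced by a prerendered rectangle."""
    return altitude_km / math.tan(math.radians(angle_deg))

for h in (1, 2, 5, 10, 20, 30, 50, 100):
    print(h, round(impostor_limit_km(h, 10), 1),
             round(impostor_limit_km(h, 5), 1))
```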