1 

Implementation and testing of variable-scale topological data structures: Experiences with the GAP-face tree and GAP-edge forest
With the increasing availability of large-scale geographic data sets and the rise of widespread computer networks such as the Internet, the need has arisen to transfer these data over such networks. The networks form the basis for a Geographic Information Infrastructure (GII), in which data users, data providers and data producers are connected with each other. There is also a need to offer these data at several scales to end users, for example to first get an overview of an area. Because geographical information is now sent over computer networks, and large-scale geographical information involves large data volumes, data reduction must take place to prevent the transfer from taking too much time. Generalization of geographical information is one means to achieve this reduction. Generalization is the selection and simplification of detail appropriate to the scale and/or purpose of the map. Applying generalization demands choices with respect to which geographical objects are selected and simplified, and how this selection and simplification should take place. Moreover, the surroundings of the objects to be generalized are often taken into account in the generalization process, which makes the complete process require even more time. As a result, the complete process cannot be carried out in real time. Earlier, reactive data structures, in which geographical information is stored in the computer at several levels of detail, were proposed as a solution to allow the use of generalized large-scale geographical information in real time. So far, these data structures stored geometry redundantly. For this reason a new conceptual model has been developed, in which a number of existing data structures are combined into two new data structures, namely the GAP-face tree and the GAP-edge forest (described in Van Oosterom, 2005).
The complete structure is termed the tGAP structure, in which tGAP stands for topological Generalized Area Partitioning. The tGAP structure had not been theoretically verified, implemented or tested. Therefore, the objective of this research is to theoretically verify the data structures and to test them against requirements such as loading time and storage capacity. To reach this objective, a literature study was performed in the fields of generalization, database management systems and the data structures themselves. Moreover, a prototype was built with which the data structures were implemented in a mainstream database management system (DBMS) with spatial data types. The literature study showed that generalization is a key issue in the complete process of obtaining and processing geo-information, and that reactive data structures are a suitable option to offer the results of generalization within a GII in real time. The implementation of a prototype showed that it is possible to implement the data structures in a mainstream DBMS. The data structures were implemented in Oracle Spatial and, by means of the Apache web server, made accessible to Google Earth, a geographical viewer. The data structures in the prototype make it possible to view the geographical data interactively within the viewer, independent of the size of the area to be loaded. The final conclusion is that, with some workarounds and some changes to the proposed conceptual model, it is possible to implement the model as described in (Van Oosterom, 2005). With an implementation it becomes possible to show geographical data at a variable number of detail levels, and the implementation shows that the data structures can provide the desired data reduction within a GII in real time.

[PDF]
[Abstract]

2 

Cache-friendly progressive data streaming with variable-scale data structures
In this paper, we describe the design of a fat client and study some implementation issues in using the tGAP structures for progressive data streaming. This is an experiment to validate the theory of Haunert et al. (2009). Furthermore, this theory is extended and a solution is proposed to make the progressive data streaming more cache-friendly by means of a Fieldtree.

[PDF]
[Abstract]

3 

Variable-scale Geoinformation
The use of geoinformation is changing with the advent of new mobile devices, such as tablet PCs, that harness a lot of computing power. This type of information is more and more applied in mainstream digital consumer products in a net-centric environment (i.e. dissemination takes place via the Internet), and the advances in mobile hardware have also changed the way people can interact with the geographic information at hand, compared to 'old-fashioned' paper maps.
However, current state-of-the-art solutions for storing, maintaining and disseminating digital maps still mimic the analogue map-series concept, in the sense that for every map scale in the series (e.g. 1:25K, 1:50K, 1:250K) a different digital copy with independent data is kept and maintained at the producer's site. The challenge of this work was to arrive at a representation of the real world with a gradually changing level of detail, instead of representations with discrete levels of detail (organised in multiple, independent layers, each layer representing only one resolution level).
Varioscale data structures try to avoid this redundancy of the geometric description of the map by storing, for any element of a lower level of detail, references to the composing map elements of the highest level of detail. An example of varioscale data structures is the tGAP data structures. In addition to the geometry and references, an importance value is stored for every object; based on this importance value, different representations (in which the level of detail changes gradually) can be derived on the fly from these structures, according to the needed level of detail.
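To make this idea concrete, here is a minimal sketch in Python of how stored importance values allow different representations to be derived on the fly. The flat record layout and all names are illustrative assumptions, not the actual tGAP schema: each face is valid for a range of importance levels, and a "slice" at a given level returns exactly the faces whose range contains it.

```python
# Minimal sketch of deriving a map from a tGAP-like face hierarchy
# (illustrative layout, not the actual tGAP schema): each face stores
# the importance range for which it is valid; a slice at importance
# level s returns the faces whose range contains s.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Face:
    face_id: int
    imp_low: float         # face appears at this importance (detail) level
    imp_high: float        # face is merged into its parent above this level
    parent: Optional[int]  # face it merges into (None for the root face)

def slice_at(faces, s):
    """Return the ids of the faces forming the map at importance level s."""
    return [f.face_id for f in faces if f.imp_low <= s < f.imp_high]

# Toy hierarchy: faces 1 and 2 merge into face 3 at importance 0.5.
faces = [
    Face(1, 0.0, 0.5, parent=3),
    Face(2, 0.0, 0.5, parent=3),
    Face(3, 0.5, 1.0, parent=None),
]

print(slice_at(faces, 0.2))  # most detailed map: [1, 2]
print(slice_at(faces, 0.8))  # generalised map: [3]
```

Because only references and importance ranges are stored, the detailed geometry exists once, and any intermediate level of detail is a query rather than a separate stored copy.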
The overall aim of this research has been to investigate variablescale geoinformation, by defining theoretical underpinnings of varioscale geoinformation and improving the initial tGAP structures. The objective we had with this research is expressed in the main question, which was formulated as:
How can we realise improved varioscale geoinformation having minimal redundancy?
The overall outline of the research design draws heavily upon the paradigm of design research. In an iterative fashion we performed theory building, prototype developments and experiments with real world data sets. Over the course of this research, we have made the following main contributions to the design of a varioscale geoinformation environment. We have:
- formalised the concept of variable-scale data as a conceptual 3D model (the space-scale cube, SSC), in which 2D space and 1D scale are integrated;
- shown for the tGAP data structures how minimal data redundancy can be obtained when applying a merge operation, and how to perform parallel simplification of lines without introducing unwanted topological errors, and proposed a split operation, for which the impacts on the designed data structures were analysed;
- shown how to derive from the structures a 2D map with a particular number of objects, as well as investigated progressive data streaming;
- proposed an improved way of generating data so that even smoother graphic transitions can be derived for visualisation.
The main conclusions that can be drawn from these contributions are:
- With the concept of the proposed space-scale cube (SSC) we have formalised what varioscale vector data entails. In a sense, the improved design of the tGAP data structures can be seen as a lossless encoding of the data that is captured for an SSC;
- To make varioscale geoinformation operational, we need specific generalisation operations. These varioscale generalisation operations should be designed carefully, to be able to give guarantees on the amount of data to be stored and to output topologically consistent varioscale data;
- Although the improved tGAP structures are capable of providing a smooth zooming end-user experience, we still store and visualise discrete steps, albeit smaller and more local than is common with current state-of-the-art solutions. Therefore we proposed how the smoothness of the varioscale data can be improved (in the smooth SSC, a small step in scale leads to a small change in the derived 2D map). A novelty of this approach is that, as it is one integrated space-scale partition, using a non-horizontal slice plane leads to a valid, mixed-scale planar partition; this is useful for 3D computer graphics (far away from an observer having less detail than close by).
Although this research has generated some knowledge for a varioscale environment, it also paves the way for future research. The main recommendations for future work are:
- Investigate how to deal with very large data sets that do not fit in main memory (during the generalisation process or during visualisation);
- The smooth encoding of the SSC has the same building challenge as the classic tGAP with respect to applying the right sequence of generalisation operators (remove or merge, collapse or split, simplify) to obtain maps with sufficient cartographic quality;
- Another point for further research is smooth interactions: it is important to know how users perceive these. The same holds for mixed-scale slices (in a 3D world);
- The focus of this research has been mostly on obtaining and viewing varioscale data. Performing analysis with varioscale data is another interesting aspect that deserves attention; e.g. varioscale data could be of help in data integration;
- Investigate how to make the structures dynamic: currently the tGAP structure (including the new smooth variant) is a static structure and has to be rebuilt if the source data changes. Being able to perform incremental updates (partially re-generalising data for a new situation) would be beneficial as data volumes increase. Related to this is the higher dimensionality of smooth, varioscale data (e.g. 3D data), leading to integrated 5D data management (integrating the dimensions of space (2D or 3D), time (updates, 1D) and scale (level of detail, 1D)).

[PDF]
[PNG]
[PDF]
[Abstract]

4 

Extruding building footprints to create topologically consistent 3D city models
One of the simplest methods to construct a 3D city model is to extrude building footprints, to obtain "block-shaped" buildings. While the method is well-known and easy to implement, if the topological relationships between the footprints are not taken into account, the resulting city models will not necessarily be topologically consistent. As a result, the model will be of little use for most applications, besides visualisation that is. In this paper, we present a new extrusion algorithm to construct topologically correct 3D city models. It is conceptually simple and permits us to create city models in different formats (e.g. CityGML). We have implemented the algorithm, tested it for the creation of the model of our university campus, and validated it by constructing the constrained Delaunay tetrahedralization.

[PDF]
[Abstract]

5 

Topologically consistent 3D city models obtained by extrusion
One of the simplest methods to construct a 3D city model is to extrude building footprints to obtain "block-shaped" polyhedra representing buildings. While the method is well-known and easy to implement, if the 2D topological relationships between the footprints are not taken into account, the resulting 3D city models will not necessarily be topologically consistent (i.e. primitives shared by 3D buildings will be duplicated and/or intersect each other). As a result, the model will be of little use for most applications, besides visualisation that is. In this paper, we present a new extrusion procedure to construct topologically correct 3D city models. It is based on the use of a constrained triangulation, is conceptually simple, and offers great flexibility to create city models in different formats (e.g. CityGML or a surface-based representation). We have implemented the procedure, tested it with real-world datasets, and validated it.
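The basic extrusion step can be sketched as follows. This is a hypothetical helper for illustration, not the paper's procedure: the full method additionally uses a constrained triangulation and generates walls shared by adjacent footprints only once, from the shared 2D edge, which is what makes the result topologically consistent.

```python
# Sketch of naive footprint extrusion (illustrative helper, not the
# paper's constrained-triangulation-based procedure): a 2D footprint
# ring becomes the bounding faces of a block-shaped building.

def extrude(ring, height):
    """ring: list of (x, y) vertices, counter-clockwise, not closed."""
    floor = [(x, y, 0.0) for x, y in reversed(ring)]  # reversed so the face is oriented downward
    roof = [(x, y, height) for x, y in ring]
    walls = []
    n = len(ring)
    for i in range(n):
        (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % n]
        walls.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    return [floor, roof] + walls

faces = extrude([(0, 0), (10, 0), (10, 5), (0, 5)], 3.0)
print(len(faces))  # 6 faces: floor, roof and 4 walls
```

Applied per footprint in isolation, as here, adjacent buildings get duplicated walls along shared boundaries; detecting and merging those shared primitives is exactly the consistency problem the paper addresses.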

[PDF]
[Abstract]

6 

Validation of planar partitions using constrained triangulations
Planar partitions—full tessellations of the plane into non-overlapping polygons—are frequently used in GIS to model concepts such as land cover, cadastral parcels or administrative boundaries. Since in practice planar partitions are often stored as a set of individual objects (polygons) to which attributes are attached (e.g. stored in a shapefile), and since different errors/mistakes can be introduced during their construction, manipulation or exchange, several inconsistencies often arise in practice, for instance overlapping polygons, gaps and unconnected polygons. We present in this paper a novel algorithm to validate such planar partitions. It uses a constrained triangulation as a support for the validation and permits us to avoid different problems that arise with existing solutions based on the construction of a planar graph. We describe in the paper the details of our algorithm, our implementation, how inconsistencies can be detected, and the experiments we have made with real-world data (the CORINE2000 dataset).

[PDF]
[Abstract]

7 

The space-scale cube: An integrated model for 2D polygonal areas and scale
This paper introduces the concept of a space-scale partition, which we term the space-scale cube – analogous to the space-time cube (first introduced by Hägerstrand, 1970). We take the view of 'map generalization is extrusion of 2D data into the third dimension' (as introduced by Vermeij et al., 2003). An axiomatic approach formalizes the validity of the partition of space in three dimensions (2D space plus 1D scale). Furthermore, the paper provides insight into how to: 1. obtain valid data for the cube; 2. obtain a valid 2D polygonal map at variable scale from the cube; and 3. exploit the other possibilities the cube brings for obtaining maps having different map scales over their domain (which we term mixed-scale maps).
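Point 2 can be illustrated with a small sketch, under the simplifying assumption (names and layout are ours, for illustration) that a vertex of an area object traces a straight line through (x, y, scale) space between two key scales; obtaining its 2D position on a horizontal slice then reduces to linear interpolation.

```python
# Illustrative sketch of slicing the space-scale cube: a vertex of an
# area object is assumed to move linearly through (x, y, scale) space,
# so a horizontal slice at scale s interpolates its 2D position.

def slice_vertex(p_low, p_high, s_low, s_high, s):
    """Interpolate a vertex of an SSC polyhedron at scale s,
    given its positions p_low at scale s_low and p_high at s_high."""
    t = (s - s_low) / (s_high - s_low)
    return (p_low[0] + t * (p_high[0] - p_low[0]),
            p_low[1] + t * (p_high[1] - p_low[1]))

# A vertex moving from (0, 0) at scale 0 to (4, 2) at scale 1:
print(slice_vertex((0, 0), (4, 2), 0.0, 1.0, 0.5))  # (2.0, 1.0)
```

Slicing every boundary of the cube this way yields the variable-scale 2D map; a slice surface whose scale value varies over the 2D domain yields the mixed-scale maps mentioned under point 3.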

[PDF]
[Abstract]

8 

A Voronoi-Based Approach to Generating Depth-Contours for Hydrographic Charts
We introduce a new approach for the generation and the generalisation of visually smooth depth-contours for hydrographic charts. Unlike most current approaches, it strictly respects the safety constraint that dictates that the resulting chart may not indicate a depth deeper than originally measured. The main idea is to construct a smooth surface using a Voronoi-based interpolation method. This surface is represented using a triangulation, modified using a series of generalisation operators, and ultimately depth-contours are extracted directly from this surface. We report on experiments made with real-world datasets, and we compare our results with existing approaches.

[PDF]
[Abstract]

9 

Varioscale data structures supporting smooth zoom and progressive transfer of 2D and 3D data

[PDF]

10 

Varioscale data structures for 2D and 3D geoinformation

[PDF]

11 

Method and system for generating maps in an N-dimensional space
A method for generating a varioscale visual representation of n-dimensional objects (together forming a space partition) is presented, comprising the steps of:
- generating a higher and a lower detailed n-dimensional object representation, each comprising digital data representing objects by zones in said n-dimensional object representations, said zones being delimited by (n-1)-dimensional boundaries having at least one boundary segment;
- positioning the higher and lower detailed object representation in an (n+1)-dimensional space, having in addition to the dimensions of the n-dimensional object representations an additional dimension, wherein the higher and the lower detailed n-dimensional object representations are assigned a first and a second value for said additional dimension respectively;
- constructing an (n+1)-dimensional object representation by creating trans-scale boundary segments between mutually corresponding boundary segments in the higher detailed and the lower detailed n-dimensional representation;
- determining an intermediate n-dimensional representation by calculating a cross-section between an n-dimensional slicing object and the constructed trans-scale boundaries.

[PDF]
[Abstract]

12 

Towards a true varioscale structure supporting smooth zoom
This paper presents the first true varioscale structure for geographic information: a delta in scale leads to a delta in the map (and smaller scale deltas lead to smaller map deltas, up to and including the infinitesimally small delta) for all scales. The structure is called smooth tGAP and its integrated 2D space and scale representation is stored as a single 3D data structure: the space-scale cube (SSC). The polygonal area objects are mapped to polyhedral representations in the smooth tGAP structure. A polyhedral primitive integrates all scale representations of a single 2D area object. Together, all polyhedral primitives form a partition of the space-scale cube: no gaps and no overlaps (in space or scale). Obtaining a single-scale map amounts to computing a horizontal slice through the structure. The structure can be used to implement smooth zoom in an animation or morphing style. The structure can also be used for mixed-scale representations: more detail near the user/viewer, less detail further away, obtained by taking non-horizontal slices. For all derived representations, slices and smooth-zoom animations, the 2D maps are always perfect planar partitions (even mixed-scale objects fit together and form a planar partition). Perhaps mixed-scale is not very useful for 2D maps, but for 3D computer graphics it is one of the key techniques. Our approach also works for 3D space and scale integrated in one 4D hypercube.
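A non-horizontal slice can be sketched as a scale value that varies over 2D space, so detail decreases with distance from a viewpoint. The function below is a toy illustration under our own assumptions (a linear ramp between a near and a far scale), not the slicing geometry used in the smooth tGAP implementation.

```python
# Toy sketch of a tilted (non-horizontal) slice through the space-scale
# cube: the slice's scale value is a function of 2D position, so the
# derived map has more detail near a viewpoint (linear ramp assumed).

def tilted_scale(x, y, viewpoint, s_near, s_far, max_dist):
    """Scale value of the tilted slice surface at position (x, y)."""
    dx, dy = x - viewpoint[0], y - viewpoint[1]
    d = min((dx * dx + dy * dy) ** 0.5, max_dist)
    return s_near + (s_far - s_near) * d / max_dist

# Near the viewer the slice sits at a detailed scale, far away at a
# generalised one:
print(tilted_scale(0, 0, (0, 0), 0.0, 1.0, 100.0))    # 0.0 (full detail)
print(tilted_scale(100, 0, (0, 0), 0.0, 1.0, 100.0))  # 1.0 (most generalised)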

[PDF]
[Abstract]

13 

3D visualisation of underground pipelines: Best strategy for 3D scene creation
Underground pipelines pose numerous challenges to 3D visualization. Pipes and cables are conceptually simple and narrow objects with clearly defined shapes, spanning large geographical areas and made of multiple segments. Pipes are usually maintained as linear objects in databases. However, lines visualised in 3D are difficult to perceive, as they lack the volumetric appearance that introduces depth perception and allows understanding the disposition of, and relationships between, the objects on the screen. Therefore the lines should be replaced by volumetric shapes, such as parametric shapes (cylinders) or triangular meshes. The reconstruction of the 3D shape of the pipes has to be done on the fly, and it is therefore important to select a 3D representation that will not degrade the performance. If a reconstruction method provides good performance, the visualization of pipes and cables is guaranteed to provide a smooth experience to the final user, enabling richer scenes but also establishing the visualization requirements in terms of hardware and software to display underground utilities.
This paper presents our investigations into a strategy for creating 3D pipes for 3D visualisation. It is assumed that the pipelines are stored in a database and that portions of them are retrieved for 3D reconstruction and 3D visualization. Generally, the reconstruction of underground utilities can be performed in different ways and should lead to a realistic appearance, produce visual continuity between segments, include objects depicting specific connections, and even consider buffer volumes displaying the uncertainty and the security distance between objects. The creation of such visually pleasing reconstructions may require very detailed shapes, which will increase the complexity of the scene and degrade the performance. This research has identified four criteria to measure the complexity of the scene and to decide on a 3D reconstruction strategy: the number of scene-graph nodes, the number of triangles and vertices on the screen, the needed transformations, and the appearance options. On the basis of these criteria a testing framework was developed. Ten different strategies for 3D reconstruction are defined and tested for X3D, X3DOM and WebGL. The paper analyses the results of the tests and concludes on the best strategy.
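One reconstruction strategy of the kind compared above can be sketched as replacing a pipe segment by a triangulated cylinder so it gains a volumetric appearance. This is a hypothetical helper, not code from the tested X3D/X3DOM/WebGL framework; for brevity the cross-section rings lie in horizontal planes, which is only correct for vertical segments (a full version would orient each ring perpendicular to the segment direction).

```python
# Hypothetical sketch: triangulate a cylinder around a pipe segment so
# the line gains volume. Rings are horizontal for brevity, so this is
# exact only for vertical segments.

import math

def tube_segment(p0, p1, radius, sides=8):
    """Return the triangles of a `sides`-sided tube around segment p0-p1."""
    rings = []
    for p in (p0, p1):
        ring = []
        for i in range(sides):
            a = 2 * math.pi * i / sides
            ring.append((p[0] + radius * math.cos(a),
                         p[1] + radius * math.sin(a),
                         p[2]))
        rings.append(ring)
    tris = []
    for i in range(sides):
        j = (i + 1) % sides
        # two triangles per quad between the two rings
        tris.append((rings[0][i], rings[0][j], rings[1][i]))
        tris.append((rings[0][j], rings[1][j], rings[1][i]))
    return tris

tris = tube_segment((0, 0, 0), (0, 0, 5), radius=0.2)
print(len(tris))  # 16 triangles for an 8-sided tube
```

The `sides` parameter is exactly the kind of knob the paper's complexity criteria measure: more sides means more triangles and vertices on the screen per segment.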

[PDF]
[Abstract]

14 

Generation and generalization of safe depth-contours for hydrographic charts using a surface-based approach
Depth-contours are an essential part of any hydrographic chart: a map of a waterbody intended for safe ship navigation. Traditionally these were manually drawn by skilled hydrographers from a limited set of surveyed depth measurements. Nowadays this process of map making has shifted towards the digital domain, not in the last place because of the sheer size of the point clouds that result from modern surveying techniques such as Multi Beam Echo Sounding (MBES) (see Figure 1a). This is no trivial task, since the produced depth-contours need to comply with the four hydrographic generalization constraints of safety, legibility (smoothness), topology and waterbody morphology. The challenge is to solve the generalization problem for these four constraints.
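The safety constraint alone can be illustrated with a toy sketch. We assume here (our assumption for illustration, not the thesis's surface-based method) that depth is stored as a positive value increasing downward, so being safe ("shoal-biased") means a generalised surface may move a sounding shallower but never deeper than it was measured.

```python
# Toy sketch of a safety-respecting smoothing pass (illustration only,
# not the thesis's surface-based method): with depth positive downward,
# the smoothed value at each sounding is clamped so the chart never
# indicates more water than was actually measured there.

def safe_smooth(depths, neighbours):
    """One averaging pass; the result may only move shallower."""
    new = {}
    for v, d in depths.items():
        avg = sum(depths[u] for u in neighbours[v]) / len(neighbours[v])
        new[v] = min(avg, d)  # clamp: never deeper than the measurement
    return new

depths = {"a": 5.0, "b": 9.0, "c": 7.0}
neighbours = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(safe_smooth(depths, neighbours))  # {'a': 5.0, 'b': 6.0, 'c': 7.0}
```

The remaining three constraints (legibility, topology, waterbody morphology) are what make the full problem hard: a clamp like this keeps the surface safe but by itself does not guarantee smooth, topologically correct contours.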

[PDF]
[Abstract]

15 

2D varioscale representations based on real 3D structure
This paper focuses on 3D data structures supporting an alternative approach for creating 2D varioscale maps. The smooth animated zooming functionality has led us to investigate a volumetric representation of gradually changing varioscale objects. In this paper, the principle of varioscale maps, together with theoretical knowledge about the 3D representation (2D space plus 1D scale), is briefly introduced, followed by a description of our initial implementation efforts in creating a development environment for our research, which is based on the conversion from the classic tGAP data structure, through tetrahedralization, to a volumetric representation. The current results and examples of this volumetric representation with real-world data, and plans for future improvements, are presented.

[PDF]
[Abstract]

16 

Parallel creation of varioscale data structures for large datasets
Processing massive datasets that do not fit in the main memory of a computer is challenging. This is especially true in the case of map generalization, where the relationships between (nearby) features in the map must be considered. In our case, an automated map generalization process runs offline to produce a dataset suitable for visualisation at arbitrary map scale (varioscale), efficiently enabling smooth zoom user interactions over the web. Our solution for generalizing such large vector datasets is based on the idea of subdividing the workload according to the Fieldtree organization: a multi-level structure of space. It subdivides space regularly into fields (grid cells), at every level with a shifted origin. Only features completely fitting within a field are processed. Due to the Fieldtree organization, features on the boundary at a given level will be contained completely in one of the fields of the higher levels. Every field that resides at the same level in the Fieldtree can be processed in parallel, which is advantageous for processing on multi-core computer systems. We have tested our method with datasets with up to 880 thousand objects on a machine with 16 cores, resulting in a decrease of runtime by a factor of 27 compared to a single sequential process run. This more-than-linear speedup also indicates an interesting algorithmic side-effect of our approach.
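The workload subdivision described above can be sketched as follows. Layout details, such as shifting each level's grid origin by half a cell of that level, are our own assumptions for illustration, not the exact Fieldtree parameters used in the paper: a feature is assigned to the finest level whose shifted grid has a cell that completely contains the feature's bounding box.

```python
# Sketch of Fieldtree-style workload assignment (illustrative layout:
# each level's grid origin is shifted by half a cell of that level).
# A feature goes to the finest level whose grid fully contains it.

def field_for(bbox, cell_sizes):
    """bbox = (xmin, ymin, xmax, ymax); cell_sizes from fine to coarse.
    Returns (level, (cx, cy)) of the containing field, or None."""
    xmin, ymin, xmax, ymax = bbox
    for level, size in enumerate(cell_sizes):
        shift = size / 2.0  # shifted origin at this level
        cx = int((xmin - shift) // size)
        cy = int((ymin - shift) // size)
        # does the feature fit entirely in cell (cx, cy) of this level?
        if xmax <= (cx + 1) * size + shift and ymax <= (cy + 1) * size + shift:
            return level, (cx, cy)
    return None  # too large even for the coarsest level

print(field_for((0.6, 0.6, 0.9, 0.9), [1.0, 2.0, 4.0]))  # fits in a level-0 field
print(field_for((0.4, 0.4, 0.6, 0.6), [1.0, 2.0, 4.0]))  # crosses level 0, promoted to level 1
```

Because the origins are shifted per level, a feature straddling a cell boundary at one level lands inside a single field of a coarser level (as with the second bounding box above), and all fields of one level can then be generalized in parallel.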

[PDF]
[Abstract]

17 

Large scale road network generalization for varioscale map
The classical approach to road network generalization consists of producing multiple maps, for different scales or purposes, from a single detailed data source, and quite often roads are represented by line objects. Our target is the generalization of a road network for the whole scale range: from the large scales, where roads are represented as area objects, to the mid and small scales, where roads are represented as line objects. Crucial in our approach is that we use a specific data structure in which, for all map objects, a range of valid map scales is stored. Instead of targeting predefined discrete map scales, we offer the whole range of map scales (for which the process of map generalization was captured). So far the structure has been used exclusively for area features; the exception was recent research where line features (roads) were included in the implementation for the first time. However, this was applied to smaller scales only. Therefore, in this paper we address the first part of the process, the generalization of large-scale road objects, where (road) features change their representation from areal to linear. In our suggested gradual approach this representation change may happen at different 'scale' moments for individual roads (even for parts of the same road element), in contrast to current practices. This is the first time that this aspect is discussed in detail.

[PDF]
[Abstract]

18 

Continuous road network generalization

[PDF]

19 

Research challenges in automated generalisation and cartographic modelling

[PDF]

20 

Applying DLM and DCM concepts in a multiscale data environment

[PDF]
