Seamless Terrain (world terrain mosaics) mind mapping notes working with Unity

An idea sort of popped up here, for terrain mosaics.  I know there's already cheap software for this, but I was inspired to explore the idea of seaming terrain topology myself.  The problem appears to be that raw heightmap importation carries some roundoff error...I've noticed, for instance, that one program yields z coordinate data that has been normalized, so numerical roundoff leads to a lack of precision between terrain maps when raw heightmap data is imported.  The idea here is simply to reconfigure the terrain data so that the edges, plus some threshold of interior points, are smoothed in the process of 'seaming' the data.  Basically this means using some interpolating or curve-smoothing process.  I've considered either a (non-linear) least squares fit or a spline interpolation to do this.  I've additionally considered noising the data, for instance fitting noise between the ideal and original values, so that the 'seam' weld isn't so obvious...otherwise there would be an obvious presence of smooth curvature along these seam contours.  For now, though, the first step is to see what a cubic spline or non-linear least squares can provide here.

The rough idea of the process is pretty simple at this point.  I'd likely be choosing a curve-fit process along a given axis normal to the weld: if the weld is yz planar we'd fit contours in the xz plane, or conversely in the yz plane if the weld is in the xz plane.  For cubic interpolation, the process is basically building a cubic function from the corresponding x (or y) values and z values, together with slopes computed from them, and assembling the resulting linear system as a matrix; the algorithm then solves that system (for example by inverting the built matrix).  Once the cubic equation is solved, we re-apply the given x or y values to the fitted cubic to read off the smoothed z values.  For the cubic spline we could choose a method which smooths the incongruities by alternating rows of parameter points, for instance: if we take every odd or even row, we fill in the corresponding point on the skipped row.  With sufficiently randomized data I hadn't foreseen problems like terrace-banding effects, where slope values match on non-consecutive rows but interior slope values do not; with highly regularized data, though, this could produce some obvious visual artifacts.  The other option is least squares, which is basically more of a denoising and smoothing polynomial approximation process.  The catch with least squares is that I'd want to apply it to interior points but not necessarily at a given weld; at the weld itself it seems jump discontinuities would still need to be addressed, either by computing the least squares fit over both neighboring terrains simultaneously, or by accepting a bit of abruptness where a weld point is forced.  Spline has the advantage of preserving the end points of a given curve when approximating.
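To make the matrix step concrete, here is a minimal C# sketch (not the project's actual code) of fitting one cubic z = ax^3 + bx^2 + cx + d to two end nodes and their slopes, using MathNet.Numerics for the 4x4 solve; the class and method names are placeholders of my own.

using MathNet.Numerics.LinearAlgebra;

public static class CubicFit {
    // Solve for (a, b, c, d) in z = a*x^3 + b*x^2 + c*x + d, given the two end
    // nodes (x0, z0), (x1, z1) and the slopes m0, m1 we want at those ends.
    public static double[] HermiteCoefficients(double x0, double z0, double m0,
                                               double x1, double z1, double m1) {
        var A = Matrix<double>.Build.DenseOfArray(new double[,] {
            { x0 * x0 * x0, x0 * x0, x0, 1 },   // p(x0)  = z0
            { x1 * x1 * x1, x1 * x1, x1, 1 },   // p(x1)  = z1
            { 3 * x0 * x0,  2 * x0,  1,  0 },   // p'(x0) = m0
            { 3 * x1 * x1,  2 * x1,  1,  0 }    // p'(x1) = m1
        });
        var rhs = Vector<double>.Build.Dense(new[] { z0, z1, m0, m1 });
        return A.Solve(rhs).ToArray();          // (a, b, c, d)
    }

    // Evaluate the fitted cubic at an (interior) x position.
    public static double Eval(double[] c, double x) {
        return ((c[0] * x + c[1]) * x + c[2]) * x + c[3];
    }
}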

Ideally the welding threshold generalizes the smoothing process well enough that there isn't a visible 'banding' region along the interconnect between terrains.  Noisier jumps may still be fine along the weld, so long as they are random in nature.  Which leads to the other process of curve approximation: we could also alternate how the approximation is applied, maybe through randomized weighting, if more advanced least squares or spline fitting approaches are taken.

Start-stage workflow.  The easiest approach I've found is to work from a specific example and test in stages: ensure the interpolation methods work on test cases, starting from the simple case, then work a batch case example, then connect a two-terrain region on a specific test case, and finally generalize these results to all other cases.

So the start here.

1. Implement cubic spline on a test case.

   a.  Preprocessing:  Extract first or second derivative data for the case.  If using first derivative data, we can approximate it from our node points; the approximation will include an average slope computed from a neighboring node.  In this case we can take the slope computed from the interior nodal point approaching a given node end point.  These computed slopes represent our end-point boundary conditions.  If we use a higher-order derivative we would, I believe, need to extend the computation beyond the single neighboring node point, since we need the average change in slope.  The spline function will require 4 arguments, namely an (x, z) coordinate tuple for each end point and a boundary-condition argument for each end point.  Build a function which includes this cubic-spline preprocessing from our chosen coordinate end points.

   b.  Implement the spline function.

   c.  Obtain the polynomial coefficients from the spline function's return value.

   d.  Compute interior node z positions from the returned spline coefficients.  Remember the coefficients describe a cubic polynomial of the form ax^3 + bx^2 + cx + d, so we supply (a, b, c, d), the values returned from the spline function, to this polynomial form.

   e.  Record this data and visualize it graphically to confirm the spline function is working.

2. Build the batch test case.

   a.  Sequentially we can apply the computational process from (1) as we work through the batch case, ideally choosing a set of 10 nodes(?), 100(?), or whatever seems sufficient.  It might help to have some visualization aids to confirm results on data with noise.  This node batch is implemented for a single planar yz dimension, in other words, performed on a single contour.  So for our given node end points, how many interior node points do we have?  Do we notice any problems with, say, 1 interior node point versus 2 interior node points?  How do we handle computation on the one-dimensional contour?  That question is answered next.

   b.  Let's iterate over the contour of points, applying (1) procedurally as we move along the contour.  This should set up the implementation for the next step.

3.  Implement the multi-dimensional array.

   a.  We need to compute across contours.  This should be fairly straightforward: we are simply iterating across the 'width' of the terrain and sequentially applying (2) along each contour as we move across the width of the terrain (a sketch follows below).

This is a good foundation for the first several days.
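As a rough sketch of steps 2 and 3, assuming a heights[width, length] array, a falloff threshold measured in samples from the seam edge, and the CubicFit helper sketched above (all placeholder names, not the final implementation), the contour iteration might look like:

// Steps 2-3 sketch: for each contour running perpendicular to the seam, fit one
// cubic over the falloff threshold and overwrite the interior samples with it.
// Assumes heights[row, i]: row indexes contours across the terrain width,
// i indexes samples along the contour with i = 0 at the seam edge, and
// threshold is less than the number of samples along the contour.
public static void SmoothSeamRegion(float[,] heights, int threshold) {
    int width = heights.GetLength(0);

    for (int row = 0; row < width; row++) {
        // end nodes of the falloff region on this contour
        double z0 = heights[row, 0];
        double z1 = heights[row, threshold];

        // crude end slopes from the immediate neighbours (the boundary conditions)
        double m0 = heights[row, 1] - heights[row, 0];
        double m1 = heights[row, threshold] - heights[row, threshold - 1];

        double[] c = CubicFit.HermiteCoefficients(0, z0, m0, threshold, z1, m1);

        // replace interior samples with the smoothed cubic values
        for (int i = 1; i < threshold; i++)
            heights[row, i] = (float)CubicFit.Eval(c, i);
    }
}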

Challenges…on second thought (not experimentally derived).  I am thinking about the banding problem in a number of ways here.  First, there needs to be enough node distance for the smoothing cubic to provide smooth slope gradation between the terrain edge being conformed to its neighbor and the threshold boundary where the falloff ends and existing terrain z values are left unchanged.  As I visualize the problem, we conform a given terrain edge boundary to the neighboring region by translating it, up to a falloff threshold, and then apply a c-spline interpolated region with defined falloff parameters.  These parameters consist of the original terrain z points on one node edge, the translated (conformed) points on the opposite edge, and boundary conditions supplied at each node edge.  Technically we need not supply all internodal points between the threshold edge points; we just need an approximating c-spline polynomial over an adequate falloff distance.  The c-spline polynomial approximates a smooth curve along this falloff region.

The biggest problem with increased distance, however, is curve stability: cubics can match first, second, or third derivative conditions alongside point values, but show marked oscillatory instability as the distance between the constraining nodes increases.  So to ensure curve stability we may need an added approach which better approximates the internodal points of the falloff region, so the curve conforms smoothly to a given falloff type.  Unfortunately the points inside the threshold region are unknowns: we don't want to keep the original z values there, but we want to restrict the new values to adequate falloff parameters.  If we have a falloff curve in mind, in terms of shape, part of our problem may be solved.  For instance, could we use something like a search algorithm which quickly finds points that maintain the boundary conditions, something like Newton's method, or a search that conforms points to a given curve type?!  That is, we know a matched point fits the criterion of matching the slope of our defined curve, at the nodal scale of our falloff curve type.  Another approach for the internodal threshold region could be something like a least squares algorithm, smoothing out variational differences against a defining curve of a specified order.  Other possibilities?!  Over the given threshold region, we could provide a smoothing translation of points, scaling the translation so as to preserve the original contour curvature while creating a falloff along the translation curvature.  Where a given threshold region would otherwise show an obvious step, we grade the translation along a cubic or quadratic falloff type (ideally not linear or step in nature, though linear may possibly work).  In this case the problem works on a given translation factor rather than on the points themselves, and for a given translation curvature the problem reduces to a set of knowns for our translation factors.
We can have prescribed points and boundary conditions (i.e., first, second, or third derivative) in place, forming the constraints for each nodal point on the translation falloff curve.  The scale of such falloff is fairly easy to determine here: for the series of node points across the threshold, the first node represents the origin (zero translation), while the full z translation is given by the step distance between the terrain point position values at the matched edges.  One method I can think of, geometrically, involves a scale-invariant transformation of a falloff curve of a known, prescribed type.  Performing the transformation on the set of interpolating polynomial coefficients preserves the curve's shape but leaves the nodal points changed; this isn't exactly a problem, though, since we can find the approximating curves between the rescaled nodal points under the prescribed constraints given above, and from a given rescaled polynomial simply compute z position values at the defined node position spacing needed.

So here is the idea:  I want to do an invariant point-to-point transformation of a given curve, on a normalized curve set.  (By 'normalized' I mean my own definition: a base curve built from the prescribed condition that the points [0,0] and [1,1] are the end points of the curve, with user-supplied first derivative (slope) boundary conditions.)  I won't be rescaling the polynomial in this case, but a set of points, so that the geometry of the curve through those points is preserved (it looks exactly the same from point to point) under the scaling transformation.  Ideally geometry is preserved by using a matching scaling parameter for both x and y values (or similarly in 3 dimensions).  Now, the problem is that I am not solving for a scaling parameter and determining a new cubic polynomial which passes through the rescaled points; instead I am rescaling points on the curve, which ideally provide enough information to preserve the curve's geometry.  So there is a potential problem: when the points are rescaled, they may not land on our terrain's node point geometry.  This ostensibly means we need a c-spline approximation from rescaled point to rescaled point, and from these c-spline equations we can find, for a corresponding terrain node position, its z position on the approximating c-spline curve.  The nice thing is that since geometry is preserved, the first derivative values from point to point are preserved too, meaning that for a given point set we can take the first derivative slope values from before the scaling transformation and apply them in the c-spline approximation.  So once we have node points and corresponding z position values in a given intranodal threshold region, we can determine our z position translation along that node region.  Why the fuss over using a 'normalized curve'?  It appears potentially much easier to control curve behavior in a restricted node region than to deal with curve variations over larger regions: interpolating a curve with higher-than-cubic orders can lead to curves that oscillate up and down between node points while still matching the node point conditions, and potentially (under large node spacing) we run into things like curve flattening over more extreme distances, as opposed to more tightly controlling curve behavior with smaller node spacing.  By preserving geometry where node spacing is controlled, we can translate this to larger or smaller node distances.

Next problem (just occurred to me):  terrain heights are given as normalized values ranging between 0 and 1, but terrain coordinates are not.  This is problematic, since a set of spline functions will interpolate between a range of node points only between 0 and 1 while keeping the geometry of the threshold curve invariant under scaling.  Hmm…this generally means we'd need to rescale the geometry of the threshold curve so that it matches a given x or y terrain node set, and differently for the z position values.  Unfortunately that leaves the threshold curve variant under these conditions.  So potentially I'd consider a base threshold whose x node range is some n greater than 1, using an n-node c-spline function set; with that set we can leave scaling invariant under the transformation.  The problem, however, comes with some added conditions: namely, we'd like the threshold boundaries to match (on the given terrain coordinate), and since the scaling factors determined along the contours are highly likely to vary, we are left with the x or y terrain node coordinate sets varying in length under scaling.

This leads me to think that it may be better simply to dispense with any notion of scaling a curve by the methods above, and instead create the necessary n-node c-spline set interpolated for each contour, using a contour-defined maximum as an end point parameter.  Avoiding scaling, we ensure the threshold curve is matched to the n-node x or y contour, and that matching boundary conditions are met for curve generation.


8/28 notes:  Today revisiting the idea of the warp scale transformation.  While this warps the curve's point geometry along the length dimension of the threshold curve, it still seems satisfactory as an interpolating method for one particular reason: while some loss of the curve's original shape is inevitable as contour length increases, under scaling we should be able to pick the generated point set and length scaling factor such that the scaled points land on the desired terrain node values.  So we don't need to re-interpolate the curve points after scaling in order to recompute position points aligned to terrain nodes; this happens automatically as we choose/compute the right scaling factors.  The downside, as mentioned, is a warp of the falloff curve which becomes more pronounced as the difference between the x (or y) and z scaling factors increases.  For any such transformation we are probably going to see compression of the threshold curve coupled with a stretching factor.  Given a large difference in scale between length and height distance, we can run into the impossibility of finding a curve to a desired specification; for instance, with scale factor sz = 0.8 while sx = 10, sampling a polynomial to produce a circle over the entire scale length (with preserved geometry) is impossible for the boundary conditions supplied above.
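A small sketch of this warp scaling, assuming the curve point set was generated on the normalized [0,1] domain and range described earlier (placeholder names, not the project's exact code):

// Scale a point set generated on the normalized curve (x and z both in [0,1])
// out to terrain units: sx stretches along the contour, sz along height.
// When sx and sz differ the curve shape warps, as noted above; choosing sx so
// the scaled x samples land exactly on terrain node spacing avoids having to
// re-interpolate after the transform.
public static void ScaleCurvePoints(double[] xs, double[] zs, double sx, double sz) {
    for (int i = 0; i < xs.Length; i++) {
        xs[i] *= sx;
        zs[i] *= sz;
    }
}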

8/29 notes:  Generally finished with the c-spline interpolation packages at the moment.  I've switched over to preparing terrain data, a lot of it organizational in nature; for instance, creating an array of a class that contains pertinent terrain information (a minimal sketch of such a class follows the list below).  Information such as:

1.  Neighboring edge heightmap data.

2.  Neighboring edge indexing (address to a given terrain neighbor).

3.  Heightmap data (i.e., coordinate data, terrain map boundary information)
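A minimal sketch of such a per-terrain data class (the field names are mine, not a Unity API):

public class TerrainInfo {
    public float[,] Heights;        // raw heightmap data for this terrain
    public float[][] EdgeHeights;   // copies of the four boundary edge rows/columns
    public int[] NeighborIndex;     // world index of the neighbor on each edge
                                    // (left, right, top, bottom); -1 if none
    public int WorldRow, WorldCol;  // position of this terrain in the mosaic
}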

I have an additional problem to solve here, which is what to do about translating matching neighboring points from an existing terrain heightmap at a given corner and threshold transition.

 

The diagrams present the problems and solutions.  The solution shown to the right, however, potentially leads to a folding or creasing of the surface topology at the internodal boundaries of each terrain.  We'll have to see how noticeable this is in rendering.  You could visualize this as a picture-frame problem, where the beveled edge of the frame leads to some noticeable curvature presence in the rendered terrain.

Sequentially, translations I imagine should occur left to right, and bottom to top.  Top edge translations, it should be obvious, need not be done when the neighboring bottom edge has already been translated.

The diagram shows a solution for a given 3x3 world terrain matrix.  That is, to avoid a conflict in the translation of points (as mentioned above), we don't perform translations on orthogonal edges of the same world map node; thus we need to sequence a given translation (an example is shown to the right).

 

Generalizing the matrix, the logical notes are as follows (a sketch of one reading follows the list):

A Row always alternates.  

A column also alternates, in agreement with the row alternation.

The first column determines the alternation of the entire world terrain matrix.
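One possible reading of these rules, heavily simplified, is a checkerboard parity over the world matrix; the actual control flow in the project is more involved, as noted later:

// One reading of the alternation notes: a checkerboard over the mosaic,
// with the first column fixing the parity for its whole row.
// (Placeholder logic; the real sequencing also handles 1-row / 1-column maps.)
public static bool TranslatesOnThisPass(int row, int col) {
    return (row + col) % 2 == 0;
}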

9/3 notes:  Odds and ends.  Created several different class objects: one as a central data information object (basically an interface, in so many words, although I know nothing about C++ really), then a central controller object, and then interfaced the controller objects with some linear algebra packages and a c-spline generator.

9/11 notes:  Generally complete on the project.  The following map encapsulates some basics of the program.

Terrain data controller - the main interfacing structure of the program, where I centrally make all data processing calls and send data for storage to the given Terrain data interface.

Terrain data interface (technically I created a class for this) - this is where all the data processing information is stored during the computation process.

Cspline point generator - this is coupled to a c-spline computation package that I created.  One part computes the c-spline polynomial for two sample points on the domain [0,1] and range [0,1] using arbitrarily supplied first-order derivatives (I used a trig tangent formula, for instance, generating equal tangent lines for either point, although you could provide your own method).  The c-spline point generator also references another point generator package, which uses the polynomial coefficients computed by the c-spline generator to compute, for a given domain point set, the corresponding range set on the prescribed intervals mentioned.  The terrain data controller makes calls to this class.

Rescaling operator class - the terrain data controller makes calls on this.  It needs scaling parameters and the domain and range point sets given by the c-spline point generator.

The main terrain data controller does point-to-point computations on a given terrain boundary edge, although for organization's sake you could separate this into a class structure in its own right.  These boundary edge differences provide the necessary z (range) scaling parameters for our point set, while the domain scaling parameter is determined by a given boundary threshold for interpolation.
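Roughly, for one matched pair of boundary edge points the scaling parameters could be derived like this (a hedged sketch with placeholder names, not the actual operator class):

// For one matched pair of edge samples: the z (range) scale is the height
// mismatch to be absorbed across the seam, the x (domain) scale is how far
// into the terrain the falloff/blend region runs (in node spacings).
public static void EdgeScaleFactors(float edgeHeight, float neighborEdgeHeight,
                                    int thresholdLength,
                                    out double sx, out double sz) {
    sz = neighborEdgeHeight - edgeHeight;
    sx = thresholdLength;
}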

Sequentially processing takes place as follows.  

1. Build 4 boundary edges for each terrain map.  Build a world mapping index.  The world mapping index function should determine, for a given terrain, the index of each bordering neighbor, using -1 by default for any edge that doesn't have a neighbor terrain.  I've generalized this indexing for any m by n case.  Append all necessary initial data to the Terrain data interface.

2.  Compute the c-spline points.  I actually stored these in the terrain data controller, alongside all other terrain data sets, since we only need to compute one c-spline and point set for the initial domain and range mentioned above, shared by all terrains, avoiding computational redundancies.

3.  Compute differences between matching boundary edges.  This information is updated for each terrain data interface inside the terrain data controller.

4.  The point-difference data on the edge boundaries of each terrain, alongside the threshold length, is supplied as scaling parameters; rescale, compute, and store the original c-spline point data.  Storage of this data now goes into the terrain data interface.

5.  Then use the world translation map algorithm shown above for point-to-point translation on the necessary neighbor edge boundary of a given terrain data set.

I'd hint that the control statements for the world mapping indexing and the world translation map are a bit involved for the generalized m x n world terrain map case.  Basically you have to consider at least two specific m x n cases (only one row, or only one column) that lead to specific control flow, and then a generalized form for all other m x n cases.
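As an illustration of the step 1 neighbor indexing only (my own row-major convention, not the project's exact control flow):

// Row-major world index: terrain (row, col) lives at row * cols + col.
// Returns the flat indices of the (left, right, top, bottom) neighbors,
// or -1 where the mosaic ends, for any m x n (rows x cols) world map.
public static int[] NeighborIndices(int row, int col, int rows, int cols) {
    int left   = (col > 0)        ? row * cols + (col - 1) : -1;
    int right  = (col < cols - 1) ? row * cols + (col + 1) : -1;
    int bottom = (row > 0)        ? (row - 1) * cols + col : -1;
    int top    = (row < rows - 1) ? (row + 1) * cols + col : -1;
    return new int[] { left, right, top, bottom };
}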

 

To save debugging time, I recommend testing class structures and linear algebra packages individually on test cases.  At least this sometimes helps before stringing everything together at once.

In C#, if you are a noob leaping from less formal languages like Python or Ruby, the most common errors for me were type or variable instantiation errors, especially with multi-dimensional arrays (whether jagged or rectangular).  The other big common error was empty arrays passed off to something else; usually this shows up as a 'null reference exception', or somehow you missed getting data properly into a given data class.  If you are working with the Unity engine for game development, I haven't been able to figure out how to properly integrate MathNet.Numerics into a project using NuGet managers; in other words, I had to manually download MathNet.Numerics and then add the .NET 3.5 .dlls (threading and numerics) to the project's reference libraries.  At least the cool thing with C# is that referencing other classes in a given library is very easy with namespace declarations.

I'd say at the outset I've spent over a week, from start to finish, figuring out a number of things: learning things about C# that I hadn't known, and determining data interface structures in the Unity game engine.  There is a fine code example out there with a Google search.

Here's a bit of code:

using UnityEngine;

public class terrainexample : MonoBehaviour {

        Terrain terr;      // terrain to modify
        int hmWidth;       // heightmap width
        int hmHeight;      // heightmap height
        Terrain[] terrs;   // all active terrains in the scene

        // Use this for initialization
        void Start () {

                terr = Terrain.activeTerrain;
                hmWidth = terr.terrainData.heightmapWidth;
                hmHeight = terr.terrainData.heightmapHeight;
                terrs = Terrain.activeTerrains;
                Debug.Log (terrs.Length);
                Terrain.activeTerrain.heightmapMaximumLOD = 0;
                Debug.Log ("I am alive!");

                // Example (commented out): dump every non-zero height sample
                // of each active terrain in the scene.
//                for (int h = 0; h < terrs.Length; h++){
//                        terr = terrs[h];
//                        hmWidth = terr.terrainData.heightmapWidth;
//                        hmHeight = terr.terrainData.heightmapHeight;
//                        float[,] heights = terr.terrainData.GetHeights(0, 0, hmWidth, hmHeight);
//                        for (int i = 0; i < hmWidth; i++){
//                                for (int j = 0; j < hmHeight; j++){
//                                        if (heights[i,j] != 0){
//                                                Debug.Log (i);
//                                                Debug.Log (j);
//                                                Debug.Log (heights[i,j]);
//                                        }
//                                }
//                        }
//                }
        }
}

At least this alone gives enough information for getting terrain data; modification is done through the SetHeights function.

for example:

terr.terrainData.SetHeights(0,0,heights);

where any modified height data is adjusted in the heights variable.
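For instance, a minimal read-modify-write pass looks like this (note that Unity returns the heights array indexed as [y, x], and values are normalized 0..1):

float[,] heights = terr.terrainData.GetHeights(0, 0, hmWidth, hmHeight);
// nudge one sample as a sanity check that the write shows up in the scene
heights[0, 0] = Mathf.Clamp01(heights[0, 0] + 0.05f);
terr.terrainData.SetHeights(0, 0, heights);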

Some added things (at least as of this writing) for Unity: terrain meshes are appended to the activeTerrains array, meaning the first mesh added gets pushed to the next array position as later ones are added, so keep this in mind when tracking terrain indexing.  When rescaling points for a given threshold falloff curve (on the neighbor edge map), you may need to invert your curve's domain and range if, like me, you stored each falloff curve unidirectionally.  I can't think of anything else at the moment; there are a few finer points of threading the needle here and there, but generally this problem is fairly straightforward.

Anyways, hopefully this gives an idea of the extent of the problem in working with world terrain mosaics and tackling solution methods for neighboring terrain mesh conflicts.

9/23/14 - Discovered a problem in the seaming method above, namely at the corners.  Same problem as mentioned before, really: apparently translation sequencing with alternation on the mosaic grid doesn't resolve the problem at mosaic corners, especially where 4 mosaic maps intersect, so the following resolution is proposed.  Sequentially on the mosaic map, the following illustrates a solution:

With 4 maps at a given corner intersection, we use the following sequencing for translation:  

Map 1, up, left corner:  within the threshold neighbor edges, diagonalize the match to the corner, meaning we sequence the c-spline map through the threshold starting on the diagonal (the rightmost position on the threshold) through to the left, reducing the threshold curve length by one spline point each step.


Simplifying this: the idea is that, with boundary edge differences, we need to patch the difference maps, since creating boundary difference maps is erroneous when post-translation values aren't accounted for.  We need to account for this mainly at the four-corner intersections of our terrain mosaics.  The methodology is pretty simple as I've modeled it.  We start by working on the first corner of our terrain map (if applicable to the world terrain row), and in this corner region we fix the corner point on terrain map 4.  Sequentially running counter-clockwise, terrain 3 is adjusted as usual.  However, counter-clockwise from this position, terrain map 1 is adjusted not only on the lower edge map boundary as expected, but along the threshold length in the EW orientation accounting for the new edge difference given by map 3; so we need to add not only the originally computed differences but also the new differences imposed on map 3 by map 4.  Then we repeat this procedure on map 2 from map 1, and then from map 2 we account for differences up to the fixed point on map 4, except that in the threshold region we create a threshold translation diagonal rather than fixing the threshold region at equal lengths.  I've adjusted the c-spline threshold maps accordingly, so that we have a diagonalized points threshold region.  On alternate column values for an interior corner we perform this procedure clockwise instead of counter-clockwise, using a fixed point on the opposite map side.