
>> AUUG conf: Google Maps

Thu, Oct 12th 5:42pm 2006: Conferences

The opening keynote by Lars Rasmussen (head of the Google Maps team) was very interesting. He answered a couple of questions that have been bugging me for a while.

For example: how do they manage to get square tiles out of the surface of a sphere, so that no matter how far you move, the tiles are still perfectly square?

Imagine you're looking down at a square kilometre of the Earth's surface, then move your view east in 1km steps for 1000km. Each adjacent tile still has 1km edges, and if you then went 1000km north, then 1000km west, then 1000km south, you'd end up back at your point of origin. But by the north-east corner of that circuit the Earth has theoretically curved away from you, right? Unless the plane of each tile was changing as you went, but then the edges wouldn't be perfectly square. So the tiles should be suffering severe distortion after 1000km of displacement.
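To put a number on how badly that flat 1km tiling fails to close on a sphere, here's a quick back-of-envelope sketch (my own illustration, not anything from the talk): walking east, north, west, then south by equal distances on a spherical Earth doesn't bring you back to your start, because the westward leg at higher latitude sweeps more longitude than the eastward leg did.

```python
import math

R = 6371.0  # mean Earth radius in km (an approximation)

def walk_square(side_km, start_lat_deg=0.0):
    """Walk east, north, west, south by side_km on a sphere;
    return the east-west gap (km) between start and finish."""
    lat0 = math.radians(start_lat_deg)
    dlon_east = side_km / (R * math.cos(lat0))  # longitude gained going east
    lat1 = lat0 + side_km / R                   # latitude after going north
    dlon_west = side_km / (R * math.cos(lat1))  # longitude lost going west
    # After going south we're back at the start latitude, but the
    # longitudes no longer match:
    gap_rad = dlon_west - dlon_east
    return gap_rad * R * math.cos(lat0)         # gap in km at start latitude

print(round(walk_square(1000.0), 1))  # the "square" fails to close by ~12 km
```

Starting at the equator, the 1000km circuit misses its origin by roughly 12km, which is exactly the kind of distortion a naive square tiling would have to absorb.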

Hmm, how to get around the problem?

Easy, approximate the shape of the Earth as a cube!

Well, it's not actually quite that simple, of course, but that's the starting point. Then bisect each face of the cube vertically and horizontally in a sort of double binary chop. Then bisect those, and so on. After about 20 iterations you can approximate any point on the planet down to about 1cm resolution.
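As a rough sanity check on the resolution (my arithmetic, not Lars's): each bisection halves the tile edge, so at depth d the edge is simply the face span divided by 2^d. Assuming a cube face spans on the order of 10,000km of surface (about a quarter of the Earth's circumference), my numbers suggest it takes closer to 30 levels than 20 to reach centimetre resolution, but the principle is the same.

```python
# Back-of-envelope: edge length of a tile after d bisections of a cube face.
# The ~10,000 km face span is a rough assumption of mine, not a figure
# from the talk.
FACE_KM = 10_000.0

def tile_edge_m(depth):
    """Edge length in metres after `depth` successive bisections."""
    return FACE_KM * 1000.0 / (2 ** depth)

for d in (10, 20, 30):
    print(f"depth {d:2d}: {tile_edge_m(d):.4f} m")
```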

Pretty obvious when it's pointed out, but I've always wondered how they did it.

Another point Lars made was the difficulty of datasource conflation. They obtain feature data from a bunch of different sources: for example, one company may provide data on lakes in a certain area, while another provides data on road locations. In some cases multiple data sources may even describe the same feature, so a decision has to be made about how to reconcile potentially contradictory data, such as when the same feature is shown in different locations, or when one feature (a road) intersects another (a lake)... (splash).
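A toy illustration of the simplest kind of conflation decision (entirely my own sketch, nothing to do with Google's actual pipeline): given the same feature reported at slightly different coordinates by two providers, treat records as duplicates when their names match and their positions fall within a tolerance, keeping the record from the source you trust more.

```python
# Toy conflation: merge duplicate features from two providers.
# The feature records, tolerance, and "trust the primary source" rule
# are all invented for illustration; real pipelines are far hairier.
TOLERANCE_DEG = 0.01  # roughly 1 km at the equator

source_a = [{"name": "Lake Burley Griffin", "lat": -35.293, "lon": 149.121}]
source_b = [{"name": "Lake Burley Griffin", "lat": -35.295, "lon": 149.124},
            {"name": "Commonwealth Ave",    "lat": -35.292, "lon": 149.125}]

def conflate(primary, secondary, tol=TOLERANCE_DEG):
    """Keep every primary feature; add a secondary feature only if no
    primary feature with the same name lies within `tol` degrees."""
    merged = list(primary)
    for feat in secondary:
        duplicate = any(
            p["name"] == feat["name"]
            and abs(p["lat"] - feat["lat"]) < tol
            and abs(p["lon"] - feat["lon"]) < tol
            for p in primary
        )
        if not duplicate:
            merged.append(feat)
    return merged

print([f["name"] for f in conflate(source_a, source_b)])
# → ['Lake Burley Griffin', 'Commonwealth Ave']
```

Even this trivial rule has to pick a winner between contradictory coordinates; the hard part Lars described is doing that at scale, across many providers, without introducing roads that run through lakes.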

Datasource conflation can be a remarkably hard problem. I've been butting my head against it a bit recently in the very early stages of a still-secret project that will require multiple sources to be referenced for the same data. (hint: it involves modelling a complex system and generating projections based on changing input values, principally CO2).

Overall a very interesting talk. They've come up against interesting problems and solved them in equally interesting ways, which is always fun.
