The beginning of a new year is always a good time to look ahead at what we think is coming. I have worked in the geospatial industry for 34 years, and I believe we will see more dramatic changes in the next few years than we have in the past 34. In particular, I believe a number of emerging technologies will transform the way we capture and maintain data about the world around us, moving from manual processes to fully automated ones. These include augmented reality, reality capture, computer vision and machine learning (there is some overlap between these areas).
The geospatial data maintenance problem
A long-standing challenge for infrastructure companies is keeping their geospatial data up to date as they extend and maintain their networks. Currently, it can take weeks if not months for “as-built” changes to make their way back into the enterprise system of record, typically a GIS. There are many business reasons why inaccurate and out-of-date information on network infrastructure is unacceptable, including network reliability, customer service, time to market and safety.
Crowdsourcing
The only viable solution to this problem is crowdsourcing, or what we sometimes call fieldsourcing: having the person who changes the network record what they have changed at the time they do it. At IQGeo we believe we are leading the industry in this area with our mobile solutions, which enable smart updates to network asset data to be made in the field with simple-to-use apps. These apps are being used by many of our telecommunications, electric and gas utility customers to improve network quality. However, there is still more that could be done to automate this data capture process, making it even simpler and faster and leading to broader adoption in a wider range of situations.
The future is already here - it’s just not evenly distributed
William Gibson is the author of Neuromancer, the science fiction novel in which he coined the term “cyberspace”. He once said, “the future is already here - it’s just not evenly distributed”, and this applies very well to the technologies I mentioned above. They are already in use across a range of industries but have not yet made a significant impact on geospatial applications for infrastructure companies.
Self-driving cars are one particular market that is really advancing the state of the art in automatic recognition of the world around us. Watch this short video from Tesla:
You can see here that the car is recognizing, and accurately locating, a wide range of features in the physical world. Many times per second it is identifying street signs, traffic lights, moving cars and people, curb lines, lane markings, and more. This is almost all done just using cameras. Clearly these same principles could be applied to capture data about infrastructure like poles, manholes, cabinets, etc.
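To make the idea concrete, here is a minimal sketch of camera-only object detection using an off-the-shelf, COCO-pretrained detector from the open-source torchvision library (this illustrates the general principle only, not Tesla’s pipeline). COCO already includes street-scene classes such as traffic lights and stop signs; recognizing utility assets like poles or cabinets would require fine-tuning on domain imagery. The image file name and confidence threshold are placeholders.

```python
# Minimal sketch: detect objects in a single street-level photo with a
# COCO-pretrained Faster R-CNN from torchvision (illustrative only).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical frame
tensor = transforms.ToTensor()(image)                   # CxHxW, values in [0, 1]

with torch.no_grad():
    detections = model([tensor])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # keep only confident detections
        print(f"class id {label.item()}, score {score:.2f}, box {box.tolist()}")
```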
There is a longer (30 minute) video of a presentation by Andrej Karpathy, Senior Director of Artificial Intelligence at Tesla, which I highly recommend watching when you have time. He talks about their “fleet of cars” – unlike most other manufacturers, all Tesla cars are permanently online and communicating information back to Tesla. They have about 1 million cars on the road, and he explains how all these cars play an integral role in their software development and testing process, capturing data and testing new algorithms in the background as they drive around. All of this happens without any involvement from the driver. Both trucks and field workers could play a similar role in infrastructure companies.
Another fascinating element of this video is the extent to which Tesla is using machine learning (ML) in its self-driving software. The initial focus for ML was on recognizing the features in the environment around the car, but Tesla also has massive amounts of recorded data on how drivers interacted with the pedals and steering wheel in different situations, and it is now applying machine learning to that data to decide what the car should do in any given situation. For these sorts of complex problems, machine learning is rapidly taking over from traditional software development. This also has applications in infrastructure companies, in areas including data capture and automated design.
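To illustrate the concept (and only the concept; this is not Tesla’s architecture), the sketch below shows the essence of learning control decisions from recorded driver behaviour: a small network maps a camera frame to a steering command and is trained against what the driver actually did. The dataset and tensor shapes are hypothetical.

```python
# Minimal behavioural-cloning sketch: predict a steering command from a camera
# frame and train against recorded driver inputs (illustrative only).
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted steering angle

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical batch: camera frames and the steering the driver actually applied
frames = torch.rand(8, 3, 120, 160)
recorded_steering = torch.rand(8, 1)

loss = loss_fn(model(frames), recorded_steering)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```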
Reality Capture
Reality Capture is a general term for the idea of automatically capturing data about the real world. A wide range of devices can be used for different types of reality capture, as shown in this diagram.
Some of these are more specialized and expensive; others are relatively inexpensive and widely available. The latter are most relevant to the idea of fieldsourcing for data maintenance: we must use devices that are practical for every field worker to carry with them, or that they are already carrying, as is the case with a smartphone. Cameras alone can capture a lot of information, but an interesting development is that the newest high-end iPhones and iPads contain LiDAR scanners, which can help capture 3D models of the real world more quickly and accurately.
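As a small illustration of how such a scan can be consumed downstream, the sketch below assumes a point cloud has been exported from a phone-based scanning app as a PLY file (a common export format; the file name is hypothetical) and uses the open-source Open3D library to thin it, remove noise and report its extent.

```python
# Minimal sketch: clean up a point cloud exported from a phone LiDAR scan
# (assumed to be a PLY file; the file name is hypothetical) using Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("pole_scan.ply")   # load the raw scan
print(f"raw points: {len(pcd.points)}")

pcd = pcd.voxel_down_sample(voxel_size=0.02)     # thin to roughly 2 cm resolution
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # drop noise

bbox = pcd.get_axis_aligned_bounding_box()       # rough extent of the scanned asset
print(f"cleaned points: {len(pcd.points)}, extent (m): {bbox.get_extent()}")
```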
Machine Learning
An important technology related to reality capture is machine learning, as mentioned above. It has many applications, but one area that is already quite mature and easy to use is image recognition.
This image shows how the Amazon Rekognition service can identify items found in a photo. Microsoft, Google, Apple, IBM and many others offer similar services. These algorithms can be readily trained to recognize objects in a particular domain, such as poles, transformers and manholes. Recognition of text, barcodes and QR codes is also now a commoditized capability. All of these capabilities are very useful in automated data capture.
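As an example of how commoditized this has become, the sketch below calls Amazon Rekognition from Python using boto3 to label the contents of a field photo and read any visible text, such as an asset tag. The file name is a placeholder and AWS credentials are assumed to be configured; recognizing domain-specific assets like transformers would use a custom-trained model rather than the general-purpose labels shown here.

```python
# Minimal sketch: label a field photo and read visible text with Amazon
# Rekognition via boto3 (file name is hypothetical; AWS credentials assumed).
import boto3

client = boto3.client("rekognition")

with open("cabinet_photo.jpg", "rb") as f:
    image_bytes = f.read()

# General-purpose label detection
labels = client.detect_labels(Image={"Bytes": image_bytes},
                              MaxLabels=10, MinConfidence=75)
for label in labels["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")

# Text detection, useful for asset tags and serial numbers
text = client.detect_text(Image={"Bytes": image_bytes})
for detection in text["TextDetections"]:
    if detection["Type"] == "LINE":
        print("text:", detection["DetectedText"])
```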
Augmented Reality
Augmented Reality (AR) capabilities continue to advance. The addition of LiDAR scanners to high-end iPhones and iPads enables more advanced AR applications. Here is a nice short video showing some examples of what is possible.
The Ikea and Shapr3D videos are nice examples of doing design using AR, and both have obvious parallels with design creation for infrastructure companies. Both demos also show feature recognition as part of the process.
The AR cloud
Currently, AR works best indoors over relatively small areas. Up to this point, outdoor AR applications over larger areas have mainly relied on GPS together with orientation sensors, but this approach has intrinsic limitations in accuracy. Using accurate point clouds to locate objects is a much more precise approach, and it is currently an area of major focus for many large companies, including Google, Apple and Niantic. The idea of building an accurate point cloud of the whole world as a foundation for AR and other applications is generally known as the AR cloud. It will be a few years before this vision is fully delivered, but when it is, it will dramatically improve the accuracy with which we can calculate the locations of items in the real world. Elements of this concept are usable now, for example for measuring distances between nearby objects with better accuracy than is possible with GPS.
How will all this impact geospatial for infrastructure companies?
These technologies will dramatically change the nature of geospatial applications in infrastructure companies. Data capture will transition from a largely manual process to a fully automated one, using a variety of sensors. Machine learning and augmented reality will both play major roles in design. Together, these changes mean that manual editing of GIS data will become largely obsolete. Augmented reality and virtual reality will be commonly used for visualizing geospatial data. 2D maps won’t become obsolete, though; they are still a useful abstraction for communicating many types of information.
At IQGeo, we feel that our mobile-first approach, together with our existing capabilities for full network modeling and editing on any mobile device, positions us very well to take advantage of these exciting technologies as they continue to evolve. Our R&D team is actively evaluating developments in these areas, and we look forward to adding new capabilities to our product offerings over time.