    Vast improvements have been made in these fields. The democratization of data through generous programs like the European Union's Copernicus, with its Sentinel satellites, makes satellite imagery and radar data freely available to everyone. The Internet of Things and the explosion of sensors (drones, webcams, video feeds, connected traffic sensors, etc.) have ensured that we have access to more data than ever before.
    While the data itself is freely available, knowledge of the technology to convert the data into usable information is not. We can get free radar coverage of every point on Earth every two weeks, but processing that data is still complicated. We can unwrap radar data to find minute changes in the Earth's surface (down to a millimeter), yet those change detection algorithms, and the knowledge of how to use them, remain a black box, with no foreseeable end to that opacity.
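    As a rough illustration only, the following minimal sketch (in Python with NumPy; the synthetic arrays, the changed patch, and the threshold are invented for the example and stand in for real, co-registered scenes) shows pixel-wise differencing, the simplest form of the change detection described above:

        import numpy as np

        def detect_change(before, after, threshold):
            """Flag pixels whose value shifts by more than the threshold."""
            diff = np.abs(after.astype(float) - before.astype(float))
            return diff > threshold  # boolean change mask

        # Synthetic stand-ins for two co-registered scenes of the same area
        rng = np.random.default_rng(0)
        before = rng.random((100, 100))
        after = before.copy()
        after[40:60, 40:60] += 0.5  # simulate surface change in one patch
        mask = detect_change(before, after, threshold=0.2)
        print(f"{mask.mean():.1%} of pixels flagged as changed")

    Real radar workflows (interferometric unwrapping, calibration, geocoding) are far more involved; this sketch only makes the notion of comparing two observations of the same place concrete.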
    Even with the technology to perform change detection on satellite imagery, a human being must still interact with the imagery and make a map. This process frequently takes weeks or months to complete, and by the time the imagery is processed and a map is created, the information is out of date. Because making traditional maps takes so long and is so expensive, we try to make them do too much. Every map has to serve multiple purposes: land cover, land use, roadway mapping, and topography, to name a few. This compromises at least the map's intent, if not its accuracy.
    The information that can be derived from the data collected by all these sensors has great potential. What is needed is a way for domain experts to build sophisticated, reusable algorithms that can ingest streams of sensor data. We need a platform into which data can be plugged as soon as it is collected, and which then pulls the data through its processing steps so that analyses run automatically and generate updated maps. These maps would allow end users not only to see the current state of the land, but to see the entire time series, so they can understand the patterns behind the change and begin to formulate predictions. Users need to see not only what was and what is, but also what can be.
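    As a rough sketch of this push-in, pull-through idea, the following Python fragment chains processing steps and retains the full time series of derived results; the Pipeline class and the two toy steps are hypothetical illustrations, not an actual Smart M.App API:

        from typing import Callable, List

        class Pipeline:
            """Pulls every new observation through all processing steps."""
            def __init__(self, steps: List[Callable]):
                self.steps = steps
                self.history = []  # full time series of derived results

            def ingest(self, observation):
                result = observation
                for step in self.steps:
                    result = step(result)  # analyses run on arrival
                self.history.append(result)  # keep what was, not just what is
                return result

        # Toy steps: calibrate a raw sensor value, then classify it
        pipeline = Pipeline(steps=[lambda x: x * 0.5,
                                   lambda x: "high" if x > 10 else "low"])
        print(pipeline.ingest(30))  # -> "high"
        print(pipeline.history)     # the stored series supports trend analysis

    The design point is that the platform, not a human operator, moves each new observation through the analysis, so the derived map is always as current as the newest data.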
    This technology exists today. It is not a map; it is a Hexagon Smart M.App, a dynamic information service. By moving from the static map model, which collects data, analyzes it, and then produces a static printed, digital, or web-based map, to a dynamic information service, we can not only automate the process but also build job-specific and use-case-specific maps. Instead of multiple departments sharing a single multi-purpose map, each department can access its own view of the map, containing information produced from the data specifically for it. Because these Smart M.Apps are lightweight and quickly produced, they are easy to prototype. Domain experts can build the map, incorporate feedback from users, and then make the map accessible to land use departments. As new data comes in, it is fed into the
