Application of Near Real-Time and Multiscale Three Dimensional Earth Observation Platforms in Disaster Prevention

Whey-Fone Tsai*, Bo Chen, Jo-Yu Chang, Fang-Pang Lin, Charlie H. Chang, Chia-Yang Sun, Wen-Ray Su, Ming-Fu Chen, Dong-Sin Shih, Chih-Hsin Chen, Shyi-Ching Lin, and Shiann-Jeng Yu

National Applied Research Laboratories (NARL), Taiwan

(Received 9 September 2011; Published on line 1 December 2011)
*Corresponding author:
DOI: 10.5875/ausmt.v1i2.124

Abstract: Taiwan frequently experiences natural disasters such as typhoons, floods, landslides, debris flows, and earthquakes. Therefore, the instant acquisition of high-definition images and topographic or spatial data of affected areas as disasters occur is crucial for disaster response teams and emergency aid decision-making. The National Applied Research Laboratories has implemented the project “development of near real-time, high-resolution, global earth observation 3D platform for applications to environmental monitoring and disaster mitigation.” This developmental project integrates earth observation technologies, data warehousing, high-performance visualization displays, grids, and disaster prevention technology to establish a near real-time, high-resolution, three-dimensional (3D) disaster prevention earth observation application platform for Taiwan. The main functions of this platform include (1) integration of observation information, such as Formosat-2 satellite remote sensing, aerial photography, and 3D photography of disaster sites, to provide multidimensional information on conditions at the affected sites; (2) disaster prevention application technologies, such as large-sized high-resolution 3D projection systems, medium-sized active stereo projection systems, and small-sized personal computers with multiscale 3D display systems; (3) a 3D geographical information network platform that integrates data warehousing and cloud services, complies with the Open Geospatial Consortium (OGC) international standard for image data exchange and release processes, and includes image overlaying and added-value analysis of disasters; and (4) near real-time, automated image processing procedures, which accelerate orthophoto processing once raw data are received from satellites and provide appropriate images for disaster prevention decision-making within 3 to 6 h.
This study uses the 88 Flood event of Typhoon Morakot in 2009, Typhoon Fanapi in 2010, and the 311 Earthquake of Japan in 2011 as examples to discuss the applications, functions, and features of this platform for supporting disaster response and disaster recovery decision-making.

Keywords: earth observation technology; data warehousing; high-performance visualization displays; grid; disaster prevention technology; 3D geographical information


Addressing global warming and climate change has become an urgent matter; thus, how to promptly understand the environmental changes on earth is a crucial issue. Earth observation is closely associated with outer space, remote sensing, communication, and information technologies and plays an essential role in land and resource management and environmental safety. The increasing application of contemporary geospatial information technology provides a developmental channel for in-depth observation and extensive collection of earth observation data.

Earth observation data can frequently be combined to produce vital information and facilitate comprehension of natural environmental events. Spatial information technology plays a role in emergency management [1]; through several applications and enhancements, it has improved emergency management and prevention of unexpected natural disasters. Unexpected natural disasters that occur in Taiwan include earthquakes, typhoons, landslides, and debris flows. Demands for spatial information of earth observations are increasing continually.

The functions of information systems related to environmental and disaster prevention change rapidly. Because of the massive amounts of data and images generated from digital earth observations over time, the urgent demands of disaster response are often not met, despite substantial data processing and distributed information integration capabilities. Therefore, the focus of this study is to establish an efficient information integration platform and analysis method to enable near real-time, high-resolution disaster management and decision-making supported by earth observation data at different scales and using forward-looking data warehousing and grid technologies [2, 3].

Because spatial data have 3D properties, forward-looking visualization technologies are vital for the effective and clear communication of earth observation data to decision-makers. Following the release of the movie “Avatar,” which employed 3D technologies, numerous 3D technology industries emerged offering 3D systems suitable for large-scale (such as a theater), medium-scale (such as an office space), and personal-scale spaces; however, the development of digital content was comparatively slow. Meanwhile, display technology for virtual reality has also gradually developed toward augmented reality and is widely applied in geographical information.

This study considers these technological developments and integrates cross-field research involving satellite and airborne telemetry, data warehousing, high-performance visualization displays, grids, and disaster prevention technologies of the National Applied Research Laboratories to execute the program of “development of near real-time, high-resolution, global earth observation 3D platform for applications to environmental monitoring and disaster mitigation” [4]. The front-end aim of this study is to create a complete and vertically integrated value chain that facilitates the acquisition of spatial images of disaster conditions in the shortest possible time, to establish a near real-time and simulated autonomous remote image capture system, and to develop a general processing and releasing procedure. This study combines existing environmental monitoring networks and uses the Open Geospatial Consortium (OGC) international standard and specifications [5, 6] as the standard for exchanging observed data among cooperating units. Additionally, this study combines a database of disaster prevention and rescue fundamentals, an analysis model, and the evaluation outcome of a disaster situation to enable decision-makers to rapidly understand the disaster situation and estimate damages, which is the back-end aim of this study.

The key concepts of this study are “near real-time,” “high-resolution,” and “3D platform.” The rapid acquisition of images that can be used to determine damage and make decisions is crucial after major natural disasters. Therefore, the image processing and display platform procedures must be integrated to accelerate image processing, the integration of images with 3D topographic data, and the connection with the display end. Using a “time first” principle, near real-time operation must be achieved first; however, this results in relatively low-resolution images for determining damage and making decisions. High-resolution, color-corrected orthoimages can be provided subsequently for disaster recovery planning and stored in image data warehouses as a reference for academic and relevant government agencies conducting future studies, applications, and recovery.

In summary, this study establishes a near real-time, high-resolution 3D earth observation application platform for disaster prevention in Taiwan. We elaborate the aims and features of the platform, which include multiscale earth observation technologies, disaster prevention applications with multiscale 3D display technologies, 3D geographical information networks, and near real-time, automated orthophoto processing of images of disaster-affected areas. This study uses the 88 Flood of Typhoon Morakot, Typhoon Fanapi, and Japan’s 311 Earthquake to test the functions of the platform for supporting disaster response and disaster recovery decision-making.

The Earth Observation Information Service Architecture for Disaster Prevention and Disaster Relief

Data generated by various earth observation technologies often must undergo integration and releasing procedures before they can be used by the government for making decisions regarding disaster prevention. These procedures comprise four layers: data provider, data integration/releasing, expert, and decision maker (Figure 1). The functions and characteristics of each layer are as follows:

Figure 1. Disaster prevention image information services.

Data provider layer: This layer includes the providers of data from various earth observation technologies. Image data providers include the National Space Organization (Formosat-2 satellite), various distribution centers that provide aerial images, the Instrument Technology Research Center, and the Aerial Survey Office of the Forestry Bureau. Anaglyphs generated by 3D photography and a disaster knowledge base are also relevant to disaster prevention. In addition to image data, this layer also comprises vector data, such as electronic maps and road networks, key data points, and other numerical and real-time video streaming data.

Information integration/releasing layer: The main task of this layer is to integrate various earth observation images and relevant information to establish streaming and releasing mechanisms. The Web Map Service (WMS) [5] developed by the OGC is typically used to facilitate the streaming and acquisition of data.
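The WMS streaming mentioned above centers on the standard GetMap operation, in which a client requests a rendered map image for a bounding box. As a minimal sketch, the request URL can be composed as follows; the endpoint URL and layer name here are hypothetical placeholders, not identifiers from the actual platform.

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width, height,
                     srs="EPSG:4326", fmt="image/png", version="1.1.1"):
    """Compose an OGC WMS 1.1.1 GetMap request URL.

    bbox is (min_x, min_y, max_x, max_y) in the coordinates of srs.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": version,
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": srs,  # the parameter is named CRS in WMS 1.3.0
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name for a Formosat-2 mosaic of Taiwan.
url = build_getmap_url("http://example.org/wms", "formosat2_taiwan",
                       (120.0, 21.8, 122.1, 25.4), 1024, 1024)
```

Any WMS-compliant server, including the one used by this platform, would answer such a request with a map image, which is what makes the cross-agency exchange interoperable.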

Expert layer: This layer involves the analysis of disaster information and various estimations after disaster-related image data are obtained from the data integration/releasing layer; additional information acquired from other sources is also combined. This layer also includes displaying the analysis and evaluation outcomes on the 3D geographical information platform and formulating disaster rescue plans. According to the current plans, the National Science and Technology Center for Disaster Reduction and government departments related to disaster prevention will adopt the primary role at the expert layer.

Decision maker layer: This layer comprises the administrative authorities who have executive and decision-making power. Additionally, this layer involves making decisions based on the analysis results provided by the expert layer and various programs for decision-making. According to the current plans, the Central Emergency Operation Center will be the decision-maker.

Application of Multiscale Earth Observation in Disaster Prevention

This study integrates the earth observation technology of the National Applied Research Laboratories, including the development of multiscale earth observation technologies, with large-scale satellite telemetry, medium-scale aerial photography, and small-scale high-resolution on-site 3D photography. The combination of spatially integrated telemetry images and a 3D ground map provides a multi-angle view of disaster conditions and improves the level of decision-making support. The multiscale earth observation technologies are described below.

Large-Scale Satellite Remote Sensing

Images captured by the Formosat-2 satellite [7] are shown in Figure 2(a). The role of Formosat-2 is to conduct near real-time telemetry of Taiwan and of global land and oceans. The images Formosat-2 captures during daylight can be applied to land-use planning and to disaster prevention and rescue missions. Formosat-2 is an earth-focused application telemetry satellite that passes over Taiwan twice a day; its imaging swath is greater than 24 km, with an incidence angle of ±45°, and it can perform stereo photography. Currently, the maximum resolution of Formosat-2 is approximately 2 m. Its short-time image capturing range encompasses the entire island of Taiwan, making it a large-scale (greater than 100 km2) telemetry imaging observation tool.

Formosat-2 takes images of the entire island of Taiwan with five bands. The clear images are then selected to compose orthoimages of the entire island of Taiwan (Figure 2(b)), which subsequently can be post-produced into a base map layer to provide 3D geographical information of Taiwan.


Figure 2. Orthoimages of the entire island of Taiwan captured by Formosat-2: (a) Large-scale satellite telemetry; (b) Orthoimages of the entire island of Taiwan captured by Formosat-2 with five bands.

Medium-Scale Aerial Photography

Airplanes are used to carry high-precision cameras for capturing images; the resolution reaches 20 to 50 cm. This study uses the Vegetation and Change Detection imager (VCDi) [8] developed by the Instrument Technology Research Center for aerial photography of major disaster areas. The VCDi-660 system is a high-resolution, large-swath, multispectral imaging telemetry instrument. The major functional feature of the VCDi-660 is that it contains four multispectral wave band camera modules; it encompasses the range of approximately one watershed (10 km2) in one flight mission and is a medium-scale telemetry imaging observation tool. Figure 3(a) shows images of landslide lakes caused by the 88 Flood of Typhoon Morakot in disaster areas in Kaohsiung and Nantou. The images were taken using the VCDi-660 instrument mounted on an airplane for high-resolution aerial photography. This approach can be employed to trace potential disasters and facilitate crisis control.


Figure 3. Airborne aerial photography and images of disaster areas: (a) Airborne VCDi-660; (b) Aerial photographs.

Small-Scale 3D On-Site Photography

Both satellite and aerial photography are air-to-earth observation technologies that encompass a comparatively wide regional space, whereas on-site photography is ground-to-ground observation. Because photographers are present at the sites, they can capture crucial visual angles and details, produce 3D image effects [9] through visual illusions and slight differences in visual angle, and convey the relative relationships of the overall field depth to produce a sense of being present at the site. Images shot with a 3D dual-lens camera can have stereo and immersive effects through the application of parallax display technology and a specialized 3D display screen with stereoscopic glasses. The largest difference between 3D and 2D photographs is that photographs taken using 3D cameras provide a greater field depth contrast, the distances are extremely realistic, and the disaster sites (under 1 km2) can be better observed. Thus, users of the image data are not required to be present at the disaster site; they can understand site conditions through anaglyphs. Figure 4(a) shows two sets of cameras shooting images for the left and right eye separately; the technology was applied in areas affected by the 88 Flood disaster, including Kaohsiung, Tainan, Nantou, and Taitung. As shown in Figure 4(b), the resolution can be as high as 1920 x 1080 pixels; the use of large-scale, high-resolution 3D projection provides excellent effects, as if the viewers were actually on site. Additionally, continuous spatial photographs can be stitched into panoramic images, similar to the image of stacked wood on the Kaoping River basin shore after the 88 Flood (Figure 4(c)), to provide situational browsing of the disaster area.



Figure 4. 3D photography at disaster sites: (a) On-site stereo shooting; (b) 3D photographs of 88 Flood; (c) a panoramic surround image composed of several photographs.
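The anaglyph display mentioned above composes the left-eye and right-eye photographs into one image whose color channels are separated by colored glasses. A minimal sketch of the classic red-cyan composition, using synthetic arrays in place of the actual stereo photographs, is shown below (the platform itself primarily uses parallax displays; this is only an illustration of the anaglyph principle).

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph from left/right RGB images.

    The red channel comes from the left-eye image; green and blue
    come from the right-eye image, so red-cyan glasses route each
    view to the matching eye.
    """
    assert left.shape == right.shape and left.shape[-1] == 3
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]     # red   <- left eye
    anaglyph[..., 1:] = right[..., 1:]  # green/blue <- right eye
    return anaglyph

# Synthetic 1920x1080 stereo pair (random pixels stand in for photographs).
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (1080, 1920, 3), dtype=np.uint8)
right = rng.integers(0, 256, (1080, 1920, 3), dtype=np.uint8)
out = make_anaglyph(left, right)
```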

Integration of the air-to-earth observations from satellite and aerial photography with the ground-to-ground method of on-site photography can facilitate advanced disaster prevention decision-making. Figure 5 shows application examples of multiscale earth observations taken during the 88 Flood. Figure 5(a) shows the condition of Xiaolin Village after the 88 Flood, including images captured by Formosat-2 and on-site high-resolution stitched images. The 3D platform of the geographical information network was employed to display satellite images of Xiaolin Village from certain angles, and the on-site stitched images were overlaid on key areas. This approach enabled the display of small-scale details of the only remaining house in the disaster area.

Figure 5(b) shows the images captured by Formosat-2 of the condition of the estuary of Taimali River in Taitung after the 88 Flood. The panoramic ground images were composed of approximately 20 on-site high-resolution photographs stitched together; high-resolution localized details of the bridge can be displayed. Both examples of multiscale earth observation integrations mentioned above combine different spatial earth observations with the ground view to highlight the overall conditions and demands of the affected areas.


Figure 5. Integrated application of multiscale Earth observation: (a) applications on Xiaolin Village disaster area after the 88 Flood; (b) applications on Taimali disaster area after the 88 Flood.

Multiscale 3D Display Technologies

This study establishes an immersive 3D geographical information display platform of the entire island of Taiwan as the background and fundamental image data for various disaster management and decision-making support applications. The platform is developed based on the 2-m telemetry 2D images captured by Formosat-2 and the 5- and 40-m elevation data of the Ministry of the Interior. 3D VR engine technology [10], including OpenGL, 3D and stereo imaging, automatic navigation, multilevel granularity, large-scale topographic data, and volume rendering functions developed by the National Center for High-Performance Computing, is used to display the stereo effects. The large-scale 3D geographical information navigation system of topographic data and high-resolution satellite images developed accordingly is shown in Figure 6.

Figure 6. The application of 3D display technologies on the geographical information of the entire island of Taiwan.
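The terrain display described above drapes Formosat-2 imagery over a gridded elevation model. One minimal way to turn such a grid into renderable geometry for an OpenGL-style engine is to emit one vertex per grid point and two triangles per cell; the sketch below is a generic illustration of this step, not the actual VR engine code.

```python
import numpy as np

def heightmap_to_mesh(elev, cell_size):
    """Convert a (rows, cols) elevation grid into vertex and
    triangle-index arrays suitable for an OpenGL-style renderer.

    Each grid cell becomes two triangles; cell_size is the ground
    spacing in metres (e.g. 40 m for the coarse national DEM).
    """
    rows, cols = elev.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    vertices = np.column_stack([
        (xs * cell_size).ravel(),   # easting
        (ys * cell_size).ravel(),   # northing
        elev.ravel(),               # height
    ]).astype(np.float32)

    idx = np.arange(rows * cols).reshape(rows, cols)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    triangles = np.concatenate([
        np.column_stack([tl, bl, tr]),  # upper-left triangle of each cell
        np.column_stack([tr, bl, br]),  # lower-right triangle of each cell
    ])
    return vertices, triangles

elev = np.zeros((3, 4))  # tiny flat 3x4 grid for illustration
v, t = heightmap_to_mesh(elev, 40.0)
```

Multilevel granularity, as used by the platform, would generate such meshes at several decimation levels and select one per view distance.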

The 3D immersive browsing system used can include active and passive projection systems [11]; Figure 7(a) shows a passive stereo SONY VR projection system that uses two SXRD 4K [12] projectors and polarization filters to create an immersive, interactive browsing environment. This large-scale projector has a high resolution of 4K pixels and an intensity of 10,000 ANSI lumens; it requires substantial graphics hardware performance to generate images. Because stereo images require dual-projection imaging, the graphics load for stereoscopic displays is double that of flat-panel displays. Additionally, 4K projectors are large and require greater electric power and cooling equipment. Because the resolution of this projector is extremely high, it is suitable for projection in a theater or a large experimental space holding hundreds of people, and it belongs to the category of large-scale 3D projection displays.



Figure 7. Projection and display of multiscale geographical information: (a) large-scale 3D passive projector; (b) medium-scale portable active projection; (c) small-scale 3D personal computer display.

Regarding disaster prevention and rescue purposes, portable immersive browsing systems are frequently required to display the necessary spatial geographical information. Figure 7(b) shows an active projection system that complies with the nVIDIA 3D Vision specifications [13]. The advantages of this system are that it can accommodate a substantially large audience, can project onto an ordinary white screen or white wall, and is easy to carry. Figure 7(c) shows a 3D monitor of a general personal computer (PC) or notebook computer. Currently, 3D technology has developed to allow the display of 3D images through the Internet or on webpages. The use of cross-platform browsers, such as Microsoft Silverlight or HTML5 [10], is feasible if an nVIDIA graphics accelerator (such as GeForce), a Windows 7 environment, and 3D Vision glasses and transmitters are used; liquid crystal displays, projectors, or 3D televisions of 120 Hz or above should also be used. These arrangements allow immersive 3D browsing of geographical information over the Internet. Personal displays belong in the category of small-scale projection systems and are intended for one or a limited number of viewers.

Figure 8 shows the stereo display applications of immersive 3D technology in the main areas affected by the 88 Flood of Typhoon Morakot in 2009. These areas include Xiaolin Village and Jiaxian in Kaohsiung, and Taimali in Taitung. The 3D geographical information provided by Formosat-2 was used as the fundamental platform for these images. Immersive 3D browsing allows rapid switching between different earth observation image data during large-scale disasters and enables large-scale data comparisons before and after disasters. The interaction between high-resolution immersive 3D browsing and navigation over disaster areas can be used as a reference when making decisions and plans regarding disaster recovery.

Figure 8. Comparison of 3D image browsing of areas affected by the 88 Flood.


Figure 9. Applications of the 3D geographical information platform of Taiwan: (a) Overlay of typhoon cloud and radar echoes on 3D geographical information; (b) The 3D topography of Taiwan and the seabed of adjacent waters.

Figure 9(a) shows the overlays of typhoon cloud and radar echoes using the 3D geographical information base platform of Taiwan. The images are displayed with immersive 3D to explore the interaction between the 3D cloud layer and the image base layer, and to understand the location, dynamics, and structure of the eye of the typhoon. Figure 9(b) shows the expansion of the 3D geographical information base platform of Taiwan to include the seabed topography of adjacent waters. This information, overlaid with data generated from ocean current simulations or tsunami and storm surge simulations, can be displayed with an immersive 3D system and used as a decision support tool for crisis exercises and during disaster events.

Regarding the 3D geographical information display of simulated flooding caused by typhoons, this study collaborates with the Taiwan Typhoon and Flood Research Institute to establish a real-time simulation system with integrated atmospheric and watershed hydrographic models. The system was tested in the Lanyang River basin [4] to estimate the potential flooding range; the simulation processes and results were vertically integrated into the 3D geographical information platform display, as shown in Figure 10. Overlaying virtual simulation results on a physical platform is within the application scope of augmented reality [14] and is also an extension of virtual reality. Displays of dynamic disaster prevention simulation applications using the 3D geographical information platform and augmented reality will increase gradually.

Figure 10. Overlays of rainfall and flooding caused by a typhoon in the basin of Lanyang River.
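A first-order version of the flooding-range estimate described above simply thresholds an elevation grid against a simulated water surface. The sketch below is a deliberately minimal stand-in for the institute's far more elaborate atmosphere-hydrology model; the elevation values and water level are hypothetical.

```python
import numpy as np

def flood_mask(elevation, water_level):
    """Return a boolean grid marking cells inundated when the water
    surface rises to water_level (same vertical datum as elevation)."""
    return elevation < water_level

# Hypothetical 4x4 elevation grid (metres) for a small floodplain.
elev = np.array([
    [5.0, 4.0, 3.0, 2.0],
    [5.5, 4.5, 3.5, 2.5],
    [6.0, 5.0, 4.0, 3.0],
    [7.0, 6.0, 5.0, 4.0],
])
mask = flood_mask(elev, 3.5)          # cells below a 3.5 m water surface
flooded_cells = int(mask.sum())
```

Such a boolean mask is exactly the kind of layer that can be rendered semi-transparently over the 3D terrain for the augmented-reality-style overlay the text describes.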

Applications of 3D Geographical Information Platforms and Disaster Prevention

This study has described the applications of multiscale earth observations and 3D displays. Regarding practical disaster prevention decisions, a high-performance network platform is still required for the integration, distribution, and release of image data. The World Wind 3D [15] space network information application platform is an international high-resolution digital display platform for geographical information of earth and space. The source code of the platform was released by the National Aeronautics and Space Administration (NASA) as a tool for the 3D visualization of geographical information systems, and it is mainly applied to research in earth science and the natural environment. Considering decision-making needs in disaster prevention, this study uses the World Wind 3D platform for customized development and connects the platform with a database containing numerous disaster-related satellite images taken by Formosat-2 and image data provided by other units. The platform is integrated and established as a system platform for applications and displays of the history of digital geographical information. The practical application of this platform is to support disaster prevention missions and develop new applications. The platform can also combine disaster prevention information with real-time information and overlay the simulations of each natural disaster with observed image data to help establish a disaster rescue decision-making system for the government.

After local customization, we used the World Wind 3D platform as the 3D geographical information application platform network for Taiwan in this study (Figure 11). The customized system contains a transparency adjustment function for the integrated display of image layer overlays, complies with the OGC international standard for exchanging and releasing image data, and provides image overlays and added-value analysis for disaster events.

Figure 11. Application platform of the 3D information network of Taiwan.
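The transparency adjustment for layer overlays mentioned above amounts to standard alpha compositing of an overlay layer against the base imagery. A minimal sketch with synthetic data follows; the actual platform performs this blending inside its customized World Wind renderer.

```python
import numpy as np

def blend_layers(base, overlay, alpha):
    """Alpha-blend an overlay layer onto a base layer.

    alpha in [0, 1]: 0 shows only the base imagery, 1 shows only
    the overlay (e.g. a radar-echo or simulation layer).
    """
    base = base.astype(np.float32)
    overlay = overlay.astype(np.float32)
    out = alpha * overlay + (1.0 - alpha) * base
    return np.clip(out, 0, 255).astype(np.uint8)

# Uniform synthetic layers: base gray 100, overlay gray 200.
base = np.full((2, 2, 3), 100, dtype=np.uint8)
overlay = np.full((2, 2, 3), 200, dtype=np.uint8)
half = blend_layers(base, overlay, 0.5)  # slider set to 50% transparency
```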

The 3D geographical information platform of Taiwan can be synchronously linked with, and seamlessly integrate, various data displays, such as real-time water regime images, real-time data from the gauging stations in Taiwan, relevant digital image collections, and communication and decision-making systems. For example, the platform displays the 3D topography of the Lanyang River basin and simultaneously integrates real-time images and water-level data from gauging stations (Figure 12(a)) to satisfy the practical needs of disaster prevention and decision support. Figure 12(b) shows a 3D platform display of the overlaid images from aerial photography of the Shihmen Reservoir floodway areas and images captured by Formosat-2 of the watershed; the platform is simultaneously connected with two real-time monitors downstream of the spillway.


Figure 12. Integration of numerous data on the 3D platform network: (a) Integration of the real-time images and water-level data of the Lanyang River basin; (b) Integration of the real-time monitoring images of Shihmen Reservoir.

The 3D geographical information platform network of Taiwan has practical functions in disaster prevention decision-making. Figure 13(a) shows the overlaid images from aerial photography, satellite images, and elevation data of Xiaolin Village after the 88 Flood; the images can be browsed and reviewed on the 3D platform network. Figure 13(b) shows the overlaid images from aerial photography and 3D satellite images of the affected areas in Jiaxian, Kaohsiung, after a Richter-magnitude 6.4 earthquake in 2010; the images can be reviewed on the 3D platform network. Furthermore, this study obtained image data, via space survey, of a landslide that occurred on National Highway No. 3 near Keelung in April 2010, where the hillside near the Qidu section of Maling Keng collapsed onto the road. Images before and after the disaster were processed, integrated, and displayed using the 3D platform network (Figure 14). This information was provided to government units for disaster rescue and follow-up surveys and as an academic and research reference.


Figure 13. Practical applications of the 3D platform network for decision-making in disaster prevention: (a) Xiaolin Village during the 88 Flood; (b) epicenter area of Jiaxian earthquake.


Figure 14. Applications of the 3D platform network in National Highway landslide analysis: (a) before disaster; (b) after disaster.

The 3D geographical information platform network of Taiwan is connected with the back-end data warehouse to provide users with direct access to the platform's cloud services through various application software over the Internet; the provided information includes the image layers of the WMS service, an electronic map from the Web Feature Service (WFS) [5], key data points, and topographic elevation data. The release of near real-time information notifications and the integrated displays of the front-end 3D geographical information platform are also implemented using this service architecture, allowing users of relevant applications to acquire updated information as a reference for decision-making. Thus far, the accumulated data has reached approximately 14 TB, comprising 80 image layers; the data includes images of disaster areas and relevant information on past disasters acquired by satellite and aerial photography.

Release of the images is conducted through a network with standardized WMS (Figure 15). WMS delivers the images required by clients in segments, and the data is safely stored in the warehouse system; the segmented images are subsequently assembled before being released. With the support of the near real-time automatic image processing mechanism established by the National Space Organization, the release of critical data can be accelerated. A subsequent aim is to develop automatic image uploading and cloud platforms to automatically and directly display the received images on the WMS server of the cloud services architecture. The system can then release an image in near real-time once the image is updated, allowing users to browse the images over the network. The overall cloud services architecture is shown in Figure 16. Complete cloud services [3] are constructed to provide the various services and data required by general platform interworking applications and to establish a template for relevant applications to facilitate external promotion.

Figure 15. Standardized method of releasing WMS images.
Figure 16. The architectures of the data warehouse and cloud services.
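The segmented delivery described above can be pictured as splitting a requested bounding box into a grid of sub-requests that are fetched independently and then reassembled. The sketch below shows only the bounding-box arithmetic; the grid size and coordinates are illustrative, not the platform's actual tiling scheme.

```python
def tile_bbox(bbox, nx, ny):
    """Split (min_x, min_y, max_x, max_y) into an nx-by-ny grid of
    sub-bounding-boxes, one per WMS segment request, ordered row by
    row from the lower-left corner."""
    min_x, min_y, max_x, max_y = bbox
    dx = (max_x - min_x) / nx
    dy = (max_y - min_y) / ny
    tiles = []
    for j in range(ny):
        for i in range(nx):
            tiles.append((min_x + i * dx, min_y + j * dy,
                          min_x + (i + 1) * dx, min_y + (j + 1) * dy))
    return tiles

# Split Taiwan's approximate extent into a 4x4 grid of segments.
tiles = tile_bbox((120.0, 21.8, 122.1, 25.4), 4, 4)
```

Each sub-box would become one GetMap request; stitching the returned images side by side in the same row-by-row order reconstructs the full map.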

Near Real-Time and Automated Processing of Satellite Images

Five-band wide-area imaging is a new image acquisition function developed after the launch of Formosat-2, to which the National Space Organization has added a parallel processing mechanism [3]. This eliminated the overloading of the original system caused by five-band wide-range imaging and reduced the processing time to that of single-band imaging. Products generated by the Formosat-2 image processing system at the level-1A and level-2 stages are sufficient for preliminary discrimination in disaster prevention and rescue procedures. However, additional orthophoto processing is required to combine further geographical information, such as roads and boundaries between counties and cities, or to support 3D displays of overlaid elevation data in this platform system. The National Space Organization significantly reduced the time required to generate orthophotos by introducing an orthophoto processing accelerator hardware kit previously used by the U.S. military and by automating the image matching steps that previously required human intervention. The time required to generate an orthophoto was reduced from approximately 15 min to less than 1 min. Once the orthophotos are complete, the images are released by the WMS release system and all data can be integrated in the 3D space information application platform. The advantage of this system is that the processing time is minimal and controllable. The limitation, however, is that the processing unit is scene-based and is therefore only appropriate for small areas and rapid imaging of disaster areas. Image processing can be divided into six time stages from the initiation of image processing to image release, which together comprise the total time required for processing and releasing satellite images (Figure 17).
The major processing time at the current stage comprises the parallel processing of multiple-band images, orthophoto processing with accelerator hardware, and the release of image data by WMS.

Figure 17. A schematic diagram of the near real-time image processing mechanism of Formosat-2.
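The staged workflow above can be sketched as a sequential pipeline in which each stage's wall-clock time is recorded, so the total processing-and-release time can be attributed per stage. This is a minimal illustration only; the stage names and the worker callback are assumptions, not the National Space Organization's actual implementation.

```python
import time

# Hypothetical six-stage pipeline from the start of image processing to WMS
# release; the stage names are illustrative labels, not NSPO's actual ones.
STAGES = [
    "raw data download",
    "multiband parallel processing (level 1A)",
    "geometric and radiometric correction (level 2)",
    "hardware-accelerated orthophoto generation",
    "tiling and format conversion",
    "WMS release",
]

def run_pipeline(worker, stages=STAGES):
    """Run each stage in order, recording its elapsed wall-clock time so the
    total processing-and-release time can be broken down per stage."""
    timings = {}
    for stage in stages:
        start = time.perf_counter()
        worker(stage)  # perform the actual work for this stage
        timings[stage] = time.perf_counter() - start
    return timings
```

In a real deployment the multiband stage would itself fan out across parallel workers, as described above; the sequential sketch only captures the per-stage timing bookkeeping.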

Consider, for example, the flooding in Kaohsiung caused by Typhoon Fanapi in 2010 (Figure 18). A near real-time Formosat-2 image releasing exercise was conducted using multiband parallel processing and orthophoto hardware accelerators: after three-band wide area imaging, 14 scenes covering the flooded areas in Kaohsiung were processed in near real time. The entire procedure met the goal of downloading the images and delivering the products within 6 h, providing disaster prevention and rescue units with images for identification and analysis in the shortest possible time after the disaster.


Figure 18. Near real-time image processing of flooding in Kaohsiung during Typhoon Fanapi.

As another example, consider the magnitude 9.0 megathrust earthquake that occurred off the coast of northeastern Japan on March 11, 2011. The epicenter was located in the Pacific Ocean east of Sendai City, the capital of Miyagi Prefecture; the focal depth was measured at 24.4 km, and the tsunami it triggered reached a height of up to 38.9 m. This was the largest earthquake since the start of earthquake records in Japan, and the tsunami it triggered was also the most severe. Additionally, fires and a nuclear leakage accident paralyzed functions nationwide and suspended economic activities, and several cities in northeastern Japan were destroyed. The National Space Organization tasked Formosat-2 with photographing the affected areas in Japan to support the disaster response. Because Japan is located at a relatively high latitude, the imaging angles were large, reducing the resolution of the acquired images; nevertheless, the images were supplied to the relevant Japanese units. The nuclear leakage accident at the Fukushima nuclear power plant increased the severity of the situation, and the National Space Organization also provided satellite images of these areas to the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Advanced Industrial Science and Technology (AIST). The National Space Organization further cooperated with the Center for Space and Remote Sensing Research of National Central University to process orthophotos, and the National Center for High-Performance Computing simulated the immersive 3D terrain environment around Fukushima Nuclear Power Plant 3 (Figure 19(a)). Images captured before and after the disaster were compared: in Figure 19(b), localized flooding caused by the tsunami can be identified, whereas the pre-disaster satellite image, taken from NASA Landsat data [16], shows no stagnant water.

The images taken by the Formosat-2 satellite revealed that the coastline near Sendai City had receded by approximately 5 km after the tsunami (Figure 19(c)).
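The before/after comparison used to identify flooded areas can be illustrated with a toy differencing step: pixels that darken sharply between the pre- and post-event images are flagged as candidate standing water. The pixel values, threshold, and function name below are illustrative assumptions; real processing operates on co-registered multiband orthophotos, not raw brightness.

```python
# A minimal sketch of before/after change detection for flood mapping.
# Pixel values and the threshold are illustrative only.

def flooded_mask(before, after, threshold=30):
    """Mark pixels whose brightness dropped sharply between the two images,
    a crude proxy for newly standing water (water appears dark in
    near-infrared bands)."""
    return [
        [(b - a) > threshold for b, a in zip(row_b, row_a)]
        for row_b, row_a in zip(before, after)
    ]

# Toy 3x3 "images": land stays at ~100, flooded pixels darken to ~20.
before = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
after  = [[100,  20, 100], [ 20,  20, 100], [100, 100, 100]]
mask = flooded_mask(before, after)
# mask[0][1], mask[1][0], and mask[1][1] are True; all other pixels are False.
```

Operational change detection would additionally require radiometric normalization and cloud masking, but the core idea, per-pixel comparison of co-registered pre- and post-event scenes, is the same.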



Figure 19. Near real-time processing of images taken by Formosat-2 during the Japan 311 Earthquake: (a) A 3D satellite image of the Fukushima Nuclear Power Plant 3; (b) Identification of the flooded areas caused by the tsunami using image comparisons; (c) Sendai City before and after the earthquake.


This study vertically integrated cross-field innovative technologies, including multiscale earth observation data, multiscale 3D display technologies, data warehouses, grids, cloud services, and disaster prevention applications. It developed a near real-time, high-resolution 3D earth observation application platform that enables near real-time acquisition of remote sensing images after disasters and processes and releases them through integration procedures compliant with international standards. The study outcomes have been successfully employed during domestic natural disasters, such as typhoons, floods, and earthquakes, in recent years; the platform was also used to support Japan's decision-making following the 311 Earthquake. Its capability to respond to natural disasters will be continuously augmented through practical exercises, and the platform is expected to be extended for use by industry, government, and academia.


  1. X. Ge and H. Wang, "Cloud-based service for big spatial data technology in emergence management," in International Conference on Geo-spatial Solutions for Emergency Management and the 50th Anniversary of the Chinese Academy of Surveying and Mapping, Beijing, China, 2009, pp. 126-129.
  2. W. F. Tsai, W. Huang, F. P. Lin, B. Hung, Y. T. Wang, S. Shiau, S. C. Lin, C. H. Hsieh, H. E. Yu, Y. L. Pan, and C. L. Huang, "The human‐centered cyberinfrastructure for scientific and engineering grid applications," Journal of the Chinese Institute of Engineers, vol. 31, pp. 1127-1139, 2008.
    doi: 10.1080/02533839.2008.9671468
  3. W. F. Tsai, B. Chen, and C. H. Chang, "Development of cloud-based 3D GIS platform for near real-time image data pipeline processing," NGIS Quarterly, vol. 74, pp. 39-51, 2010.
  4. W. F. Tsai, "Achievement report of government science and technology project: On the development of high resolution 3D demo platform," National Applied Research Laboratories, Taiwan, 2011.
  5. "OGC® standards and specifications," Open Geospatial Consortium.
  6. C. Lee and G. Percivall, "Standards-based computing capabilities for distributed geospatial applications," Computer, vol. 41, no. 11, pp. 50-57, 2008.
    doi: 10.1109/MC.2008.468
  7. National Space Organization, Taiwan. Formosat-2. Available:
  8. M. F. Chen, J. Y. Lai, L. J. Lee, and T. M. Huang, "Defective CCDs detection and image restoration based on inter-band radiance interpolation for hyperspectral imager," 2010, p. 78570W.
    doi: 10.1117/12.869480
  9. H. S. Sawhney, Y. Guo, K. Hanna, R. Kumar, S. Adkins, and S. Zhou, "Hybrid stereo camera: An IBR approach for synthesis of very high resolution stereoscopic image sequences," New York, USA, 2001, pp. 451-460.
    doi: 10.1145/383259.383312
  10. Visualization and Interactive Media Laboratory of NCHC. 3D VR engine. Available:
  11. P. J. Bos, "Performance limits of stereoscopic viewing systems using active and passive glasses," 1993, pp. 371-376.
    doi: 10.1109/VRAIS.1993.380756
  12. Sony Electronics Inc. Sony SRXT420 ultra-high resolution projector. Available:
  13. NVIDIA Corporation. nVIDIA 3D VR engine technology. Available:
  14. R. Azuma, "A survey of augmented reality," Presence: Teleoperators and Virtual Environments, 1997.
  15. National Aeronautics and Space Administration. NASA world wind. Available:
  16. National Aeronautics and Space Administration. NASA the landsat program. Available:



Copyright © 2011-2017  AUSMT   ISSN: 2223-9766