Universität Bonn

Center for Remote Sensing of Land Surfaces (ZFL)

Using Radar Data to Detect Floods

Flood Module

This tutorial uses Sentinel-1 radar data in conjunction with ESA’s SNAP software. To download Sentinel-1 data in GRD format, please follow the instructions described in the general module. The ESA SNAP software can be downloaded here.
Pre-processing
Preprocessing is necessary before the actual flood assessment. Here is an outline of the general pre-processing steps:

Orbit file application
This step is necessary for accurate geocoding. The orbit ephemerides encoded in the imagery are not very accurate, so the precise orbit parameters processed after image acquisition must be applied. SNAP can automatically download the precise satellite ephemerides for the input image (Figure 1).

To apply the precise orbit file, click Radar > Apply Orbit File. In the new window, under the Processing Parameters tab:

  • Select Sentinel Precise (Auto Download) in the Orbit State Vectors drop down menu.
  • Check “Do not fail if new orbit file is not found”.

Running the Orbit File tool creates a new image with the precise orbit ephemerides applied.

Figure 1 Orbit file application.gif
© Figure 1 Orbit file application

Converting from Digital Numbers (DN) to backscatter
Sentinel-1 images are provided as Digital Numbers (DN), raw amplitude values with a skewed distribution and no physical unit, which makes statistical evaluation difficult. To solve this, the DN values are calibrated to backscatter coefficients such as sigma0, which can then be expressed on a logarithmic decibel (dB) scale for easier analysis. The conversion is mission specific and relies on the look-up tables provided with Sentinel-1 Level-1 products. SNAP automatically detects what kind of input product has been loaded and which conversion needs to be applied based on the product’s metadata.
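The arithmetic behind the calibration and the dB scaling can be sketched in a few lines of numpy. This is illustrative only, not SNAP’s implementation: `a_cal` stands in for the per-pixel gain that SNAP interpolates from the sigma0 look-up table in the product annotation.

```python
import numpy as np

def dn_to_sigma0_db(dn, a_cal):
    """Convert raw digital numbers to calibrated backscatter in dB.

    dn    : array of digital numbers (raw amplitude values)
    a_cal : calibration gain; a single hypothetical constant here,
            whereas SNAP interpolates a per-pixel value from the
            sigma0 look-up table shipped with the Level-1 product
    """
    sigma0 = dn.astype(float) ** 2 / a_cal ** 2  # linear backscatter
    return 10.0 * np.log10(sigma0)               # logarithmic dB scale

# A DN equal to the calibration gain maps to sigma0 = 1, i.e. 0 dB
print(dn_to_sigma0_db(np.array([100.0, 1000.0]), a_cal=1000.0))
```

The logarithmic scale compresses the long right tail of the linear values, which is what makes the histogram-based steps later in this tutorial practical.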

To make this conversion in SNAP, go to Radar > Radiometric > Calibrate.

  • In the pop up window,

            o  Under I/O parameters specify the source and target products.

            o  Under the Processing Parameters tab, select both VH and VV polarizations and check the output sigma0 band (Figure 2).

Figure 2 Radiometric Calibration.gif
© Figure 2 Radiometric Calibration

Image Subset
The downloaded imagery is usually a scene covering the whole satellite footprint. For better analysis and reduced computation time, it is recommended to crop the image to the area of interest. To perform cropping in SNAP, go to Raster > Subset (Figure 3).

In the Subset window, you are presented with four types of subset: Spatial, Band, Tie-point grid and Metadata. For cropping to the study area, we choose the spatial subset. The extent of the study area can be specified by moving and resizing the focus window to fit the study area, or by entering pixel or geographic coordinates. Geographic coordinates should only be used after geometric correction of the image has been performed.

Figure 3 Image subset.gif
© Figure 3 Image subset
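In terms of raster data, a pixel-coordinate subset is plain array slicing. A minimal numpy sketch (the function and parameter names are ours, chosen only to mirror SNAP’s pixel-coordinate fields):

```python
import numpy as np

def pixel_subset(img, x, y, width, height):
    """Crop a raster by pixel coordinates.

    x, y          : column/row of the upper-left corner of the subset
    width, height : size of the subset in pixels
    """
    return img[y:y + height, x:x + width]

scene = np.arange(100).reshape(10, 10)   # stand-in for a full scene
aoi = pixel_subset(scene, x=2, y=3, width=4, height=5)
print(aoi.shape)  # (5, 4): rows = height, columns = width
```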

Geometric Correction/Terrain Correction
Sentinel-1 GRDH products do not carry geographic coordinates but pixel coordinates. For further applications, it is necessary to transform the pixel coordinates into geographic coordinates.

SNAP provides several ways to perform geometric correction; the most appropriate is the Range Doppler approach (Figure 4). It uses satellite orbit information and topographic information, usually acquired from a digital elevation model (DEM), to correct topographic distortions and derive a precise geolocation for each pixel.

Figure 4 Speckle filter.gif
© Figure 4 Speckle filter

De-speckling or Speckle Filtering
Speckle appears in SAR images as a grainy, salt-and-pepper pattern caused by the interference of the signals backscattered from the many scatterers within each resolution cell. It reduces the capability to identify objects in SAR images, so filtering speckle is a necessary pre-processing step.

Several algorithms for speckle filtering have been developed. The most common is the Lee filter and its modifications, such as the Refined Lee filter, which suppress speckle while preserving image sharpness and detail. The Lee filter updates the value of each pixel based on local statistics computed in a square window around it; the window size controls how strongly the image is smoothed.

To perform speckle filtering in SNAP, go to Radar > Speckle Filtering > Single Product Speckle Filter (Figure 5).

Figure 5 Speckle filter options.jpg
© Figure 5 Speckle filter options

After specifying the input image and the directory for the filtered output, go to the Processing Parameters tab, select the source bands and choose the filter you wish to use. Each filter has its own parameters; for example, the Lee filter requires the size of the scan window via Filter Size X and Filter Size Y (both odd numbers). One parameter common to several filtering algorithms is the number of looks, which specifies the effective number of looks used to estimate the noise variance and thereby controls the amount of smoothing: a smaller value results in more smoothing and less preservation of distinct features.
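The windowed update described above can be condensed into a didactic numpy sketch of the basic Lee filter. This is not SNAP’s implementation; it only illustrates how window size and number of looks interact under a multiplicative noise model.

```python
import numpy as np

def lee_filter(img, size=3, num_looks=1):
    """Minimal Lee speckle filter (multiplicative noise model).

    size      : odd window side length (Filter Size X/Y in SNAP)
    num_looks : effective number of looks; the assumed relative noise
                variance is 1/num_looks, so smaller values smooth more
    """
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    noise_var = 1.0 / num_looks  # relative variance of the speckle
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + size, c:c + size]
            mean, var = win.mean(), win.var()
            # Estimated variance of the underlying (noise-free) signal
            sig_var = max(var - (mean ** 2) * noise_var, 0.0)
            k = sig_var / var if var > 0 else 0.0
            out[r, c] = mean + k * (img[r, c] - mean)
    return out
```

On a flat area the local variance is explained by noise alone, the gain `k` drops to zero and the output is the plain window mean; on an edge the variance is high, `k` approaches one and the pixel is left almost untouched, which is how detail is preserved.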

Flood assessment

RGB Composites
RGB composites are useful for visualizing flood extents by means of a temporal color image. This technique requires two images of the area of interest: one acquired before the flood event, referred to as the archive image, and one acquired during the flood event, referred to as the crisis image. The underlying principle is that water surfaces have a low backscatter compared to non-water surfaces, since most of the signal is reflected away from the sensor.

The technique is based on the changes in backscatter between the pre-flood and flood images and on a band combination that makes these differences stand out visually. The differences are defined by the change in intensity of the same pixel on different dates.

To apply this technique, both images must be pre-processed. The steps described in the previous section generally apply; however, some of them need to be done synchronously to ensure that the images have the same properties before comparison. During subsetting, the crisis and the archive image should be opened in the same window and zoomed to the extent of the area of interest. To do this in SNAP, open the two images in the viewer and select Window > Tile Evenly; to ensure synchronous panning, make sure Synchronize image views is checked in the Views tab (Figure 6).

Figure 6 Synchronized views.jpg
© Figure 6 Synchronized views

Once the preprocessing steps outlined above have been applied to the images, a stack is created. Since the images have been terrain corrected and projected onto a map system, we can use the geo-coordinates to overlay them. To do this in SNAP, go to Radar > Co-registration > Stack Tools > Create Stack (Figure 7).

Figure 7 Stack creation.jpg
© Figure 7 Stack creation

In the coregistration window (Figure 8), select the input images you want to stack.

Figure 8 Coregistration window.gif
© Figure 8 Coregistration window

In the CreateStack tab (Figure 9), under Initial Offset Method, select Product geolocation.

Figure 9 CreateStack tab.jpg
© Figure 9 CreateStack tab

As a first visualization, a crisis image and an archive image can be compared by overlaying both images and changing the transparency of one of them with the slider in the Layer Manager window (Figure: Layer Manager window and slider). To overlay two images in the same viewer, go to Layer Manager > Add Layer and, in the pop-up window, choose the layer type you wish to add, in this case Image of Band/Tie-point Grid (Figure 10). Once the layer type is specified, the available images and bands are displayed in a pop-up window; select the desired band from the list (Figure 11).

Figure Layer Manager window and slider.jpg
© Figure Layer Manager window and slider
Figure 10 Type of layer selection.gif
© Figure 10 Type of layer selection
Figure 11 Choosing a band to add to the viewer.gif
© Figure 11 Choosing a band to add to the viewer

To observe the change, which should correspond to the flooded areas, turn off one of the layers or use the transparency slider.

We can also visualize the change by generating an RGB composite in SNAP: right-click on one of the images (archive/crisis) > Open RGB Image View, then specify the images to assign to the different RGB channels (Figure 12).

Figure 12 Assigning bands to RGB color channels.gif
© Figure 12 Assigning bands to RGB color channels.

The archive image is assigned to the R channel, and the crisis image is assigned to the G and B channels. Different land covers are visualized in specific colors depending on the values in each of the bands. Water surfaces that are stable on both dates, such as rivers, lakes and oceans, appear dark, since backscatter is low in both the archive and the crisis image. The flooded areas are visualized in red, since the archive image contributes high pre-flood values to the red channel while the crisis image contributes low values to the green and blue channels (see Figure 13).

Figure 13 Visualization of flooded areas.gif
© Figure 13 Visualization of flooded areas
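The compositing logic can be sketched in numpy. The scaling limits `vmin`/`vmax` are assumed display values in dB, not SNAP defaults; the point is only that the channel fed by the archive image carries the high pre-flood backscatter over flooded areas, which is what renders them red.

```python
import numpy as np

def flood_rgb(archive_db, crisis_db, vmin=-25.0, vmax=0.0):
    """Build a temporal color composite from two dB images.

    The archive (pre-flood) image feeds the red channel; the crisis
    image feeds green and blue. Flooded pixels were bright before the
    flood and dark during it, so red dominates there.
    vmin/vmax are assumed display limits for scaling dB to [0, 1].
    """
    def scale(band):
        return np.clip((band - vmin) / (vmax - vmin), 0.0, 1.0)

    r = scale(np.asarray(archive_db, dtype=float))
    g = scale(np.asarray(crisis_db, dtype=float))
    return np.dstack([r, g, g])  # green and blue share the crisis image

archive = np.array([[-8.0, -20.0]])   # land pixel, permanent-water pixel
crisis = np.array([[-20.0, -20.0]])   # same pixels during the flood
rgb = flood_rgb(archive, crisis)
# flooded pixel -> reddish (high R, low G/B); permanent water -> dark
```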

Calibration Threshold Technique
The thresholding approach is based on the distinct difference between the backscatter of non-water and water surfaces (Figure 14). The smooth surface formed by a sheet of water reflects the signal away from the sensor, pushing the backscatter intensity towards even more negative dB values. Water surfaces can therefore be distinguished by a threshold. The threshold can be determined locally, based on the specific characteristics of the flood event, or alternatively a global threshold can be used, based on the expected backscatter of water surfaces, which is known to be between -1.0 and -22 dB in VV polarization.

In general, the steps involved in mapping flooded areas using the calibration threshold technique are: calibrating the image to sigma0 backscatter values, performing speckle filtering, and lastly creating a mask of water bodies by exploiting the difference in backscatter between water and land surfaces.

Once the flood image has been pre-processed as described in the previous section, the threshold can be determined by visualizing the distribution of the backscatter intensity values in a histogram. The histogram shows two peaks: the smaller peak represents the water pixels, while the larger peak represents the non-water pixels. The threshold can be chosen as a value between the two peaks.

Figure 14 Image and histogram.gif
© Figure 14 Image and histogram
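This tutorial selects the threshold visually from the histogram. For reference, a common automated alternative, not part of the SNAP workflow described here, is Otsu’s method, which picks the cut that best separates two classes; a numpy sketch:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Automatic two-class threshold (Otsu's method).

    Scans all histogram cut points and returns the one that maximizes
    the between-class variance of the two resulting groups.
    """
    counts, edges = np.histogram(np.asarray(values).ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = counts.sum()
    sum_all = (counts * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = 0.0   # weight (pixel count) of the lower class
    sum0 = 0.0  # intensity sum of the lower class
    for i in range(bins - 1):
        w0 += counts[i]
        sum0 += counts[i] * centers[i]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var = between
            best_t = (centers[i] + centers[i + 1]) / 2.0
    return best_t
```

On a bimodal dB histogram such as Figure 14, the returned value falls in the valley between the water and non-water peaks, mirroring the manual choice.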

Once the threshold is determined, the next step is binarization of the image. To do this in SNAP, go to Raster > Band Maths.

In the subsequent pop-up window (Figure 15), type in the name of the new band that will be generated. Uncheck the Virtual checkbox so that a real band is written to the product rather than one computed on the fly. Click on the Edit expression button to enter the expression that separates the water pixels from the non-water pixels. In this case we want water pixels to have a value of 1 and non-water pixels a value of 0 (Figure 16).

Figure 15 Selection of the target product and specification of the attributes of the resulting binarized band.gif
© Figure 15 Selection of the target product and specification of the attributes of the resulting binarized band
Figure 16 Binarization expression.gif
© Figure 16 Binarization expression
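In Band Maths syntax, the expression takes a form such as `Sigma0_VV_db < -18 ? 1 : 0`, where both the band name and the threshold value are hypothetical placeholders to be replaced by your own band and the value read from the histogram. The same operation as a numpy sketch:

```python
import numpy as np

def binarize_water(sigma0_db, threshold_db):
    """Water mask: 1 where backscatter is below the threshold, else 0."""
    return (np.asarray(sigma0_db) < threshold_db).astype(np.uint8)

tile = np.array([[-22.5, -7.0],
                 [-19.1, -4.3]])  # dB values; threshold is an assumption
print(binarize_water(tile, threshold_db=-18.0))  # water = 1, land = 0
```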

A new band with the specified name will be generated. You can view the generated image in the viewer window.

The generated image still uses sensor (pixel) coordinates; for further processing, it is necessary to transform them into geographic coordinates. To do this in SNAP, go to Radar > Geometric > Terrain Correction > Range Doppler Terrain Correction. In the subsequent pop-up window, specify the bands to be reprojected (for illustration, refer to the pre-processing section).

Since the image is geometrically corrected and has world coordinates, the flooded areas can be exported as a KMZ file and visualized in Google Earth or QGIS.

To export the water band to KMZ, there are two options in SNAP: either right-click on the opened image > Export view as Google Earth KMZ (Figure 17), or export the band from the file explorer via File > Export and select the file format. It is important to note that the product is exported with the colors assigned to the different values in the Colour Manipulation window.

Figure 17 Flooded area result.jpg
© Figure 17 Flooded area result

The exported KMZ file can be loaded in Google Earth and visualized. From the visualization, we can assess how well the binarization separates the water pixels from the non-water pixels.
