How to choose the image

First of all, you must determine the purpose for which you need a satellite image. Today there are a huge number of remote sensing spacecraft in orbit, and each of them has its own unique characteristics.

Let's define the basic concepts that will help you choose the right satellite imagery.

Spatial resolution - the size of the smallest feature that can be detected, or the size of a pixel recorded in a raster image. Pixels typically correspond to square ground areas with side lengths ranging from 0.3 m (high resolution) to 1,000 m (low resolution).

The same picture with different spatial resolutions
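As a minimal sketch of what spatial resolution means in practice, the pixel size and the image dimensions together determine the ground footprint of a scene. The numbers below are illustrative, not tied to any particular sensor:

```python
# Sketch: relate spatial resolution (pixel size) to ground coverage.
# All values are hypothetical examples.

def ground_coverage_m(pixel_size_m, width_px, height_px):
    """Ground footprint (width, height) in metres covered by a raster image."""
    return pixel_size_m * width_px, pixel_size_m * height_px

# A 0.3 m high-resolution image of 10,000 x 10,000 pixels
# covers a 3 km x 3 km square on the ground.
w, h = ground_coverage_m(0.3, 10_000, 10_000)
print(w, h)  # 3000.0 3000.0
```

The same 10,000-pixel-wide image at 30 m resolution would cover 300 km, which is why low-resolution sensors are used for wide-area monitoring and high-resolution ones for detailed mapping.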
Schematic arrangement of the eight spectral bands in WorldView-2 images

Spectral resolution - in the first instance, a sensor's spectral resolution specifies the number of spectral bands in which the sensor can collect reflected radiance. The finer the spectral resolution, the narrower the wavelength range for a particular channel or band. Panchromatic sensors detect broadband light across the entire visible range, and signal intensities are displayed as grey levels, i.e., black-and-white imagery. Many remote sensing systems record energy over several separate wavelength ranges at various spectral resolutions; these are referred to as multispectral sensors. Advanced multispectral sensors, called hyperspectral sensors, detect hundreds of very narrow spectral bands throughout the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum.
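A classic illustration of why separate spectral bands matter is the Normalized Difference Vegetation Index (NDVI), which contrasts a sensor's red and near-infrared bands to highlight vegetation. The sketch below uses hypothetical reflectance values, not data from any specific satellite:

```python
import numpy as np

# Sketch: using two spectral bands of a multispectral image.
# NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel.

def ndvi(red, nir):
    """Normalized Difference Vegetation Index for matching band arrays."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red)

red = np.array([[50.0, 120.0]])   # hypothetical red-band values
nir = np.array([[200.0, 130.0]])  # hypothetical NIR-band values
print(ndvi(red, nir))  # [[0.6  0.04]]
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so vegetated pixels give NDVI values close to +1, while bare soil and water give values near or below zero. A panchromatic image, with its single broad band, cannot support this kind of analysis.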

Radiometric resolution - specifies how well differences in brightness in an image can be perceived; it is measured by the number of grey-value levels. The maximum number of values is defined by the number of bits. An 8-bit representation has 256 grey values; a 16-bit representation (ERS satellites) has 65,536 grey values. The finer, or higher, the radiometric resolution, the better small differences in reflected or emitted radiation can be measured, and the larger the volume of measured data will be.
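The relationship between bit depth and grey levels is simply a power of two, which the following sketch makes explicit (the quantization helper is an illustrative example, not a description of any sensor's actual processing chain):

```python
# Sketch: radiometric resolution as the number of representable grey levels.

def grey_levels(bits):
    """Number of distinct grey values for a given bit depth."""
    return 2 ** bits

def quantize(radiance, bits):
    """Map a normalized radiance in [0, 1] to an integer grey level."""
    return round(radiance * (grey_levels(bits) - 1))

print(grey_levels(8))        # 256
print(grey_levels(16))       # 65536
print(quantize(0.5, 8))      # 128 -- mid-grey in an 8-bit image
```

Doubling the bit depth does not double the number of levels; it squares it, which is why 16-bit products capture far subtler radiance differences than 8-bit ones while also occupying twice the storage per pixel.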

Nadir distance
Stereopair from WorldView-2 in anaglyph mode (red-blue glasses are needed for viewing)

Nadir - the point on the ground vertically beneath the perspective center of the camera lens. Off-nadir - any point not directly beneath the scanner's detectors, but off at an angle. Significant deviations from nadir greatly complicate the processing and vectorization of images.
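To get a feel for the geometry, the horizontal distance between the sub-satellite point and the imaged point can be estimated from the off-nadir angle and the orbital altitude. This flat-Earth approximation is only a rough sketch, and the 770 km altitude used in the example is an assumption of roughly where a high-resolution satellite such as WorldView-2 orbits:

```python
import math

# Sketch: ground offset of the imaged point for a given off-nadir angle,
# using a flat-Earth approximation: offset = altitude * tan(angle).

def ground_offset_km(altitude_km, off_nadir_deg):
    """Horizontal distance from nadir to the imaged point, in km."""
    return altitude_km * math.tan(math.radians(off_nadir_deg))

print(ground_offset_km(770, 0))   # 0.0 -- looking straight down
print(ground_offset_km(770, 20))  # roughly 280 km off to the side
```

The steep growth of the tangent also explains why large off-nadir angles complicate processing: the viewing geometry becomes increasingly oblique, stretching pixels and hiding terrain behind tall objects.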

Stereopair - two aerial photographs of the same area taken from slightly different angles that, when viewed together through a stereoscope, produce a three-dimensional image. Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Red-cyan filters can be used because our vision processing systems use red and cyan comparisons, as well as blue and yellow, to determine the color and contours of objects. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" anaglyph glasses, each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into the perception of a three-dimensional scene or composition.
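The encoding described above can be sketched in a few lines: for red-cyan glasses, the left view supplies the red channel and the right view supplies the green and blue (cyan) channels. This is a minimal illustration for aligned RGB arrays, not a production stereo pipeline:

```python
import numpy as np

# Sketch: building a red-cyan anaglyph from a co-registered stereopair.
# left_rgb and right_rgb are (H, W, 3) arrays of the same shape.

def make_anaglyph(left_rgb, right_rgb):
    """Red channel from the left view, green and blue from the right."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red -> left eye (red filter)
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green+blue -> right eye (cyan filter)
    return anaglyph
```

Because each filter blocks the other eye's channels, each eye sees only its own view, and the brain fuses the pair into a single three-dimensional scene.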

CE90 - the circular error at the 90th percentile. This means that at least 90 percent of the measured points have a horizontal error smaller than the stated CE90 value.
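Given a set of checkpoint measurements, CE90 can be estimated as the 90th percentile of the horizontal (radial) errors. The sketch below assumes the per-point errors are given as east and north offsets in metres:

```python
import numpy as np

# Sketch: estimating CE90 from per-point horizontal error components.

def ce90(dx_m, dy_m):
    """90th percentile of the radial horizontal error, in metres."""
    radial = np.hypot(dx_m, dy_m)   # sqrt(dx^2 + dy^2) per point
    return np.percentile(radial, 90)

# Hypothetical errors for ten checkpoints, purely east-west for simplicity.
dx = np.arange(1.0, 11.0)  # 1 m .. 10 m
dy = np.zeros(10)
print(ce90(dx, dy))  # 9.1 -- 90% of points have a smaller horizontal error
```

A product quoted with CE90 = 5 m therefore promises that, when checked against ground truth, at least nine out of ten points land within 5 m horizontally of their true position.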

We have tried to describe the main characteristics that will help you with future orders. In any case, we recommend contacting our experts, who will help you make the right choice.