Satellite image classification is one of the core techniques in remote sensing: the automated analysis and pattern recognition of satellite imagery based on the structures present in the image, which requires careful validation of the training samples for whichever ML/AI classification algorithm is used.
Satellite imagery is important for many applications, including disaster response, law enforcement, and environmental monitoring. These applications require the automated, AI-powered identification of objects and facilities in the imagery.
In this post, we focus on multi-label satellite image classification using fastai.
Satellite imagery combined with AI and deep learning is producing striking insights and discoveries in many areas. Today we look at applying this approach to recognising buildings, woodland and water areas in satellite images.
Conventionally, we use four classes for identifying objects in such images:
- Building
- Woodland
- Water
- Background (i.e. everything else).
For this multi-label image classification problem, we will use the Planet dataset, a collection of satellite images in which each image carries multiple labels describing the scene.
The entire workflow consists of the following steps:
- Grab our input data
- Train a model with fastai
- QC with fastai metrics.
Let’s set the working directory to YOURPATH
import os
os.chdir('YOURPATH')
os.getcwd()
and import the following libraries
from fastai.vision.all import *
import pandas as pd
import torch
from torch import nn
from fastcore.meta import use_kwargs_dict
from fastai.callback.fp16 import to_fp16
from fastai.callback.progress import ProgressCallback
from fastai.callback.schedule import lr_find, fit_one_cycle
from fastai.data.block import MultiCategoryBlock, DataBlock
from fastai.data.external import untar_data, URLs
from fastai.data.transforms import RandomSplitter, ColReader
from fastai.metrics import accuracy_multi
from fastai.losses import BaseLoss
from fastai.vision.augment import aug_transforms
from fastai.vision.data import ImageBlock
from fastai.vision.learner import cnn_learner
from torchvision.models import resnet34
Let’s download the input dataset
planet_source = untar_data(URLs.PLANET_SAMPLE)
df = pd.read_csv(planet_source/'labels.csv')

Let’s check the content
df.head()
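The CSV holds the two columns used below, image_name and tags, with the tags stored as a single space-separated string per image. As a quick sanity check (not part of the original walkthrough), we can count how often each individual tag occurs:
tag_counts = df['tags'].str.split().explode().value_counts()  # one row per tag occurrence
tag_counts.head(10)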

Let’s remove the rare tag combination 'blow_down clear primary road' from the dataframe, define our augmentations, and build the DataBlock
df = df[df['tags'] != 'blow_down clear primary road']
batch_tfms = aug_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
planet = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
                   get_x=ColReader(0, pref=f'{planet_source}/train/', suff='.jpg'),
                   splitter=RandomSplitter(),
                   get_y=ColReader(1, label_delim=' '),
                   batch_tfms=batch_tfms)
dls = planet.dataloaders(df)
and show a batch of nine images with their labels
dls.show_batch(max_n=9, figsize=(12,9))
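To confirm that the space-separated tags were parsed into individual labels, we can inspect the vocabulary the MultiCategoryBlock built (a quick check, not in the original post):
len(dls.vocab), dls.vocab  # number of distinct tags and their names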

We can also supply get_x and get_y as lambda functions
blocks = (ImageBlock, MultiCategoryBlock)
get_x = lambda x: planet_source/'train'/f'{x[0]}.jpg'
Each row holds the image name and its space-separated tags:
val = df.values[0]; val
array(['train_21983', 'partly_cloudy primary'], dtype=object)
get_x(df.values[0])
get_y = lambda x: x[1].split(' ')
planet = DataBlock(blocks=blocks,
                   get_x=get_x,
                   splitter=RandomSplitter(),
                   get_y=get_y,
                   batch_tfms=batch_tfms)
dls = planet.dataloaders(df)
dls.show_batch(max_n=9, figsize=(12,9))
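One caveat with this variant: lambdas cannot be pickled, so a Learner built on a lambda-based DataBlock cannot be serialized with learn.export(). If exporting matters, plain named functions do the same job, for example:
def get_x(row): return planet_source/'train'/f'{row[0]}.jpg'   # image path from the first column
def get_y(row): return row[1].split(' ')                       # tag list from the second column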

A third option is DataBlock.from_columns, with a get_items function that returns the image paths and the split tag lists
def _planet_items(x): return (
    f'{planet_source}/train/' + x.image_name + '.jpg', x.tags.str.split())
planet = DataBlock.from_columns(blocks=(ImageBlock, MultiCategoryBlock),
                                get_items=_planet_items,
                                splitter=RandomSplitter(),
                                batch_tfms=batch_tfms)
dls = planet.dataloaders(df)
dls.show_batch(max_n=9, figsize=(12,9))
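If a DataBlock ever misbehaves, its summary method walks through every stage of the pipeline on the source data and reports where things break, which is handy for debugging (a diagnostic aid, not required for the rest of the post):
planet.summary(df)  # traces get_items, get_x/get_y and the transforms step by step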

Let’s train the model
from torchvision.models import resnet34
from fastai.metrics import accuracy_multi
learn = cnn_learner(dls, resnet34, pretrained=True, metrics=[accuracy_multi])
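accuracy_multi is the multi-label counterpart of accuracy: it applies a sigmoid to the raw outputs and compares each activation against a threshold (0.5 by default), so predictions are scored per label rather than per image. A tiny illustration on made-up tensors:
preds  = torch.tensor([[2.0, -1.0, 0.3]])   # raw logits for one image, three classes
target = torch.tensor([[1.0,  0.0, 1.0]])   # multi-hot ground truth
accuracy_multi(preds, target, thresh=0.5, sigmoid=True)  # -> 1.0, every label correct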
Next we define the loss function: a flattened version of nn.BCEWithLogitsLoss that works with one-hot-encoded multi-label targets (this mirrors the version that ships with fastai.losses).
class BCEWithLogitsLossFlat(BaseLoss):
    "Same as nn.BCEWithLogitsLoss, but flattens input and target."
    @use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None)
    def __init__(self, *args, axis=-1, floatify=True, thresh=0.5, **kwargs):
        if kwargs.get('pos_weight', None) is not None and kwargs.get('flatten', None) is True:
            raise ValueError("flatten must be False when using pos_weight to avoid a RuntimeError due to shape mismatch")
        if kwargs.get('pos_weight', None) is not None: kwargs['flatten'] = False
        super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
        self.thresh = thresh

    def decodes(self, x): return x > self.thresh
    def activation(self, x): return torch.sigmoid(x)
learn.loss_func = BCEWithLogitsLossFlat()
learn.lr_find()
SuggestedLRs(valley=0.0020892962347716093)

lr = 1e-2
learn = learn.to_fp16()
learn.fit_one_cycle(5, slice(lr))

learn.save('stage-1')
Path('models/stage-1.pth')
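Saving a checkpoint here means the frozen-stage weights can be restored at any point if the fine-tuning run below does not pan out:
learn = learn.load('stage-1')   # reload the weights saved above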
learn.unfreeze()
learn.lr_find()
SuggestedLRs(valley=7.585775892948732e-05)

learn.fit_one_cycle(5, slice(1e-5, lr/5))

learn.show_results(figsize=(15,15))
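With training done, learn.predict returns the decoded tag names above the decoding threshold, a boolean mask over the vocabulary, and the per-class probabilities. For example, using the sample image we inspected earlier (an illustrative check, not in the original post):
pred_tags, pred_mask, probs = learn.predict(planet_source/'train'/'train_21983.jpg')
pred_tags   # the predicted tag names for this image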


Summary
- Instead of cats & dogs, the Planet Competition Dataset consists of satellite images from the Amazonian region.
- The task here consists of classifying which types of land covers are present on each image. We can have multiple landcovers types present on one image.
- Here the task is a multi-label classification problem, where each image can belong to multiple classes.
- Using pre-trained models is a good practice in general.
Explore More
- fast.ai’s superresolution model on satellite imagery
- Multi-label classification using fastai
- Multi-Label Keras CNN Image Classification of MNIST Fashion Clothing
- ML/AI Wildfire Prediction using Remote Sensing Data