In my recent post about using Keras generators I was able to achieve 75% classification accuracy on the EuroSAT dataset using a very simple model. While there is a lot that could be done to improve the model itself, there is one simple change that pays off without any of the analysis work a better model would require.

In the generators post I elected to use the JPEG variant of the dataset, to avoid introducing too many new concepts at once. The alternative is to use the multispectral TIFF images from the dataset, giving the machine learning much more information to base its conclusions on.
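To get a quick sense of how much more information there is, a multispectral GeoTIFF can be inspected with Rasterio before changing anything in the pipeline. A minimal sketch, where the file path is just a placeholder for wherever your copy of the multispectral dataset lives:

import rasterio

# Placeholder path: point this at any TIFF from the multispectral EuroSAT download
with rasterio.open("EuroSAT_MS/Forest/Forest_1.tif") as img:
    print(img.count)               # number of bands: 13 for these Sentinel-2 based images
    print(img.width, img.height)   # 64 x 64 pixels per patch
    print(img.dtypes)              # one dtype per band, uint16 for this dataset

Compared to the three 8-bit channels of the JPEG variant, that is thirteen 16-bit bands per image.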

This turned out to be relatively simple to do, which surprised me, as very little information on it was available online; I mostly found posts from people asking how to get it working.

Starting with the code from my generators post (generators.py, example.py), we can simply replace the read_image function and the rest of the code will be able to process multispectral images. The new function is below.

import numpy as np
import rasterio

read_image_cache = {}

def read_image(path, rescale=None):
    # Cache on both path and rescale, so the same file read with a
    # different scaling factor is not served a stale array
    key = "{},{}".format(path, rescale)
    if key in read_image_cache:
        return read_image_cache[key]
    else:
        with rasterio.open(path) as img:
            data = img.read()
        # Rasterio returns (bands, rows, cols); move the band axis last
        # so Keras sees the usual (rows, cols, channels) layout
        data = np.moveaxis(data, 0, -1)
        if rescale is not None:
            data = data * rescale
        read_image_cache[key] = data
        return data
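One design note on the cache: it keeps every decoded image in memory for the lifetime of the process, trading RAM for the cost of repeatedly decoding TIFFs over 120 epochs. That trade works for a dataset of EuroSAT's size; for a substantially larger dataset you would want to bound the cache or drop it entirely.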

Instead of the Keras load_img function, this code uses the Rasterio library to read the image directly into a NumPy array. The function returns a 3D array whose depth equals the number of bands in the image.
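To make the shape concrete, here is what calling the new function on one of the multispectral images looks like. The path is a placeholder and the rescale factor is purely illustrative; pick whatever normalisation your model expects:

data = read_image("EuroSAT_MS/Forest/Forest_1.tif", rescale=1.0 / 10000)
print(data.shape)  # (64, 64, 13): rows, columns, one channel per spectral band
print(data.dtype)  # float64 after rescaling; the raw uint16 values if rescale is None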

Making that change and running the same test as in the generators post gives the following results:

Epoch 1/120
125/125 [==========] - 49s 394ms/step - loss: 2.7323 - acc: 0.2477 - val_loss: 1.7014 - val_acc: 0.3857
Epoch 2/120
125/125 [==========] - 49s 392ms/step - loss: 1.3841 - acc: 0.4800 - val_loss: 1.2359 - val_acc: 0.5559
Epoch 3/120
125/125 [==========] - 49s 393ms/step - loss: 1.0834 - acc: 0.5998 - val_loss: 1.1012 - val_acc: 0.5928
Epoch 4/120
125/125 [==========] - 49s 392ms/step - loss: 0.8800 - acc: 0.6778 - val_loss: 0.8057 - val_acc: 0.7107
Epoch 5/120
125/125 [==========] - 49s 393ms/step - loss: 0.7929 - acc: 0.7115 - val_loss: 0.7359 - val_acc: 0.7394
Epoch 6/120
125/125 [==========] - 49s 392ms/step - loss: 0.7211 - acc: 0.7380 - val_loss: 0.7304 - val_acc: 0.7544
Epoch 7/120
125/125 [==========] - 49s 393ms/step - loss: 0.6667 - acc: 0.7578 - val_loss: 0.7604 - val_acc: 0.7031
Epoch 8/120
125/125 [==========] - 49s 393ms/step - loss: 0.6208 - acc: 0.7830 - val_loss: 0.6004 - val_acc: 0.7833
Epoch 9/120
125/125 [==========] - 49s 392ms/step - loss: 0.6095 - acc: 0.7867 - val_loss: 0.6019 - val_acc: 0.7885
Epoch 10/120
125/125 [==========] - 49s 393ms/step - loss: 0.5913 - acc: 0.7905 - val_loss: 0.5670 - val_acc: 0.7961
...
Epoch 90/120
125/125 [==========] - 48s 384ms/step - loss: 0.2038 - acc: 0.9375 - val_loss: 0.3243 - val_acc: 0.8854
Epoch 91/120
125/125 [==========] - 48s 382ms/step - loss: 0.2064 - acc: 0.9315 - val_loss: 0.3140 - val_acc: 0.8943
Epoch 92/120
125/125 [==========] - 48s 384ms/step - loss: 0.2059 - acc: 0.9325 - val_loss: 0.3232 - val_acc: 0.8870
Epoch 93/120
125/125 [==========] - 48s 382ms/step - loss: 0.1994 - acc: 0.9345 - val_loss: 0.3165 - val_acc: 0.8900
Epoch 94/120
125/125 [==========] - 48s 382ms/step - loss: 0.2030 - acc: 0.9375 - val_loss: 0.3013 - val_acc: 0.8970
Epoch 95/120
125/125 [==========] - 48s 381ms/step - loss: 0.1952 - acc: 0.9400 - val_loss: 0.3164 - val_acc: 0.8917
Epoch 96/120
125/125 [==========] - 48s 381ms/step - loss: 0.1961 - acc: 0.9380 - val_loss: 0.3295 - val_acc: 0.8878
Epoch 97/120
125/125 [==========] - 48s 381ms/step - loss: 0.2003 - acc: 0.9387 - val_loss: 0.3145 - val_acc: 0.8920
Epoch 98/120
125/125 [==========] - 48s 381ms/step - loss: 0.1886 - acc: 0.9400 - val_loss: 0.3096 - val_acc: 0.8926
Epoch 99/120
125/125 [==========] - 48s 381ms/step - loss: 0.1983 - acc: 0.9323 - val_loss: 0.3287 - val_acc: 0.8907
Epoch 100/120
125/125 [==========] - 48s 380ms/step - loss: 0.1923 - acc: 0.9338 - val_loss: 0.3190 - val_acc: 0.8887
Epoch 101/120
125/125 [==========] - 48s 382ms/step - loss: 0.1927 - acc: 0.9313 - val_loss: 0.3107 - val_acc: 0.8957
Epoch 102/120
125/125 [==========] - 47s 376ms/step - loss: 0.1788 - acc: 0.9375 - val_loss: 0.3131 - val_acc: 0.8941
Epoch 103/120
125/125 [==========] - 47s 377ms/step - loss: 0.1932 - acc: 0.9370 - val_loss: 0.3008 - val_acc: 0.8978
Epoch 104/120
125/125 [==========] - 48s 380ms/step - loss: 0.1894 - acc: 0.9405 - val_loss: 0.3049 - val_acc: 0.9019
Epoch 105/120
125/125 [==========] - 47s 377ms/step - loss: 0.1821 - acc: 0.9420 - val_loss: 0.3138 - val_acc: 0.8915
Epoch 106/120
125/125 [==========] - 47s 379ms/step - loss: 0.1811 - acc: 0.9400 - val_loss: 0.3159 - val_acc: 0.8924
Epoch 107/120
125/125 [==========] - 47s 375ms/step - loss: 0.1797 - acc: 0.9400 - val_loss: 0.3079 - val_acc: 0.8972
Epoch 108/120
125/125 [==========] - 47s 378ms/step - loss: 0.1826 - acc: 0.9382 - val_loss: 0.3215 - val_acc: 0.8935
Epoch 109/120
125/125 [==========] - 47s 378ms/step - loss: 0.1798 - acc: 0.9393 - val_loss: 0.3031 - val_acc: 0.8972
Epoch 110/120
125/125 [==========] - 47s 376ms/step - loss: 0.1763 - acc: 0.9455 - val_loss: 0.3588 - val_acc: 0.8776
Epoch 111/120
125/125 [==========] - 47s 379ms/step - loss: 0.1723 - acc: 0.9445 - val_loss: 0.3039 - val_acc: 0.8965
Epoch 112/120
125/125 [==========] - 47s 376ms/step - loss: 0.1822 - acc: 0.9407 - val_loss: 0.3099 - val_acc: 0.8978
Epoch 113/120
125/125 [==========] - 47s 378ms/step - loss: 0.1831 - acc: 0.9412 - val_loss: 0.3140 - val_acc: 0.8917
Epoch 114/120
125/125 [==========] - 47s 376ms/step - loss: 0.1674 - acc: 0.9455 - val_loss: 0.3166 - val_acc: 0.8898
Epoch 115/120
125/125 [==========] - 48s 381ms/step - loss: 0.1734 - acc: 0.9475 - val_loss: 0.3126 - val_acc: 0.8965
Epoch 116/120
125/125 [==========] - 47s 377ms/step - loss: 0.1677 - acc: 0.9430 - val_loss: 0.3025 - val_acc: 0.8954
Epoch 117/120
125/125 [==========] - 47s 377ms/step - loss: 0.1788 - acc: 0.9463 - val_loss: 0.3092 - val_acc: 0.8920
Epoch 118/120
125/125 [==========] - 47s 377ms/step - loss: 0.1622 - acc: 0.9472 - val_loss: 0.2990 - val_acc: 0.9004
Epoch 119/120
125/125 [==========] - 47s 376ms/step - loss: 0.1629 - acc: 0.9465 - val_loss: 0.3225 - val_acc: 0.8900
Epoch 120/120
125/125 [==========] - 47s 378ms/step - loss: 0.1800 - acc: 0.9397 - val_loss: 0.3025 - val_acc: 0.8981

As you can see from the program output, we now get a much better result of approximately 90%. We can also see that from about epoch 100 the validation accuracy mostly oscillates around that value, which tells us we are likely at the limit of what this simple model can achieve; better results would need a more carefully designed model (and potentially more or better training data). We could keep running for more epochs, but that would most likely just lead to overtraining.
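Rather than guessing at a fixed epoch count, the stopping point can also be automated with Keras's EarlyStopping callback. A minimal sketch, where model, train_generator, val_generator and val_steps are stand-ins for whatever the generators post sets up:

from keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 10 epochs; restore_best_weights
# (available in newer Keras versions) keeps the best weights seen so far
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

model.fit_generator(
    train_generator,
    steps_per_epoch=125,        # matches the 125 steps per epoch in the log above
    epochs=120,
    validation_data=val_generator,
    validation_steps=val_steps, # same validation settings as the original run
    callbacks=[early_stop],
)

With that in place, a run like the one above would simply end once the validation loss flattens out, instead of continuing to 120 epochs.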

Our new source now looks like this (example.py). The generators code is unchanged.

Feel free to use this code for any and all purposes; consider it in the public domain, or if that is not workable for you, you can use it under the terms of the MIT License.