Use it for various segmentation tasks. Features include image label creation, training of a ResNet34 neural network model (arch), and image prediction.
Based on the bachelor's thesis "Microstructure analysis of materials with the assistance of artificial technology" by Kerim Yalcin, February 2024.
Usage examples:
- metallographic image segmentation
- medical image segmentation (tumor detection, organ detection)
- object detection
- road segmentation
- crop yield detection in agriculture
- microscopy image analysis
 
Features:
- read and save image files
- use filters to change brightness and Gaussian-blur amount of loaded images
- apply binarization and thresholding
- invert image output
- trace desired image features using a resizable brush tool in black or white
- create image labels for a model (max. 2 classifications)
- create and train a model by using the labels
- predict images with a model
 
Implemented packages:
- tkinter - integrated GUI library in Python
- PIL - image processing library
- OpenCV - computer vision library
- fastai v2 - deep learning library
- numpy - library for arrays and matrices
- threading - integrated threading library in Python
manualSegmentation: Use this script for manual label creation. After that, create labels using the image crop tool.

semanticSegmentation: Use this script for creating and training a ResNet34 model. After that, predict images using the trained model.

The application can either be started using the executable or directly by running the scripts after installing Python and the required packages.
The executables are created using auto-py-to-exe by Brent Vollebregt and can be downloaded here under Assets. The interpreter version used is: Python 3.11.8
Steps:
- create a folder named structureAnalysis or similar
- copy both manualSegmentation.exe and semanticSegmentation.exe into this folder
- copy codes.txt into this folder
- start either one of the applications
 
After that, you should see the directories /labels, /images, /raw and /raw/labels created automatically.
Steps:
- Download Python 3.11.x
- Run the installation (add environment PATH, remove MAX_PATH limitation)
- open cmd.exe and type python --version
You should see something like Python 3.x.x on the console output.
You need fastai v2 and OpenCV in order to use the script files.
Documentation:
Steps:
- start cmd.exe and type pip install opencv-python
- after that, type pip install fastai
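To confirm that the installation worked, you can run a short check from Python. This is only a minimal sketch; the printed version numbers will differ depending on your setup.

```python
# Quick check that OpenCV and fastai are importable and report their versions
import cv2
import fastai

print("OpenCV:", cv2.__version__)    # e.g. 4.x.x
print("fastai:", fastai.__version__)  # should start with 2 for fastai v2
```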
This is similar to installing the executables.
Steps:
- create a folder named structureAnalysis or similar
- copy both manualSegmentation.py and semanticSegmentation.py into this folder
- copy codes.txt into this folder
- start either one of the scripts by double-clicking on the files
 
After that, you should see the directories /labels, /images, /raw and /raw/labels created automatically.
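The folders themselves are nothing special; a minimal sketch of how such a directory layout can be created from Python looks like this (the shipped scripts may set them up differently):

```python
# Create the working directories next to the script if they do not exist yet
import os

for folder in ("labels", "images", "raw", os.path.join("raw", "labels")):
    os.makedirs(folder, exist_ok=True)  # exist_ok avoids errors on a later restart
```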
- start manualSegmentation
- on the left side load an image with [Load Image]
- adjust thresholding output using brightness and Gaussian-blur filters (see the sketch after these steps)
- adjust brush size and brush color with [Black/White]
- invert the image with [Invert]
- align both previews with [Sync]
- on the right side trace image features using the brush tool
- save the output image for later tracing with [Save Image]
- close manualSegmentation
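The controls above map to standard OpenCV operations. Below is a minimal, illustrative sketch of such a preprocessing chain; the function name prepare_mask and all parameter values are placeholders, not taken from manualSegmentation.

```python
# Illustrative preprocessing chain: brightness, Gaussian blur, threshold, invert
import cv2

def prepare_mask(path, brightness=30, blur_ksize=5, thresh=127, invert=False):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                  # load as grayscale
    img = cv2.convertScaleAbs(img, alpha=1.0, beta=brightness)    # shift brightness
    img = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)      # smooth noise
    _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)  # binarize
    if invert:
        mask = cv2.bitwise_not(mask)                              # swap black and white
    return mask

# Example: create a binary mask and save it for later tracing
mask = prepare_mask("example.png", brightness=20, blur_ksize=7)
cv2.imwrite("example_mask.png", mask)
```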
- start manualSegmentation
- on the left side load an image with [Load Image]
- on the right side load the corresponding image you have prepared for image label creation
- set image size (default: 336x336 px)
- set image increment (default: 0) which saves each image counting upwards
- turn on image crop mode using [Save-Crop ON/Save-Crop OFF]
- on the right side click to save one label each (until you have about 60-80 labels; the image increment adds one each time)
- close manualSegmentation
You will find the labels in /images, /labels and /raw/labels. The labels in /images and /labels are required to train a model. Use the labels in /raw/labels for your own documentation purposes.
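Conceptually, each click in crop mode cuts a fixed-size patch out of the loaded image and the traced image and stores the pair under an incrementing index. The following sketch only illustrates that idea; the helper save_label_pair and the file naming are hypothetical.

```python
# Save a matching image/label crop pair with an incrementing index (illustrative)
import cv2

def save_label_pair(image, traced, x, y, index, size=336):
    # crop the same size x size window from the raw image and the traced mask
    img_crop = image[y:y + size, x:x + size]
    lbl_crop = traced[y:y + size, x:x + size]
    cv2.imwrite(f"images/img_{index}.png", img_crop)  # training input
    cv2.imwrite(f"labels/img_{index}.png", lbl_crop)  # matching label mask
    return index + 1  # the next click saves under the next number
```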
- start semanticSegmentation (this may take a while depending on how fast your setup is)
- set to TRAIN mode with [TRAIN mode/PREDICT mode]
- set the path and name of the model with [Save path]
- start model training with [TRAIN] (this can take at least 10 min or more)
- close semanticSegmentation
Console output while training should be something like:
-- Starting thread
-- Running TRAIN mode
-- Home path: .
epoch     train_loss  valid_loss  time
0         0.729087    0.415635    01:34
epoch     train_loss  valid_loss  time
Epoch 1/6 : |███████-----------------------------------------------------| 12.50% [1/8 00:10<01:14]
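For reference, a training run like this can be reproduced with a few lines of fastai v2. The following is a minimal sketch, not the code of semanticSegmentation; the paths, batch size and epoch count are assumptions.

```python
# Illustrative fastai v2 training setup for the image/label pairs
from fastai.vision.all import *
import numpy as np

path = Path(".")
codes = np.loadtxt(path / "codes.txt", dtype=str)  # the two class names

dls = SegmentationDataLoaders.from_label_func(
    path,
    bs=4,
    fnames=get_image_files(path / "images"),
    # fastai expects the mask pixels to hold class indices (here 0 and 1);
    # the label is assumed to have the same file name as the image
    label_func=lambda f: path / "labels" / f.name,
    codes=codes,
)

learn = unet_learner(dls, resnet34)  # downloads pretrained ResNet34 weights
learn.fine_tune(6)                   # epoch count assumed to match the output above
learn.export("your_trained_model.pkl")
```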
- start semanticSegmentation (this may take a while depending on how fast your setup is)
- on the left side load an image which you want to predict using [Load Image]
- set to PREDICT mode with [TRAIN mode/PREDICT mode]
- load the model with [Load path]
- start model prediction with [PREDICT] (this can take at least 1 min or more)
- on the right side save the predicted image using [Save image]
- close semanticSegmentation
Console output after prediction should be something like:
-- Starting thread
-- Running PREDICT mode
-- Try to load model at: C:/Users/path/to/your/model/your_trained_model.pkl
fastai\learner.py:59: UserWarning: Saved file doesn't contain an optimizer state.
  elif with_opt: warn("Saved file doesn't contain an optimizer state.")
-- Thread finished
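Prediction with an exported model follows fastai's load_learner/predict workflow. Again a minimal sketch with placeholder file names, not the code of semanticSegmentation.

```python
# Illustrative fastai v2 prediction with an exported model
from fastai.vision.all import *
from PIL import Image
import numpy as np

learn = load_learner("your_trained_model.pkl")  # triggers the optimizer-state warning above
pred, _, _ = learn.predict(PILImage.create("example.png"))

# pred is the predicted mask; with two classes its values are 0 and 1
mask = np.array(pred, dtype=np.uint8) * 255
Image.fromarray(mask).save("example_prediction.png")
```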
- Reading and saving image files with Unicode file names is not supported because of the OpenCV imread and imwrite functions. Avoid file names containing characters such as umlauts (äöü) or other special characters (see the sketch after this list for a possible workaround).
- Cancelling the file dialog without selecting a path can lead to termination of the app
- an internet connection is required to create the ResNet34 model
- if the semanticSegmentation app does not respond due to threading, you have to restart the application
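If non-ASCII file names cannot be avoided, a common workaround is to bypass imread/imwrite and let NumPy handle the file I/O. This is only a sketch, not part of the shipped scripts.

```python
# Workaround for reading/writing images whose paths contain non-ASCII characters
import cv2
import numpy as np

def imread_unicode(path, flags=cv2.IMREAD_COLOR):
    data = np.fromfile(path, dtype=np.uint8)  # read raw bytes (Unicode-safe)
    return cv2.imdecode(data, flags)          # decode them like imread would

def imwrite_unicode(path, img):
    ext = "." + path.rsplit(".", 1)[-1]       # e.g. ".png"
    ok, buf = cv2.imencode(ext, img)          # encode the image in memory
    if ok:
        buf.tofile(path)                      # write the bytes (Unicode-safe)
    return ok
```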