Key point: starting in Dragonfly 2020.1 (unlike prior versions such as 4.1), the Multi-ROI used as the "output" of the training must be fully labeled in the "training areas" — that is, every pixel must be labeled as one of the classes. The training areas are normally a few selected slices in an image stack, and they may or may not cover the full extent of each selected slice. All unlabeled pixels will be regarded as non-training areas and excluded from training.
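Dragonfly enforces this rule in its GUI, but the idea can be sketched in a few lines of NumPy. In this illustrative (not Dragonfly-API) example, `0` stands for "unlabeled" and `1`–`3` are the class labels of a 3-class Multi-ROI; a slice qualifies as a training area only if none of its pixels is unlabeled:

```python
import numpy as np

# Hypothetical 4-slice label stack: 0 = unlabeled, 1..3 = class labels.
labels = np.zeros((4, 8, 8), dtype=np.uint8)
labels[1] = 1                # slice 1: fully labeled as class 1 ...
labels[1, 2:5, 2:5] = 2      # ... with a patch of class 2
labels[3, :, :4] = 1         # slice 3: only half labeled -> rejected

def fully_labeled_slices(label_stack, unlabeled=0):
    """Return the indices of slices in which every pixel carries a class label."""
    return [z for z in range(label_stack.shape[0])
            if not np.any(label_stack[z] == unlabeled)]

print(fully_labeled_slices(labels))   # only slice 1 qualifies -> [1]
```

Slices 0 and 2 are entirely unlabeled and slice 3 is partially labeled, so only slice 1 would be used for training.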

Here is an example of a training area that is fully segmented:

The following demo shows how to create a U-Net model and prepare training data so that it will be accepted. This demo does NOT cover how to conduct a training that produces a satisfactory segmentation model.

When creating a semantic segmentation model, you need to specify the number of classes. This number should include the features of interest plus the background. In this demo, it is 3 (2 types of weaving threads and 1 background):

In the Deep Learning Tool panel, when you select your data set as the Input, the program fills in the Output with the Multi-ROI that matches both the geometry of the data set AND the number of classes of the model you want to train:

Since the model has a class count of 3, your Multi-ROI must have 3 classes/labels as well (as shown below):
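The matching rule can be expressed as a simple consistency check. This is only an illustrative NumPy sketch — the array contents and the name `multi_roi_labels` are hypothetical, and Dragonfly performs the equivalent check for you in its panel:

```python
import numpy as np

model_class_count = 3   # the class count chosen when creating the model

# Hypothetical fully labeled Multi-ROI patch with labels 1..3 (no unlabeled 0s).
multi_roi_labels = np.array([[1, 1, 2],
                             [1, 3, 2],
                             [3, 3, 2]], dtype=np.uint8)

roi_class_count = len(np.unique(multi_roi_labels))
assert roi_class_count == model_class_count, (
    f"Multi-ROI has {roi_class_count} classes but the model expects "
    f"{model_class_count}")
print("class counts match:", roi_class_count)   # -> class counts match: 3
```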

If you have only two classes/labels in your Multi-ROI, you can easily add a new class and then use the following context menu to assign all unlabeled (background) pixels to this class:
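The effect of that context-menu command can be sketched in NumPy, assuming (hypothetically) that `0` marks unlabeled pixels and that the newly added background class received label `3`:

```python
import numpy as np

# Hypothetical label patch: 0 = unlabeled, classes 1 and 2 already exist.
labels = np.array([[1, 0, 2],
                   [0, 2, 0],
                   [1, 1, 0]], dtype=np.uint8)

BACKGROUND = 3                      # the class just added to the Multi-ROI
labels[labels == 0] = BACKGROUND    # assign every unlabeled pixel to it

print(np.unique(labels))            # -> [1 2 3]; no unlabeled pixels remain
```

After this step, every pixel in the training area belongs to one of the 3 classes, so the Multi-ROI satisfies the fully-labeled requirement.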