Microsoft Lobe: Image Recognition Made Simple


In the past few years, artificial intelligence (AI) has become synonymous with deep learning, which is based on artificial neural networks (ANNs) with multiple layers. In healthcare, the most common use of AI is image recognition, particularly in the fields of cardiology, pathology, radiology, and ophthalmology. The most common algorithmic approach for image recognition is the convolutional neural network (CNN).

Until recently, creating a CNN meant being a data scientist with advanced mathematical and programming skills and expertise in frameworks such as TensorFlow and PyTorch. The reality is that this excludes most people who would like to learn more about image recognition and put it to use.

This changed with the introduction of Lobe in October 2020. Microsoft acquired the original company in 2018 and released Lobe two years later as a free beta program. It is an image recognition platform based on the CNN ResNet-50 V2; the 50 indicates the network has 50 layers. Lobe is downloaded to the desktop and runs on Windows and macOS.

It appears that the primary intended use is for mobile developers who want to embed image recognition into a mobile app. The model can be used as a local Python app or hosted on Microsoft Azure, Google Cloud, and Amazon Web Services (AWS). The model can be exported as a TensorFlow 1.15 model, TensorFlow Lite (Android), CoreML (iOS), or hosted as a .Net API.
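Using an exported model as a local Python app is straightforward with Microsoft's `lobe` Python package. Below is a minimal sketch; the model and image paths are placeholders you would point at your own exported model folder:

```python
def top_prediction(labels):
    """Return the (label, confidence) pair with the highest confidence."""
    return max(labels, key=lambda pair: pair[1])


def main():
    # Requires `pip install lobe`; imported here so the helper above
    # works even without the package installed.
    from lobe import ImageModel

    # Placeholder paths -- substitute your exported model and an image.
    model = ImageModel.load("path/to/exported/model")
    result = model.predict_from_file("path/to/image.jpg")

    print("Predicted:", result.prediction)
    label, confidence = top_prediction(result.labels)
    print(f"Top label: {label} ({confidence:.1%})")


if __name__ == "__main__":
    main()
```

The `result.labels` list holds every label with its confidence, so you can inspect the full distribution rather than just the top prediction.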

There are sample starter projects hosted on GitHub and a Community Forum on Reddit. The most useful information about Lobe is found in the document section in the Help menu in the app and website. Lobe consists of three steps:

  1. Label — where you import images that are already labeled, or label them after the upload. A simple approach is to create a folder, e.g., Plants, and within it create two folders labeled, for example, “poison ivy” and “not poison ivy.” You then populate these folders with the corresponding images, point Lobe to the Plants folder, and it does the rest. You can import images (JPG, PNG, BMP), use a spreadsheet of image URLs, or capture from a webcam. On GitHub, there are instructions on how to use Python to upload image URLs contained in a CSV file.
  2. Train — Lobe automatically splits the images into train (80%) and test (20%) sets, and training begins immediately after the folder is imported. An “optimize” feature under the File menu improves accuracy but takes longer. Train generates an accuracy score, and you can view mislabeled images to see where the model made errors; hovering over an image shows its confidence score. To see how Lobe does on held-out test images, while in Train go to View >> Test Images. Below is a screenshot from a model trained on lung cancer (abnormal) versus normal chest x-rays.
  3. Play — once your model is complete and optimized and you are satisfied with the accuracy, you can import new images to see how it performs on images it has not seen.
Screenshot of Lobe analyzing chest x-ray images

There are some limits to Lobe, as pointed out in the documentation:

  • The maximum image size Lobe can process is 178,956,970 pixels. For a square image, that is about 13.3K x 13.3K pixels
  • There is a maximum of 4,994 images per label in a single import. If you have more, split up the dataset outside of Lobe so that each label has 4,994 or fewer images and import each split separately
  • Try to have 100–1000 images per label to adequately train the model
  • Attempt to balance the classes, so that, for example, the number of normal images is about the same as the number of abnormal images
  • You can view incorrect answers while in Train >> View >> Incorrect First
  • For more speed, but less accuracy, you can change to a different architecture by going to File >> Project Architecture and using MobileNetV2
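The first two limits are easy to enforce programmatically before importing. A minimal sketch (the 178,956,970-pixel cap and the 4,994-images-per-label cap come straight from the documentation; the helper names are my own):

```python
MAX_PIXELS = 178_956_970        # Lobe's per-image pixel cap
MAX_IMAGES_PER_LABEL = 4_994    # per-label cap for a single import


def fits_pixel_limit(width, height):
    """True if an image of width x height is within Lobe's pixel cap."""
    return width * height <= MAX_PIXELS


def split_for_import(filenames, batch_size=MAX_IMAGES_PER_LABEL):
    """Split one label's files into batches of at most batch_size images."""
    return [filenames[i:i + batch_size]
            for i in range(0, len(filenames), batch_size)]
```

A 13,377 x 13,377 square image just fits under the cap (hence the "about 13.3K x 13.3K" figure), while 12,000 images for one label would be split into imports of 4,994, 4,994, and 2,012.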

For an example of how one user applied Lobe to detecting face mask-wearing visit this link.

My immediate question was: how would Lobe do with medical images? I tested it with small samples drawn from Google Images in these fields: Cardiology (normal ECG vs. atrial fibrillation), Radiology (normal chest x-ray vs. lung cancer), Pathology (normal prostate biopsy vs. prostate cancer), Dermatology (basal cell carcinoma vs. melanoma), and Ophthalmology (normal retina vs. diabetic retinopathy).

Below is a table displaying accuracy, recall, precision, and specificity. These performance measures were calculated by creating a confusion matrix for each condition; the matrix yields sensitivity (recall), specificity, and precision (positive predictive value, or PPV). Keep in mind that the sample sizes are too small to draw any definite conclusions, but the results encourage further studies with far larger samples. I was particularly struck by the results on prostate biopsies, where the image differences were microscopic rather than macroscopic.
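The measures in the table can be derived from each confusion matrix as follows. This is a minimal sketch; the counts in the usage example are illustrative, not the study's actual data:

```python
def classification_metrics(tp, fn, fp, tn):
    """Compute accuracy, recall (sensitivity), precision (PPV), and
    specificity from the four confusion-matrix counts."""
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,
        "recall": tp / (tp + fn),        # sensitivity
        "precision": tp / (tp + fp),     # positive predictive value
        "specificity": tn / (tn + fp),
    }


# Illustrative counts only: 45 true positives, 5 false negatives,
# 10 false positives, 40 true negatives.
metrics = classification_metrics(tp=45, fn=5, fp=10, tn=40)
```

With these hypothetical counts, accuracy is 0.85, recall 0.90, precision about 0.82, and specificity 0.80.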

Table comparing classification performance

In December 2020, several new features were added. You can now use other connected devices, such as a microscope, laptop camera, or webcam, to input images; the device is recognized when it is plugged in. Lobe's export capabilities were extended to include TensorFlow.js and ONNX. GPU-accelerated training is now available as well, though only on Windows for the time being.


  • Lobe is the first image recognition platform of its type for the average user that does not require coding
  • More functionality is right around the corner
  • Lobe uses a state-of-the-art CNN
  • Preliminary results for medical imaging look promising but much larger samples are required
  • The focus is on mobile apps, but undoubtedly other use cases will be found