March 17, 2017 By Andrew Trice 4 min read

How to sharpen Watson Visual Recognition results with simple preprocessing

The IBM Watson Visual Recognition service enables your applications to turn images into actionable data. You can use it to tag images for content, recognize faces, and find similar images, but that’s not all it can do.  You can also create what are known as custom classifiers, which train the Visual Recognition service to recognize specific content within your images.

How might you use this in a real-world case?  Imagine you want to train a system to:

  • Recognize specific structures in satellite images,

  • Spot cracks in pipes,

  • Pinpoint rust or corrosion on infrastructure,

  • Flag known defects in a manufacturing process.

Or any of millions of other use cases.  Custom classifiers are how you can implement these types of solutions.  You train the service on collections of both positive and negative example images of the condition you want to classify, and then you can use it to analyze other images to identify the presence of that condition.  Keep in mind that the quality of your output is directly related to the quality of your training data, so you’ll want to make sure you have high-quality training data based on real-world sample images.

In many use cases a general, whole-image classification might not be enough.  If the condition you want to identify appears only within a smaller region of a larger image, the image as a whole might not be classified with high enough confidence and a positive result could be missed.  Or, what if you want localized classification – meaning, where within the image can you find the areas recognized by the classifier?

Improving recognition of finer details with image preprocessing

One technique that I have used successfully on several projects is to slice an image into smaller images, and analyze each one of those images individually against a custom classifier.

This approach lets us analyze each portion of the image independently, and then assemble the results into the bigger picture to provide a view into where within that image there are conditions recognized by your classifier.
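To make the slicing step concrete, here is a minimal sketch in Node.js of how the tile geometry can be computed. The image dimensions and grid size are illustrative assumptions, not values from any particular solution; each resulting rectangle would then be cropped out of the source image (for example, with GraphicsMagick) and submitted to the classifier.

```javascript
// Compute the crop rectangles for slicing an image into a grid of tiles.
// The caller supplies the image dimensions and the desired grid size.
function computeTiles(imageWidth, imageHeight, cols, rows) {
  const tileWidth = Math.floor(imageWidth / cols);
  const tileHeight = Math.floor(imageHeight / rows);
  const tiles = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      tiles.push({
        x: col * tileWidth,       // left edge of this tile in the source image
        y: row * tileHeight,      // top edge of this tile in the source image
        width: tileWidth,
        height: tileHeight
      });
    }
  }
  return tiles;
}

// Example: slice a 1200x900 image into a 4x3 grid of 300x300 tiles.
const tiles = computeTiles(1200, 900, 4, 3);
console.log(tiles.length); // 12
```

Each tile keeps its position in the source image, which is what later lets you map a classification result back to the region it came from.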

Here’s a sample from one insurance-focused solution, which shows areas of hail damage on a shingle roof recognized by the Watson Visual Recognition service.

The colorization of the “heat map” is based on the confidence score for each tile image returned in the results from the Watson Visual Recognition service.
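That heat-map colorization can be sketched as a simple mapping from a tile’s confidence score to an overlay color. The thresholds and colors below are illustrative assumptions, not the values used in the sample application.

```javascript
// Map a classifier confidence score (0.0 to 1.0) for one tile to a
// semi-transparent overlay color. Tiles below the lowest threshold
// are left transparent so they don't clutter the visualization.
function heatMapColor(score) {
  if (score >= 0.9) return 'rgba(255, 0, 0, 0.5)';    // strong match: red
  if (score >= 0.75) return 'rgba(255, 165, 0, 0.5)'; // likely match: orange
  if (score >= 0.6) return 'rgba(255, 255, 0, 0.5)';  // possible match: yellow
  return 'rgba(0, 0, 0, 0)';                          // below threshold: transparent
}

console.log(heatMapColor(0.93)); // rgba(255, 0, 0, 0.5)
console.log(heatMapColor(0.4));  // rgba(0, 0, 0, 0)
```

In the browser, each color would be painted over the tile’s rectangle at its original position in the source image.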

Below you can also see a video of another sample, which demonstrates rust detection using custom classifiers.  Using image slicing, we can identify where within the image rust is present.


Try it yourself with this sample code

If you’d like to test this out with your own images and custom classifiers using the Watson Visual Recognition service, you’re in luck! The code for this project is available as a learning resource, or for extending into your own solutions, at github.com/IBM-Bluemix/Visual-Recognition-Tile-Localization (GitHub).

The sample is a Node.js application that uses the GraphicsMagick module for image preprocessing, submits the tiled images to the Watson Visual Recognition service for analysis, and then displays the results within the browser, superimposed over the source image.  The colorized squares represent the Visual Recognition service’s confidence score for each area.

Confidence Scores

When you’re thinking about this type of scenario, it’s also important to understand the confidence scores for custom classifiers returned by the Visual Recognition service.  Many people incorrectly think of confidence scores as a measure of absolute truth; it’s better to think of them as a threshold for action.  For example, suppose your results come back with confidence scores of only 60-70%, but they are consistent and reliable, with no false positives or false negatives.  In that scenario a confidence score of 60% is extremely valuable, and you wouldn’t need a 90% confidence score to perform an action.

Confidence scores can be thought of much like temperatures in a weather report—I don’t know what the scientific definition of 75 degrees Fahrenheit is, but I know that it’s a pleasant temperature.  Likewise, I don’t know exactly what 50°F means, but I know that I’ll need a jacket.

From a cost-benefit perspective, I know that if I don’t wear a jacket at 50°F, then I am going to be cold.  Likewise, in my custom classifier scenario, I might be able to make a similar type of assessment. In my workflow, will it cost less if I invoke an action based on a confidence score of 65%?  In many cases, the answer could be yes, and this is where the value of the service comes in.  Of course, confidence score values are subjective.  They vary based upon training images, evaluation images, and the types of criteria that you want to classify. It is up to the developers of each solution to determine what confidence score values are appropriate thresholds for action for a given solution.
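That cost-benefit reasoning can be sketched as a small decision helper: act when the expected cost of acting is lower than the expected cost of ignoring a real condition. The cost figures below are illustrative assumptions, not numbers from any real deployment.

```javascript
// Decide whether to act on a classifier result by comparing expected costs.
// costOfAction: cost of acting (e.g., dispatching an inspection); a false
//               positive wastes this amount.
// costOfMiss:   cost of ignoring a real condition; a false negative
//               incurs this amount.
function shouldAct(confidence, costOfAction, costOfMiss) {
  const expectedCostIfActing = (1 - confidence) * costOfAction;  // possibly wasted effort
  const expectedCostIfIgnoring = confidence * costOfMiss;        // possibly missed defect
  return expectedCostIfActing < expectedCostIfIgnoring;
}

// A 65% confidence score: a $100 inspection vs. a $5,000 undetected failure.
console.log(shouldAct(0.65, 100, 5000)); // true (cheap to check, costly to miss)
```

With asymmetric costs like these, even a modest confidence score clears the bar for action, which is exactly why a 65% score can be worth acting on.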

Conclusion

With a little bit of preprocessing, your application can improve the detection accuracy of the Watson Visual Recognition service, especially when applied to smaller details within an image. Are you ready to learn more?  Check out these references for more information:

Photo credit: A rusty bridge (Wikipedia.org)
