We can find success stories in some leading companies that have already applied AR, or Augmented Reality, to maintenance for a few years now. They used the services of AR-specialized companies that made the optical recognition of their equipment a reality.
Those examples show a remarkable improvement and a reasonable return on investment where AR was applied in maintenance. This means that AR is just around the corner for other big and mid-sized companies. To be ready to make use of AR, we need to understand the real preparations and efforts behind applying it.
That’s what we are going to chat about today. We shall start with how we can help our AR see the world with our eyes. The possibility of AR in maintenance was discussed in this chat: The future of maintenance – Part 6 – Augmented reality in Maintenance
How does AR work?
AR adds a layer or an image containing digitally created information on top of the view you see.
Let’s break it into elements that we can handle.
- The view you see. How does AR capture it?
- The elements of the scene. How do we break the scene into manageable elements to which we can attach attributes and augmented info?
- The location or identification of this view relative to some reference point, in case there are repeated identical views or workpieces.
- What information is digitally available for those elements. We are looking for the information that is not directly visible; it lives in instructions in some document or in flags in some PLC.
- How to link this information to the elements in real time.
- What is static and what is dynamic in our environment? Workpiece, tool settings, worker actions, settings changed by the operator…
- What displays the available additional info on top of the current view? In other words, which device will be used to show the user the mix between the real image and the digital overlay.
- How to ensure that the user’s vision and hands are free for doing the work? In other words, if the user of AR is doing some task based on the AR information supplied, how do we ensure that the AR tool and information are not distracting the worker from his core job?
- Do we need some sort of AI agent behind the scenes to draw conclusions, or will our AR system just be providing info?
Let’s go through those elements one by one, so that when someone offers you an AR tool, you know what you will be looking for.
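Before diving in, the checklist above can be sketched as a simple data model. This is only an illustrative sketch, not a real AR API: every name here (SceneElement, ARScene, the PLC flags and instruction IDs) is an assumption invented for the example.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: modeling the checklist above as data.
# All names and values are invented assumptions, not a real AR system.

@dataclass
class SceneElement:
    name: str            # what the element is, e.g. "motor", "valve"
    is_static: bool      # static workpiece vs. dynamic tool/operator action
    data_source: str     # where the hidden info lives: a document, a PLC flag
    overlay_info: dict = field(default_factory=dict)  # info to display on top

@dataclass
class ARScene:
    location_id: str     # identifies this view among repeated identical ones
    elements: list = field(default_factory=list)

    def overlay(self):
        """Collect the digital info to draw on top of the live view."""
        return {e.name: e.overlay_info for e in self.elements}

scene = ARScene("line-3/station-7", [
    SceneElement("motor", True, "PLC flag M12", {"status": "running"}),
    SceneElement("valve", False, "work instruction WI-042",
                 {"step": "close before service"}),
])
print(scene.overlay())
```

The point of the sketch is the separation of concerns the list describes: what the element is, whether it is static or dynamic, where its hidden data comes from, and what gets drawn over the real view.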
AR in Movies
It is easier to understand something we have seen before than something new we only read about. It is always easier to see with your own eyes than to imagine with your soul, heart or mind’s eye; however you like to call them. AR was originally built for fighter jets like the F-14, so the pilot could see the enemy fighters and the allied fighters and aim his weapons perfectly. His headset allows him to see all this simultaneously, augmenting the blue sky and the blinking dots around him with all this data while he is flying at supersonic speed.
Maybe my description was vivid enough to let you see it, as any storyteller would hope. But it will always be closer to your senses to see Tom Cruise doing it in Top Gun, or maybe Robocop, Ironman or the Terminator seeing through their screens who is alive, who is dead and who is a robot in a human body.
The military has used AR for some time, and the movie industry has shown us this in many of its top films. Now it is our time to get command of it, not only to read about it or watch movies about its use.
Optical recognition whether 2D or 3D
That’s how the computer system splits the image it captures into manageable pieces. This happens for a still image or a stream of images, whether in 2D or 3D.
The common application of 2D optical recognition is OCR: Optical Character Recognition. It detects the letters in an image by comparing them to the images of letters stored in its database. Actually, the database of letter shapes, or glyphs, needs to include all known font variations. That’s for machine-typed letters; what about handwriting?
Later on, the detected letters can be used for translation or for editing the scanned document.
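The matching idea behind OCR can be shown with a toy sketch: compare a captured glyph bitmap against stored reference glyphs and pick the best match. Real OCR engines are far more sophisticated; the tiny 5x3 bitmaps below are invented purely for illustration.

```python
# Toy sketch of glyph matching, the core idea described above.
# The 5x3 "bitmaps" (strings of 0/1 pixels) are invented for illustration;
# a real OCR database holds every known font variation of each glyph.

GLYPHS = {
    "I": ["111", "010", "010", "010", "111"],
    "L": ["100", "100", "100", "100", "111"],
    "T": ["111", "010", "010", "010", "010"],
}

def match_score(a, b):
    """Fraction of pixels on which two same-sized bitmaps agree."""
    total = sum(len(row) for row in a)
    agree = sum(pa == pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return agree / total

def recognize(bitmap):
    """Return the stored letter whose glyph best matches the input."""
    return max(GLYPHS, key=lambda letter: match_score(GLYPHS[letter], bitmap))

# A slightly noisy "T": one pixel flipped, still recognized.
noisy_t = ["111", "010", "010", "011", "010"]
print(recognize(noisy_t))  # -> T
```

Note that the noisy input still wins as a “T” because matching is by best score, not exact equality; that tolerance is exactly what handwriting and worn print demand at much larger scale.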
An example of 3D recognition is the face recognition tool added to mobile phones. It depends on 3D sensing: three-dimensional (3D) sensing is the process of obtaining length, width, and depth information electronically. The phone then compares what it captures to the stored 3D image of the user for secure login and transactions. This tool compares the depth maps of the images, not just the 2D image pixels.
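The depth-map comparison can be sketched in a few lines. Everything below is an invented illustration (the tiny 3x3 depth patch, the threshold): the point is only that matching happens on depth values, so a flat printed photo fails even if its 2D pixels look right.

```python
# Toy sketch of depth-map matching, the idea behind 3D face unlock.
# The enrolled 3x3 patch of depths (cm from the sensor) and the
# threshold are invented assumptions for illustration.

ENROLLED = [
    [30.0, 28.0, 30.0],
    [29.0, 26.5, 29.0],   # nose tip: closest point to the sensor
    [30.5, 29.0, 30.5],
]

def depth_distance(a, b):
    """Mean absolute depth difference between two same-sized maps."""
    diffs = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(diffs) / len(diffs)

def unlock(probe, threshold=1.0):
    return depth_distance(ENROLLED, probe) < threshold

live_face = [[30.2, 28.1, 29.8], [29.1, 26.4, 29.2], [30.4, 28.8, 30.6]]
flat_photo = [[30.0] * 3 for _ in range(3)]  # a printed photo has no relief

print(unlock(live_face))   # -> True
print(unlock(flat_photo))  # -> False
```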
A basic version of 3D recognition was used in games to let the player’s gestures interact with other characters in the game: the player jumps, runs or punches, and his avatar in the game does the same.
Optical recognition in Manufacturing and Maintenance
Now, if we want to use AR in manufacturing or maintenance, we need it to recognize the elements of the image, or stream of images, its camera or sensors are pointing at. Then it compares the elements it isolates from the image to reference images. After that, it concludes what each element is: a motor, a pump, a robot, a human, a pipe and so on.
There is a general shape for a motor or a pump. However, there are thousands of variations of this shape based on the function it serves. Variations starting from the mounting orientation to the ventilation method and the IP class create some real shape differences. So the best way is to add the actual images of what you want the AR to recognize. Then it becomes a matter of matching, especially after some time in service and in the presence of noise.
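This enroll-your-own-images idea can be sketched as nearest-match classification against a reference library, with an “unknown” fallback when nothing is close enough. The 4-value “images” (think of them as tiny grayscale feature vectors), the labels and the threshold are all invented for illustration.

```python
# Toy sketch: classify an isolated image patch by its nearest match
# in a library of enrolled reference images of *your* equipment.
# The 4-element "feature vectors" and the threshold are invented.

REFERENCES = {
    "motor": [0.9, 0.8, 0.2, 0.1],
    "pump":  [0.1, 0.2, 0.9, 0.8],
    "pipe":  [0.5, 0.5, 0.5, 0.5],
}

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(patch, max_distance=0.5):
    """Nearest enrolled reference, or 'unknown' if nothing is close."""
    label = min(REFERENCES, key=lambda k: distance(REFERENCES[k], patch))
    if distance(REFERENCES[label], patch) <= max_distance:
        return label
    return "unknown"

noisy_motor = [0.85, 0.75, 0.25, 0.15]   # same motor, some camera noise
print(classify(noisy_motor))             # -> motor
print(classify([0.0, 1.0, 0.0, 1.0]))    # -> unknown
```

The threshold is what keeps the system honest: a dirty or slightly repositioned motor still matches, while a genuinely unfamiliar object is flagged as unknown instead of being forced into the nearest label.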
How to detect Objects in a view?
Recognizing objects is one of the ultimate goals of AR. After detecting and isolating an object from the big picture, you can do a lot of stuff with it digitally. That’s why AR needs an eye, and its eye is computer vision: the task of finding and identifying objects in an image or video. Humans are able to recognize a multitude of objects in images with little effort, despite the fact that the image of the objects may vary somewhat from different viewpoints, in many different sizes and scales, or even when they are translated or rotated.
That’s why one of the security tests to detect whether a human is trying to access a webpage is still to mark which images include a bicycle or a car. The images show the bicycle or car from different angles and in different sizes and models, to make sure that only humans can detect them 🙂
Computer Vision as the key to optical recognition of objects
You can find one of the best descriptions of computer vision in the IBM article “What is computer vision?”; below is an excerpt from it. The next chat will link the detected objects to the physical equipment, whether we use marker-based or marker-less AR.
“Computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand.
Computer vision works much the same as human vision, except humans have a head start. Human sight has the advantage of lifetimes of context to train how to tell objects apart, how far away they are, whether they are moving and whether there is something wrong in an image.
Computer vision trains machines to perform these functions, but it has to do it in much less time with cameras, data and algorithms rather than retinas, optic nerves and a visual cortex. Because a system trained to inspect products or watch a production asset can analyze thousands of products or processes a minute, noticing imperceptible defects or issues, it can quickly surpass human capabilities” ( 🙂 by me)
What is computer vision? IBM
In Conclusion,
In order to augment what we see with some helpful digital information, we need the computer or server to see what we see and detect whatever objects are in the scene as well as we can as humans. That’s why optical recognition is a cornerstone of a successful AR implementation. Computer vision is the digital counterpart of the optical recognition we perform as humans.
If you feel you need help with any of these ideas we discussed, drop us a line for initial investigation in the form by Clicking here or request Management Consultancy or Coaching Services From our Store