
Capturing Data from Drawings to Make it Usable

Read Time: 3 minutes

Companies with multiple sites hold multiple design drawings, each prepared to meet the geographic and local zoning criteria for its building. Many of these are digitized, but too many still reside as paper in drawers or storage tubes. When a drawing has to be pulled to find specific information, the search can be a daunting task for companies and architectural firms.

Many businesses provide scanning services for these large-format drawings. This option is advantageous because the drawings can then be stored on a server.

Extracting data from each drawing so that the archive becomes searchable is the best way to ensure the data you need is always available.

Data from title blocks, specific fields, and even measurements gets extracted using Machine Learning-enhanced OCR, allowing for simple search functions. What used to take hours is now attainable in seconds from any location.


Paper Drawings Digitization

In the engineering sector, a lot of information gets stored in paper documents and drawings. Because retrieving material from such drawings using typical tools is extremely resource-intensive, these get classified as unstructured data.

However, you can train artificial intelligence (AI) systems to recognize visual content in drawings and provide a simplified context — enter Machine Learning enhanced OCR.

Traditionally, drawings are produced by a process drafter who understands the engineering domain and its symbology. For AI to interpret a drawing, it must have a similar comprehension of standard symbology.

Pattern recognition, line-segment recognition, and text recognition are principles that AI can apply to create a model that learns to recognize components of an engineering drawing.

Pattern Recognition

Pattern recognition is the automatic detection of patterns and regularities in data. Applied to images, it locates visual data of a specific class (such as persons, buildings, or cars) in digital images and videos.

The pattern in a drawing could be a symbol, text, or line, with the data being all pixels in the drawing. You could accomplish visual recognition of engineering drawings with the help of a well-trained algorithm.

The algorithm is fed symbols commonly found in engineering drawings and examines many examples of each symbol pattern. After a few training cycles, the AI learns to associate the graphic design on the drawing with the symbol type it represents. It can then detect the presence of a symbol within a drawing by analyzing the symbol-forming pixels and their relative locations.
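As an illustration, the sketch below uses simple template matching from the open-source OpenCV library to flag locations where a known symbol appears in a scanned drawing. The file names and the similarity threshold are assumptions made for the example, and a production system would typically use a trained detection model rather than a single template, but the underlying idea of matching symbol-forming pixels against a known pattern is the same.

    import cv2
    import numpy as np

    # Load the scanned drawing and a reference symbol (e.g., a valve) in grayscale.
    drawing = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)       # placeholder file name
    symbol = cv2.imread("valve_symbol.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

    # Slide the symbol template across the drawing and score the similarity at each position.
    scores = cv2.matchTemplate(drawing, symbol, cv2.TM_CCOEFF_NORMED)

    # Keep every location whose similarity exceeds a chosen threshold (an assumption here).
    ys, xs = np.where(scores >= 0.8)
    for x, y in zip(xs, ys):
        print(f"Possible symbol match near pixel ({x}, {y})")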

Line Recognition

Lines in a drawing define the flow of the piping and instrumentation diagram (P&ID). In contrast to a symbol, a line does not have a fixed shape. As a result, determining the margins of a line necessitates a different approach. For AI to build comprehension of a line, you must present it with several instances of marked lines.

AI can now distinguish the lines and edges on the drawing, thanks to this training. You can use this line-coordinate data to recreate lines on a digital platform in the future.

Other information can also be retrieved, such as the length of a line or the components along it. Line recognition helps pinpoint the data held within the drawing, helping it become multi-functional and digitized with ease.
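A minimal sketch of line recognition, assuming the open-source OpenCV library, is shown below. It detects edges and then searches for straight-line segments with the probabilistic Hough transform; the file name and tuning parameters are placeholders that would need adjusting to the scan resolution and line weights of a real drawing.

    import cv2
    import numpy as np

    drawing = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # Detect edges first, then look for straight-line segments among them.
    edges = cv2.Canny(drawing, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)

    # Each detected segment comes back as its two end points, so length is easy to derive.
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            length = np.hypot(x2 - x1, y2 - y1)
            print(f"Line from ({x1}, {y1}) to ({x2}, {y2}), length {length:.0f}px")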

Text Recognition

It is just as crucial to read the text content in a drawing. Text on a drawing, such as tag numbers, notes, and holds, gives the drawing context. If a tag number cannot be linked to its matching symbol, image recognition is of little use.

The mechanical or electrical conversion of images of typed, handwritten, or printed text into machine-encoded text, whether from a scanned document or a photo of a document, is known as Optical Character Recognition (OCR).

Text extraction consists of two parts: the position of the text and its content. The precise location of the text on the scanned image can be associated with image recognition to add metadata to the image. The digitized files can then be edited, searched, and stored far more efficiently.
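As one way to obtain that text-plus-location pairing, the sketch below uses pytesseract, an open-source wrapper around the Tesseract OCR engine, to return every recognized word together with its bounding box and confidence. The file name is a placeholder, and this is simply an illustrative approach, not a description of any particular vendor's pipeline.

    import pytesseract
    from pytesseract import Output
    from PIL import Image

    drawing = Image.open("drawing.png")  # placeholder file name

    # image_to_data returns each recognized word with its bounding box and confidence score.
    data = pytesseract.image_to_data(drawing, output_type=Output.DICT)

    for text, x, y, w, h, conf in zip(data["text"], data["left"], data["top"],
                                      data["width"], data["height"], data["conf"]):
        if text.strip():
            print(f"'{text}' at ({x}, {y}), size {w}x{h}, confidence {conf}")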

There are numerous ML-enhanced OCR approaches you can use to extract text from a scanned drawing.

For one, the text can be filtered using Natural Language Processing (NLP) to retrieve components that comply with a regular expression. A regular expression is a specific notation for describing matching patterns. NLP aids in the pattern-based filtering of text.
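The sketch below shows that idea with Python's built-in re module: a regular expression describing a hypothetical tag-number convention filters OCR output down to the words that follow it. The sample words and the pattern itself are illustrative assumptions, not data from a real drawing.

    import re

    # Words pulled from a drawing by OCR (illustrative values only).
    ocr_words = ["NOTES:", "PV-1012", "SEE", "DETAIL", "FT-204A", "HOLD", "3"]

    # Hypothetical tag-number convention: two or three letters, a dash, one to four
    # digits, and an optional suffix letter (e.g., PV-1012 or FT-204A).
    tag_pattern = re.compile(r"^[A-Z]{2,3}-\d{1,4}[A-Z]?$")

    tags = [word for word in ocr_words if tag_pattern.match(word)]
    print(tags)  # ['PV-1012', 'FT-204A']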

Thanks to Machine Learning-enhanced OCR, the global economic system is thrust into new digital realms at a breakneck pace.

Summation

Manual redrafting takes a long time and costs a lot of money. For example, replicating a drawing on a digital platform can take two to three days. However, with the emergence of Machine Learning-enhanced OCR, the paper drawing can be reproduced with AI in a matter of minutes, saving at least 50% of manual labor.

For more information on extracting data from large format drawings and blueprints, check out: https://itechdata.ai/solutions/large-format-image-capture/.
