MICRON-LEVEL VISION TECHNOLOGY

Measurement and Detection for Biomedical Components and Samples


As precision components in various industries become smaller and tolerances become accordingly more stringent, advanced technologies are required to ensure tighter process control and detection of microscopic defects. Notable arenas in which ever-smaller components and tighter tolerances are continually demanded are the medical device, semiconductor, and Micro-Electro-Mechanical Systems (MEMS) sectors.

At the same time, economics perennially drive the requirements for increased production rates of such components. Figure 1 illustrates a microscopic defect detection application in the semiconductor industry.

Other needs also exist widely in the medical community for "visual" detection and measurement of microscopic objects and effects - organisms, symptoms, contamination and their behavior. Massive amounts of specialized labor are continually consumed screening and analyzing medical tissue and culture samples. Much of this effort is routine and repetitive and distracts resources that could be better applied to the tasks requiring the most human expertise and judgment.

Tracking of bacterial growth rates, as indicated in Figure 2, is only one of many such examples.

There is a common technical thread in these apparently distinct needs. That is the automation of robust capabilities for 3D visual measurement, detection and analysis of microscopic phenomena captured in high-resolution imagery.


Figure 1 - Microscopic defect detection

Figure 2 - Tracking bacterial growth rates

Using recent major advances in vision technology, along with close sustained collaboration of vision technology experts and medical community subject matter experts, each such medically-relevant vision task area can now successfully be automated.

Several physical factors must be knowledgeably traded off to provide the optimum measurement capability for a given object shape, material type, types of measurements to be made, and types of defects or effects that must be detected. Many applications require non-contact sensing, measurement and defect detection for multiple reasons, which may include:

  • Delicacy or fragility of the component, organism or effect to be observed
  • Required sterile processing (and/or)
  • Required processing speeds too fast for slow-moving probes

Let us take a look at the primary technical tradeoffs in micron-level 3D visual metrology and detection: First, there is always a requirement for some level of spatial resolution.

How small a defect must be detected? How accurate a measurement must be made? Highly reliable measurement and detection usually requires that the basic sensing element's resolution be five to ten times finer than the smallest artifact or phenomenon of interest. Current advanced systems can provide optical resolutions at the micron level or better in all three physical dimensions.
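The five-to-ten-times rule of thumb above can be expressed as a small calculation (a sketch; the function name and margin parameter are illustrative, not from the original):

```python
def required_sensor_resolution(smallest_feature_um, margin=5):
    """Rule of thumb from the text: the basic sensing element's
    resolution should be five to ten times finer than the smallest
    artifact or phenomenon of interest."""
    return smallest_feature_um / margin

# Detecting a 10 um defect reliably calls for roughly 1-2 um
# optical resolution, depending on the margin chosen.
conservative = required_sensor_resolution(10.0, margin=10)  # 1.0 um
relaxed = required_sensor_resolution(10.0, margin=5)         # 2.0 um
```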


Figure 3 - The ShaPix family, one of which is shown above, incorporates multiwavelength holographic 3D sensing and processing.

The least understood dimension of 3D spatial measurement and detection resolution is the range dimension (often called the Z dimension). Some optical measurement and detection technologies cannot discriminate distances finer than approximately 100 µm. Interferometric technologies can resolve range differences as small as tenths of a micron or less.

The second tradeoff issue is the required instantaneous field of measurement - which defines how much area must be viewable at one instant.

Some current advanced vision platforms employ available photosensor technology that can view and sample a field of measurement 4,000 or more times larger than the lateral resolution of the individual sensor elements in the two lateral dimensions (often called the "X" and "Y" spatial dimensions). For example, 1 µm defects can readily be observed over a field of 4 mm. When a continuously moving material must be measured or inspected, the field of measurement can be indefinitely greater than the basic sensor element resolution, such as 16,000 times that resolution or more.

One-micron phenomena can then be observed on a web that is 16 mm wide or much wider. Also, when 3D data is produced for each instantaneous field of measurement, a suitably equipped vision system can accurately "stitch" together measurements from an indefinitely large number of small and overlapping views to "map" an entire larger part or assembly.
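The "stitching" of overlapping views can be illustrated in one dimension: find the shift that best aligns two overlapping height profiles, then merge them. This is a minimal sketch of the idea, not the production registration method used by any particular system:

```python
import numpy as np

def stitch(profile_a, profile_b, min_overlap=10):
    """Align two overlapping 1-D height profiles by searching for the
    shift of profile_b relative to profile_a that minimizes their mean
    squared difference over the overlap, then merge them into one
    longer profile (averaging the overlapping samples)."""
    best_shift, best_err = 0, np.inf
    for shift in range(len(profile_a) - min_overlap + 1):
        overlap = min(len(profile_a) - shift, len(profile_b))
        err = np.mean((profile_a[shift:shift + overlap]
                       - profile_b[:overlap]) ** 2)
        if err < best_err:
            best_err, best_shift = err, shift
    overlap = len(profile_a) - best_shift
    merged = np.concatenate([
        profile_a[:best_shift],
        (profile_a[best_shift:] + profile_b[:overlap]) / 2,
        profile_b[overlap:],
    ])
    return merged, best_shift
```

Real systems do this in 2D or 3D over many views, but the principle of registering overlapping fields and fusing them into one map is the same.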


Figure 4 - To best highlight exactly what must be measured and detected, a light engine is shown integrated with sensors.

When a 3D object must be accurately measured or 3D defects or effects must be detected and classified, multiple sensors are often arranged to capture more information-rich data regarding the dimensions and relationships of surfaces, features or defects of interest. All biological or natural 3D vision has similarly evolved.

Again, the Z, or range dimension of the field of measurement, is often least understood. Most interferometric methods cannot reliably resolve the differences in range values over more than one wavelength of light. Others require slow mechanical searching to overcome the ambiguity interval challenges of interferometry. But multiwavelength holographic interferometry allows measurement of ranges to submicron accuracies over a Z measurement range of many centimeters.
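The article does not spell out the Coherix method, but the standard two-wavelength relation shows why combining wavelengths extends the ambiguity interval; beating two nearby wavelengths produces a much longer synthetic wavelength:

```python
def synthetic_wavelength_nm(l1_nm, l2_nm):
    """Standard two-wavelength interferometry relation: two nearby
    optical wavelengths beat to a synthetic wavelength
    Lambda = l1 * l2 / |l1 - l2|, far longer than either, which
    stretches the unambiguous measurement range well beyond a
    single optical wavelength."""
    return l1_nm * l2_nm / abs(l1_nm - l2_nm)

# Two red lines only 1 nm apart yield an ambiguity interval of
# about 0.4 mm instead of ~0.6 um (wavelengths chosen for illustration).
interval_nm = synthetic_wavelength_nm(633.0, 632.0)  # 400056.0 nm
```

Using several wavelength pairs in this way is what lets range be resolved to submicron accuracy over centimeters of depth.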

The ShaPix family, one of which is shown in Figure 3, uniquely incorporates that form of 3D sensing and processing. Vision technologies employing triangulation methods have Z measurement ranges limited by their focusing components. Other technologies employing Moire' fringe pattern methods have lateral (X-Y) resolutions that are relatively coarse but adequate for some applications.

The third tradeoff factor is the required processing rate at which measurement or detection must occur. This usually will relate to the production rate of the material to be inspected or measured.

This cycle time requirement will often impact required computing power and the choice of sensors, since there is a limit to how rapidly image data can be acquired from a semiconductor photosensor array. Line sensors used in web-material inspection can deliver tens of thousands of lines per second. Two-dimensional imaging array data acquisition rates vary (as their field-of-measurement-to-resolution ratios differ) from thousands of frames per second down to a few frames per second. Requirements for data acquisition speeds are also driven by constraints limiting the motion of the surfaces or components. As with any camera system, vision sensor exposure times must freeze any 3D motion that would excessively degrade achievable resolutions.
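The motion-freezing constraint reduces to simple arithmetic: exposure time must keep blur below some fraction of the optical resolution. The half-resolution criterion below is an assumption for illustration, not a figure from the article:

```python
def max_exposure_s(surface_speed_mm_s, resolution_um, blur_fraction=0.5):
    """Longest exposure that keeps motion blur below a chosen
    fraction of the optical resolution (blur_fraction is an
    assumed criterion, commonly around half a resolution element)."""
    max_blur_mm = resolution_um * blur_fraction / 1000.0
    return max_blur_mm / surface_speed_mm_s

# A web moving at 100 mm/s, measured at 1 um resolution, allows
# roughly a 5 microsecond exposure before blur reaches half a pixel.
exposure = max_exposure_s(100.0, 1.0)  # 5e-06 s
```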

The fourth technical tradeoff area affecting measurement and detection performance of micron-level 3D visual measurement and inspection is the illumination of the measured component surface. This is an often-underappreciated factor. Inadequate illumination can make reliable measurement and detection either excessively expensive or impossible, while optimized "smart" illuminators can make a solution economical and robust. Modern high-performance vision systems for high-resolution applications use computer-controlled light engines that can supply:

  • The right wavelengths and intensities of light
  • From the optimum incidence direction
  • With the desired polarizations
  • With the desired illumination distribution or pattern
  • In the best rapid-fire time sequence (and)
  • At the right instants of time.
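The bullet points above could be captured as a hypothetical illumination "recipe" fed to a light engine. Every field name and value here is illustrative, not drawn from any real light-engine API:

```python
from dataclasses import dataclass

@dataclass
class LightState:
    wavelength_nm: float   # the right wavelength of light
    intensity: float       # relative intensity, 0..1
    incidence_deg: float   # optimum incidence direction
    polarization: str      # desired polarization, e.g. "linear-0"
    pattern: str           # illumination distribution, e.g. "uniform"
    duration_us: float     # timing within the rapid-fire sequence

# A hypothetical three-flash sequence alternating incidence
# direction, then switching wavelength and pattern:
sequence = [
    LightState(630.0, 1.0, 45.0, "linear-0", "uniform", 50.0),
    LightState(630.0, 1.0, 135.0, "linear-0", "uniform", 50.0),
    LightState(470.0, 0.8, 0.0, "circular", "fringe", 100.0),
]
```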

To best highlight exactly what must be measured and detected, a light engine is shown integrated with sensors in Figure 4. Because various component surfaces and different materials differ dramatically in their spectral characteristics and their other reflectance characteristics, no single simple illumination method delivers high-performance measurement or defect detection results for all applications.

When computer-controlled sensors and illuminators are combined to implement a system solution, many powerful configurations become feasible.

It is not difficult to envision the performance achievable when the surface being measured or inspected can be observed by simultaneously combining many views (a multi-sensor suite) and optimizing multiple lightings (a smart illuminator suite).


Figure 5 - Display of an icon-driven vision application development environment

Beyond the hardware elements of precision vision systems, the ability of human users to efficiently and precisely define the vision task can determine whether the solution is economically viable. This fact is even more significant when it is realized that such vision tasks are often dynamic in nature: new measurements will be desired, new tolerances will be specified, new types of phenomena will be encountered, and new features will become of high interest. So underlying the physical technologies of any modern measurement and detection platform must be an extremely easy application development software interface. That essential element must enable a vision task solution to be created or refined within the time cycles in which specific task needs and criteria change. This means days or less, not months, for a refinement cycle to be accomplished. Graphic icon-driven tools provide an easily learned, productive solution. Figure 5 illustrates the display of an icon-driven vision application development environment.

Vision algorithm component steps in a library are represented as chips that can be rapidly connected to implement any complete vision solution.
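The chip-connection idea is essentially function composition: each library step transforms an image, and a solution chains them. A minimal sketch, with hypothetical "chips" standing in for a real vision library:

```python
def pipeline(*steps):
    """Chain vision 'chips' (each a function image -> image) into a
    complete solution, in the spirit of the icon-driven tools."""
    def run(image):
        for step in steps:
            image = step(image)
        return image
    return run

# Two illustrative chips operating on a 2-D list of gray values:
def invert(img):
    return [[255 - px for px in row] for row in img]

def binarize(img, threshold=128):
    return [[1 if px >= threshold else 0 for px in row] for row in img]

solution = pipeline(invert, binarize)
result = solution([[0, 200], [130, 255]])  # [[1, 0], [0, 0]]
```

Rearranging or swapping chips redefines the whole solution without rewriting it, which is what makes a days-not-months refinement cycle plausible.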

For vision technology used to expedite the performance and productivity of research, clinical, development, diagnostic, production, quality control or other activities, easy and rapid changes in application algorithms are important, since change is inherent in all of those functions.

Often miniature critical items such as medical devices must be inspected for their integrity, absence of microscopic defects, and dimensional correctness on all sides. It is often then most practical to arrange the fixturing, illumination and observation of the item so that it can be sequentially or simultaneously viewed section by section, from different vantage points around its perimeter.


Figure 6 - A view of a stent from one of the vantage points

Figure 6 illustrates a view of a stent from just one of these vantage points.

Typically six sensors or six rotational positions of the observed item would be used to form a complete characterization of the item's dimensional correctness and structural integrity.

Finally, the implementation of successful object, material surface and feature measurement and detection systems always involves a close working partnership between the subject matter experts and the vision/ metrology technology experts.

The former have deep understanding of the medical or material science, engineering and operations requirements; the latter have equally deep knowledge of how to rapidly extract accurate information from the optical signatures of objects, materials and phenomena. Together they can produce solutions that enable the subject matter to be visually processed as consistently, accurately, rapidly, and as near-perfectly as required.

Coherix Inc.
Ann Arbor, MI
coherix.com

September 2009