A point cloud, on the one hand, is the most comprehensive collection of raw measurements of real-world objects. On the other hand, it is just a set of “dumb” points with no context or semantic representation of what is being portrayed. Humans can quickly distinguish objects in a point cloud, but doing so for every meaningful object in the cloud takes a long time. That is where clever applications based on Artificial Intelligence / Machine Learning can help by automating part of this time-consuming task. Figure 3 portrays the same scene, except that the front panels of the cabinets have been clipped away, showing that the point cloud provides little detail about the contents of these cabinets.
A point cloud is simply a set of millions (or even billions) of points created by a scanner. It is important to note that these points typically lie on the surfaces of objects. Each point has three coordinates that locate it in space, and may also carry color and/or intensity data.
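To make that structure concrete, here is a minimal sketch of how such a point set might be held in memory. The `Point` class, its field names, and the synthetic values are illustrative assumptions, not part of any particular scanner's format.

```python
import random
from dataclasses import dataclass

@dataclass
class Point:
    # One scanned point: a 3D position plus optional appearance data.
    x: float
    y: float
    z: float
    intensity: float = 0.0     # reflectance, if the scanner records it
    rgb: tuple = (0, 0, 0)     # color, if a camera is paired with the scanner

# A (tiny) synthetic cloud; a real one holds millions of such points.
cloud = [Point(random.uniform(-10, 10),
               random.uniform(-10, 10),
               random.uniform(0, 3),
               intensity=random.random())
         for _ in range(1000)]

print(len(cloud))  # 1000
```

In practice such clouds are stored in flat arrays or formats like PLY/LAS for efficiency, but the per-point content is the same: coordinates plus optional attributes.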
Laser scanners are one way to capture point clouds. This type of scanner emits a laser beam in a specific direction (described by two angles, θ and φ). The beam reflects off a surface, and the distance r between the reflection and the scanner is measured. The result is one point in the point cloud. A complete point cloud is created by sweeping the beam around and measuring the distances of all these reflections on surfaces. The ideal CAD model that we would like to recreate from the point cloud is shown in Figure d.
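Each measurement (r, θ, φ) is a point in spherical coordinates, which can be converted to an XYZ position. A minimal sketch, assuming the common convention that θ is the polar angle from the +z axis and φ the azimuth in the xy-plane:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert one scanner measurement (range r, direction angles
    theta and phi) into an XYZ point on the reflecting surface."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)

# One reflection measured 5 m away, straight along the +x axis
# (theta = 90 degrees, phi = 0):
print(spherical_to_cartesian(5.0, math.pi / 2, 0.0))
```

Sweeping θ and φ over the scanner's field of view and applying this conversion to every return is exactly what builds up the cloud.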
The aim of this second blog post is to clarify how 3D scanners work. The two most important techniques for determining distance while scanning are Stereo Imaging (how different does an object appear from two different viewpoints?) and Time-of-Flight (how long does a pulse take to reflect off a surface and return?).
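Both techniques reduce to a one-line formula. A sketch of each, using the standard relations d = c·t/2 for Time-of-Flight and Z = f·B/d for stereo depth; the function names and example values are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Time-of-Flight: the pulse travels out and back, so halve the path."""
    return C * round_trip_s / 2.0

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo imaging: depth from how far a feature shifts between the
    two views, via the pinhole relation Z = f * B / d
    (f in pixels, B = camera baseline in meters, d = disparity in pixels)."""
    return focal_px * baseline_m / disparity_px

# A pulse returning after ~66.7 ns came from a surface ~10 m away:
print(round(tof_distance(66.7e-9), 2))
# A feature shifted 50 px between two cameras 0.2 m apart (f = 1000 px):
print(stereo_depth(1000.0, 0.2, 50.0))  # 4.0
```

Note the opposite sensitivities: Time-of-Flight needs very precise timing (light covers 1 m in about 3.3 ns), while stereo accuracy degrades with distance as the disparity shrinks.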