Citation: Miao Zhu, Giulio Ventura. 3D imaging technology for improvement of and application in architectural monitoring[J]. AIMS Mathematics, 2018, 3(3): 426-438. doi: 10.3934/Math.2018.3.426
With the continuous evolution of information science and technology, theories such as three-dimensional (3D) simulation, physical reconstruction, and virtual reality have been proposed, and our understanding of space has shifted from flat two-dimensional (2D) representations to 3D ones. 3D laser scanners, which employ 3D laser scanning technology (also known as "real copy technology"), solve many problems owing to advantages such as noncontact operation, high scanning speed, large information capacity, high precision, real-time use, and fully automated measurement in complex environments. This technology overcomes the limitations of traditional measurement instruments and has become a crucial means of directly acquiring precise 3D data of a target. 3D visualization is the next technological revolution in the mapping field after global positioning system (GPS) technology.
3D laser scanning technology has unique advantages over traditional single-point measurement methods, such as high efficiency and high precision. It provides 3D point cloud data of a scanned surface, from which high-accuracy, high-resolution digital terrain models can be obtained [1,3,6,9,11,12,13,15]. The technology combines optical, mechanical, electronic, and other components and, through sophisticated sensors and a variety of modern high-tech means, summarizes and integrates traditional mapping and measurement techniques.
Archaeological and architectural structures often undergo partial or total collapse due to a lack of maintenance and control. In particular, subsidence caused by the degradation of mortar under weather conditions or seismic activity can produce progressive movements that become excessive and lead to collapse [6,7,10,17]. Although topographic or electronic monitoring systems could prevent such events, their implementation is virtually impossible for many reasons, the most crucial of which are the invasiveness and aesthetic disturbance that usually accompany the installation of instrumentation, together with the high cost of installation and maintenance.
We propose cheap laser instrumentation that, once positioned, can monitor the parts of an archaeological site or building to be protected without human intervention. Thanks to software built on concepts derived from advanced tools of computational mechanics, the system executes at every scan a complete engineering analysis with an interpretation of the acquired data, sending standard alarm messages across the network when certain parameters are exceeded. Such a system should be tested during the research project at sites of special interest. The installation of targets or other equipment on the property is not required [2,18,21].
The remainder of this paper is organized as follows. Section 2 presents the design objectives. Section 3 explains that the core of the system is to obtain descriptions of the 3D information space. Section 4 presents perspectives regarding further research.
3D laser scanning technology has a wide range of applications in the field of surveying and mapping. Combined with inertial navigation systems, GPS, charge-coupled devices, and other technologies, it provides real-time access to high-precision digital elevation models and geographic information for 3D reconstructions of cities and local areas, and it is a key component of photogrammetry and remote sensing. Examples of successful applications in engineering, environmental testing, and other aspects of urban development include 3D mapping of sections, large-scale topographic maps, hazard assessment, 3D city models, complex building construction, deformation monitoring, and the construction of other large buildings.
We propose low-cost laser instrumentation that, once positioned, can monitor the parts of archaeological sites or buildings to be protected without human intervention. According to the design goals, the 3D model has its own special requirements, described as follows.
1. The instrument is compact and easy to install. The system can be operated simply, and human intervention is minimized.
2. A simple and cost-effective modeling method is required. Given the widespread application prospects of virtual reality, low-cost modeling systems should be developed to facilitate broad adoption. The modeling system should be simple, fast, and effective.
3. To guarantee the real-time performance of the system, the number of patches composing the object model should be kept small. The focus is on monitoring the building as a system rather than on exhaustive scanning measurement of the building; accuracy can be increased within a specified area of the geometric model.
According to the aforementioned requirements and analysis, our prototype system has the following characteristics.
1. The data acquisition system is based on the phase-comparison principle; thus, the accuracy of the data collected by the system is guaranteed.
2. Quick and easy operation: The process of setting up the whole system is simple and can be completed in minutes. After the setup is complete, the system automatically obtains and stores information regarding the target area.
3. Relatively low-cost hardware is selected.
We list a collection of basic assumptions that a 3D scanning scheme must satisfy. To develop a 3D laser-ranging module, a suitable measurement environment must be designed. The simplest way to convert a 2D scanning laser rangefinder into a 3D one is to combine it with a rotating platform. We independently developed the software to satisfy the requirements of the entire scanning system. Single-target or multi-target 3D scans can be timed, and the scan results can be compared.
This section presents a block diagram of the whole 3D scanning coordinate transformation model. Evaluating the 3D coordinates of the object is the focus of the monitoring system as well as of the present study. The key point is that the modeling system involves many coordinate systems [4,5,8].
Currently, the core objective of 3D scanning is to convert and optimize between coordinate systems. This subject has been extensively explored and is still under investigation with respect to methodological aspects for concrete applications. It involves the reverse process of 3D graphics display across multiple coordinate systems, including the image coordinate system, the camera coordinate system, and the world coordinate system. The transformations between these coordinate systems affect the 3D image [14,15,16,19,20]. Each step of the coordinate transformation process is shown in Figure 1.
Through the processes in Figure 1, we transform the screen coordinate system into the world coordinate system. Because of the specific characteristics of the selected scanning method and apparatus, the conversion process requires no complex adaptive equipment and offers high conversion speed and flexible operation.
The positioning process is the transformation between the coordinate systems, so we need a clear definition of the various coordinate systems used. Figure 2 shows a schematic of the coordinate system. In the camera coordinate system, we define the horizontal angle and vertical angle as α and −γ, respectively. Therefore, when α=γ=0, the coordinate system is O−XYZ. The new coordinate system obtained by shifting is called the target object coordinate system and is denoted o−xyz. Regardless of the coordinate system, we always define the Y axis along the laser line; thus, point P always lies on the Y axis of o−xyz.
When scanning an object, we define P as a point on the object. Point P has coordinates (0,ρ,0), where ρ is the distance of P from the laser. This simplifies the relationship between the world coordinate system and the camera coordinate system [17]. The conversion relationship is shown in Figure 2.
In Figure 3, the coordinates of point P in the target object coordinate system are expressed as Equation (1).
$$x = 0, \quad y = \rho, \quad z = 0 \tag{1}$$
We need to transform the coordinates from the target object coordinate system o−xyz to the camera coordinate system O−XYZ. To track the conversion, we introduce the vector ξ.
In the target object coordinate system o−xyz:
$$\xi = (e_1, e_2, e_3)\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$
In the camera coordinate system O−XYZ:
$$\xi = (\varepsilon_1, \varepsilon_2, \varepsilon_3)\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$$
We take (e1,e2,e3) as the basis of the Cartesian coordinate system and (ε1,ε2,ε3) as the basis after rotation, where
$$(e_1, e_2, e_3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
First, we consider rotations about the Z and X axes, as shown in the schematic in Figure 4. The rotated basis vectors are expressed as follows:
$$(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (e_1, e_2, e_3)\begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
$$(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (e_1, e_2, e_3)\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}$$
Substituting into the expressions for vector ξ, we obtain
$$(e_1, e_2, e_3)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = (e_1, e_2, e_3)\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}\begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} \tag{2}$$
We determine point P in the camera coordinate system O−XYZ as follows:
$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}^{-1}\begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1}\begin{pmatrix} x \\ y \\ z \end{pmatrix} \tag{3}$$
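The two rotation matrices can be built numerically. The sketch below (function names `rot_x` and `rot_z` are ours) constructs them with NumPy and verifies that, being orthogonal, the inverses appearing in equation (3) reduce to simple transposes.

```python
import numpy as np

def rot_z(gamma):
    """Rotation about the Z axis by angle gamma (radians)."""
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(alpha):
    """Rotation about the X axis by angle alpha (radians)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c,  -s],
                     [0.0, s,   c]])

alpha, gamma = np.radians(30.0), np.radians(45.0)
Rx, Rz = rot_x(alpha), rot_z(gamma)

# Rotation matrices are orthogonal, so the inverses in equation (3)
# are simply transposes; no general matrix inversion is needed.
assert np.allclose(np.linalg.inv(Rx), Rx.T)
assert np.allclose(np.linalg.inv(Rz), Rz.T)
```

Using the transpose instead of a general inverse avoids numerical inversion entirely and is exact for any rotation matrix.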
The transformation between the two coordinate systems must combine the formula derived above with the operation of the instrument; binding the two yields the coordinate system conversion formula used in practice. Figure 5 shows a schematic of the instrumentation and camera coordinate system [15].
α is the horizontal angle: the rotation of the laser from the Y-axis toward the X-axis. Its value is obtained from the program as the known variable TA.
β is the rotation about the Y-axis. By construction, the instrument does not rotate about the Y-axis, so the value of β is 0.
γ is the vertical angle: the rotation of the laser from the Y-axis toward the Z-axis. Its value is obtained from the program as the known variable −PA.
ρ is the distance between point P and the origin of the camera coordinate system. It is calculated by the internal program and is a known variable.
In our case:
$$\alpha = TA, \quad \beta = 0, \quad \gamma = -PA$$
At the same time, we can obtain the distance to P from the machine; thus, P has coordinates (x,y,z), where
$$x = 0, \quad y = \rho, \quad z = 0$$
Next, we refine the transformation between the camera coordinate system and the world coordinate system. When the instrument is in its initial state, the angles α, β, and γ are all 0; in this state, the coordinate system is the world coordinate system. The derivation above assumes that the origin of the camera coordinate system and the origin of the world coordinate system coincide. In reality, however, the origin of the camera coordinate system moves while the instrument scans, which generates an error that affects the final results. Therefore, we need to calibrate the system (the calibration project) before running the program, in which we set the motion vector (u,v,w) as a correction of the system.
The next section introduces the specific methods of the calibration project and summarizes the scanning procedure on which the calibration is based. We first obtain multiple groups of data, then correct the data using the correction system, and finally compare the data to confirm the effectiveness of the correction system.
The coordinate transformation theory above can be applied to the existing model. The transformation formula for point P is expressed as follows:
$$\begin{aligned}
\begin{pmatrix} X^* \\ Y^* \\ Z^* \end{pmatrix}
&= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}
\begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} u \\ v+\rho \\ w \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}
\begin{pmatrix} u\cos\gamma - (v+\rho)\sin\gamma \\ u\sin\gamma + (v+\rho)\cos\gamma \\ w \end{pmatrix} \\
&= \begin{pmatrix} u\cos\gamma - (v+\rho)\sin\gamma \\ \cos\alpha\,[u\sin\gamma + (v+\rho)\cos\gamma] - w\sin\alpha \\ \sin\alpha\,[u\sin\gamma + (v+\rho)\cos\gamma] + w\cos\alpha \end{pmatrix}
\end{aligned}$$
In the following discussion, we abbreviate the formula as follows:
$$\begin{pmatrix} X^* \\ Y^* \\ Z^* \end{pmatrix} = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \cos\alpha\sin\gamma & \cos\alpha\cos\gamma & -\sin\alpha \\ \sin\alpha\sin\gamma & \sin\alpha\cos\gamma & \cos\alpha \end{pmatrix}\begin{pmatrix} u \\ v+\rho \\ w \end{pmatrix} \tag{4}$$
In equation (4), (X∗,Y∗,Z∗) is the result obtained after conversion of the actual measurement, (X,Y,Z) is the required theoretical conversion value, and (u,v,w) is the correction value of the system.
(u,v,w) consists of two parts: the system's inherent error, and the error generated because the camera (i.e., the laser emission point) moves as different scanning angles are used, while the receiving point must remain consistent.
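Equation (4) can be implemented directly. The sketch below (the function name `scan_to_world` and the sample values are ours) evaluates the abbreviated matrix and cross-checks it against the unabbreviated product of the two rotation matrices from the derivation above.

```python
import numpy as np

def scan_to_world(alpha, gamma, rho, u=0.0, v=0.0, w=0.0):
    """Equation (4): map a range reading rho taken at angles (alpha, gamma),
    with correction vector (u, v, w), to coordinates (X*, Y*, Z*)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cg, sg = np.cos(gamma), np.sin(gamma)
    M = np.array([[cg,      -sg,      0.0],
                  [ca * sg,  ca * cg, -sa],
                  [sa * sg,  sa * cg,  ca]])
    return M @ np.array([u, v + rho, w])

# Cross-check against the explicit product R_x(alpha) @ R_z(gamma).
alpha, gamma, rho = np.radians(20.0), np.radians(-35.0), 7.0
u, v, w = 0.01, -0.02, 0.005
Rx = np.array([[1, 0, 0],
               [0, np.cos(alpha), -np.sin(alpha)],
               [0, np.sin(alpha),  np.cos(alpha)]])
Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
               [np.sin(gamma),  np.cos(gamma), 0],
               [0, 0, 1]])
assert np.allclose(scan_to_world(alpha, gamma, rho, u, v, w),
                   Rx @ Rz @ np.array([u, v + rho, w]))
```

With zero correction and zero angles, the function returns (0, ρ, 0), i.e., the point stays on the laser line, as expected from the instrument's initial state.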
This section presents experimental data demonstrating that the correction system is a crucial aspect of the 3D scanning monitoring system [19]. Before comparing the experimental data, we first introduce the calibration process and formula derivation. The principle of the system calibration procedure is as follows. First, we set up a plane perpendicular to the horizontal plane and scan sample points on it. Because these sampling points lie on the same plane, their world coordinates (X,Y,Z) should, in theory, satisfy the same plane equation. We solve for the correction at each point to ensure that the points lie on the same plane.
This section describes the analysis behind the calibration procedure. N > 7 points on a plane are scanned, and each point must satisfy the (unknown) plane equation
$$aX + bY + cZ - d = 0$$
With (X∗,Y∗,Z∗) given by equation (4), each scanned point satisfies
$$a(X^* - h) + b(Y^* - i) + c(Z^* - j) - d = 0 \tag{5}$$
The N instances of this equation for the scanned points form a nonlinear system in the unknowns a, b, c, d, h, i, j, which can be solved using the least squares method with iteration: in each step, a linear system for the solution increment is solved in the least squares sense.
Equation (5) can be rearranged as follows:
$$aX^* = d - bY^* - cZ^* + (ah + bi + cj)$$
By dividing both sides by a, we obtain
$$X^* = \bigl(d - bY^* - cZ^* + (ah + bi + cj)\bigr)/a$$
Subsequently, we reset the variables as follows:
$$\mu_0 = d/a, \quad \mu_1 = -b/a, \quad \mu_2 = -c/a, \quad \phi = (ah + bi + cj)/a \tag{6}$$
Thus,
$$X^* = \mu_0 + \mu_1 Y^* + \mu_2 Z^* + \phi \tag{7}$$
The values of μ0, μ1, μ2, and φ are calculated from equation (7) using the least squares method, after which the values of a, b, c, and d follow from equation (6). From the definition of φ, we then obtain
$$\phi = h - \mu_1 i - \mu_2 j \tag{8}$$
We can use the least squares method again to calculate h, i, and j. Details of the relevant procedures are provided in the appendix. Once the solution has been determined, the constants h, i, and j are the instrument calibration constants, and the coordinates of a scanned point are given by the world coordinate system equation, in which the angles are provided by the machine settings.
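The two least-squares stages can be sketched as follows. The function names and the synthetic data are ours; we assume each calibration plane yields one fit of the form of equation (7) (a single plane determines only the sum μ0 + φ together with μ1 and μ2), and the per-plane values of φ then enter equation (8) across several planes.

```python
import numpy as np

def fit_plane_params(X, Y, Z):
    """Stage 1: least-squares fit of X* = c0 + mu1*Y* + mu2*Z*
    for the points scanned on one calibration plane.
    Here c0 absorbs mu0 + phi from equation (7)."""
    A = np.column_stack([np.ones_like(Y), Y, Z])
    (c0, mu1, mu2), *_ = np.linalg.lstsq(A, X, rcond=None)
    return c0, mu1, mu2

def solve_calibration(phis, mu1s, mu2s):
    """Stage 2: least-squares solution of equation (8),
    phi_k = h - mu1_k * i - mu2_k * j, over several planes k."""
    A = np.column_stack([np.ones_like(mu1s),
                         -np.asarray(mu1s),
                         -np.asarray(mu2s)])
    (h, i, j), *_ = np.linalg.lstsq(A, np.asarray(phis), rcond=None)
    return h, i, j

# Synthetic check of stage 2 with known calibration constants.
h_true, i_true, j_true = 0.012, -0.008, 0.004
mu1s = np.array([0.10, -0.30, 0.25, 0.05])
mu2s = np.array([-0.20, 0.15, 0.10, -0.05])
phis = h_true - mu1s * i_true - mu2s * j_true
h, i, j = solve_calibration(phis, mu1s, mu2s)
assert np.allclose([h, i, j], [h_true, i_true, j_true])
```

Stacking equation (8) over several planes gives an overdetermined linear system in (h, i, j), which `numpy.linalg.lstsq` solves directly without iteration.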
We used different data to verify the practicality of the correction system. We ran experiments at several distances (1 m, 3 m, 6 m, 9 m, and 12 m), obtaining two data sets per experiment: one with the correction system and one without it. Our goal was to determine the error between the calculated and theoretical data, so each experiment was repeated 50 times. The experimental results are shown in Figure 6. The red plot shows the data obtained without the correction system and the blue plot shows the data obtained with it; with the correction system, the data are closer to the target.
We consider one set of experimental results for comparison: a first data set without correction and a second that has been corrected. Because the data obtained by the scanning instrument are expressed in meters, we record them to three decimal places. In this setup, the instrument scans a flat plate at 7 m; imaging and analysis are then conducted. The results are shown in Figure 7.
Even if the preferred interpretation is inaccurate, the data strongly suggest that the calibration procedure is crucial. We further analyzed the results shown in Figure 7; the analysis results are presented in Table 1.
| | Before correction (m) | After correction (m) |
|---|---|---|
| Maximum distance | 7.074 | 7.002 |
| Minimum distance | 6.852 | 6.872 |
| Average distance | 6.965 | 6.957 |
| Variance relative to the standard distance | 0.0057 | 0.0023 |
The experimental results show that the corrected data yield better results than the uncorrected data. Comparing the two sets, the difference between the maximum and minimum values is 0.222 m for the first (uncorrected) set and 0.130 m for the second (corrected) set. We determined the optimal number of calculations and used suitable correction data. The error in the correction process arises mainly from two sources: first, the distance data returned by the instrument carry a certain measurement error; second, the accuracy of the coordinate conversion formula also affects the data. Only by minimizing the correction error can the optimum effectiveness of 3D imaging be ensured.
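The comparison statistics in Table 1 can be reproduced from raw distance samples. A minimal sketch follows; the function name, the variable names, and the sample values are ours (the target distance of 7 m matches the experiment, but the raw samples are hypothetical illustrations, not the measured data).

```python
import numpy as np

def scan_statistics(distances, target):
    """Summary statistics in the style of Table 1: max, min, and mean
    distance, plus the variance of the error relative to the target."""
    d = np.asarray(distances, dtype=float)
    return {
        "max": d.max(),
        "min": d.min(),
        "mean": d.mean(),
        "variance_vs_target": np.mean((d - target) ** 2),
    }

# Hypothetical raw samples illustrating the before/after comparison.
target = 7.0
before = np.array([7.074, 6.852, 6.990, 6.944])
after = np.array([7.002, 6.872, 6.970, 6.984])
stats_before = scan_statistics(before, target)
stats_after = scan_statistics(after, target)

# A smaller spread after correction indicates the calibration helped.
assert (stats_after["max"] - stats_after["min"]
        < stats_before["max"] - stats_before["min"])
```

Note that the variance here is taken relative to the known target distance rather than the sample mean, matching the "variance relative to the standard distance" row of Table 1.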
This paper concludes with a discussion of future research. Based on the theme of 3D scanning and monitoring, we explored the system requirements and the design issues of the coordinate system, and we determined and implemented a feasible plan. We developed a low-cost laser instrumentation system that, once positioned, can monitor parts of archaeological sites or buildings to be protected. The system requires no human intervention. Thanks to software built on concepts derived from advanced tools of computational mechanics, the system executes every scan within a given period, performing a complete engineering analysis with an interpretation of the acquired data.
The experimental results may not be completely satisfactory, but the system admits a wide range of applications. Better imaging and data analysis methods could contribute to higher instrument accuracy. The following aspects could be improved.
1) Choice of coordinate system perspective: Based on the experimental results, a suitable viewing angle helps obtain the desired results easily. Thus, if the viewing angle can be controlled artificially, the performance of the instrument will be further enhanced.
2) Coordinate scale improvements: The unit length of the coordinate system is 1 m. Although the instrument can detect changes of 1 cm, such changes cannot be easily located in the observation coordinate system. Selecting the most appropriate coordinate scale affects the monitoring results.
The authors declare no conflict of interest.
[1] | V. Barrile, G. Bilotta, G. M. Meduri, et al. Laser Scanner Technology, Ground-Penetrating Radar and Augmented Reality for the Survey and Recovery of Artistic, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, (2017), 123-127. |
[2] | M. Kincey, C. Gerrard, J. Warburton, Quantifying erosion of 'at risk' archaeological sites using repeat terrestrial laser scanning, Journal of Archaeological Science: Reports, 12 (2017), 405-424. |
[3] | D. G. Hadjimitsis, K. Themistocleous, A. Agapiou, et al. Monitoring Archaeological Site Landscapes in Cyprus using Multi-temporal Atmospheric Corrected Image Data, International Journal of Architectural Computing, 7 (2009), 121-138. |
[4] | M. Gaiani, E. Gamberini, G. Tonelli, VR as work tool for architectural and archaeological restoration: the ancient Appian way 3D web virtual GIS, Virtual Systems and Multimedia, (2001), 86-95. |
[5] | F. M. Abed, M. U. Mohammed, S. J. Kadhim, Architectural And Cultural Heritage Conservation Using Low-Cost Cameras, Applied Research Journal, 3 (2017), 376-384. |
[6] | Z. Zhang and L. Yuan. Building a 3D scanner system based on monocular vision, Appl. Optics, 51 (2012), 1638-1644. |
[7] | J. Shi, Z. Sun and S. Bai, Large-scale three-dimensional measurement via combining 3D scanner and laser rangefinder, Appl. Optics, 54 (2015), 2814-2823. |
[8] | C. Fröhlich, M. Mettenleiter. Terrestrial Laser Scanning - New Perspectives in 3D Surveying, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, (2004), 7-13. |
[9] | F. Remondino, Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning, Remote Sensing, 3 (2011), 1104-1138. |
[10] | F. M. Giammusso, Surveying, analysis and 3D modeling in archaeological virtual reconstruction: The inner colonnade of the naos of Temple G of Selinunte, Virtual Systems and Multimedia (VSMM), 2012 18th International Conference on IEEE, 2012, 57-64. |
[11] | H. S. Park, H. M. Lee, H. Adeli, et al. A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning, Comput-Aided Civ. Inf., 22 (2007), 19-30. |
[12] | G. Teza, A. Galgaro, N. Zaltron, et al. Terrestrial laser scanner to detect landslide displacement fields: a new approach, International Journal of Remote Sensing, 28 (2007), 3425-3446. |
[13] | G. Bitelli, M. Dubbini and A. Zanutta, Terrestrial laser scanning and digital photogrammetry techniques to monitor landslide bodies, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 35 (2004), 246-251. |
[14] | P. J. Besl, N. D. McKay, A method for registration of 3D shapes, IEEE T. Pattern Anal., 14 (1992), 239-256. |
[15] | S. Al-khedera, Y. Al-shawabkeha, N. Haalab, Developing a documentation system for desert palaces in Jordan using 3D laser scanning and digital photogrammetry, J. Archaeol. Sci., 36 (2009), 537-546. |
[16] | J. Armesto-González, B. Riveiro-Rodrguez, D. González-Aguilera, et al. Terrestrial laser scanning intensity data applied to damage detection for historical buildings, J. Archaeol. Sci., 37 (2010), 3037-3047. |
[17] | S. C. Kuzminskya, M. S. Gardinerb, Three-dimensional laser scanning: potential uses for museum conservation and scientific research, J. Archaeol. Sci., 39 (2012), 2744-2751. |
[18] | M. Cigola, A. Gallozzi, L. J. Senatore, et al. The Use of Remote Monitored Mobile Tools for the Survey of Architectural and Archaeological Heritage, INTBAU International Annual Event. Springer, Cham, (2017), 756-765. |
[19] | M. Canciani, C. Falcolini, M. Saccone, et al. The architectural 3D survey vs archaeological 3D survey, Digital Heritage International Congress (DigitalHeritage), 1 (2013), 765-765. |
[20] | F. Fischnaller, A. Guidazzoli, S. Imboden, et al. Sarcophagus of the Spouses installation intersection across archaeology, 3D video mapping, holographic techniques combined with immersive narrative environments and scenography, Digital Heritage, 1 (2015), 365-368. |
[21] | J. H. R. Burns, D. Delparte, R. D. Gates, et al. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs, PeerJ, 3 (2015), e1077. |