ANALYSIS OF MATHEMATICAL MODELS AND METHODS FOR 3D ENVIRONMENT CHANGE DETECTION FROM IMAGERY
DOI: https://doi.org/10.20535/kpisn.2025.3.336383

Keywords: 3D environment changes, point clouds, digital twin, computer vision, machine learning

Abstract
Background. Automated detection of changes in three-dimensional (3D) environments constitutes a fundamental challenge in the development and maintenance of urban infrastructure digital twins, environmental monitoring systems, and facility security applications. Current methodologies exhibit limitations in uncertainty quantification, computational scalability for large-scale datasets, and semantic information integration. Existing reviews predominantly emphasize technical implementations or domain-specific applications, with insufficient attention to the particularities of the mathematical methods used. This gap impedes the principled selection of algorithms for digital twin actualization.
Objective. To provide a comprehensive systematization and comparative analysis of mathematical models and computational methods for three-dimensional environment change detection. To establish a taxonomic framework that delineates methodological advantages, constraints, and applicability domains. To formulate evidence-based recommendations for method selection across diverse 3D environment monitoring scenarios.
Methods. We conducted a systematic analysis encompassing classical geometric approaches for point cloud registration and comparison, statistical frameworks for uncertainty quantification, and contemporary machine learning paradigms for automated change classification. Formal mathematical representations were developed for three-dimensional data structures, change typologies, and corresponding evaluation metrics.
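The simplest of the classical geometric approaches surveyed here is cloud-to-cloud (C2C) comparison: each point of the later epoch is tested against its nearest neighbour in the reference epoch. The sketch below is illustrative only (the `threshold` value and the toy data are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_change_mask(reference, target, threshold):
    """Flag points in `target` whose nearest-neighbour distance to the
    reference epoch exceeds `threshold` (minimal cloud-to-cloud test)."""
    tree = cKDTree(reference)      # spatial index over the reference cloud
    dists, _ = tree.query(target)  # distance to the closest reference point
    return dists > threshold       # True = candidate change

# Toy example: a near-planar patch, with one point displaced in the second epoch.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, size=(500, 3)) * [1, 1, 0.01]
tgt = ref.copy()
tgt[0, 2] += 0.5                   # simulate a structural change
mask = c2c_change_mask(ref, tgt, threshold=0.1)
print(mask.sum())                  # → 1
```

More robust estimators such as M3C2 replace the raw nearest-neighbour distance with a distance measured along the local surface normal, averaged over a cylindrical neighbourhood, which suppresses roughness and registration noise.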
Results. A unified mathematical framework for characterizing diverse change phenomena in three-dimensional environments was established, providing a theoretical foundation for automated digital twin synchronization. Method selection criteria were derived based on data properties and application-specific requirements. Empirical evidence demonstrates that hybrid architectures integrating geometric primitives with machine learning techniques achieve superior accuracy while preserving interpretability in digital environment model updates.
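The hybrid pattern referred to above can be sketched as follows: interpretable geometric primitives are computed per point and then fed to a learned classifier. Everything below is a hypothetical illustration under assumed synthetic data; the feature set and classifier choice are not prescribed by the paper:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def geometric_features(reference, target, k=8):
    """Per-point geometric primitives: distance to the nearest reference
    point, and mean distance to the k nearest reference points."""
    tree = cKDTree(reference)
    d1, _ = tree.query(target)       # closest-point distance
    dk, _ = tree.query(target, k=k)  # distances to k nearest neighbours
    return np.column_stack([d1, dk.mean(axis=1)])

# Synthetic training data: unchanged points jitter around the reference
# surface; "changed" points are displaced vertically.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 1, size=(400, 3))
unchanged = ref + rng.normal(0, 0.005, size=ref.shape)
changed = ref + np.array([0, 0, 0.3])
X = np.vstack([geometric_features(ref, unchanged),
               geometric_features(ref, changed)])
y = np.concatenate([np.zeros(len(unchanged)), np.ones(len(changed))])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

Because the classifier operates on named geometric quantities rather than raw coordinates, its decisions remain inspectable, which is the interpretability benefit the hybrid architectures retain.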
Conclusions. The proposed systematic framework facilitates principled method selection for specific change detection applications. Future research directions include the development of self-adaptive algorithms with autonomous parameter optimization and the incorporation of semantic reasoning through advanced deep learning architectures.
License
Copyright (c) 2025 Serhii Sakharov, Oleh Chertov

This work is licensed under a Creative Commons Attribution 4.0 International License.