-
Forest phenomics is undergoing a fundamental transformation. Terrestrial and airborne LiDAR have long served as the 'gold standard' for quantifying forest structural metrics; however, recent advances in implicit neural representations, particularly Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have rapidly gained prominence. This commentary argues for an emerging shift from 'explicit geometry-based measurement' toward 'implicit neural reconstruction', enabled by advances in generative AI and the widespread availability of consumer-grade imaging hardware. These developments hold significant promise for democratizing high-precision forest inventory, with direct implications for Monitoring, Reporting, and Verification (MRV) in carbon markets and high-throughput phenotyping in forest breeding programs. At the same time, the reliance on learned representations introduces new epistemological risks, including reconstruction artifacts and model 'hallucinations', underscoring the need for novel validation frameworks and uncertainty-aware methodologies within the evolving paradigm of Smart Forestry.
-
For more than two decades, the laser pulse has been synonymous with forest digitization. To unravel the occluded architecture of forest canopies, active remote sensing, most notably Light Detection and Ranging (LiDAR), has been indispensable. This era of 'explicit geometry measurement' culminated in landmark datasets such as NEON and ForestGEO, which demonstrated physics-based forest reconstruction at unprecedented spatial resolution. Yet, despite their accuracy, high-end terrestrial and mobile laser scanning (TLS/MLS) systems, often exceeding USD 50,000, remain financially prohibitive[1]. These cost barriers are particularly consequential in the Global South. Regions such as Southeast Asia and sub-Saharan Africa, where deforestation rates are high and carbon monitoring is critical for compliance with international climate agreements (e.g., the Paris Agreement), frequently lack access to advanced LiDAR infrastructure. Manual forest inventories, while more affordable, are labor-intensive and susceptible to systematic bias, with biomass underestimation of 10%–20% commonly reported in dense tropical forests. The resulting tension between precision and accessibility has catalyzed a transformative shift in forest phenomics.
-
Recent comparative studies highlight the disruptive potential of implicit neural representations. In mixed-evergreen forests, consumer-grade smartphone imagery processed via Neural Radiance Fields (NeRF) has been benchmarked against LiDAR-inertial SLAM systems mounted on quadruped robots costing approximately USD 60,000. The NeRF-based approach achieved data acquisition speeds five times faster than SLAM, while attaining a diameter-at-breast-height (DBH) root mean square error (RMSE) of 1.68 cm, outperforming LiDAR on complex trunk geometry and reducing costs by nearly an order of magnitude (Table 1). Similar findings were reported by Arshad et al.[2] in Italian olive groves, where NeRF reconstruction derived solely from consumer smartphones matched LiDAR accuracy in urban agroforestry settings.

Table 1. Performance comparison of LiDAR vs NeRF/3DGS prototypes in forest phenomics applications.

Metric               | LiDAR (TLS/MLS) | NeRF/3DGS (2024–2025 prototypes) | Ref.
Hardware cost (USD)  | 10,000–60,000   | 100–1,000 (phone/drone)          | [1]
Collection speed     | 1–5 h/plot      | 10–60 min/plot                   | [2]
DBH RMSE             | 1–4.5 cm        | 1.68–5 cm                        | [9,16]
CHM MAE              | 0.1–0.5 m       | 0.17–0.5 m                       | [5,6]
Processing time      | Minutes–hours   | 250–300 s                        | [4,16]
Scalability          | Plot-level      | Landscape (200+ acres)           | [6]

These findings signal a broader paradigm shift: from measuring forests using expensive active sensors to reconstructing them through learned volumetric inference. Crucially, this transition enables participatory and community-driven monitoring frameworks that were previously unattainable in resource-constrained regions.
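The DBH RMSE values reported in Table 1 follow the standard root-mean-square-error definition over a set of validation trees. As a minimal illustrative sketch (the function name and the example values are hypothetical, not data from the cited studies):

```python
import numpy as np

def dbh_rmse(estimated_cm, reference_cm):
    """RMSE (cm) between reconstruction-derived DBH and field-measured
    (e.g. caliper or diameter-tape) reference DBH for the same trees."""
    err = np.asarray(estimated_cm) - np.asarray(reference_cm)
    return float(np.sqrt(np.mean(err ** 2)))

# Two trees, each estimated 2 cm off the reference:
print(dbh_rmse([32.0, 28.0], [30.0, 30.0]))  # 2.0
```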
-
Unlike traditional point clouds, NeRF functions like a neural 'artist' that learns to paint a scene by modeling the forest as a continuous field of light and density. Rather than relying exclusively on line-of-sight measurements, it probabilistically infers occluded geometry, a distinct advantage in multilayered forest canopies where LiDAR penetration into the understory remains imperfect. This inference-driven reconstruction has been described as 'ghostly', in that geometry emerges from learned priors rather than direct physical measurement.
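The continuous 'cloud of light and density' is rendered by compositing predicted densities and colors along each camera ray. A minimal sketch of this classic volume-rendering step, assuming densities and colors have already been predicted by a network (the function and variable names here are illustrative, not from any cited implementation):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    sigmas: (N,) predicted volume densities at N samples along the ray
    colors: (N, 3) predicted RGB values at those samples
    deltas: (N,) distances between adjacent samples
    Returns the composited RGB and the accumulated opacity of the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)       # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])    # shift so T_1 = 1 (nothing in front)
    weights = trans * alphas                       # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)  # composited ray color
    return rgb, float(weights.sum())
```

Because the weights are differentiable in the densities, gradients flow back into the network even for samples behind partial occluders, which is how geometry in gaps (e.g. behind foliage) is inferred rather than directly measured.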
Early critiques of NeRF focused on computational inefficiency. However, methodological advances in 2024–2025 have substantially mitigated these limitations. Stuart et al.[3] introduced Object-Based NeRF (OB-NeRF) for high-throughput phenotyping, reducing reconstruction time to 250 s via targeted ray-sampling while surpassing benchmarks like Instant-NGP in fidelity for complex plant architectures. Forestry-specific applications have also emerged: Li et al.[4] proposed NeRF-RE variants capable of suppressing dynamic elements such as pedestrians or wildlife during training, improving reconstruction efficiency by up to 21-fold for urban and peri-urban tree inventories.
-
The scalability challenge has been further addressed by 3DGS. In contrast to NeRF's volumetric approach, 3DGS operates like a high-speed digital collage, projecting millions of semi-transparent 'stickers' or ellipsoids onto the environment to reconstruct vast forest landscapes with significantly less computational overhead than voxel-based methods. Building on foundational work by Kerbl et al.[5], recent extensions have demonstrated competitive completeness relative to multi-view stereo (MVS) in outdoor environments. Shaheen et al.[6] introduced ForestSplat, scaling 3DGS to reconstruct approximately 200 acres of mangrove forests from drone imagery, achieving a Canopy Height Model (CHM) Mean Absolute Error (MAE) of 0.17 m relative to airborne LiDAR. In a 2025 study published by the International Society for Photogrammetry and Remote Sensing (ISPRS), Petrovska & Jutzi[7] demonstrated that the 3DGS-MCMC approach outperformed NeRF in terms of forest completeness metrics, with particularly strong gains observed under anisotropic leaf distribution conditions. Dynamic extensions further expand applicability. Shen et al.[8] demonstrated deformable 3DGS for wind-affected scenes, enabling time-series monitoring through temporally coherent Gaussian primitives. In phenomics, Stuart et al.[3] reported sub-millimeter stem segmentation accuracy in wheat using 3DGS, underscoring its potential for fine-scale trait extraction. Collectively, these results suggest that while LiDAR continues to offer superior raw precision under certain conditions (e.g., low-light), neural methods provide compelling advantages in cost, speed, and scalability (Table 1).
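The 'semi-transparent stickers' intuition can be made concrete: each Gaussian is projected to a 2D ellipse, and pixels composite the sorted splats front to back. A simplified per-pixel sketch of that blending step (assuming projection to screen space has already been done; names and the early-termination threshold are illustrative, not from the cited systems):

```python
import numpy as np

def splat_pixel(pixel, means, covs_inv, colors, opacities, depths):
    """Front-to-back alpha blending of projected 2D Gaussian splats at one pixel.

    means: (N, 2) projected Gaussian centers in screen space
    covs_inv: (N, 2, 2) inverse 2D covariances of the projected ellipses
    colors: (N, 3) RGB per Gaussian; opacities: (N,); depths: (N,)
    """
    order = np.argsort(depths)        # composite nearest splats first
    rgb = np.zeros(3)
    transmittance = 1.0
    for i in order:
        d = pixel - means[i]
        # Gaussian falloff: opacity * exp(-0.5 * d^T Sigma^{-1} d)
        alpha = opacities[i] * np.exp(-0.5 * d @ covs_inv[i] @ d)
        rgb += transmittance * alpha * colors[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:      # early stop once the pixel is opaque
            break
    return rgb
```

The absence of per-ray network queries, plus this early termination, is the source of 3DGS's speed advantage over NeRF for landscape-scale scenes.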
-
Cost-effective, scalable reconstruction is revolutionizing MRV frameworks for carbon markets. Neural pipelines deployed on drones or smartphones enable community-led biomass estimates in dense undergrowth, fostering verifiable, transparent systems that were previously infeasible. Korycki et al.[9], for example, demonstrated NeRF-based DBH estimation in mixed-evergreen forests, overcoming occlusion challenges that limit MLS and enabling more accurate surface fuel characterization. In Puerto Rico, Shaheen et al.[6] utilized 3DGS to generate high-fidelity CHMs in mangrove ecosystems, directly informing restoration planning and carbon sequestration modeling, in line with the 2025 Climate Technology Progress Report on Biogenic Carbon Solutions by the United Nations Environment Programme (UNEP).
Beyond geometry, 2025 marked the integration of generative AI for semantic forest analysis. Large Language Models (LLMs) coupled with Retrieval-Augmented Generation (RAG) are increasingly used to guide real-time compliance with certification standards such as FSC/PEFC. Meanwhile, synthetic forest generation has also addressed data scarcity in pest and defect detection; for instance, Arboair leveraged generative models to produce synthetic training data for infestation detection across diverse biomes, including boreal forests.
-
The versatility of implicit neural reconstruction lies in its inherent multi-scale capability. Transitioning from landscape-level monitoring to organ-level analysis, the technologies that digitize ecosystems are proving equally transformative for individual seedlings. In plant breeding programs, OB-NeRF and 3DGS facilitate high-throughput, non-destructive trait analysis, including leaf area, branching angles, and growth dynamics, yielding reported accuracy gains of up to 15%–20% over traditional 2D imaging phenotyping pipelines. Stuart et al.[3] tracked wheat growth over 15 weeks using 3DGS to identify drought-resilient traits, offering insights for stress-tolerant breeding. In the forestry breeding context, NeRF-based pipelines originally developed for horticultural crops, such as those demonstrated in tomato by Choi et al.[10], are now being adapted for forest tree seedlings, potentially reducing field trial duration by up to 30%[11]. Time-series approaches further support climate-adaptive breeding by capturing phenotypic responses to environmental stress. The utility of these advances in cost and speed cannot be overstated: by compressing the timeline of traditional progeny testing, these tools are poised to accelerate the identification of climate-resilient genotypes. Additional studies, such as that of Huang et al.[12] on individual tree point clouds via NeRF, highlight forestry-specific extensions for canopy and root trait analysis in climate-adaptive forestry programs.
-
Despite these advances, the probabilistic nature of neural reconstruction introduces epistemological risk. 'Hallucinations', plausible yet inaccurate geometric completions in occluded regions, can bias biomass estimates by 5%–10%[13]. In remote sensing, Cao et al.[14] reported overestimation artifacts in satellite-based NeRF, while Rafiq et al.[15] highlighted risks of generative bias in ecological monitoring. In carbon markets, reliance on unvalidated neural reconstructions could undermine credit integrity, potentially inflating global carbon estimates by billions of tons.
Environmental costs compound these concerns. A 2025 UNEP issue note estimates that training large neural models may consume energy equivalent to hundreds of households annually, with water usage comparable to small cities. In forestry MRV, deploying neural tools at scale could inadvertently increase the carbon footprint they are meant to mitigate, especially if cloud-based training relies on fossil-fueled data centers.
-
To address these challenges, multi-modal fusion emerges as a key mitigation approach. Integrating sparse LiDAR data as geometric ground truth with NeRF or 3DGS inference has yielded accuracy improvements of 20%–30%, enhancing robustness in occluded environments. For example, LiDAR-camera fusion leverages complementary modalities to reduce errors in 3D scene understanding, particularly in autonomous or remote sensing contexts. Ensemble methods in NeRF provide variance estimates that flag unreliable regions, while Bayesian formulations explicitly model epistemic uncertainty in density predictions. Notably, efficient alternatives such as 3DGS offer substantial efficiency gains, with up to 1,350x reductions in optimization and rendering costs compared to NeRF, making them particularly attractive for large-area deployments.
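The ensemble idea is simple to operationalize: train several models on the same imagery and treat inter-model disagreement as epistemic uncertainty, flagging voxels where geometry is inferred rather than observed. A minimal sketch under that assumption (the function name and threshold are hypothetical; real pipelines would operate on full density grids from independently trained NeRF/3DGS runs):

```python
import numpy as np

def flag_unreliable(density_stack, var_threshold):
    """Ensemble-based uncertainty screening for neural reconstructions.

    density_stack: (K, ...) densities predicted by K independently trained
    models over the same voxel grid. High inter-model variance marks regions
    (e.g. occluded understory) whose geometry should not feed biomass or
    carbon estimates without independent validation.
    """
    mean = density_stack.mean(axis=0)
    var = density_stack.var(axis=0)
    return mean, var, var > var_threshold
```

Voxels passing the mask can be retained for DBH or biomass extraction; flagged voxels can be routed to targeted LiDAR re-scanning, which is precisely the hybrid sensing pipeline argued for above.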
-
As forest phenomics enters 2026, the field stands at an inflection point. While LiDAR has defined the structural skeleton of digital forests, NeRF and 3DGS increasingly provide the 'connective tissue', offering unprecedented speed, accessibility, and semantic richness. The challenge ahead lies not in choosing between explicit and implicit representations, but in integrating them responsibly. Smart Forestry will require standardized benchmarks that stress occlusion and uncertainty, hybrid sensing pipelines, and energy-aware deployment strategies. AI undeniably augments the forester's vision, but its success depends on ensuring that what is reconstructed reflects the real forest, not a convincing algorithmic illusion, thereby safeguarding ecological and climatic integrity for future generations.
This work was funded by the Fundamental Research Funds of CAF (Grant No. CAFYBB2022QA001).
-
The authors confirm contribution to the paper as follows: study conception and design: Luan Q; draft manuscript preparation: Lv N; manuscript revision: Luan Q, Fan X, El-Kassaby YA. All authors reviewed the results and approved the final version of the manuscript.
-
Not applicable.
-
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
- Copyright: © 2026 by the author(s). Published by Maximum Academic Press, Fayetteville, GA. This article is an open access article distributed under Creative Commons Attribution License (CC BY 4.0), visit https://creativecommons.org/licenses/by/4.0/.
Lyu N, Luan Q, Fan X, El-Kassaby YA. 2026. The Hallucinated Forest's phenomics: navigating the paradigm shift from LiDAR to neural radiance fields. Smart Forestry 1: e004. doi: 10.48130/smartfor-0026-0001