







Student: Lucas Solberg
Co-supervisors: Junaid Nabi, Roberto Naboni

This research develops a data-driven pipeline to non-destructively predict the mechanical strength of timber beams using only visual inspection metrics and imaging data. Starting from a dataset of 323 Giant Fir specimens, filtered to 186 after cleaning and annotation preprocessing, the workflow extracts key features such as knot geometry, grain direction, and stress-gradient metrics via YOLOv8 detection and Canny edge analysis. Beyond raw imagery, advanced vision-based signals are engineered, including localized knot stress metrics, grain orientation vectors, and surface microtexture descriptors, to encode structural cues directly into the data pipeline. These engineered features distill the relevant physical information from high-dimensional image data, enabling more efficient learning and more robust predictions of strength and elasticity. Custom asymmetric loss functions embed European building-code requirements directly into model training, ensuring that at least 95 % of predictions do not overestimate actual strength. A suite of models was evaluated, from multilayer perceptrons and decision trees to convolutional and multimodal networks that jointly process numeric and image inputs. Feature engineering (knot stress, grain orientation) together with leave-one-out and SHAP analyses guided predictor selection, yielding an MLP variant with R² = 0.59 for bending strength and 0.63 for modulus of elasticity, an RMSE of 10 MPa, and an MAE of 8 MPa. The final multimodal model achieved R² = 0.65 for strength prediction with an MAE of 6.9 MPa, corresponding to roughly 68 % exact-class accuracy and 96 % safe predictions, matching industry safety thresholds while substantially reducing average error relative to existing standards.
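
As a rough illustration of the grain-direction feature mentioned above, the sketch below estimates a dominant grain angle from Canny edges and Sobel gradients with OpenCV. The thresholds, kernel size, and the `grain_orientation` helper name are illustrative assumptions, not the pipeline's actual implementation.

```python
import cv2
import numpy as np

def grain_orientation(gray: np.ndarray) -> float:
    """Estimate a dominant grain angle (degrees) from a grayscale board image.

    Edges running along the grain are isolated with a Canny detector; the local
    orientation at each edge pixel is taken perpendicular to the intensity
    gradient and averaged with doubled angles (orientation is periodic in 180 deg).
    """
    edges = cv2.Canny(gray, 50, 150)                    # binary edge map (thresholds assumed)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)     # horizontal intensity gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)     # vertical intensity gradient
    theta = np.arctan2(gy, gx) + np.pi / 2              # edge direction is perpendicular to gradient
    mask = edges > 0
    # circular mean over edge pixels, doubling angles to respect 180-degree periodicity
    mean_angle = 0.5 * np.arctan2(np.sin(2 * theta[mask]).mean(),
                                  np.cos(2 * theta[mask]).mean())
    return float(np.degrees(mean_angle))
```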
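The code-compliance constraint can be pictured with a minimal asymmetric loss sketch in PyTorch, assuming a weighted squared error in which overestimation (an unsafe prediction) is penalised more heavily than underestimation. The `over_weight` factor and function name are hypothetical; in practice such a weight would be tuned until roughly 95 % of validation predictions stay at or below the measured strength.

```python
import torch

def asymmetric_mse(pred: torch.Tensor, target: torch.Tensor,
                   over_weight: float = 10.0) -> torch.Tensor:
    """Squared-error loss that penalises overestimation more than underestimation.

    Predictions above the true strength (unsafe) are weighted by `over_weight`,
    pushing the model towards conservative, code-compliant estimates.
    """
    err = pred - target                                  # > 0 means strength is overestimated
    weight = torch.where(err > 0,
                         torch.full_like(err, over_weight),
                         torch.ones_like(err))
    return (weight * err.pow(2)).mean()
```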
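A multimodal network of the kind described, with one branch encoding the board image and one the numeric inspection features before a fused regression head, could look like the minimal PyTorch sketch below. The layer sizes and class name are illustrative assumptions rather than the final architecture.

```python
import torch
import torch.nn as nn

class MultimodalStrengthNet(nn.Module):
    """Joint model: a small CNN encodes the board image, an MLP encodes the
    tabular inspection features, and a fused head regresses bending strength."""

    def __init__(self, n_tabular: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> (batch, 32) image embedding
        )
        self.mlp = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        # concatenate the two embeddings and regress a single strength value
        fused = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return self.head(fused).squeeze(1)
```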