The Cekirge Global σ-Regularized Deterministic Method introduces a non-iterative learning framework in which model parameters are obtained through a single closed-form computation rather than through gradient-based optimization. For more than half a century, supervised learning has relied on gradient descent, stochastic gradient descent, and conjugate gradient methods, all of which require learning rates, batching rules, random initialization, and stopping heuristics, and whose outcomes vary with floating-point resolution, operating-system effects, and hardware drift. As dimensions grow or matrices become ill-conditioned, these iterative processes frequently diverge or yield inconsistent results. The σ-Regularized Deterministic Method replaces this instability with a σ-regularized quadratic formulation whose stationary point is analytically unique; even very small σ values eliminate ill-conditioning and ensure machine-independent reproducibility. Learning is reframed not as a search but as the direct computation of an equilibrium determined by the structural geometry of the data matrix. To address the common reviewer concern that stability must be demonstrated across progressive system sizes, the method is validated sequentially, from small 5×5 and 8×8 matrices whose full algebra can be inspected explicitly, through 20×20 and 100×100, up to 1000×1000 systems. Across all scales, the deterministic σ-solution remains stable and identical across platforms, whereas gradient-based algorithms begin to degrade even at moderate sizes. In practice, the σ-Regularized Deterministic Method requires only a single algebraic evaluation, eliminating the repeated matrix passes and energy expenditure inherent to iterative algorithms. Its runtime scales linearly with the number of partitions rather than the number of iterations, yielding substantial time and energy savings even in very large systems.
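The sketch below illustrates the general idea of a σ-regularized closed-form solve, assuming the quadratic formulation reduces to the regularized normal equations (the Tikhonov/ridge form); the function name, the choice of σ = 1e-8, and the random demo data are illustrative assumptions, not the published implementation, and the paper's partition-based scaling is not modeled here. The matrix sizes correspond to the progression reported in the abstract.

```python
import numpy as np

def sigma_regularized_solve(A, b, sigma=1e-8):
    """Closed-form minimizer of ||A w - b||^2 + sigma ||w||^2.

    For any sigma > 0 the regularized normal equations
    (A^T A + sigma I) w = A^T b have a unique solution, so no learning
    rate, random initialization, or stopping heuristic is involved.
    Illustrative sketch only; the published method may differ.
    """
    n = A.shape[1]
    G = A.T @ A + sigma * np.eye(n)   # regularized Gram matrix, positive definite for sigma > 0
    return np.linalg.solve(G, A.T @ b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)        # fixed seed only to generate demo data
    for n in (5, 8, 20, 100, 1000):       # progressive sizes mentioned in the abstract
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)
        w = sigma_regularized_solve(A, b)
        print(f"n={n:5d}  residual={np.linalg.norm(A @ w - b):.3e}")
```

Because the result is produced by a single deterministic linear solve, rerunning the script with the same inputs on different hardware reproduces the same parameters up to floating-point rounding.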
| Published in | American Journal of Artificial Intelligence (Volume 9, Issue 2) |
| DOI | 10.11648/j.ajai.20250902.31 |
| Page(s) | 324-337 |
| Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
| Copyright | Copyright © The Author(s), 2025. Published by Science Publishing Group |
Deterministic Learning, σ-Regularization, Non-Iterative Optimization, Algebraic Machine Learning, Numerical Stability, Partition Methods, Energy-Efficient Computation
APA Style
Cekirge, H. M. (2025). The Cekirge Method for Machine Learning: A Deterministic σ-Regularized Analytical Solution for General Minimum Problems. American Journal of Artificial Intelligence, 9(2), 324-337. https://doi.org/10.11648/j.ajai.20250902.31
ACS Style
Cekirge, H. M. The Cekirge Method for Machine Learning: A Deterministic σ-Regularized Analytical Solution for General Minimum Problems. Am. J. Artif. Intell. 2025, 9(2), 324-337. doi: 10.11648/j.ajai.20250902.31
@article{10.11648/j.ajai.20250902.31,
author = {Huseyin Murat Cekirge},
title = {The Cekirge Method for Machine Learning: A Deterministic σ-Regularized Analytical Solution for General Minimum Problems},
journal = {American Journal of Artificial Intelligence},
volume = {9},
number = {2},
pages = {324-337},
doi = {10.11648/j.ajai.20250902.31},
url = {https://doi.org/10.11648/j.ajai.20250902.31},
eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20250902.31},
year = {2025}
}