Unsupervised learning is a fundamental branch of machine learning that operates without labeled outputs, aiming instead to uncover latent structures, intrinsic relationships, and patterns embedded in data. Unlike supervised approaches, which rely on explicit input-output mappings, unsupervised methods extract regularities directly from raw, often high-dimensional, datasets. Core methodological paradigms include clustering, dimensionality reduction, and anomaly detection. Clustering techniques partition data into groups according to similarity metrics; dimensionality reduction methods, such as Principal Component Analysis (PCA) and t-SNE, map high-dimensional inputs into lower-dimensional subspaces while preserving meaningful structure; and density estimation approaches model probability distributions to detect rare or anomalous events. A central concept is the latent space, in which data are encoded into compact representations that capture essential features. These representations may arise from empirical observations or serve as hypothetical abstractions. Weights and biases can be systematically organized using structured matrix formulations that parallel neural computation. Ultimately, unsupervised learning seeks to reveal intrinsic data regularities without external supervision, and once a robust latent representation is obtained, the encoded data provide a transferable foundation for downstream supervised tasks such as classification, regression, and prediction on previously unlabeled data. The Algebraic σ-Based (Cekirge) Model presented in this paper allows deterministic computation of neural network weights, including the bias, for any number of inputs. Auxiliary σ perturbations ensure a nonsingular matrix, guaranteeing a unique solution. Compared with gradient descent, the Algebraic σ-Based (Cekirge) Model is orders of magnitude faster and consumes significantly less energy: gradient descent is iterative, converges slowly, and only approximates the solution unless carefully tuned, resulting in higher energy usage. The method scales naturally with the number of inputs, requiring only the solution of a square system augmented with σ perturbations. Biological neurons exhibit robust recognition, maintaining performance despite variations in orientation, illumination, or noise. Inspired by this, the Algebraic σ-Based (Cekirge) Model, developed by Huseyin Murat Cekirge, computes neural weights deterministically in a closed-form, energy-efficient manner. This study benchmarks the model against conventional Gradient Descent (GD), a standard iterative method, evaluating efficiency, stability under perturbations, and accuracy. Results show that the Cekirge method produces weights nearly identical to those of GD while running over three orders of magnitude faster, demonstrating a robust and scalable alternative for neural network training.
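The closed-form idea sketched in the abstract can be illustrated with a short, hedged example. The code below is an assumption-based sketch, not the paper's exact algorithm: it builds a square system from n+1 training samples with n inputs each, appends a bias column, adds a small σ on the diagonal so the matrix stays nonsingular, and solves for the weights and bias in a single step, with a plain batch gradient descent loop serving only as the iterative baseline. The helper names, the value σ = 1e-3, the random data, and the learning-rate and epoch settings are all illustrative assumptions.

```python
import time
import numpy as np

def sigma_closed_form_weights(X, y, sigma=1e-3):
    """Hypothetical sketch of a sigma-perturbed closed-form weight solve.

    X: (n+1, n) matrix of n+1 training samples with n inputs each.
    y: (n+1,) target outputs.
    A bias column of ones is appended to give a square (n+1)x(n+1) system;
    a small sigma on the diagonal keeps the matrix nonsingular, so
    np.linalg.solve returns a unique weight vector (n weights plus a bias).
    """
    A = np.column_stack([X, np.ones(X.shape[0])])     # square (n+1) x (n+1) system
    A = A + sigma * np.eye(A.shape[0])                # sigma perturbation -> nonsingular
    return np.linalg.solve(A, y)                      # [w_1, ..., w_n, bias]

def gradient_descent_weights(X, y, lr=0.05, epochs=50_000):
    """Plain batch gradient descent on the same system, as the iterative baseline."""
    A = np.column_stack([X, np.ones(X.shape[0])])
    w = np.zeros(A.shape[1])
    for _ in range(epochs):
        w -= lr * A.T @ (A @ w - y) / len(y)          # gradient of 0.5*||Aw - y||^2 / m
    return w

rng = np.random.default_rng(0)
n = 4
X = rng.normal(size=(n + 1, n))                       # n+1 samples, n inputs each
y = rng.normal(size=n + 1)

t0 = time.perf_counter(); w_direct = sigma_closed_form_weights(X, y); t1 = time.perf_counter()
w_gd = gradient_descent_weights(X, y);                t2 = time.perf_counter()

# For well-conditioned data the two weight vectors typically agree to within
# roughly the size of sigma, while the direct solve finishes in a tiny fraction
# of the time taken by the iterative loop.
print("max |direct - GD|:", np.abs(w_direct - w_gd).max())
print("direct solve time (s):", t1 - t0)
print("gradient descent time (s):", t2 - t1)
```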
Published in | American Journal of Artificial Intelligence (Volume 9, Issue 2) |
DOI | 10.11648/j.ajai.20250902.20 |
Page(s) | 198-205 |
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
Copyright | Copyright © The Author(s), 2025. Published by Science Publishing Group |
Unsupervised Learning, Supervised Learning, Neural Networks, Clustering, Dimensionality Reduction, Cekirge Model, Algebraic σ-Based (Cekirge) Model, Closed-Form Computation, Neural Network Weights, Robustness, Gradient Descent
Biological Neuron | Algebraic σ-Based (Cekirge) Model |
---|---|
Robust recognition despite input perturbations | Stable solutions under small input variations |
Nonlinear tolerance mechanisms | Matrix squaring reinforces correlations |
Noise filtering and generalization | σ ensures stability and robustness |
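The stability and nonsingularity rows of this table can be made concrete with a second small sketch. It redefines the same hypothetical helper as above, nudges the inputs by a small random amount to mimic input variation, and then duplicates one sample so that the unperturbed system would be exactly singular; the perturbation size of 1e-3 and the random test data are illustrative assumptions rather than the paper's benchmark setup.

```python
import numpy as np

def sigma_closed_form_weights(X, y, sigma=1e-3):
    # Same hypothetical helper as in the sketch above: append a bias column,
    # add a small sigma on the diagonal, and solve the square system directly.
    A = np.column_stack([X, np.ones(X.shape[0])]) + sigma * np.eye(X.shape[0])
    return np.linalg.solve(A, y)

rng = np.random.default_rng(1)
n = 4
X = rng.normal(size=(n + 1, n))          # n+1 samples, n inputs each
y = rng.normal(size=n + 1)

w_base = sigma_closed_form_weights(X, y)

# Small input variation: every entry of X is nudged by noise of magnitude ~1e-3.
# For well-conditioned data the recovered weights typically move by a comparably
# small amount, mirroring "stable solutions under small input variations".
X_noisy = X + 1e-3 * rng.normal(size=X.shape)
w_noisy = sigma_closed_form_weights(X_noisy, y)
print("max weight change under perturbed inputs:", np.abs(w_noisy - w_base).max())

# Duplicating a sample makes the unperturbed system exactly singular; with the
# sigma term on the diagonal the solve still goes through for generic data,
# illustrating the nonsingularity role the table attributes to sigma.
X_singular = X.copy()
X_singular[1] = X_singular[0]
w_singular = sigma_closed_form_weights(X_singular, y)
print("solved despite a duplicated sample; bias term:", w_singular[-1])
```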
AI | Artificial Intelligence |
ANN | Artificial Neural Network |
GD | Gradient Descent |
NLP | Natural Language Processing |
PCA | Principal Component Analysis |
SGD | Stochastic Gradient Descent |
t-SNE | t-Distributed Stochastic Neighbor Embedding |
APA Style
Cekirge, H. M. (2025). Algebraic σ-Based (Cekirge) Model for Deterministic and Energy-Efficient Unsupervised Machine Learning. American Journal of Artificial Intelligence, 9(2), 198-205. https://doi.org/10.11648/j.ajai.20250902.20
ACS Style
Cekirge, H. M. Algebraic σ-Based (Cekirge) Model for Deterministic and Energy-Efficient Unsupervised Machine Learning. Am. J. Artif. Intell. 2025, 9(2), 198-205. doi: 10.11648/j.ajai.20250902.20
AMA Style
Cekirge HM. Algebraic σ-Based (Cekirge) Model for Deterministic and Energy-Efficient Unsupervised Machine Learning. Am J Artif Intell. 2025;9(2):198-205. doi: 10.11648/j.ajai.20250902.20