-
Research/Technical Note
Tuning the Training of Neural Networks by Using the Perturbation Technique
Huseyin Murat Cekirge*
Issue:
Volume 9, Issue 2, December 2025
Pages:
107-109
Received:
5 June 2025
Accepted:
21 June 2025
Published:
6 July 2025
Abstract: The biases and weights in a neural network are typically calculated, or trained, layer by layer using stochastic gradient descent. To increase efficiency and performance, a perturbation scheme is introduced for fine-tuning these calculations. The aim is to introduce perturbation techniques into the training of artificial neural networks. Perturbation methods obtain approximate solutions in terms of a small parameter ε. The perturbation technique can be used in combination with other training methods to minimize the data required, the training time, and the energy consumed. The introduced perturbation parameter ε can be selected according to the nature of the training data, and a suitable value of ε can be found through several trials. Stochastic gradient descent on its own increases training time and energy; its proper combination with the perturbation shortens training time. Both methods are widely used individually, but their combined use leads to near-optimal solutions. A proper cost function can be used for optimal selection of the perturbation parameter ε. Shortening the training time also helps identify the dominant inputs for the output values. One of the essential problems of training, energy consumption, is thereby reduced by using hybrid training methods.
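As an illustration of the hybrid scheme the abstract describes, the sketch below coarse-trains a one-parameter linear model with stochastic gradient descent and then fine-tunes it with an ε-scaled perturbation along the full-batch gradient, choosing ε by trials on the cost function. The toy data, learning rate, and candidate ε values are invented for demonstration and are not taken from the paper.

```python
import random

def loss(w, b, data):
    # Mean squared error of the linear model y = w*x + b.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def sgd_step(w, b, data, lr=0.01):
    # One stochastic gradient descent step on a randomly drawn sample.
    x, y = random.choice(data)
    err = w * x + b - y
    return w - lr * 2 * err * x, b - lr * 2 * err

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

# Coarse training pass with stochastic gradient descent.
w, b = 0.0, 0.0
for _ in range(200):
    w, b = sgd_step(w, b, data)
loss_before = loss(w, b, data)

# Perturbation fine-tuning: expand (w, b) as (w + eps*dw, b + eps*db) along
# the full-batch negative gradient and choose eps by trials on the cost.
gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
best_eps = min((0.0, 0.01, 0.05, 0.1, 0.2, 0.5),
               key=lambda e: loss(w - e * gw, b - e * gb, data))
w, b = w - best_eps * gw, b - best_eps * gb
```

Because ε = 0 is among the candidates, the trial step can never increase the cost, which mirrors the abstract's point that the perturbation only refines what stochastic training has already achieved.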
-
Research Article
Towards a Set of Morphosyntactic Labels for the Fulani Language: An Approach Inspired by the EAGLES Recommendations and Fulani Grammar
Zouleiha Alhadji Ibrahima,
Charles Moudina Varmantchaonala,
Dayang Paul*
,
Kolyang
Issue:
Volume 9, Issue 2, December 2025
Pages:
110-121
Received:
4 June 2025
Accepted:
27 June 2025
Published:
21 July 2025
Abstract: This paper details the development of a morphosyntactic label set for the Adamawa dialect of the Fulani language (Fulfulde), addressing the critical lack of digital resources and automatic processing tools for this significant African language. The primary objective is to facilitate the creation of a training corpus for morphosyntactic tagging, thereby aiding linguists and advancing Natural Language Processing (NLP) applications for Fulani. The proposed label set is meticulously constructed based on a dual methodological approach: it draws heavily from the well-established EAGLES (Expert Advisory Group on Language Engineering Standards) recommendations to ensure corpus reuse and cross-linguistic comparability, while simultaneously incorporating an in-depth analysis of Fulani grammatical specificities. This adaptation is crucial given the morphological richness and complex grammatical structure of Fulani, including its elaborate system of approximately 25 noun classes, unique adjective derivations, and intricate verbal conjugations. The resulting tagset comprises 15 mandatory labels and 54 recommended labels. While some EAGLES categories like "article" and "residual" are not supported, new categories such as "participle," "ideophone," "determiner," and "particle" are introduced to capture the nuances of Fulani grammar. The recommended tags further detail the mandatory categories, subdividing nouns into proper, common singular, and common plural; verbs based on voice and conjugation (infinitive active, middle, passive; conjugated active affirmative/negative, middle affirmative/negative, passive affirmative/negative); and adjectives and pronouns into more specific types based on demonstrative, possessive, subject, object, relative, emphatic, interrogative, and indefinite functions. Participles are divided into singular and plural, adverbs into time, place, manner, and negation, numbers into singular and plural, and determiners into singular and plural.
Particles are further broken down into dicto-modal, abdominal, interrogative, emphatic, postposed, and postposed negative. The categories of preposition, conjunction, interjection, unique, punctuation, and ideophone remain indivisible. This meticulously defined tag set was utilized to manually annotate 5,186 words from Dominique Noye’s Fulfulde-French dictionary, creating a valuable, publicly accessible resource for linguistic research and NLP development. Furthermore, the paper outlines a robust workflow for automatic morphosyntactic tagging of Fulfulde sentences, leveraging a Hidden Markov Model (HMM) in conjunction with the Viterbi algorithm. This approach, which extracts transition and emission probabilities from the annotated corpus, enables the disambiguation of morphosyntactic categories within context, considering the specific syntactic and lexical patterns of the Adamawa dialect. Ultimately, this work significantly contributes to the digitization and standardization of the Fulani language, enhancing the performance of linguistic tools and fostering its integration into digital technologies and multilingual systems.
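The HMM-plus-Viterbi workflow the abstract outlines can be sketched as below. The two-tag model, the word forms, and all probabilities are invented for demonstration; in the paper's pipeline the transition and emission probabilities would instead be estimated from the 5,186-word annotated corpus.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely tag sequence for an observation sequence under an HMM."""
    # Forward pass: best probability and best predecessor per state/position.
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p].get(s, 0.0)
                 * emit_p[s].get(obs[t], 0.0), p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Illustrative two-tag model; words and probabilities are invented,
# not drawn from the annotated Fulfulde corpus.
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.7, "VERB": 0.3}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
           "VERB": {"NOUN": 0.6, "VERB": 0.4}}
emit_p = {"NOUN": {"pullo": 0.8, "wari": 0.1},
          "VERB": {"pullo": 0.1, "wari": 0.8}}
tags = viterbi(["pullo", "wari"], states, start_p, trans_p, emit_p)
```

The dynamic program considers every tag for every word but keeps only the best-scoring path into each state, which is what lets the tagger disambiguate categories in context, as the abstract describes.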
-
Research Article
Research on Application Pathways and Development Trends of Generative Artificial Intelligence in 3D Modeling
Issue:
Volume 9, Issue 2, December 2025
Pages:
122-128
Received:
22 July 2025
Accepted:
31 July 2025
Published:
13 August 2025
Abstract: With the rapid advancement of Generative Artificial Intelligence (AIGC, Artificial Intelligence Generated Content) technology, 3D modeling, as a core component of digital content creation, is undergoing a profound transformation from "human-driven" to "intelligent generation." Traditional 3D modeling relies on specialized software and manual operations by modelers, characterized by complex workflows, inefficiency, and high skill barriers. In contrast, AIGC enables the automatic generation of 3D geometry, topological relationships, and texture mapping information through natural language prompts (Prompt), image inputs, or sketch instructions, significantly enhancing modeling efficiency and creative freedom. This paper systematically reviews the current primary pathways—Text-to-3D, Image-to-3D, and Sketch-to-3D—based on the technical principles of generative models. It conducts an in-depth analysis of the application characteristics of representative platforms such as MeshyAI, Kaedim, Tripo, and Hunyuan 3D. Through case studies, the feasibility and operational workflows of AIGC modeling in character asset generation, scene construction, and teaching practices are examined. Furthermore, the study comparatively analyzes the differences between AIGC and traditional modeling approaches in terms of efficiency, quality, and scalability, highlighting current challenges faced by AIGC, including precision control, limited editability, and copyright compliance. The research posits that AIGC is reconstructing the paradigm of 3D modeling, propelling 3D content production towards a new era of "intelligent collaboration" and "low-barrier generation." Future advancements are expected to be driven by the deep integration of AIGC with Digital Content Creation (DCC) toolchains, the evolution of multimodal large models, and enhanced semantic control capabilities of Prompts. 
This study aims to provide a systematic reference and trend analysis for the integration of AIGC modeling technology within higher education, industry practices, and AI development.
-
Research Article
An Alternative Way of Determining Biases and Weights for the Training of Neural Networks
Huseyin Murat Cekirge*
Issue:
Volume 9, Issue 2, December 2025
Pages:
129-132
Received:
22 July 2025
Accepted:
4 August 2025
Published:
18 August 2025
Abstract: The determination of biases and weights in neural networks is a fundamental aspect of their performance, traditionally employing methods like steepest descent and stochastic gradient descent. While these supervised training approaches have proven effective, this technical note confidently presents a groundbreaking alternative that eliminates randomness in calculations altogether. This innovative method calculates biases and weights as precise solutions to a system of equations. This approach not only reduces computational demands but also enhances energy efficiency significantly. By strategically incorporating target values during training, we can expand the number of target values within acceptable variance limits, enabling the formation of square matrices. This ensures that input and output nodes remain perfectly balanced through the generation of fictive data, particularly for output nodes. These fuzzy sets guarantee that the variance of neuron target values stays within permissible limits, effectively minimizing error. The generated data is intentionally minimal and can also be produced using random processes, facilitating effective learning for the neural networks. Unlike conventional techniques, the values of biases and weights are determined directly, leading to a process that is both faster and less energy-intensive. Our primary objective is to establish an efficient foundation for the training data of the neural network. Moreover, these calculated values serve as robust initial parameters for integration with other determination methods, including stochastic gradient descent and steepest descent. This presentation showcases a powerful new algorithm, poised to significantly enhance the efficiency and effectiveness of neural network training.
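The core idea of the abstract, determining weights and bias as the exact solution of a square linear system rather than by iteration, can be sketched as follows. The sample values are illustrative, and the third sample plays the role of the "fictive" data point the note describes, added so that the augmented system is square (two weights plus one bias gives three unknowns, hence three equations).

```python
import numpy as np

# Three 2-input samples; values are illustrative, not from the paper.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.0]])
targets = np.array([7.0, 8.0, 12.0])   # generated by t = 2*x1 + 1*x2 + 3

A = np.hstack([X, np.ones((3, 1))])    # augment inputs with a bias column
params = np.linalg.solve(A, targets)   # direct, non-iterative solution
w, bias = params[:2], params[2]
```

A single call to a linear solver replaces the many gradient steps of iterative training, which is the source of the speed and energy savings the note claims; the solved parameters can also seed stochastic gradient descent as initial values, as the abstract suggests.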
-
Research Article
Evaluating the Performance of Selected Single Classifiers with Incorporated Explainable Artificial Intelligence (XAI) in the Prediction of Mental Health Distress Among University Students
Issue:
Volume 9, Issue 2, December 2025
Pages:
133-144
Received:
23 July 2025
Accepted:
4 August 2025
Published:
30 August 2025
Abstract: The mental health of university students has emerged as a critical global public health concern, with increasing prevalence of anxiety, depression, and stress-related conditions reported across diverse academic environments. With the increasing mental health issues among university students all over the world, there is an observable disparity in employing interpretable machine learning models to assess the risks of psychological distress, especially in resource-limited countries such as Kenya. This study bridges this gap by evaluating the predictive performance of selected single machine learning classifiers: Multinomial Logistic Regression (MLR), K-Nearest Neighbours (KNN), Support Vector Machine (SVM), and Naïve Bayes (NB) with Explainable Artificial Intelligence (XAI) in identifying levels of mental health distress (Low, Moderate, and High) among university students in Tharaka Nithi County, Kenya. A structured questionnaire was administered to a stratified random sample of 1500 students, capturing comprehensive data across demographic, academic, psychosocial, and behavioural dimensions. Data were preprocessed, encoded, and partitioned into 70% training and 30% testing sets. Models were developed using 10-fold cross-validation, with hyperparameter tuning performed via grid search. Explainable Artificial Intelligence (XAI) techniques, including SHAP (Shapley Additive Explanations) and model breakdown plots, were integrated to enhance transparency and interpretability of the models. The Support Vector Machine model demonstrated superior performance, with an overall accuracy of 97.6%, a Kappa coefficient of 0.957, and a perfect Area Under the Curve (AUC) score of 1.000 across all levels of mental distress. The model achieved a sensitivity of 1.000 for both High and Low distress, and 0.960 for Moderate, with precision values of 0.880, 0.960, and 1.000, respectively.
KNN followed with an accuracy of 73.9% and Kappa of 0.471, while MLR and NB achieved accuracies of 69.7% and 68.8%, respectively. The SVM model emerged as the best model due to its ability to handle nonlinear and complex patterns in the data. SHAP analysis identified "Lifestyle and Health Factors," "Personal and Mental Health," "Academic Pressure," and "Quarter Life Crisis" as the most influential predictors across models. The study concludes that interpretable machine learning approaches, particularly SVM augmented with XAI, can provide highly accurate and actionable insights into student mental health. The study recommends integrating such models into institutional mental health surveillance frameworks to support early detection, personalized interventions, and policy planning, aligning with Sustainable Development Goal 3, which aims to ensure healthy lives and promote well-being for all.
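The two headline metrics the abstract reports, overall accuracy and the Kappa coefficient, can both be computed from a confusion matrix; a minimal sketch follows. The 3-class matrix (Low / Moderate / High) uses invented counts, not the study's data.

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = actual class, columns = predicted class)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: product of row and column marginals per class.
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / (n * n)
    return observed, (observed - expected) / (1 - expected)

# Illustrative 3-class matrix (Low / Moderate / High); counts are invented.
cm = [[40, 5, 0],
      [4, 45, 1],
      [0, 2, 3]]
acc, kappa = accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is a stricter summary than raw accuracy when the distress classes are imbalanced, as in the study's data.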
-
Review Article
Integrating Artificial Intelligence into Medical Physics Practice: Promises and Ethical Considerations
Makoye John*,
Rose Mina
Issue:
Volume 9, Issue 2, December 2025
Pages:
145-153
Received:
24 September 2024
Accepted:
7 January 2025
Published:
8 September 2025
Abstract: Artificial intelligence (AI) techniques such as deep learning show great potential to enhance medical physics practice by supporting diagnosis, treatment planning, and other clinical tasks. However, responsible integration of AI requires consideration of both promises and ethical risks to ensure technologies are developed and applied safely and for patient benefit. This research review examines opportunities and challenges of integrating AI across various domains of medical physics. Promising applications are discussed such as using large datasets to help radiologists interpret images more accurately and automating routine analyses to increase efficiency. AI may also expand access to care for rural populations through remote services. Potential ethical issues that could hamper responsible integration are also explored. Ensuring AI algorithms avoid human biases that unfairly impact patient outcomes is imperative. Other considerations include responsible oversight structures, ensuring privacy of patient data, and establishing regulatory and quality standards. This review proposes a framework for multidisciplinary collaboration and rigorous testing prior to clinical adoption of AI tools. It concludes that with ongoing research and development guided by principles of safety, accountability and fairness, AI can potentially enhance medical physics practice while avoiding unintended harms.
-
Research Article
Integrating Explainable Machine Learning Models for Early Detection of Hypertension: A Transparent Approach to AI-Driven Healthcare
Issue:
Volume 9, Issue 2, December 2025
Pages:
154-166
Received:
18 August 2025
Accepted:
1 September 2025
Published:
23 September 2025
Abstract: Hypertension is a major public health challenge globally, often undiagnosed until severe complications arise, highlighting the critical need for early and accurate risk prediction methods. Despite advances in machine learning (ML), many models remain black boxes, limiting clinical trust and adoption. This study addresses these gaps by evaluating and interpreting three ML classifiers—Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naïve Bayes—for hypertension risk prediction, emphasizing both predictive performance and explainability. Using a comprehensive dataset of 4,187 participants, demographic and clinical factors, including age, gender, smoking status, blood pressure, BMI, glucose levels, and medication use, were analyzed. Descriptive statistics revealed significant differences between the at-risk and no-risk groups, particularly in terms of age, blood pressure, cholesterol levels, and diabetes prevalence. Chi-square and Welch's t-tests confirmed these distinctions (p <.001), underscoring the validity of the models' inputs. Model evaluation showed SVM as the most balanced classifier with an accuracy of 88.13% (95% CI [86.22%, 89.86%]) and substantial agreement (kappa = 0.7153). It achieved strong sensitivity (92.66%) and specificity (77.78%), alongside a favorable F1-score (0.9157), indicating robust true positive detection while minimizing false positives. KNN demonstrated high sensitivity (94.69%) but lower specificity (69.25%), with moderate overall accuracy (86.95%). Naïve Bayes, though highly sensitive (99.21%), suffered from poor specificity (34.63%), suggesting a high false-positive rate and imbalanced classification. McNemar's test indicated balanced errors only for SVM (p = 0.1036). Receiver Operating Characteristic (ROC) analysis revealed excellent discrimination for all models, with Naïve Bayes achieving an AUC of 0.953; however, this did not translate into practical reliability due to error imbalance. 
Explainable AI techniques, specifically SHAP values, elucidated key predictors in SVM, notably systolic and diastolic blood pressure, BMI, and heart rate, enhancing interpretability and stakeholder trust. According to the study, SVM offers the best trade-off between accuracy and interpretability for predicting hypertension risk. Integrating explainable ML models into clinical practice can improve early diagnosis, guide interventions, and inform health policies, supporting ethical, transparent, and effective AI-driven healthcare.
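The AUC figures quoted above follow from the rank interpretation of the ROC curve; a minimal sketch of that computation is below, with invented scores and labels rather than the study's data.

```python
def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive outranks a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUC depends only on score rankings, a model can post an excellent AUC while its errors remain badly imbalanced at any fixed threshold, which is exactly the caveat the abstract raises for Naïve Bayes.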
-
Research Article
Bridging Swahili Communication Gaps: Real-Time Audio-to-Text Sentiment Analysis via Pre-trained NLP
Issue:
Volume 9, Issue 2, December 2025
Pages:
167-185
Received:
4 August 2025
Accepted:
18 August 2025
Published:
25 September 2025
Abstract: The global proliferation of digital communication highlights a critical gap in language technologies for digitally under-represented languages, particularly Kiswahili, a language spoken by over 100 million people. While significant advancements have been made in natural language processing (NLP) for high-resource languages like English, a persistent challenge remains in creating robust computational systems for low-resource linguistic contexts. This study addresses this challenge by presenting a novel, end-to-end Kiswahili audio processing pipeline that unifies three core capabilities: real-time speech recognition, sentiment analysis, and text summarization. The system’s novelty lies in its strategic leverage of state-of-the-art, pre-trained machine learning models, including Wav2vec2, DistilBERT, and T5, demonstrating a viable approach to bridging the digital communication gap for Kiswahili in real-world applications. Our methodology involved a rigorous evaluation of the integrated system using the Mozilla Common Voice Corpus. The results revealed key insights and promising performance metrics. The speech recognition component, a foundational element of the pipeline, achieved an exceptionally low Word Error Rate (WER) of 0.3329 with the Wav2vec2 model, highlighting its capacity for accurate transcription in a low-resource setting. This is a significant finding, as it suggests that models specifically fine-tuned for such environments can overcome the challenges of data scarcity and linguistic diversity. The summarization component also demonstrated strong capabilities, yielding a ROUGE-L score of 0.6622, which indicates robust semantic and structural alignment with reference texts. While the sentiment analysis revealed a notable data imbalance with a predominance of negative samples, the model achieved a 60% accuracy, demonstrating its potential for further refinement.
These findings underscore both the immense potential and the inherent limitations of applying pre-trained models to a low-resource language like Kiswahili. They provide a compelling proof of concept for the technical feasibility of Kiswahili audio processing and emphasize the critical need for continued investment in dataset expansion and model optimization. The study concludes that this work establishes a foundational groundwork for continued research and the subsequent development of advanced NLP tools specifically tailored for Kiswahili-speaking populations, ultimately aiming to improve access to education, healthcare, and information services, and to foster greater digital inclusion throughout East Africa.
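The Word Error Rate metric reported above is the word-level edit distance between reference and hypothesis transcripts, normalized by reference length; a minimal sketch follows. The Kiswahili phrases are illustrative examples, not Common Voice transcripts.

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Two-row dynamic-programming Levenshtein distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (r != h))   # substitution or match
        prev = cur
    return prev[-1] / len(ref)
```

For example, a single substituted word in a three-word reference yields a WER of 1/3; the paper's reported 0.3329 corresponds to roughly one word error in three.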
-
Research Article
Backpropagation Algorithm for Predicting Rainfall in Anyigba, Kogi State, Nigeria
Issue:
Volume 9, Issue 2, December 2025
Pages:
186-197
Received:
20 May 2025
Accepted:
31 July 2025
Published:
26 September 2025
Abstract: Rainfall remains the primary supply of moisture for agricultural activities in Nigeria. Accurate and timely rainfall prediction is also essential for food security, better flood control, water resource management, and the wellbeing of the people. This research proposes a method for rainfall prediction based on meteorological data and a machine learning technique. The machine learning technique is an Artificial Neural Network (ANN) trained with Levenberg-Marquardt (LM) backpropagation, used to construct the rainfall forecasting model. Anyigba, in Dekina Local Government Area, Kogi State, Nigeria, was used as a case study. A database covering six years (2011-2016) of meteorological parameters, comprising air temperature, relative humidity, and pressure, was obtained from the Tropospheric Data Acquisition Network (TRODAN) of the Centre for Atmospheric Research, National Space Research and Development Agency (CAR-NASRDA). The rainfall prediction model was trained using part of the data collected. The performance of the model was evaluated using metrics such as precision, recall, F1-score, and confusion matrix. The model achieved an accuracy of 0.88, indicating its robustness and reliability in predicting rainfall patterns. The high accuracy of the model demonstrates its potential application in real-time weather prediction, which can significantly benefit local farmers, water resource managers, and disaster response teams. The study identifies several limitations, including the dependency on the quality and availability of meteorological data, and the potential impact of climate change on predictive accuracy. Future research could explore the integration of additional meteorological parameters, the use of ensemble methods, and the adaptation of the model to other regions with similar climatic conditions.
This research presents a promising approach to rainfall prediction in Anyigba using the backpropagation algorithm, offering a valuable tool for mitigating the adverse effects of unpredictable rainfall and enhancing the decision-making processes in agriculture and water management.
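The backpropagation training loop at the heart of such a model can be sketched as below with a single hidden layer and plain gradient descent. The synthetic inputs stand in for the temperature, humidity, and pressure features; the labeling rule, network size, learning rate, and iteration count are illustrative assumptions, and the paper's Levenberg-Marquardt variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for meteorological inputs (temperature, humidity,
# pressure) scaled to [0, 1]; "rain when humidity is high" is an invented
# toy labeling rule, not the TRODAN data used in the paper.
X = rng.uniform(size=(200, 3))
y = (X[:, 1] > 0.6).astype(float).reshape(-1, 1)

# One hidden layer trained by plain backpropagation (full-batch descent).
W1, b1 = rng.normal(scale=0.5, size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(scale=0.5, size=(5, 1)), np.zeros(1)
lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    dz2 = (p - y) / len(X)                # backward pass, cross-entropy loss
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(axis=0)
    W1 -= lr * (X.T @ dz1); b1 -= lr * dz1.sum(axis=0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
accuracy = float(((p > 0.5) == y).mean())
```

Levenberg-Marquardt replaces the plain gradient step shown here with a damped Gauss-Newton update, which typically converges in far fewer iterations on small networks like this one.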
-
Research/Technical Note
Algebraic σ-Based (Cekirge) Model for Deterministic and Energy-Efficient Unsupervised Machine Learning
Huseyin Murat Cekirge*
Issue:
Volume 9, Issue 2, December 2025
Pages:
198-205
Received:
13 September 2025
Accepted:
20 September 2025
Published:
30 September 2025
Abstract: Unsupervised learning is a fundamental branch of machine learning that operates without labeled outputs, aiming instead to uncover latent structures, intrinsic relationships, and patterns embedded in data. Unlike supervised approaches, which rely on explicit input-output mappings, unsupervised methods extract regularities directly from raw, often high-dimensional, datasets. Core methodological paradigms include clustering, dimensionality reduction, and anomaly detection. Clustering techniques partition data into groups according to similarity metrics; dimensionality reduction methods, such as Principal Component Analysis (PCA) and t-SNE, map high-dimensional inputs into lower-dimensional subspaces while preserving meaningful structure; and density estimation approaches model probability distributions to detect rare or anomalous events. A central concept is the latent space, in which data are encoded into compact representations that capture essential features. These representations may arise from empirical observations or serve as hypothetical abstractions. Weights and biases can be systematically organized using structured matrix formulations that parallel neural computation. Ultimately, unsupervised learning seeks to reveal intrinsic data regularities without external supervision, while its latent encodings provide a transferable foundation for downstream supervised tasks such as classification, regression, and prediction on previously unlabeled data. The Algebraic σ-Based (Cekirge) Model presented in this paper allows deterministic computation of neural network weights, including bias, for any number of inputs. Auxiliary σ perturbations ensure a nonsingular matrix, guaranteeing a unique solution.
Compared to gradient descent, the Algebraic σ-Based (Cekirge) Model is orders of magnitude faster and consumes significantly less energy. Gradient descent is iterative, slower, and only approximates the solution unless carefully tuned, resulting in higher energy usage. The method scales naturally with the number of inputs, requiring only a square system with perturbations. Biological neurons exhibit robust recognition, maintaining performance despite variations in orientation, illumination, or noise. Inspired by this, the Algebraic (Cekirge) Model, developed by Huseyin Murat Cekirge, deterministically computes neural weights in a closed-form, energy-efficient manner. This study benchmarks the model against conventional Gradient Descent (GD), a standard iterative method, highlighting efficiency, stability under perturbations, and accuracy. Results show that the Cekirge method produces weights nearly identical to GD while running over three orders of magnitude faster, demonstrating a robust and scalable alternative for neural network training.
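The role of the auxiliary σ perturbation described above can be sketched on a toy system: when two training rows coincide, the raw square matrix is singular, and adding a small σ on the diagonal restores a unique closed-form solution. The matrix, targets, and σ value are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Toy square system whose first two rows are duplicates, so the raw
# matrix is singular (numbers are illustrative, not from the paper).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([2.0, 2.0, 3.0])    # consistent targets: w = [1, 1, 3] works

sigma = 1e-6                     # small auxiliary perturbation
A_pert = A + sigma * np.eye(3)   # renders the matrix nonsingular
w = np.linalg.solve(A_pert, t)   # closed-form, non-iterative weights
residual = float(np.abs(A @ w - t).max())
```

The perturbed solve returns weights within about σ of an exact solution of the original system, so the perturbation buys a guaranteed unique answer at a negligible cost in accuracy.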