Peer-Reviewed

The Optimal Choice of Hybrid Convolutional Neural Network Components

Received: 4 July 2022    Accepted: 25 July 2022    Published: 4 August 2022
Abstract

Given the direction in which modern AI systems are developing and the growing complexity of analytical and recognition tasks, hybrid convolutional neural networks are identified as a promising class of convolutional neural networks. The research results show that networks of this type achieve a lower mean square error with lower overall structural complexity. A generic structure of a hybrid convolutional neural network is proposed. It is shown that, to achieve the best accuracy and performance, such networks must include not only traditional components (convolutional layers, pooling layers, feed-forward layers) but also additional supporting layers (batch normalization layers, 1x1 convolutional layers, dropout layers, etc.). The important properties of these supporting layers (blocks) are identified and investigated. Based on the architectural requirements, modern hybrid CNN topologies are regarded as combinations of established CNNs such as the Squeeze-and-Excitation network, the poly-inception network, the residual network, and the densely connected network. Performance tests and final accuracy results are reported for each block, used both separately and in pairs, to highlight its internal parameters, advantages, and limitations. An example of a hybrid convolutional neural network constructed from the investigated structural blocks is proposed, together with the average reduction in training time attributable to each functional block, its advantages, and integration details.
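
As an illustration of how the listed components fit together, below is a minimal sketch of one possible hybrid building block, written in PyTorch. The paper provides no code, so the module name, channel count, reduction ratio, and dropout rate are illustrative assumptions rather than the authors' implementation; the block simply combines the ingredients named in the abstract: a 3x3 convolution with batch normalization, a 1x1 convolution, a Squeeze-and-Excitation gate, dropout, and a residual connection.

```python
# Minimal, illustrative PyTorch sketch of a hybrid CNN block (not the authors' code).
# It combines the components discussed in the paper: convolution + batch normalization,
# a 1x1 convolution, a Squeeze-and-Excitation gate, dropout, and a residual connection.
import torch
import torch.nn as nn


class HybridBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, p_drop: float = 0.2):
        super().__init__()
        # Main path: 3x3 conv -> BN -> ReLU -> 1x1 conv -> BN
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Squeeze-and-Excitation gate: global average pool -> bottleneck -> sigmoid scale
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.dropout = nn.Dropout2d(p_drop)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)
        out = out * self.se(out)   # channel-wise recalibration (squeeze-and-excitation)
        out = self.dropout(out)
        return self.act(out + x)   # residual (identity) connection


if __name__ == "__main__":
    block = HybridBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Stacking several such blocks between pooling stages and finishing with a feed-forward classifier head would yield the kind of hybrid topology described above; the specific depth and layer sizes would, of course, depend on the task.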

Published in American Journal of Neural Networks and Applications (Volume 8, Issue 2)
DOI 10.11648/j.ajnna.20220802.11
Page(s) 12-16
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2022. Published by Science Publishing Group

Keywords

Hybrid Neural Networks, Convolutional Neural Networks, Neural Network Performance, Dynamic Neural Network Structure, Image Representations

Cite This Article
  • APA Style

    Sineglazov, V., & Boryndo, I. (2022). The Optimal Choice of Hybrid Convolutional Neural Network Components. American Journal of Neural Networks and Applications, 8(2), 12-16. https://doi.org/10.11648/j.ajnna.20220802.11

  • ACS Style

    Sineglazov, V.; Boryndo, I. The Optimal Choice of Hybrid Convolutional Neural Network Components. Am. J. Neural Netw. Appl. 2022, 8(2), 12-16. doi: 10.11648/j.ajnna.20220802.11

  • AMA Style

    Sineglazov V, Boryndo I. The Optimal Choice of Hybrid Convolutional Neural Network Components. Am J Neural Netw Appl. 2022;8(2):12-16. doi: 10.11648/j.ajnna.20220802.11

  • @article{10.11648/j.ajnna.20220802.11,
      author = {Viktor Sineglazov and Illia Boryndo},
      title = {The Optimal Choice of Hybrid Convolutional Neural Network Components},
      journal = {American Journal of Neural Networks and Applications},
      volume = {8},
      number = {2},
      pages = {12-16},
      doi = {10.11648/j.ajnna.20220802.11},
      url = {https://doi.org/10.11648/j.ajnna.20220802.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajnna.20220802.11},
      abstract = {Given the direction in which modern AI systems are developing and the growing complexity of analytical and recognition tasks, hybrid convolutional neural networks are identified as a promising class of convolutional neural networks. The research results show that networks of this type achieve a lower mean square error with lower overall structural complexity. A generic structure of a hybrid convolutional neural network is proposed. It is shown that, to achieve the best accuracy and performance, such networks must include not only traditional components (convolutional layers, pooling layers, feed-forward layers) but also additional supporting layers (batch normalization layers, 1x1 convolutional layers, dropout layers, etc.). The important properties of these supporting layers (blocks) are identified and investigated. Based on the architectural requirements, modern hybrid CNN topologies are regarded as combinations of established CNNs such as the Squeeze-and-Excitation network, the poly-inception network, the residual network, and the densely connected network. Performance tests and final accuracy results are reported for each block, used both separately and in pairs, to highlight its internal parameters, advantages, and limitations. An example of a hybrid convolutional neural network constructed from the investigated structural blocks is proposed, together with the average reduction in training time attributable to each functional block, its advantages, and integration details.},
      year = {2022}
    }
    

  • TY  - JOUR
    T1  - The Optimal Choice of Hybrid Convolutional Neural Network Components
    AU  - Viktor Sineglazov
    AU  - Illia Boryndo
    Y1  - 2022/08/04
    PY  - 2022
    N1  - https://doi.org/10.11648/j.ajnna.20220802.11
    DO  - 10.11648/j.ajnna.20220802.11
    T2  - American Journal of Neural Networks and Applications
    JF  - American Journal of Neural Networks and Applications
    JO  - American Journal of Neural Networks and Applications
    SP  - 12
    EP  - 16
    PB  - Science Publishing Group
    SN  - 2469-7419
    UR  - https://doi.org/10.11648/j.ajnna.20220802.11
    AB  - Given the direction in which modern AI systems are developing and the growing complexity of analytical and recognition tasks, hybrid convolutional neural networks are identified as a promising class of convolutional neural networks. The research results show that networks of this type achieve a lower mean square error with lower overall structural complexity. A generic structure of a hybrid convolutional neural network is proposed. It is shown that, to achieve the best accuracy and performance, such networks must include not only traditional components (convolutional layers, pooling layers, feed-forward layers) but also additional supporting layers (batch normalization layers, 1x1 convolutional layers, dropout layers, etc.). The important properties of these supporting layers (blocks) are identified and investigated. Based on the architectural requirements, modern hybrid CNN topologies are regarded as combinations of established CNNs such as the Squeeze-and-Excitation network, the poly-inception network, the residual network, and the densely connected network. Performance tests and final accuracy results are reported for each block, used both separately and in pairs, to highlight its internal parameters, advantages, and limitations. An example of a hybrid convolutional neural network constructed from the investigated structural blocks is proposed, together with the average reduction in training time attributable to each functional block, its advantages, and integration details.
    VL  - 8
    IS  - 2
    ER  - 

Author Information
  • Viktor Sineglazov, Automation and Computer Integrated Complexes, National Aviation University, Kyiv, Ukraine

  • Illia Boryndo, Automation and Computer Integrated Complexes, National Aviation University, Kyiv, Ukraine
