Conflict is inevitable in communication between people from diverse backgrounds and settings. Computer systems also experience conflicts, in the form of bugs. Naturally, before a conflict of any sort occurs, whether of ideas or of perception, there must be some form of communication. Speech is one of the oldest and most natural means of information exchange between human beings. Humans speak and listen to each other in a human-human interface in order to resolve certain conflicts, whereas computers speak to humans through a computer-human interface. The output of a given speech may be understood or perceived differently when presented to different people. This paper takes a comparative approach and gives a rundown of the successes recorded in conflict resolution through human speech production in contrast to computer speech production. The author documents the conflict resolution practices of computers using try-catch block pseudocode, examines its effectiveness in conflict resolution and the properties it lacks, and then compares it with the way humans resolve conflict, in order to find a better approach. The methodology employed in this research is qualitative. The author explores the stages and techniques of applying an artificial intelligence system that scans a given speech production, and also how the brain processes information before it is finally voiced.
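The computing analogy named in the abstract is the try-catch block. The following is a minimal sketch of that pattern in Java; the class name, the parseMessage helper, and the fallback reply are illustrative assumptions and are not code from the paper.

```java
// Minimal sketch of the try-catch analogy described in the abstract.
// Names here (SpeechConflictDemo, parseMessage, the fallback text) are
// illustrative assumptions, not taken from the paper.
public class SpeechConflictDemo {

    public static void main(String[] args) {
        String reply;
        try {
            // The "conflict": an input the program cannot interpret,
            // analogous to a bug surfacing during communication.
            reply = parseMessage("forty-two");
        } catch (NumberFormatException e) {
            // The "resolution": the failure is caught and a safe
            // fallback is produced instead of a crash.
            reply = "I did not understand that, please rephrase.";
        }
        System.out.println(reply);
    }

    // Hypothetical helper that throws when the text is not a number.
    private static String parseMessage(String text) {
        int value = Integer.parseInt(text); // throws NumberFormatException
        return "You said the number " + value + ".";
    }
}
```

In the abstract's terms, such a block resolves the immediate failure but lacks the context-sensitive negotiation that human speech brings to conflict resolution.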
Published in | International Journal of European Studies (Volume 3, Issue 1)
DOI | 10.11648/j.ijes.20190301.16
Page(s) | 34-38
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright | Copyright © The Author(s), 2019. Published by Science Publishing Group
Keywords | Artificial Intelligence, Information Exchange, Speech Production, Conflict Resolution
APA Style
Okpala, I. U. (2019). Comparative Analysis in Conflict Resolution: Computer Speech Synthesis and Humans Speech Production in View. International Journal of European Studies, 3(1), 34-38. https://doi.org/10.11648/j.ijes.20190301.16