Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Volume 1, Issue 1, PP: 32-39, 2020 | Full Length Article

Artificial Intelligence enabled virtual sixth sense application for the disabled

Aditya Sharma 1*, Aditya Vats 2*, Shiv Shankar Dash 3, Surinder Kaur 4

  • 1 Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi, India - (adityasharma965@gmail.com)
  • 2 Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi, India - (adityavats98.av@gmail.com)
  • 3 Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi, India - (dashshiv20@gmail.com)
  • 4 Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi, India - (kaur.surinder@bharatividyapeeth.edu)
  • DOI: https://doi.org/10.54216/FPA.010104

    Abstract

    Sixth Sense is a multi-platform application for assisting people with disabilities: people who cannot speak (mute), cannot hear (deaf), cannot see (blind), cannot recognize or differentiate between objects (visual agnosia), and people with autism (characterized by great difficulty in communicating, forming relationships with other people, and using language and abstract concepts). Our current implementation targets two platforms, a mobile app and a web app; the mobile app also supports object detection in offline mode. Our aim is to improve daily life for people with disabilities and to serve as an educational aid for people with cognitive disabilities. The current implementation provides object recognition, a text-to-speech converter, and a speech-to-text converter. The speech-to-text and text-to-speech converters use the Web Speech API (Application Program Interface) on the website and the mobile platform's native text-to-speech and speech-recognition libraries on the mobile app. Object recognition offers little benefit on a website, so it has been implemented in the mobile app using the Firebase ML Kit and several pre-trained models, available both offline and online.
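
    As a sketch of the web-side pipeline described above, the snippet below shows one way the Web Speech API can drive speech-to-text and text-to-speech from a browser. It is a minimal illustration, not the paper's implementation: the function names (speakText, listenOnce) and the en-US language setting are assumptions.

        // Text-to-speech: queue an utterance on the browser's speech synthesis engine.
        function speakText(text: string): void {
          const utterance = new SpeechSynthesisUtterance(text);
          utterance.lang = "en-US"; // language assumed for illustration
          window.speechSynthesis.speak(utterance);
        }

        // Speech-to-text: run one recognition session and resolve with the transcript.
        // SpeechRecognition is exposed as webkitSpeechRecognition in Chromium-based browsers.
        function listenOnce(): Promise<string> {
          const Recognition =
            (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
          const recognizer = new Recognition();
          recognizer.lang = "en-US";
          recognizer.interimResults = false;
          return new Promise((resolve, reject) => {
            recognizer.onresult = (event: any) => resolve(event.results[0][0].transcript);
            recognizer.onerror = (event: any) => reject(event.error);
            recognizer.start();
          });
        }

        // Example: read back whatever the user just said.
        listenOnce().then(speakText).catch(console.error);

    Note that recognition requires microphone permission and currently works only in browsers that expose the SpeechRecognition interface, while speech synthesis is more widely supported.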

    Keywords:

    Sixth sense, disabilities, Web Speech API, Firebase ML Kit, cognitive disabilities


    Cite This Article As:
    Sharma, A., Vats, A., Dash, S. S., & Kaur, S. (2020). Artificial Intelligence enabled virtual sixth sense application for the disabled. Fusion: Practice and Applications, 1(1), 32-39. DOI: https://doi.org/10.54216/FPA.010104