Personalizing Content Using Voice in a Digital Asset Ecosystem

Peter Stanchev, Alexander Iliev

Research output: Contribution to journal › Article › peer-review

Abstract

Behind any cloud-based service lies a complex infrastructure that varies greatly depending on the industry and the types of services provided. Storing, searching, and finding data through deep learning and artificial intelligence is the logical and necessary way forward. In the entertainment industry, for example, digital media libraries offer massive amounts of media material to the public, where the main challenges are finding, accessing, and recommending specific content out of an enormous set of choices. One way to approach the issue is through media-descriptive metadata, which comes as plain text (synopsis), sound (narration), images (cover shots, etc.), or short videos (trailers). This, however, is the conventional approach, which builds further around the problem of finding specific content easily rather than solving it directly. This is so not only because specific applications must be developed, but also because they typically require physical human interaction with the system through user-dependent keystrokes. This in turn makes access to and extraction of digital content cumbersome and slow. Hence a better, more personalized, automatic behind-the-scenes approach is needed.
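As a rough illustration of the metadata-driven retrieval the abstract describes, the sketch below ranks catalog items against a transcribed voice query by keyword overlap with their text metadata (synopsis). All names and data here are invented for the example; the paper's actual voice- and emotion-based pipeline is not shown.

```python
# Hypothetical sketch: rank media items by overlap between a transcribed
# voice query and each item's synopsis metadata. Catalog and queries are
# invented for illustration only.

def tokenize(text):
    """Split text into a set of lowercase words, stripping punctuation."""
    return {w.strip(".,!?").lower() for w in text.split()}

def rank_by_query(catalog, transcribed_query):
    """Return catalog titles sorted by synopsis overlap with the query."""
    query_terms = tokenize(transcribed_query)
    scored = []
    for title, synopsis in catalog.items():
        overlap = len(query_terms & tokenize(synopsis))
        scored.append((overlap, title))
    scored.sort(reverse=True)  # highest overlap first
    return [title for score, title in scored if score > 0]

catalog = {
    "Deep Blue": "an ocean documentary about whales and coral reefs",
    "City Lights": "a silent-era comedy set in a bustling city",
    "Star Drift": "a science fiction journey across distant galaxies",
}

print(rank_by_query(catalog, "show me a documentary about the ocean"))
```

A production system would replace the keyword overlap with learned representations and feed the query from a speech-recognition front end, but the retrieval step keeps the same shape: score every asset's metadata against the spoken request, then return the best matches.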
Original language: American English
Journal: International Journal of Innovative Research in Science, Engineering and Technology
Volume: 5
State: Published - Dec 2016

Keywords

  • Personalization
  • Emotion
  • Speech
  • Voice
  • Recognition
  • Digital Asset Ecosystem
  • Cloud Services

Disciplines

  • Computer Sciences
