Music is medicine for many of us in times of loneliness, stress, distress, failure and setbacks. It is also a way to express our joys and happiness in life. Research suggests that we listen to different kinds of music depending on our mood: peppy songs when we are joyful, emotional ones when sad, motivating ones in times of failure. There are studies both supporting and disputing the claim that music improves the brain's cognitive abilities. Rather than the user searching for songs that suit his or her mood, how useful would it be if a system recommended a list of songs based on our current emotional state? Extremely helpful in putting us in the right frame of mind! The patent described here covers an intelligent music system that suggests a playlist of songs depending on our emotional state of mind.
The system designed here has at least one bio-signal sensor configured to capture bio-signal data from at least one user. Bio-signals are signals generated by biological beings that can be measured and monitored. Human brains generate bio-signals such as electrical patterns, which are measured and monitored using an electroencephalogram (EEG). The system builds a database of a user's EEG responses to particular musical streams. Combined with additional information such as the user's preferred music genres, answers to personality questions and demographic information, the system recommends a personalized music list based on the user's current emotional state and the state the user wishes to reach. The system thus responds with a particular piece of music depending on the emotion the user is experiencing, and it may even start playing the music instantly. While the user has access to many songs in the database, there may be scenarios in which the user does not have access to certain music, in which case the system may suggest ways the song can be obtained (for example, through purchase or a third-party service). The music and the bio-signal database of songs and emotions may be stored on a local computer or across multiple servers (such as in the cloud).
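The core idea above, matching a song library against the user's desired emotional state using previously recorded EEG responses, can be sketched in a few lines. This is a minimal illustration, not the patent's actual implementation; the `Song` structure, the per-emotion scores and the `recommend` function are all assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    genre: str
    # Average emotional response observed (e.g. via EEG) while this song
    # played, such as {"calm": 0.9, "happy": 0.2}; values in [0, 1].
    # The emotion names and scale are illustrative assumptions.
    eeg_response: dict

def recommend(songs, desired_state, top_n=3):
    """Rank songs by how strongly their recorded responses match the
    user's desired emotional state, and return the top titles."""
    ranked = sorted(songs,
                    key=lambda s: s.eeg_response.get(desired_state, 0.0),
                    reverse=True)
    return [s.title for s in ranked[:top_n]]

library = [
    Song("Rainfall", "ambient", {"calm": 0.9, "happy": 0.2}),
    Song("Uptempo", "pop", {"calm": 0.1, "happy": 0.8}),
    Song("Blue Notes", "jazz", {"calm": 0.6, "happy": 0.5}),
]

print(recommend(library, "calm", top_n=2))  # ['Rainfall', 'Blue Notes']
```

A real system would of course also weigh the user's current state, preferred genres and demographic data, as the patent describes; this sketch keeps only the desired-state ranking to show the shape of the idea.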
Music is universal, and no language barrier prevents us from loving it. People listen to music with different goals in mind: to overcome boredom and stay attentive while studying or driving, to influence their emotional state toward a desired mood such as happiness, excitement or even sadness, or simply for pleasure. Users may also be asked questions to determine what type of person they are and what kind of music they would prefer to hear. Questions might include: "Think of a song that makes you feel sad"; "What was your favorite song when you were in love?"; "Think of a song that makes you feel like dancing." Individuals might respond with statements such as: "I love sad music" or "I hate sad music"; "I work harder than others think"; "I'm an emotional person" or "I don't get emotional about things"; "I am slightly shy" or "I love hanging out with friends." Such questions and answers provide additional data beyond the EEG readings alone. But the present invention goes further than personality questions: it adds the user's EEG data as additional training data for songs that have been labelled as evoking a particular emotion, whether the user reported the emotion through the questions and statements above or tagged a song manually.
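The idea of merging questionnaire answers with EEG data into labelled training examples might look something like the following. The field names, the yes/no encoding and the feature layout are all hypothetical; the patent does not specify this representation.

```python
def build_training_example(eeg_features, questionnaire, emotion_label):
    """Merge EEG-derived features with encoded questionnaire answers
    into one labelled feature vector for a song the user has tagged
    with an emotion."""
    # Encode yes/no personality answers as 0/1, in a stable key order
    # so every example has the same feature layout.
    encoded = [1.0 if questionnaire.get(q, False) else 0.0
               for q in sorted(questionnaire)]
    return {"features": list(eeg_features) + encoded,
            "label": emotion_label}

example = build_training_example(
    eeg_features=[0.42, 0.17, 0.88],           # e.g. EEG band-power values
    questionnaire={"loves_sad_music": True,    # "I love sad music"
                   "emotional_person": False}, # "I don't get emotional"
    emotion_label="sad",                       # user-tagged song emotion
)
print(example["features"])  # [0.42, 0.17, 0.88, 0.0, 1.0]
```

The point of the combined vector is exactly what the text describes: the classifier learns from both what the user says about themselves and what their brain signals show while a labelled song plays.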
The type of song we like to hear depends on us. Some of us listen to sad songs when we are sad, while others listen to happy songs when sad. Intensely emotional music releases dopamine in the pleasure and reward centers of the brain, much like food, drugs and sex; this makes us feel good and repeat the behavior. Likewise, the more emotions a song provokes, the greater our interest in listening to it. Some people also cry to relieve stress and lift their mood. The present invention also determines the user's emotional response shortly after the music starts to play (for example, after five seconds). The user's emotional response is captured throughout playback, and each response is associated with the playback position of the song. While EEG may not be a one-stop solution for recognizing every emotion, it is still extremely good at noticing changes in the brain's state. EEG measures a series of responses to stimuli that occur in the brain and can recognize responses associated with feelings such as recognition, novelty, error, sleepiness, calm and focused attention. The invention does not stop at detecting these emotions: it provides for adding more sensors to capture data not available from the brain, and for incorporating data from sensors on other devices the user is wearing. While an EEG can sense a negative response to a stimulus, it is quite difficult for the system to learn what generated that negative response. By presenting its EEG-based prediction, the system gives the user a chance to reject it and correct it with their own experience, improving the accuracy of emotion prediction over time. The patent was published on October 22nd, 2015; for more details about the patent, please visit the following websites:
United States Patent & Trademark Office:http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=2&f=G&l=50&co1=AND&d=PTXT&s1=%22brain-state+data%22&s2=stephanie&OS=%22brain-state+data%22+AND+stephanie&RS=%22brain-state+data%22+AND+stephanie
European Patent Office: https://worldwide.espacenet.com/publicationDetails/biblio?DB=EPODOC&II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20151022&CC=US&NR=2015297109A1&KC=A1
World Intellectual Property Organization: https://patentscope.wipo.int/search/en/detail.jsf?docId=US152774732&_cid=P20-JYFQUM-28038-1
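The feedback loop described earlier, in which emotional responses are sampled throughout playback, tied to the playback position, and can be overridden by the user when a prediction is wrong, can be sketched as follows. The class and method names are illustrative assumptions, not the patent's actual design.

```python
class PlaybackEmotionLog:
    """Records EEG-derived emotion predictions against playback
    positions and lets the user correct a prediction they reject."""

    def __init__(self):
        self.samples = []  # list of (position_seconds, emotion) pairs

    def record(self, position_seconds, predicted_emotion):
        """Store a predicted emotion at a playback position."""
        self.samples.append((position_seconds, predicted_emotion))

    def correct(self, position_seconds, user_emotion):
        """Replace the prediction nearest to this position with the
        user's self-reported emotion, yielding better training labels."""
        if not self.samples:
            return
        idx = min(range(len(self.samples)),
                  key=lambda i: abs(self.samples[i][0] - position_seconds))
        self.samples[idx] = (self.samples[idx][0], user_emotion)

log = PlaybackEmotionLog()
log.record(5, "calm")          # e.g. five seconds into the song
log.record(30, "sad")
log.correct(31, "nostalgic")   # user rejects "sad" and relabels it
print(log.samples)  # [(5, 'calm'), (30, 'nostalgic')]
```

The corrected samples are exactly the kind of user-validated labels the text says can improve the system's prediction accuracy over time.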