Podcast: Catching Feelings with 69 percent Accuracy: AI Reads Emotions in Sports

In this episode, we discuss research coming out of the Karlsruhe Institute of Technology to detect the emotions of tennis players better than humans can!


25 Jun, 2024. 13 min read



EPISODE NOTES

(0:50) - AI recognizes emotions in real sports situations

Become a founding reader of our newsletter: http://read.thenextbyte.com/


Transcript

What's up, folks? Do you struggle with understanding other people's emotions? If you do, don't worry, you're not alone. But we're talking today about an AI that's been developed that catches feelings with 69% accuracy. I think it's super interesting, so let's jump into it.

I'm Daniel, and I'm Farbod. And this is the NextByte Podcast. Every week, we explore interesting and impactful tech and engineering content from Wevolver.com and deliver it to you in bite sized episodes that are easy to understand, regardless of your background. 

Daniel: What's up friends? Like we said, today we're talking all about an AI that can read human emotions better than most people can. And I think that's just a really interesting challenge to talk about in terms of human society as a whole and how we're interacting with AI, especially because the research we're talking about today was completed inside the realm of athletics. So, it's about knowing and understanding athletes' feelings during the game, understanding interactions between players and the other people on their team, and also understanding whether athletes are experiencing burnout. It's a really interesting challenge because currently even humans don't have a great understanding of how other humans are feeling. They talked about tennis players during a tennis match, and it's really hard to tell that 100% of the time. And so, they're creating an AI that can read human emotions during sports better than any human can. And because sports are such fast, unpredictable environments, the thinking is that if they get good at reading humans in the sports realm, they can apply that to other realms of human emotion as well.

Farbod: Yeah, and one thing that was really interesting to me was I assumed there's some sort of AI model or data set out there for human emotions that you could train on or build off of. But these folks were like, no, we're actually just gonna focus on tennis players, gather all of our data from that, and then build our model based off of that data. And for a second I was like, that doesn't really make sense, because your data set's obviously gonna be a little bit smaller. But then I thought, well, you know, from sport to sport, from engagement to engagement, humans show their emotions differently. What might be the norm in hockey, showing your appreciation by slapping someone's head or showing your satisfaction by pushing them through the boards, doesn't necessarily apply to the emotions you might be showing in tennis, where it's subtle, or in golf, which is a much gentler sport. So, it makes more sense to hone your data set in on the application you're going after. And yeah, I thought it was really interesting. They studied these 15 tennis players and they were like, okay, the set went well, now they're pumping their fists in the air, so that means they're happy. Or they just missed their shot and now they look a little bit frustrated or their eyes narrow a little bit. And that's the nuance that this model was really looking for.

Daniel: Well, and it also makes me think back to episode 93. We talked about how AI could predict sports really, really well.

Farbod: In hockey.

Daniel: I think they started, they did hockey and also volleyball.

Farbod: Yes.

Daniel: Is that true? I think so. And when they were first training, they used volleyball because there were set spurts of play, usually about 30 seconds of volleying back and forth before someone scores, and they were able to use that to train the model to understand how players interact with each other and to predict the next possible outcome. And this AI from episode 93, which we can link, or you can check out on our show if you're interested in hearing more about AI and sports, got really, really good at predicting the next outcome using volleyball as the training set. And then it went on to be able to predict hockey really, really well. And similarly here, I feel like tennis is kind of similar to volleyball in that there are set short spurts of volleys, so you can gather hundreds or thousands of 30-second clips and not deal with a lot of the interference you might get in other sports, where there's players hitting each other and players subbing in and out off the bench. None of those X factors happen in tennis or in volleyball, which I think makes them prime footage for training. That's not something that was explicitly mentioned, but I'm wondering if it was part of the calculus here: not only are we interested in using tennis as our training ground, so to speak, to get this AI really, really good at understanding how players' reactions change when they win or lose points, to help understand whether they're happy or not, burnt out or not, or upset or not. I wonder if tennis was also strategically chosen because the way the game is structured allows for a lot of really structured training data that's pretty compatible with the way you would train a convolutional neural network.

Farbod: Yeah, no, that's a very good point. I didn't even consider that. But I guess one of the things I was wondering, as not so big of a sports fan, was why do you care that much about the emotions of players? And one thing they noted is that as a coach, you could actually analyze data in real time or after the fact to see how your players are doing and what the team dynamics are, or even start picking up early signs of burnout, which is wonderful. And then on the consumer side of sports, if you're the type that's really passionate about your team, or maybe you're into sports betting, you could use a tool like this to understand how a player or a team is doing. So those are some of the direct impacts it could have on sports performance overall.

Daniel: Yeah, and I agree. So, just to talk specifically about the methodology they used here, their secret sauce: they made an AI model that looks specifically at the body language of tennis players. They recorded 15 tennis players across a bunch of different matches and watched specifically how these players' body language and faces changed when they won or lost points. They categorized those behaviors, whether players were lowering their head, raising their arms, giving each other high fives, et cetera. Then they used a convolutional neural network, a type of deep learning model that's really good at finding patterns in pictures and videos. So, they took these videos, labeled whether the points were won or lost, and asked the convolutional network to train on them, figure out what patterns exist in body language and facial expressions between winning and losing points, and categorize whether the player was happy, sad, angry, or upset during each play. And the accuracy of this AI model, it was right about 69% of the time, 68.9% to be exact, which is interesting. In my mind, I was like, oh, that's pretty low for an AI model.
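[For readers following along who want a feel for what that kind of model looks like in code, here is a minimal sketch of a convolutional classifier in PyTorch. This is not the researchers' actual architecture; the layer sizes, input shape, and the two-class positive/negative labels (proxied by won/lost points) are illustrative assumptions.]

```python
# Minimal sketch of a CNN emotion classifier in PyTorch.
# NOT the researchers' actual model: architecture, input size, and the
# two-class labels (0 = negative, 1 = positive, proxied by lost/won points)
# are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked conv blocks learn spatial patterns (posture, gestures)
        # from a single video frame cropped around the player.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) RGB frames
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = EmotionCNN()
frames = torch.randn(8, 3, 224, 224)    # stand-in for real clip frames
labels = torch.randint(0, 2, (8,))       # stand-in won/lost-point labels
loss = nn.CrossEntropyLoss()(model(frames), labels)
loss.backward()                          # one illustrative training step (no optimizer shown)
print(loss.item())
```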

Farbod: That's what I thought too.

Daniel: Usually when we're sitting here looking at AI accuracy for things like early detection of cancer from scans, we're looking at like 99.999% accuracy, AI is way better than humans, et cetera. But humans are only about 68-69% good at detecting the correct emotion on other people during a sports game. So, they said, you know, this AI model was about the same as humans are, and in some cases a little better than humans are at reading emotions on other players, which is pretty interesting because it shows the challenge and the friction that humans have in understanding each other's emotions. I imagine that's an even more challenging problem for AI and computers, given that there's so much gray space there.

Farbod: I agree. And another finding that was interesting that kind of extends your point is that both humans and the AI were much better at detecting negative emotions than positive ones. And the paper had a quick note as to why that might be the case. And they're like, oh, as we evolved, it might have been more advantageous to display negative emotions, to ward people off, or threats off, or display your dismay at something. Whereas positive emotions, there's no reward value for really sharing that, which is why we might be bad at expressing positive emotions.
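[As a rough illustration of how that overall versus per-class gap could be measured, here is a small sketch that computes accuracy separately for negative and positive clips. The prediction and label arrays below are made up for illustration; they are not the study's data.]

```python
# Hypothetical sketch: overall accuracy vs. per-class accuracy.
# The arrays are invented example data, not results from the paper.
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])   # 0 = negative, 1 = positive
preds  = np.array([0, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # model outputs

overall = (preds == labels).mean()
neg_acc = (preds[labels == 0] == 0).mean()   # accuracy on negative-emotion clips
pos_acc = (preds[labels == 1] == 1).mean()   # accuracy on positive-emotion clips
print(f"overall={overall:.2f}  negative={neg_acc:.2f}  positive={pos_acc:.2f}")
# In this toy example the model is better on negative clips than positive ones,
# mirroring the pattern described for both humans and the AI.
```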

Daniel: Well, and anecdotally, I think it's really, really easy to tell if someone's pissed off. If someone hates you, you usually know. If someone's angry at you or coming at you with a knife on the street, like your body, deep down…

Farbod: No other way to interpret that.

Daniel: Down to your fight or flight response, your body understands at its core that someone else is angry at you. Yeah. It's challenging sometimes to tell if someone's happy with you, if they truly like you, right? Again, maybe it's somehow ingrained in human nature, or maybe the way we all express our happy emotions is just different. There's a little bit more friction there, but I absolutely, just completely anecdotally, can understand that it's easier for humans to tell when someone else is mad. And I guess that might translate to computers understanding when humans are mad, given that there's probably some-

Farbod: A little bit of cues just to show that.

Daniel: Yeah, some really strong trends and patterns from human to human in how we express anger, versus the variety of different ways we express whether we're happy or excited about something.

Farbod: I guess not to make this a therapy podcast, but folks, get better at expressing your happiness and, I don't know, showing it to other people. Yeah, maybe you could help the AIs.

Daniel: Yeah. But again, I just think this is really interesting from a technical perspective, that they're able to use training data from sports. Largely because of all the different X factors, people moving around, fast movements, lots of changes during games, sports are actually a relatively challenging training environment for a convolutional neural network to understand. I'm glad they were able to find tennis, which seems to be framed in a way that allows this network to be successful at understanding different emotions and movements. And the so what here, right? Athletes can better understand their own feelings to better understand their own game and their own teamwork, coaches can see where there are issues with chemistry and help stop burnout, and interestingly, people can watch and try to guess emotions, but we're only just about as good as this computer is. So, I wonder if there's some headroom here for computers to get even better at detecting emotions on the athletic field as well.

Farbod: Yeah, definitely. And I guess you mentioned early on, this is kind of the starting point for AIs doing human emotion detection. It made me think, if you were to scale this type of technology, how would it fare as you start looping in different cultural aspects, different ethnicities? I think about myself, an Iranian man, I'm quite animated, I use my hands a lot as I'm expressing myself. I'm wondering if an AI could pick up on the nuance: he's probably not super excited, he's just normal and Iranian, and that's why he's doing what he's doing, you know? Or maybe you're from a culture where being loud and your facial features changing is actually a term of endearment somehow, and now the AI is thinking you're upset. I wonder if those nuances can be caught.

Daniel: No, I agree. I think one of the next steps this team should take, and one a lot of folks using AI are trying to take, is to diversify their training set as much as possible. So, in this case, I'd love for them to diversify in terms of which sports and which training footage they use. Maybe at some point, and I think this was mentioned as potential future work, using this in a customer service setting to understand if the customer service employee is actually appropriately resolving the customer's problem. I think that's an awesome application of this. But in addition to varying the different fields, right, I agree, you need to vary the different cultures, because culturally speaking, there are a bunch of different ways in which people express their emotions, whether that be excitement or discontent. I can just as easily imagine there are personality differences as well as cultural differences. I know, for instance, my wife, Nellie, if she were upset about someone sending her the wrong salad at a restaurant, she would just get quiet, she's non-confrontational. Meanwhile, there are enough people in the world that if that happened to them, they'd flip the plate and yell at the waiter, right? There's a spectrum, even within cultures, just based on personality, in how people react to things. I think that's why this AI model is only about 70% accurate as is. And I think if they're able to train it on a wider variety of different people, different emotions, different cultures, different arenas, and in this case, arenas being both sports and outside of sports, they'll be able to create a model where, interestingly enough, AI might get better at detecting human emotions than humans are, hands down.

Farbod: Yeah, no, that's a great point. My last thing that I wanted to touch on, one application I'd be interested in: there's all this tech being used for professional drivers, think truckers or other folks who drive a lot for work, that have speedometers to make sure they're not speeding and GPS to make sure they're going the right route, just maximizing how well they're performing as drivers. A lot of these setups also have cameras now looking at the drivers to make sure they're not falling asleep. Imagine analyzing them in real time to make sure they don't have road rage, and if they do, it'd be like, hey, calm down or we're gonna…

Daniel: Dude, that's awful big brother of you there.

Farbod: I know, I know, but like, I don't know, I guess I got beef with road rage drivers. And I'm like, huh, it would be nice to have that extra layer of safety that says, hey, you're upset right now, how about we calm down? We slow down the car.

Daniel: Take a deep breath.

Farbod: Yeah, cause you know, you could kill a lot of people with this big car or this truck that you're like hauling around.

Daniel: Yeah, dude, I don't know. That's an interesting application.

Farbod: It's definitely on the border of like too much into people's lives.

Daniel: No, I agree. And just to mention, there's another portion of this that they talked about: in using human data for AI and allowing AI to understand humans potentially better than humans can, there are a lot of ethical considerations as well. They talked about making sure that people's privacy is safe and that data isn't being used wrongly. It was very important to the study that the people who were used as the training set were protected from a privacy perspective. But then also they said, before we push too far with this technology, let's collectively agree as a society on what the rules should be. What's the fence we're gonna put around the sandbox that we allow AI to play in, so that we don't just start to understand emotions in sports better and then AI takes over the world because it can understand humans way better than we can. So it's just an interesting application here, where this AI, after their first whack at it, is already just about as good as humans. If they get to a point where AI can read human emotions way better than humans can, I don't know, it could be an interesting future. I feel like men are historically pretty awful at reading each other's emotions. There are all these memes, I think I sent one to you, where it's like, dudes could be going through the best time of their life or the worst time of their life, and they always just respond, oh, how are you doing? Good. I feel like maybe we get to a point where, when we're going out with the guys from college, we all wear these AI glasses that tell us if we're actually happy or not.

Farbod: Yeah, let them do the heavy lifting.

Daniel: Yeah. So, then men don't have to communicate their emotions.

Farbod: That's the product right there.

Daniel: That's the end game.

Farbod: Yeah. Want to wrap it up?

Daniel: Yeah. So interestingly, we've got this challenge, these folks who are trying to create an AI that can read human emotions better than any person can, and they're doing it in the arena of sports. It's already really challenging to recognize emotions in humans, especially for computers, and in fast, unpredictable sports environments it gets even harder to observe people and understand exactly what their emotions are. But these researchers, using tennis footage as their training data, trained a convolutional neural network to study real game videos. These neural networks work by finding patterns in the images and videos, and through multiple layers of analysis the model was able to achieve about 69% accuracy at understanding what a player's emotions were during a sports clip. They think this AI could help improve sports training and performance, but it could also be used in other areas like healthcare and customer service. And they're also looking at the ethical considerations to make sure we don't let AI take over the world, because it already understands human emotions about as well as humans can, and it's gonna get even better.

Farbod: Money. All right, folks, thank you so much for listening. And as always, we'll catch you in the next one.

Daniel: Peace.


As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.

To learn more about this show, please visit our shows page. By following the page, you will get automatic updates by email when a new show is published. Be sure to give us a follow and a review on Apple Podcasts, Spotify, and your favorite podcast platforms!

--

The Next Byte: We're two engineers on a mission to simplify complex science & technology, making it easy to understand. In each episode of our show, we dive into world-changing tech (such as AI, robotics, 3D printing, IoT, & much more), all while keeping it entertaining & engaging along the way.


The Next Byte Newsletter

Fuel your tech-savvy curiosity with “byte” sized digests of tech breakthroughs.