The tech industry is always trying to make its AIs think and behave more like humans, and the ever-increasing sophistication of virtual assistants is proof of that. They can now tell jokes, hold short conversations, and even respond to basic courtesies like "please" and "thank you."

However, Amazon isn't one to leave well enough alone, especially when it comes to its Alexa assistant. Though Alexa already sounds surprisingly human for a machine, Amazon's engineers have today introduced "Alexa Emotions" and "Speaking Styles" to the virtual helper.

Alexa Emotions let the assistant offer happy, excited, disappointed, and empathetic responses based on different prompts. These response types can be used in Skills, but it sounds like they've been blended into the core Alexa experience as well. You can hear some samples of these different emotional responses (and compare them to Alexa's "neutral tone") right here. Notably, each emotion is available in three intensities: Low, Medium, and High.
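For Skill developers, the emotions appear to be exposed through Alexa's SSML speech markup. The Python sketch below is a hypothetical helper, not official sample code; the `<amazon:emotion>` tag and its name/intensity attributes follow Amazon's SSML reference as best I can tell, so treat the exact values as an assumption.

```python
# Hypothetical sketch: wrapping a Skill's response text in Alexa's emotion
# SSML so the neural voice renders it with the chosen tone and intensity.
# The tag and attribute names follow Amazon's SSML reference; the helper
# function itself is illustrative only.

def emotional_speech(text: str, emotion: str = "excited",
                     intensity: str = "medium") -> str:
    """Build an SSML string asking Alexa to speak `text` with the given
    emotion at low, medium, or high intensity."""
    return (
        "<speak>"
        f'<amazon:emotion name="{emotion}" intensity="{intensity}">'
        f"{text}"
        "</amazon:emotion>"
        "</speak>"
    )

# Example: a trivia Skill congratulating the player at high intensity.
print(emotional_speech("You got all ten questions right!", "excited", "high"))
```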

Speaking Styles are similar in that they also aim to make Alexa feel a bit more human and "real," but they go about it differently. Starting today, two speaking styles are available in the US: news and music, both of which "tailor Alexa's voice" to the appropriate content type.

The "News" speaking style places more emphasis on important numbers or figures, just as a newscaster would. The Music style has a slightly more lighthearted, conversational tone, similar to what you might hear from your favorite music station's hosts.

Both Speaking Styles and Alexa Emotions take advantage of a technology called "Neural TTS" (Text-To-Speech), which you can learn more about right here. In short, Neural TTS synthesizes speech from scratch, rather than relying on small pieces of "pre-recorded sounds."
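If that distinction sounds abstract, the toy Python sketch below (which bears no resemblance to Amazon's actual system) contrasts the two ideas: a concatenative engine stitches together clips from a bank of recordings, while a neural engine generates the waveform directly from a trained model.

```python
import numpy as np

# Toy contrast, not Amazon's implementation: concatenative TTS plays back
# small pre-recorded audio units glued together; neural TTS produces the
# waveform from a model trained on recordings, so no clip is replayed.

SAMPLE_RATE = 16_000

# Concatenative: join short pre-recorded snippets from a (made-up) unit bank.
unit_bank = {"he": np.random.randn(1600), "llo": np.random.randn(2400)}
concatenative = np.concatenate([unit_bank["he"], unit_bank["llo"]])

# Neural: a stand-in "model" (a random linear layer here) maps text features
# to one second of audio in a single pass.
text_features = np.random.randn(1, 64)            # pretend encoder output
model_weights = np.random.randn(64, SAMPLE_RATE)  # pretend learned weights
neural = np.tanh(text_features @ model_weights).ravel()

print(concatenative.shape, neural.shape)  # both are raw audio waveforms
```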