DHRUVI MEHTA

Hi! I'm Dhruvi, a data analyst and visualisation specialist based in New York City, with a background in graphic design.


SELECTED WORK


01 Global Landscape of Climate Finance
02 Sound and Sentiment
03 Emotional Influences in True Crime
04 The Mental Health Divide
2025 by Dhruvi Mehta

Sound and Sentiment
Data Visualisation

For some people, music is more than just sound – it connects them to a feeling, a mood, a memory. As someone who creates playlists based on how I am feeling, I have always been fascinated by the emotional undertones in music. This project focuses on a larger question:

  • Can the emotional landscape of today’s popular music be visualised in a way that feels like the music itself?

The Spark Behind the Project:
We tend to respond to music on instinct. But platforms such as Spotify quantify emotion with metrics like valence (a measure of musical positivity), danceability, speechiness, tempo, and energy. I wanted to explore what those numbers actually look like, and how they vary across genres, artists, and songs.
Do more popular genres tend to be “happier”? Could I spot emotional patterns in different tracks? And what visual language could best represent these feelings?


Data and Methodologies:
Dataset: I used a cleaned version of the Spotify API dataset, which is publicly available on Kaggle. The dataset contains audio features and attributes for 1,000+ songs, including genre, artist, track name, popularity, danceability, and various related metrics.

Software and Libraries: I worked in RStudio and used the following libraries:
1. dplyr and the wider tidyverse for data wrangling
2. ggplot2, treemapify, and packcircles for static visualisations
3. plotly and shiny for interactive charts and the final dashboard

Design goal: Create an interactive, neon-themed dashboard with a glow-wave aesthetic (black backgrounds, cyan and magenta highlights) that feels both modern and musical.

The Process: From Data to Design:

I started by filtering songs with a popularity score of 80 or above to focus on major hits. I then grouped the data by genre to calculate total popularity and mean valence – essentially measuring the popularity and positivity of each genre of music. 
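The filtering and grouping step described above can be sketched in dplyr roughly as follows. This is a minimal sketch, assuming the Kaggle export has been read into a data frame `spotify` with columns named `track_genre`, `popularity`, and `valence` (the names are illustrative, not confirmed from my actual script):

```r
library(dplyr)

# Keep only major hits, then summarise each genre's total popularity
# and average valence (emotional positivity).
genre_summary <- spotify %>%
  filter(popularity >= 80) %>%
  group_by(track_genre) %>%
  summarise(
    total_popularity = sum(popularity),  # drives tile size in the treemap
    mean_valence     = mean(valence),    # drives tile colour
    .groups = "drop"
  ) %>%
  arrange(desc(total_popularity))
```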

Some of the attributes that were used for these visualisations: 

  • Track ID: The Spotify ID for the track

  • Artists: The names of the artists who performed the track. If there are multiple artists, they are separated by a semicolon. 

  • Track Name: Name of the track

  • Track Genre: The genre to which the track belongs

  • Popularity: A score from 0–100 indicating how popular the track is, based on how often and how recently the track has been played.

  • Duration (ms): The track length in milliseconds

  • Energy: A value from 0.0 to 1.0 representing the intensity of the track; the higher the value, the faster and louder the track.

  • Speechiness: Measures the presence of spoken words in a track.
      - Above 0.66 = mostly speech (e.g. podcasts)
      - 0.33–0.66 = mix of speech and music (e.g. rap)
      - Below 0.33 = mostly music

  • Valence: A value from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).

  • Tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
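The speechiness bands above can be encoded as a categorical column for later grouping. A minimal sketch, again assuming a `spotify` data frame with a numeric `speechiness` column:

```r
library(dplyr)

# Bucket each track into the three speechiness bands described above.
spotify <- spotify %>%
  mutate(speech_band = case_when(
    speechiness > 0.66  ~ "mostly speech",     # e.g. podcasts
    speechiness >= 0.33 ~ "speech and music",  # e.g. rap
    TRUE                ~ "mostly music"
  ))
```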



Design Rationale and Visualisation Principles:

While designing the visualisations, I focused on blending data clarity with a visual aesthetic that feels emotionally resonant and musically inspired. I followed key principles of information visualisation, including intuitive colour encoding and visual hierarchy. For instance, valence, the core emotional variable, is mapped using a cyan-to-magenta colour gradient — chosen both for its strong contrast and its alignment with emotional polarity (cool for positive, warm for negative).

I aimed to use colour and spatial grouping to help users naturally compare patterns across genres, moods, and track features. The neon/glow-wave theme reinforces the modern, digital feel of music platforms while also ensuring visual consistency across plots. Finally, by incorporating interactive tools such as Plotly and Shiny, the dashboard invites users to explore their own questions, making it both visually engaging and user-centred — ideal for a general audience of music lovers, designers, and data enthusiasts.
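The cyan-to-magenta valence encoding and the dark glow-wave theme can be set up once and reused across plots. This is one possible sketch; the exact hex values and theme settings are assumptions, not the project's confirmed palette:

```r
library(ggplot2)

# Shared valence gradient: magenta = low valence, cyan = high valence.
valence_scale <- scale_colour_gradient(
  low = "#FF00FF", high = "#00FFFF",
  limits = c(0, 1), name = "Valence"
)

# Dark neon theme applied to every chart for visual consistency.
glow_theme <- theme_minimal() +
  theme(
    plot.background  = element_rect(fill = "black", colour = NA),
    panel.background = element_rect(fill = "black", colour = NA),
    text             = element_text(colour = "white")
  )
```

Defining the scale and theme as reusable objects keeps the emotional colour encoding identical across the treemap, scatterplot, and bubble chart.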



Treemap: Popular Genres and Their Vibe:





Take a look at my interactive visualisation!

This treemap showcases how emotional positivity (measured by valence) is distributed across popular music at the genre level. Each tile represents a genre: the size of the tile encodes total popularity, and the colour shade encodes average valence (where higher values mean more emotional positivity). Created in R with the treemapify library, the chart distils hundreds of songs into a single, interactive landscape of mood and genre. The colour palette ranges from magenta (low valence) to cyan (high valence), aligning with the glow-wave aesthetic of the dashboard while clearly differentiating emotional tones.
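A treemapify sketch of this chart, assuming a `genre_summary` data frame with one row per genre and `total_popularity`, `mean_valence`, and `track_genre` columns (illustrative names):

```r
library(ggplot2)
library(treemapify)

ggplot(genre_summary, aes(
  area  = total_popularity,  # tile size = genre popularity
  fill  = mean_valence,      # tile colour = average positivity
  label = track_genre
)) +
  geom_treemap() +
  geom_treemap_text(colour = "white", reflow = TRUE) +
  scale_fill_gradient(low = "#FF00FF", high = "#00FFFF", name = "Valence")
```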

Some interesting points to note: 

- Dance music shines with higher valence and popularity, suggesting its strong emotional appeal. 
- Genres such as pop and reggaeton, although they are popular, show more moderate emotional positivity. 
- It was interesting to see that EDM and alt-rock were smaller and darker, which suggests that their popularity is niche and the genres have a tendency to be more intense. 

The idea of this visualisation is for users to explore how genres and moods work together; it pushes them to engage while thinking critically about the kind of emotional atmosphere different genres cultivate – and about how those genres make them feel when they listen to music.


Scatterplot: Duration vs Danceability





Take a look at my interactive visualisation!

To further explore the relationship between musical attributes and emotional positivity (valence), I created a scatterplot in RStudio to study how track duration relates to both danceability and valence. In this case:

- The x-axis represents the track duration in minutes
- The y-axis indicates danceability (a measure of how suitable the track is for dancing)
- Each point corresponds to a track
- The colour of each point reflects its valence (emotional positivity), using a gradient from magenta (low valence) to cyan (high valence)
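The mapping above translates into a short ggplot2 sketch. Column names (`duration_ms`, `danceability`, `valence`) follow the Kaggle dataset; `duration_min` is derived here for readability:

```r
library(ggplot2)

# Convert milliseconds to minutes for the x-axis.
spotify$duration_min <- spotify$duration_ms / 60000

ggplot(spotify, aes(duration_min, danceability, colour = valence)) +
  geom_point(alpha = 0.6) +
  scale_colour_gradient(low = "#FF00FF", high = "#00FFFF") +
  labs(x = "Duration (minutes)", y = "Danceability", colour = "Valence")
```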

The scatterplot reveals that most tracks cluster at shorter durations, typically between 2.5 and 4 minutes, which aligns with typical song lengths in popular music. Tracks with higher danceability also tend to exhibit higher valence (emotional positivity), visible in the cyan-coloured points in the upper-middle region of the plot.

On the other hand, longer tracks are less common and display greater variability in both danceability and valence. There is no clear linear relationship between duration and valence, which suggests that longer songs are not inherently more or less emotionally positive.



Bubble Chart: Valence and Speechiness by Track




Take a look at my interactive visualisation!

The bubble chart explores the relationship between valence (emotional positivity) and speechiness - how much a song resembles spoken word - at the individual track level.

In this case, each circle depicts a unique song, where the colour encodes valence – ranging from magenta for lower valence (darker or sadder moods) to cyan for higher valence (happier or more euphoric tones). The size of each bubble corresponds to speechiness, so larger bubbles indicate songs with more spoken-word elements, such as rap, experimental vocals, or interludes.

I created this visualisation in R using the packcircles library, which generates non-overlapping circular layouts based on the speechiness values. After calculating speechiness-based bubble sizes, valence was mapped to colour, and selected track names were wrapped and labelled inside the largest bubbles for readability.
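The packcircles layout step can be sketched as follows, assuming `spotify` carries `speechiness` and `valence` columns (track-name labelling for the largest bubbles is omitted here for brevity):

```r
library(packcircles)
library(ggplot2)

# Compute a non-overlapping circle layout where area ~ speechiness.
packing <- circleProgressiveLayout(spotify$speechiness, sizetype = "area")
circles <- circleLayoutVertices(packing, npoints = 50)

# Draw the circles, colouring each one by its track's valence.
ggplot() +
  geom_polygon(
    data = circles,
    aes(x, y, group = id, fill = spotify$valence[id]),
    colour = "black"
  ) +
  scale_fill_gradient(low = "#FF00FF", high = "#00FFFF", name = "Valence") +
  coord_equal() +
  theme_void()
```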

The chart reveals some interesting patterns: 

 - Larger magenta-toned bubbles typically represent tracks with high speechiness and lower positivity, suggesting a spoken, intense, or reflective mood. 

- Smaller cyan-toned bubbles reflect melodic, upbeat songs with low speechiness and high valence – often pop or dance songs. 

One clear takeaway is the emotional clustering that emerges visually – songs with minimal speech and high valence tend to group together, whereas speech-heavy tracks exhibit more emotional variability. 

This chart offers a vivid, engaging perspective on how songs balance mood and vocal style, turning abstract musical qualities into something we can see, compare and feel. 


Bar Chart: Comparing the Audio DNA of Three Popular Songs






Take a look at my interactive visualisation!

This bar chart visualises the normalised and rescaled audio-feature composition of three well-known songs – “La Bachata”, “Quevedo: Bzrp Music Sessions, Vol. 52”, and “Unholy (feat. Kim Petras)” – offering a side-by-side look at what makes each track acoustically distinct.

Each bar represents one song, broken into five stacked segments that show the relative contribution of different attributes in music: 

1. Valence (Emotional Positivity)
2. Danceability
3. Energy
4. Speechiness
5. Tempo

To ensure a fair comparison, all features were first normalised and then rescaled so that each song's feature shares sum to 100%. This method highlights the strength of each feature while focusing on its relative importance to the song’s overall sound.
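The normalise-then-rescale step can be sketched with dplyr and tidyr. This assumes the same `spotify` data frame and exact-match track names; min-max normalisation is one plausible reading of "normalised" here, not a confirmed detail of my script:

```r
library(dplyr)
library(tidyr)

features <- c("valence", "danceability", "energy", "speechiness", "tempo")

song_profiles <- spotify %>%
  # Min-max normalise each feature across the whole dataset.
  mutate(across(all_of(features),
                ~ (.x - min(.x)) / (max(.x) - min(.x)))) %>%
  filter(track_name %in% c("La Bachata",
                           "Quevedo: Bzrp Music Sessions, Vol. 52",
                           "Unholy (feat. Kim Petras)")) %>%
  pivot_longer(all_of(features),
               names_to = "feature", values_to = "value") %>%
  # Rescale so each song's five feature shares sum to 100%.
  group_by(track_name) %>%
  mutate(share = 100 * value / sum(value)) %>%
  ungroup()
```

`song_profiles` then feeds directly into a stacked `geom_col` with `fill = feature`.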

This visualisation was coded in R using ggplot2 and dplyr, with a consistent colour scheme so each feature is easily identifiable. The neon-inspired palette (magenta for valence, cyan for danceability, green for speechiness, orange for energy, purple for tempo) reinforces the glowing, music-themed aesthetic, similar to what is seen in Spotify Wrapped.

Some interesting facts: 
- “La Bachata” scores high on valence and danceability, reflecting its romantic, rhythmic feel.

- “Quevedo: Bzrp Music Sessions, Vol. 52” leans high on energy and tempo – ideal for hype and momentum.

- “Unholy” has the highest proportion of speechiness and tempo, suggesting a more narrative, stylised delivery, aligned with its experimental, theatrical tone. 

This bar chart helps the user visualise the attributes that make up each track and what sets it apart. It is particularly useful for showing how danceability and valence vary between tracks, inviting viewers to explore how energy and mood play out at a granular level.


Dashboard:




The purpose of this music dashboard is to provide a deeper understanding of how our listening habits intersect with our emotional states, favourite genres, and top artists. By tracking and visualising patterns in music consumption, the dashboard uncovers how different moods influence what we listen to—and how, in turn, music might shape our emotional landscape. It captures not only which songs or genres are most frequently played, but also the emotional valence associated with those choices.

Designed for a general audience of music lovers, casual listeners, and data enthusiasts, this dashboard makes it easy to reflect on personal trends through a data-driven lens. Whether you're curious about how your mood affects your music taste, interested in which genres dominate your listening history, or simply want to discover new insights about yourself, this tool offers an interactive and visual way to connect the dots between music and emotion. Ultimately, the dashboard turns passive listening into active self-awareness.
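The shiny-and-plotly structure behind the dashboard can be sketched as a minimal skeleton. The widget names, input IDs, and the reuse of the `spotify` data frame are illustrative assumptions, not the dashboard's actual code:

```r
library(shiny)
library(plotly)

# A genre selector driving one interactive plotly panel.
ui <- fluidPage(
  titlePanel("Sound and Sentiment"),
  selectInput("genre", "Genre", choices = unique(spotify$track_genre)),
  plotlyOutput("scatter")
)

server <- function(input, output, session) {
  output$scatter <- renderPlotly({
    d <- subset(spotify, track_genre == input$genre)
    plot_ly(d, x = ~duration_ms / 60000, y = ~danceability,
            color = ~valence, type = "scatter", mode = "markers")
  })
}

# Launch with: shinyApp(ui, server)
```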


There is significant scope to expand this project by deepening the emotional analysis of music and enhancing its relevance across different listener contexts. Potential directions include:

  • Tracking changes in valence, energy, or danceability over time to understand evolving emotional trends in listening habits.

  • Comparing mood-based metrics across different countries, regions, or languages to reveal cultural differences in emotional expression through music.

  • Analysing listener behaviour—such as engagement levels, skip rates, or replay frequency—based on a song’s emotional tone, to better understand what keeps people emotionally connected to music.

  • Gathering and analysing multiple years of personal listening data to observe long-term trends in music preferences and emotional states, and how they might shift with age, life events, or global moments (like the pandemic).

This project highlights how music analytics can offer nuanced insight into the emotional architecture of songs and shifting patterns in music consumption. By bridging personal mood with musical data, it opens new opportunities for self-reflection, audience segmentation, and even emotionally intelligent recommendation systems.

You can explore the full project, including code and interactive visuals, in my GitHub repository.