Last week’s Bonus Round: I know, I know, so easy, and yet so difficult. The answer, of course, is Cyndi Lauper singing an acoustic version of “Time After Time”. But where? Which performance? Which city? All right, ya filthy animals - here’s my go-to when I’ve written the post, realize I have no idea what the Bonus Round should be, and I just want to hit the publish button: I hit up Spotify’s Folk and Americana playlists, scroll and listen until I find something familiar and kinda cool, figure out some clues to go with it, and off we go.
Last week I found Cyndi Lauper’s “Time After Time - Live From Spotify NYC”, a beautiful, kinda acoustic version with a lap or pedal steel in the mix (maybe a slide, but it sounds more like a steel to me). Anyway, it’s a great version of one of my favorite songs by a great artist, from the compilation album “Cyndi Lauper - Spotify Sessions”.
Found this for readers who may not have a Spotify login. Different version, but with Ms. Lauper playing a lap steel - so cool!!
For today, I wanted to expand a little on a theme from yesterday’s “Special Edition” post, link here if you missed it:
Michael Acoustic Special Edition
In that post I included this note:
“* The “(YouTube algorithm?)” question is actually pretty relevant here. In this week’s regular post on Michael Acoustic I’ll link to a fellow Substack writer’s great insights into how recordings are sort of “normalized” depending on the media. I listened to all three YouTube versions of Second Avenue on a variety of speakers, from fairly high end studio monitors in my home studio, to iPhone, iPad and computer speakers and through a home theater system. All of those sounded notably different. I’ll have more details in this week’s regular Friday Michael Acoustic post.”
We’ve talked about recording your own music as an independent artist, especially if (like I do) you’re recording “stems” to be mastered by a pro audio engineer/producer in a professional studio. The “YouTube algorithm” I’m referring to, somewhat broadly, is definitely NOT the recommendation methodology YouTube uses to suggest videos to users. What I’m referring to is loudness normalization, which can affect the dynamics of your song’s YouTube video and its audio playback on other streaming services. So, no matter how you record, mix and master your song, on YouTube and most other streaming services it’s going to be normalized for loudness. There’s actually a reasonable purpose for this: when you scroll through and listen to videos, or audio-only songs on a playlist, the playback volume shouldn’t vary so widely that you’re constantly adjusting your device’s volume. Ok, fair enough. The loudness used for normalization is measured in something called “LUFS”, which stands for Loudness Units relative to Full Scale, and here’s essentially what happens:
“LUFS Targets on Music Services”
“Since services like Spotify and Apple Music have become the gatekeepers of most music, they each come with their own way of standardizing the listener’s experience.
One main way they do this is by targeting the same LUFS level for all music, and adjusting the gain as a result.
For example, if you’re making a tune at -6 LUFS, Spotify will turn it down to -14 LUFS if their target is so.
The problem is, Spotify just turns the volume down. But to get to -6 LUFS, you sacrifice a lot of dynamic range.”
Credit: edmprod.com, link to the full article here: LUFS: How To Measure Your Track’s Loudness in Mastering. The site is aimed at EDM (Electronic Dance Music) production, but the principles apply across genres. Worth the read, but be prepared to decipher some technical details.
Here’s another article that gives more insight into normalization: YouTube Changes Loudness Reference to -14 LUFS. It’s also a bit in the weeds, but unless you plan to record only at a studio and let the audio engineer/producer handle every aspect, it’s stuff you need to know something about.
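To make the math in that edmprod example a little more concrete, here’s a rough sketch of the arithmetic, purely for illustration. It’s NOT any streaming service’s actual code - the -14 LUFS figure is just the commonly cited target, and the function names are mine - but it shows the basic idea: measure the track’s loudness, compare it to the target, and turn the playback gain down by the difference.

```python
# Rough sketch of the loudness-normalization arithmetic described above.
# Not Spotify's or YouTube's actual implementation - just the basic math.

def normalization_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a service would apply to bring the track to its loudness target."""
    return target_lufs - track_lufs

def gain_to_linear(gain_db: float) -> float:
    """Convert a dB gain change into the linear factor applied to the audio."""
    return 10 ** (gain_db / 20.0)

# The edmprod example: a track mastered "hot" at -6 LUFS, platform target -14 LUFS.
gain_db = normalization_gain_db(-6.0, target_lufs=-14.0)
print(f"Playback gain: {gain_db:+.1f} dB")              # -8.0 dB (turned down)
print(f"Linear factor: {gain_to_linear(gain_db):.3f}")  # about 0.398

# Note: the service only turns the track DOWN; the dynamic range you
# squashed to reach -6 LUFS in mastering doesn't come back.
```

That last comment is the same point the quote makes: normalization only lowers the volume, so whatever dynamics you sacrificed to master loud are gone for good.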
That leads us to a recent article by fellow Substacker Dada Drummer on his Substack, “Dada Drummer Almanach”:
This is a great read, even if you don’t plan a CD or vinyl release of your song. Why? A couple of excerpts:
First a little background from the “Drummer”: “As we all got used to digital, engineers learned to mix differently for the CD – the high end that went into the master was going to stay that way, so you had to make sure it sounded right at the start. You could also load the bass way more heavily than before, because you didn’t have to worry about bouncing a needle out of its groove.”
“Now we get to a nutty problem about the vinyl revival. If an album was originally mixed with digital reproduction in mind, because it was made in the era of CDs… and you now put that master through the analog reproduction process for vinyl without compensating… you get a muddy sounding record, too heavy in the low end and without sparkle in the highs.”
Next, sort of the “heart of the matter” (“them” refers to your song or songs heard on different devices) - again from Dada Drummer: “When we listen to them in the car, they have no bass because the car has so much of its own.
When we listen to them at home, they sound one way in the dining room where we have small speakers, and one way in the living room where we have bigger ones, and one way in our office where we have a boombox.
They sound different on LP and on CD, and via digital download at full resolution.
And they always sound worst streaming! (Because of lossy compression. That’s a story for another day.)”
So that goes back to my “note” from yesterday’s post about how the three versions of “Second Avenue” sounded noticeably different depending on which speakers I was listening to.
Here’s my monitor “stack” in my converted bedroom studio:
From the top: Yamaha HS8, PreSonus E3.5 “near field” speaker, Avantone Cube. There’s also an old Apple computer speaker that I bought a loooong time ago to go with one of those egg-shaped colorful iMacs back in the day. The red thing is a pencil sharpener (yep, I still use them!). There’s an identical stack on the left side of the desk, and I’m taking the picture from roughly the “listening point” - the opposite corner of the equilateral triangle formed by the two speaker stacks and where I sit when I listen. Ideally, the listening point sits about 3 to 5 feet from each speaker. Here’s an article that gives more detailed information on the importance of establishing the listening point correctly:
I listened to each of the songs on each of the speaker pairs separately, muting all but one set of speakers at a time. The difference was quite large. The Yamahas give a bassy response (as you can imagine from the woofer size). The PreSonus give a balanced and reasonably good sound “fit” for the room. The Avantones are known for telling you the “truth” of the mix - if it sucks on them, it sucks. I also listened on my laptop, my phone and through my home theater television setup. Every one was different. I didn’t bother with my home audio system since I didn’t have the CDs or vinyl for any of the songs and would have had to listen via Bluetooth. No point.
Bottom line: if you’re an independent artist recording your own stems for pro production, it’s probably a good idea to have an in-depth discussion with your producer/audio engineer (ideally even before you record your stems - it might alter your approach) about how the master will be finalized: more toward streaming, more toward CD/vinyl, or some compromise between the two, if that’s possible and, ultimately, satisfactory.
Admittedly, this post is going to be a little esoteric for some, maybe many, of my readers (thank you for your patience!!), but if you know someone who is recording, or thinking of recording, and releasing their own music, by all means hit the “share” button and maybe help them a bit.
(Convenient much?)
Bonus Round: A great song from a “Jam” guitarist/vocalist
Cheers and keep playing!!
Michael Acoustic