If you are doing anything more than the low cut, I’d suggest putting Pro-Q 3 after Pro-MB. A low cut makes sense in front of it, but if you want to actually equalize the mix you’d want that EQing to happen after compression.
As stated, normalization should happen after all your eq + compression.
I disagree. EQ should always come before compression. You don't want to be compressing based off of information that will not make it into the final mix.
EQ CAN go before compression, but the impact is different.
Compressors are not linear operators. This means that compressor > EQ sounds different from EQ > compressor (as opposed to two linear processors, like EQ > sample delay, where the order won’t matter).
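A quick numerical sketch of that non-commutativity. These are stand-ins, not real processors: the “EQ” here is simplified to a broadband gain boost, and the compressor is a static, sample-by-sample hard knee with no attack/release:

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Static hard-knee compressor on sample magnitudes (no attack/release)."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    # Anything over the threshold is pulled back toward it by the ratio.
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def eq_boost(x, gain_db=6.0):
    """Stand-in for an EQ boost: broadband gain (a real EQ is frequency-selective)."""
    return x * 10 ** (gain_db / 20.0)

x = np.array([0.1, 0.4, 0.7])
a = compress(eq_boost(x))   # EQ -> compressor
b = eq_boost(compress(x))   # compressor -> EQ
print(a)
print(b)
```

The two orders agree on the quiet first sample (the compressor never engages in either path) but disagree on the louder ones, which is exactly the non-linearity in action.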
The practical difference with EQ before compression is that EQ boosts will kick the compressor harder, causing it to clamp sooner (and, depending on the ratio you set, changing the amount of clamping non-linearly). This can be useful with cut filters - a low cut before a compressor helps keep bass from “mushing up” the compressor, and such a filtered path is frequently used as a side-chain input, as on API’s “Thrust” control. In the boost case, it can be useful for getting the compressor to “pump” more when a certain frequency region dominates the compressor’s triggering.
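The side-chain idea can be sketched like this: the compressor computes its gain from a detector signal, but applies that gain to the unfiltered audio. This is a minimal static model (no attack/release), and the “low-cut detector” values are just assumed numbers rather than an actual filter:

```python
import numpy as np

def compress_with_sidechain(x, detector, threshold=0.5, ratio=4.0):
    """Static compressor: gain is computed from `detector` but applied to `x`.

    Feeding a high-passed copy of the signal in as `detector` (Thrust-style
    sidechain filtering) keeps heavy bass from dominating the gain reduction
    while the full-range signal still passes through.
    """
    mag = np.abs(detector)
    gain = np.ones_like(x, dtype=float)
    over = mag > threshold
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

x = np.array([0.9, 0.3])          # full-range signal, bass-heavy first sample
detector = np.array([0.2, 0.3])   # same signal after a (hypothetical) low cut

y = compress_with_sidechain(x, detector)
print(y)   # the loud bass sample passes unattenuated: its detector
           # value (0.2) never crossed the threshold
```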
Putting EQ after a compressor means the compressor does its non-linear gain business first, and any changes to equalization map to a linear boost or cut.
For the thought experiment, imagine you have a bell (peaking) filter with a 6 dB boost at 1 kHz and a Q/resonance of .707, and a compressor with a 4:1 ratio. Assuming the signal at 1 kHz is 2 dB above the compressor’s threshold, putting the EQ before the compressor means the compressor sees 8 dB of overshoot and applies 6 dB of gain reduction (pulling it back to 2 dB over); in the reverse order, the compressor applies only 1.5 dB of gain reduction, with a 6 dB boost at 1 kHz after the fact.
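The arithmetic, assuming a hard-knee compressor where an overshoot of O dB at ratio R:1 comes out at O/R dB over threshold, so gain reduction is O − O/R:

```python
def gain_reduction_db(overshoot_db: float, ratio: float) -> float:
    """Static hard-knee gain reduction: GR = O - O/R = O * (R - 1) / R."""
    return overshoot_db * (ratio - 1.0) / ratio

# EQ first: 2 dB overshoot + 6 dB boost = 8 dB above threshold
print(gain_reduction_db(8.0, 4.0))   # 6.0 dB of gain reduction

# Compressor first: only the original 2 dB overshoot triggers it,
# and the 6 dB boost at 1 kHz happens after the fact
print(gain_reduction_db(2.0, 4.0))   # 1.5 dB of gain reduction
```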
Both methods have their utility. In general, EQ before compression is to color the compressor’s response; compression before EQ is using the compressor to reduce volume spikes, with EQ then used to balance the mix.
Now, if you’re talking about peak limiting that’s a different story, but why you’re peak limiting then normalizing is a bit head-scratching.
Lastly, I’d avoid broad generalizations such as “EQ should always come before compression” - there is far more nuance to it than such a statement allows. These are tools, and knowing how to use them means familiarity and creativity; breaking the rules a bit can be useful. And that’s ignoring the studio world, where making that claim would get you fired on the spot from the vast majority of productions. I don’t intend to beat you up about this too much - it’s coming more from a place of “I too am guilty of making broad generalizations, and it tends to bite me in the ass more often than not.” I’d rather offer you some insight into how I think about these things, so we can both improve our craft.
For some context: I have spent nearly half of my life doing studio production, I hold a doctorate in audio DSP applied to binaural hearing theory and room acoustics, and I’m currently a high-level filter designer for a certain fruit-named tech company. Conversely, I am still very green to taping, and I’ve learned more from tapers such as yourself - and in the trenches battling wooks - than from the majority of my former professors. I just love all things sound and talking about them, and it seems you do too.