The IEC standard playback equalization for type II tapes is 70 microseconds, but early Sony decks and some others used 120 microseconds, just as with type I tapes. That cost about 4 dB of signal-to-noise improvement but allowed greater headroom extension in program material weighted toward high frequencies; the tape was that much less likely to saturate.
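For anyone curious where those numbers come from: an EQ time constant tau corresponds to a corner frequency of 1/(2*pi*tau), and at high frequencies the gap between the 120 and 70 microsecond playback curves approaches 20*log10(120/70), about 4.7 dB. Here's a quick Python sketch of the arithmetic (my own back-of-the-envelope model, treating the HF playback EQ as a simple first-order shelf and ignoring the 3180 microsecond bass time constant):

    import math

    def corner_hz(tau):
        """Corner frequency (Hz) for an EQ time constant tau (seconds)."""
        return 1.0 / (2.0 * math.pi * tau)

    def hf_boost_db(f, tau):
        """High-frequency playback boost (dB) of a first-order shelf at f Hz."""
        return 10.0 * math.log10(1.0 + (2.0 * math.pi * f * tau) ** 2)

    tau70, tau120 = 70e-6, 120e-6   # IEC type II/IV vs. type I time constants

    print(f"70 us corner:  {corner_hz(tau70):.0f} Hz")    # ~2274 Hz
    print(f"120 us corner: {corner_hz(tau120):.0f} Hz")   # ~1326 Hz

    # The gap between the two playback curves approaches
    # 20*log10(120/70) ~= 4.7 dB at high frequencies -- the rough
    # "4 dB" noise/headroom trade described above.
    for f in (5e3, 10e3, 15e3):
        gap = hf_boost_db(f, tau120) - hf_boost_db(f, tau70)
        print(f"{f/1e3:.0f} kHz: 120 us boosts {gap:.2f} dB more than 70 us")

The extra boost the 120 us curve applies on playback is extra amplified tape hiss, which is the S/N penalty; conversely, the 70 us curve demands more record pre-emphasis to stay flat, which is where the HF headroom goes.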
There was also a long period in which Nakamichi defined 70-microsecond equalization differently from every other cassette recorder manufacturer on Earth, with the result that their machines played back type II and type IV tapes about 2 dB brighter than everyone else's machines did. That made for interesting A/B comparisons if you were shopping for a cassette deck and the salesperson played the same tape back on a Nakamichi versus any other machine. Nakamichi even put out a "white paper" explaining why their interpretation was right and everyone else's (including the official DIN standard calibration tapes from BASF) was wrong.
Then, without explanation and after a number of years, they split the difference and brought their decks into much closer conformity with the rest of the world.
Aren't you glad you asked?
--best regards
P.S.: jlykos, I'm fairly sure that any Sony cassette decks that ever had front-panel EQ adjustments were letting you set the record EQ for flat response, just as in some cases you could set the record bias. The 70-microsecond EQ standard, however, governs the deck's playback characteristic, which isn't normally user-adjustable.