Well, plus the PCM-F1 and the SL-2000 (the companion video recorder) each required substantial operating current, and the rechargeable battery that they each worked with could only hold about an hour's charge. So you either had to create a big, heavy external battery arrangement, or else pre-charge and swap out a pair of roughly paperback-book-sized NiCads every hour.
Plus, the mike inputs of the F1 were unbalanced, and could be overloaded rather easily; they weren't designed for professional condenser microphones. And while the "PEAK" LEDs on the meters were reliable*, the meters themselves were not; you would sometimes see "PEAK" indications while the meters registered only -4 dB.
The boundaries between professional and consumer recording equipment were clearer back then, with the F1 being decidedly on the consumer side, even though a lot of professionals (including me) bought and used them. There were other compatibility issues between consumer digital and professional digital as well (see P.S. below if you're interested).
--best regards
______________
* What that meant in those days was: If _three or more_ consecutive digital samples reached full scale, either positive or negative, the PEAK indicator would flash. But the Red Book standard required there to be _no_ full-scale values at all in an entire CD tape master (in fact it reserved the lowest-order four bits as a safety zone, so even values a tiny fraction of a dB below full scale were forbidden). And the PEAK indicators didn't flash in playback--only in record.
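In code terms, that rule amounted to something like this little Python sketch (my own reconstruction of the behavior, per channel--not anything out of a Sony service manual):

    # Rough reconstruction of the PEAK-LED rule described above, per channel.
    # "Full scale" here means the 16-bit rails; whether the real circuit
    # required the run to stay on one rail, I no longer recall.
    FULL_SCALE = (32767, -32768)

    def peak_would_flash(samples, run_length=3):
        run = 0
        for s in samples:
            run = run + 1 if s in FULL_SCALE else 0
            if run >= run_length:
                return True
        return False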
So even if you recorded an entire performance without a single PEAK indication, you still didn't know whether your recording could be used as a CD tape master. To avoid rejections at the pressing plant, tape masters were sometimes recopied with a gain just below unity, even though this added a tiny amount of noise and violated the idea of bit-transparency.
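To see why that recopying trick worked, and what it cost, here's a toy illustration; the 0.999 gain is just an example figure, not anything a particular mastering house actually used:

    GAIN = 0.999   # example value only

    def recopy(samples, gain=GAIN):
        # scale, round back to integers, and clamp to the 16-bit range
        return [max(-32768, min(32767, int(round(s * gain)))) for s in samples]

    original = [32767, 20000, -32768, 12345]
    copy = recopy(original)
    print(copy)               # [32734, 19980, -32735, 12333]
    print(copy == original)   # False: no longer bit-identical, but no full-scale samples left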
and the promised P.S.:
[1] In the United States, Japan, and a few other countries, the frame rate for NTSC color video was slightly modified from the standard frame rate (30 frames per second) of black-and-white video. The "pseudo-video" signal from consumer digital audio adapters such as the PCM-F1 contained no chroma signal, but consumer videotape recorders by and large were designed for color video recording only. So the NTSC version of the PCM-F1 sold in the U.S. and Japan actually ran not at the 44,100 Hz sampling rate of the CD medium, but at 44,056 Hz, so that the samples could be encoded at the 29.97 Hz frame rate of NTSC color video. If you then took an F1 recording and "bumped it up" to PCM-1600/1610 format for CD mastering, there was a pitch shift of about 1/50 of a musical semitone. Some musicians (hothouse plants that we are) could actually hear that.
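If you want to check the arithmetic yourself (assuming the 44,056 Hz samples were simply clocked out at 44,100 Hz, with no rate conversion along the way):

    import math

    f1_rate = 44056.0   # NTSC-locked PCM-F1 sampling rate
    cd_rate = 44100.0   # Red Book / PCM-1610 sampling rate

    # Clocking the 44,056 Hz samples out at 44,100 Hz raises every
    # frequency by the ratio of the two rates.
    cents = 1200.0 * math.log2(cd_rate / f1_rate)
    print(f"{cents:.2f} cents sharp")   # about 1.73 cents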
[2] The original version of the PCM-F1 had only one A/D converter, which was shared between the two channels. A "sample-and-hold" circuit alternated between the left and right channels at twice the sampling rate and fed its output to that shared ADC; the output of the ADC was then buffered, interleaved, and formatted into lines of "pseudo-video" for recording. (If you recorded a pure mono signal, the digital contents of the left and right channels of that recording were never exactly identical, for this reason.) During playback the process was reversed, so that each stored pair of left and right digital samples was fetched from the de-interleave buffer and converted with the same 1/88,200-of-a-second delay between them. Thus the simultaneity of the two channels was restored in the analog output even though it didn't exist in the digital samples stored on tape.
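Here's a toy sketch of that multiplexing, just to show why a mono input never produced bit-identical left and right data (the 1 kHz tone and the exact rate are arbitrary choices of mine):

    import math

    FS = 44100.0             # per-channel sampling rate (illustrative)
    OFFSET = 1.0 / 88200.0   # right channel sampled half a sample period later

    def mono_source(t, freq=1000.0):
        return math.sin(2.0 * math.pi * freq * t)

    for n in range(4):
        left = mono_source(n / FS)
        right = mono_source(n / FS + OFFSET)   # same signal, sampled 1/88,200 s later
        print(f"{left:+.6f}  {right:+.6f}")    # close, but never exactly equal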
However, in the professional realm there were always separate A/D and D/A converters for each channel, and the left and right channels were sampled simultaneously. Thus any PCM-F1 recording that got "bumped up" to PCM-1600 format for CD mastering always had that tiny time lag between channels. It was too small to be audibly significant in most stereo recordings, but caused some phase cancellation at high frequencies in mono material if the channels were summed in playback--and if you knew about it and were a perfectionist, it was bothersome to the conscience.
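To put rough numbers on that cancellation (treating the bump-up as simply carrying the half-sample offset along into the summed mono signal):

    import math

    dt = 1.0 / 88200.0   # the interchannel offset: half of one 44.1 kHz sample period

    # Summing two identical signals with a relative delay dt gives a level,
    # relative to a perfectly coherent sum, of 20*log10(|cos(pi*f*dt)|).
    for f in (1000.0, 10000.0, 20000.0):
        loss_db = 20.0 * math.log10(abs(math.cos(math.pi * f * dt)))
        print(f"{f:7.0f} Hz  {loss_db:6.2f} dB")
    # roughly -0.01 dB at 1 kHz, -0.6 dB at 10 kHz, -2.4 dB at 20 kHz;
    # the first complete null would sit at 44,100 Hz, above the audio band.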
Also, on a technical level, the alternation between channels in the consumer adapters, plus the switching interval on each half-cycle (during which the sample-and-hold had to be disconnected entirely from the incoming signal to avoid garbage from the switching process itself), cut the settling time of the circuit to less than half of what it was in the professional equipment--thus adding several dB of noise to the analog signal prior to conversion. That's part of why these early 16-bit adapters could never achieve the full signal-to-noise ratio of the CD medium.
It's also why we old curmudgeons get testy when you young whippersnappers talk as if 16 stored bits automatically meant a 16-bit dynamic range. That misconception is very convenient for people who sell "24-bit" digital audio recorders, but I can confidently predict that in my lifetime _or_ yours, there will never be one with an actual 24-bit dynamic range from analog input to output, or even from analog input to digital output. It's also why a higher sampling rate, back in the days of linear/ladder A/D converters, meant WORSE audio quality rather than better (less time to settle on each sample meant more conversion noise)--something that was audibly demonstrable, but that the audiophile magazines (The Absolute Sound especially) got completely wrong, even though they were the ones who supposedly used their ears rather than specifications to judge everything.
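For the record, the usual back-of-the-envelope figure for an ideal converter--quantization noise only, nothing about the analog electronics in front of it--is about 6.02*N + 1.76 dB for N bits:

    def ideal_dynamic_range_db(bits):
        # full-scale sine vs. quantization noise for an ideal N-bit converter
        return 6.02 * bits + 1.76

    print(ideal_dynamic_range_db(16))   # ~98 dB
    print(ideal_dynamic_range_db(24))   # ~146 dB -- a figure no real analog chain
                                        # (mike preamp, sample-and-hold, the converter's
                                        # own thermal noise) comes anywhere near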
(Damn--that grudge is now almost 40 years old and I'm still carrying some of it ...)