I did this some time back and all seems well; I just wanted to run it by the board to see if I was on the right track.
I have a speech amp / modulator driver similar to the 1957 handbook design: crystal mike input, two pentode gain stages, and a triode driver. I wanted to drive it with a 0 dBm input signal, so I tried a voltage divider on the mike input, and sure enough it worked. Then I thought: why take a high-level signal and attenuate it, just to run it back through a high-gain stage? I remember reading "design the speech amp with a little more gain than is needed to fully modulate the carrier." Copious amounts of unneeded gain just raise the noise level, so why use it?
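The divider arithmetic is simple enough to sketch; the resistor values and the ~10 mV crystal-mike target level below are purely illustrative assumptions, not the actual parts used:

```python
# Unloaded voltage divider: Vout = Vin * R2 / (R1 + R2)
vin = 0.775            # Vrms, a 0 dBm source into 600 ohms
r1, r2 = 47_000, 620   # ohms, assumed divider values (hypothetical)
vout = vin * r2 / (r1 + r2)
print(f"divider output: {vout * 1000:.1f} mVrms")  # ~10 mV, rough crystal-mike ballpark
```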
So on to the next gain stage (pentode). Still too much gain. After some calculations I decided to keep the stock pentode but reduce its gain by connecting it as a triode. I checked the DC operating point of the tube and it looked reasonable for class A. Then I applied 0.700 Vrms at 1 kHz from the signal generator through a coupling cap to the grid, and bingo: 95% modulation of the carrier!
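For anyone wanting to estimate the gain drop from triode-strapping, the usual unbypassed-cathode triode formula applies; every number below is an illustrative assumption, not taken from the actual tube:

```python
# Triode voltage gain with unbypassed cathode:
#   A = mu * RL / (rp + RL + (mu + 1) * Rk)
mu, rp = 20.0, 10_000.0        # assumed triode-connected mu and plate resistance
RL, Rk = 100_000.0, 1_500.0    # assumed plate load and cathode bias resistor
gain = mu * RL / (rp + RL + (mu + 1) * Rk)
print(f"stage gain ~ {gain:.1f}")  # far below the pentode-connected figure
```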
So the 0.700 Vrms input leaves some headroom for a 0 dBm input signal. I checked the symmetry of the test signal, input vs. output, and it's OK up to about 1.0 Vrms in; the amp's -3 dB points are around 100 Hz and 15 kHz.
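For reference, the 0 dBm-to-Vrms conversion and the resulting headroom work out like this (assuming the conventional 600-ohm reference impedance for dBm):

```python
import math

R = 600.0                          # ohms, dBm reference impedance
p_w = 10 ** (0 / 10) * 1e-3        # 0 dBm = 1 mW
v_0dbm = math.sqrt(p_w * R)        # Vrms corresponding to 0 dBm into 600 ohms
headroom_db = 20 * math.log10(1.0 / v_0dbm)  # clean up to ~1.0 Vrms in
print(f"0 dBm into 600 ohms = {v_0dbm:.3f} Vrms")     # 0.775 Vrms
print(f"headroom above 0 dBm ~ {headroom_db:.1f} dB") # ~2.2 dB
```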
I would rather not use a transformer for input isolation, so I ended up with a 10 uF coupling cap and a 600 ohm resistor from the grid to ground. The signal generator and the speech processor both drive it fine and it sounds clean.
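One thing worth checking with a 10 uF cap into 600 ohms is the low-frequency corner of that coupling network; it lands well below the amp's 100 Hz point (this ignores the source impedance, which would only push the corner lower):

```python
import math

C = 10e-6   # F, coupling cap
R = 600.0   # ohms, grid resistor
fc = 1 / (2 * math.pi * R * C)  # -3 dB high-pass corner of the RC network
print(f"coupling-network corner: {fc:.1f} Hz")  # ~26.5 Hz
```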
Effectively I just matched the tube to the previous stage: a source of 0.775 Vrms with an internal resistance of 600 ohms. Are there any issues, as far as the cathode-biased tube is concerned, with using such a low resistance from grid to ground? I've just never seen it before. Comments and suggestions welcome.
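As a sanity check on the matching, 0.775 Vrms across that 600 ohm grid resistor dissipates exactly the 1 mW / 0 dBm reference level:

```python
import math

v = 0.775   # Vrms across the grid resistor
R = 600.0   # ohms
p_mw = v ** 2 / R * 1000      # power dissipated in the resistor, mW
dbm = 10 * math.log10(p_mw)   # relative to 1 mW
print(f"{p_mw:.3f} mW = {dbm:+.2f} dBm")  # ~1 mW, ~0 dBm
```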
Thanks Again
Ted
KC9LKE