AV Macs & Framegrabbing - the inside story.
-------------------------------------------

Please feel free to correct any points in this message you know to be wrong.

Over the past two months I have been agonising over whether to buy an AV Mac for the Physics Lab I work in. The aim would be to grab greyscale images of diffraction patterns and analyse the relative intensities of diffraction spots. This requires digitising into 256 grey levels to provide the necessary greyscale resolution.

To see whether the AVs were suitable for this I looked in detail at the hardware specs of these machines as described in Apple's developer note. This note gives a circuit diagram of the digitiser hardware, including chip numbers; Apple have used a number of Philips chips to handle the digitisation. I went out and bought the Philips technical notes for these chips, and what follows is a summary of the AV Macs' digitisers. It may be that Apple are using these chips in a non-standard way that would invalidate some of the comments I am about to make, but there appears to be no detailed information, other than that found in the developer note, on the built-in digitiser's hardware and performance.

(1) The chip set used for digitisation in the AV Macs is primarily intended for standard TV applications, where the quality and faithful rendition of the digitised image are judged by the response of the human eye, which is fairly tolerant.

(2) The 8-bit (256-level) analog-to-digital converters (ADCs) (Philips chips TDA8708 & TDA8709) that digitise the incoming video signal also digitise the synchronisation pulses in the video signal. In particular, the first 64 grey levels are used for the sync pulses, and the top 16 grey levels are never used. This means that the effective resolution available for video digitisation is only 176 levels. That may be fine for TV applications, but it is suitable for only a limited range of scientific purposes.

(3) The ADCs have an automatic gain control (AGC) feature which appears to be enabled. This automatically alters the gain of the ADCs depending on the average level of the video signal: as the images get darker the gain goes up, and as they get lighter the gain goes down. There appears to be no way to adjust the gain or contrast of the digitised image from software (see Apple's own VideoMonitor, where the brightness & contrast controls have little or no effect).

(4) The digitised video is then passed to another Philips chip (SAA7191), which converts the input digital video standard (PAL, NTSC, SECAM etc.) into a standard YUV 4:2:2 signal as defined by the CCIR-601 standard. During this process it maps the digitised luminance (brightness) levels from 64 -> 240 (as produced by the ADCs) to 16 -> 235, which corresponds to the black -> white level range. In other words it interpolates from 176 levels of grey to 220 levels of grey (a rough numerical sketch of this, and of the gamma correction in point (5), follows below). This is definitely bad news for scientific image grabbing where true 8-bit images are required. The chrominance (colour) levels are mapped in a similar way, although chrominance is unimportant for greyscale images.

(5) The final chip (Philips SAA7186) converts the YUV 4:2:2 digital signal into RGB suitable for stuffing into video memory. It can apply an optional gamma correction of 1.4 to the digitised image, thus changing the linearity of the luminance signal. The gamma correction may be turned off, but this depends on the software being used.
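To put rough numbers on points (2) and (4), here is a minimal sketch of the level arithmetic. The constants (64 sync codes, 16 unused top codes, the 64->240 to 16->235 remap) come from the description above; the simple linear remap with rounding is my assumption, not Apple's or Philips' actual implementation.

```python
# Rough sketch of the AV Mac digitiser level ranges described above.
# Constants are from the post; the linear remap is an assumption.

SYNC_LEVELS = 64    # bottom ADC codes taken up by the sync pulses (TDA8708/9)
HEADROOM = 16       # top ADC codes that are never reached
FULL_RANGE = 256

effective_levels = FULL_RANGE - SYNC_LEVELS - HEADROOM
print(f"ADC codes usable for picture content: {effective_levels}")   # -> 176

def remap_luma(code):
    """Assumed linear map of an ADC luminance code (64..240) onto CCIR-601 (16..235)."""
    return round(16 + (code - 64) * (235 - 16) / (240 - 64))

remapped = [remap_luma(c) for c in range(64, 241)]
gaps = [b - a for a, b in zip(remapped, remapped[1:])]
print(f"{len(remapped)} ADC codes map onto CCIR-601 codes {remapped[0]}..{remapped[-1]}")
print(f"adjacent ADC codes land {min(gaps)}-{max(gaps)} output codes apart")
# The output range is stretched to 220 codes but contains gaps: no real
# greyscale resolution is gained by the interpolation.
```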
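Similarly, a small sketch of what the optional gamma correction in point (5) does to relative intensity measurements of the kind I need for diffraction spots. A plain power law is assumed here; the SAA7186's exact transfer curve may differ, and the spot values are invented for illustration.

```python
# Sketch of how an uncorrected gamma of 1.4 distorts relative intensities.

GAMMA = 1.4

def gamma_correct(luma_normalised):
    """Apply an assumed power-law gamma of 1.4 to a luminance value in 0.0..1.0."""
    return luma_normalised ** (1.0 / GAMMA)

# Two hypothetical diffraction spots whose true intensity ratio is 2:1.
spot_bright, spot_dim = 0.8, 0.4
ratio_true = spot_bright / spot_dim
ratio_measured = gamma_correct(spot_bright) / gamma_correct(spot_dim)

print(f"true intensity ratio:       {ratio_true:.2f}")      # 2.00
print(f"ratio after gamma of 1.4:   {ratio_measured:.2f}")  # about 1.64
# Unless the gamma stage can be disabled (or its curve known and undone),
# the measured ratios no longer reflect the true spot intensities.
```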
Conclusions:

(1) The built-in digitisers of the AV Macs are not suitable for scientific image grabbing where a true rendition of the image's INTENSITY is required at 8-bit resolution. However, they are suitable for scientific applications which are not sensitive to brightness levels, e.g. finding the shapes, areas and sizes of objects, detecting and counting particles, recognising markings, etc.

(2) There is no hardware or software fix for these problems, other than buying a suitable NuBus frame grabber card.

Suggestions for scientists:

MAKE SURE YOU KNOW EXACTLY WHAT YOUR DIGITISER DOES WHEN IT DIGITISES YOUR VIDEO SIGNAL. If it is an 8-bit digitiser then its digitisation range must run from the video signal's black level voltage to its white level voltage, giving a 0 -> 255 digital range. If it digitises the sync pulses, this will reduce the black -> white digitisation range. Any gamma correction will degrade the quality of your data (unless you can turn it off). Any AGC will degrade the quality of your data (unless you can turn it off).

This summary is based on a great deal of investigation into the hardware that the AV Macs use for digitisation. I hope it is all correct, but I'm more than happy to be corrected if it's not. Please mail me with any comments you may have that might clarify or add to those described above.

Thanks to Nick Safford for his comments and corrections.

Cyrus Daboo,
University of Cambridge, UK.
Tel: +44 223 337006
e-mail: cd102@phy.cam.ac.uk