
On Critical Testing of Central Vision






Here, the term critical testing refers to the detection of minimal abnormality when conventional measuring values remain inside normal limits. It is highly legitimate to ask whether abnormality actually can be recognized under such conditions. Provided that the test procedures are tweaked a bit, I hold that the answer is affirmative.

Consider the ultimate task of vision tests in neuro-ophthalmology, namely, to detect defects in the neuro-retinal matrix of receptive fields. It is well known that the retina samples the image of the outside world discretely, by means of a seamless matrix of non-overlapping receptive fields (top left panel in the above figure). Most defects of the neural matrix can be modelled in terms of deformation or dislocation of matrix members (top right panel) or, more commonly, dysfunction or actual loss (bottom panels). The model is a vast over-simplification, of course, but may serve as a first step towards coming to grips with the immensely complex machinery of normal and abnormal vision. A good next step is to regularly browse the Journal of Vision, which is available for free on the Internet.
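
For the concretely minded, the model can be paraphrased in a few lines of code. The sketch below is merely illustrative; the names and the four-state classification are my own shorthand for the categories just described:

```python
# A minimal sketch of the neural-matrix model described above; the names and
# the four-state classification are illustrative only.
from dataclasses import dataclass
from enum import Enum, auto

class UnitState(Enum):
    NORMAL = auto()         # intact receptive field and neural channel
    DISLOCATED = auto()     # deformed or displaced field (retina only)
    DYSFUNCTIONAL = auto()  # attenuated, temporally scrambled signalling
    DISCONNECTED = auto()   # channel severed; mimics loss of the field itself

@dataclass
class MatrixUnit:
    row: int
    col: int
    state: UnitState = UnitState.NORMAL

def make_matrix(rows: int, cols: int) -> list[list[MatrixUnit]]:
    """A seamless matrix of non-overlapping receptive-field units."""
    return [[MatrixUnit(r, c) for c in range(cols)] for r in range(rows)]

def hole_count(matrix: list[list[MatrixUnit]]) -> int:
    """Units that no longer relay useful signals, i e, the 'holes' that
    rarebit testing (discussed below) aims to probe for."""
    return sum(unit.state in (UnitState.DYSFUNCTIONAL, UnitState.DISCONNECTED)
               for row in matrix for unit in row)
```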

In the neural matrix model, deformations and dislocations refer to changed positions and orientations of the neural input units, which result in impaired capture of light and distorted neural imagery. Dysfunction, on the other hand, is not restricted to input units but may take place anywhere along the neural chain (the "neural channel") that connects each retinal receptive field with the primary visual cortex. Unless dysfunction is symmetrical across all channels (a rare event), it will result in localized attenuation and temporal scrambling of neural signals. Disconnection, finally, means that receptive fields have lost contact with the cortex, usually through destruction of one or more components in the neural chains. From the vision test point of view, dysfunctional and disconnected neural channels mimic lesions of the receptive fields themselves. In the present context, no distinction needs to be made concerning the localization of lesions, except for deformation/dislocation, which can take place in the retina only.

The schematic display below may help to visualize what neural sampling with a depleted matrix might be like. The left panel contains four frames representing various degrees of matrix depletion. Select a frame by clicking and then drag it to the right, over the acuity chart. It can immediately be seen that small degrees of depletion barely affect legibility, but the more severe the depletion, the more difficult it becomes to read the chart. However, note that letters that appear fragmented beyond recognition may become decipherable if the depletion frame is moved back and forth. The clinical corollary is that an eye with a defective matrix may improve its performance if it is allowed to use eye movements to sweep the retinal image over its neural matrix. Here, then, is a first hint for critical testing: prevent gains from scanning eye movements by limiting presentation time. Short presentation times have the added benefit of reducing any influence of after-images and Troxler effects.
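
The gain from scanning can also be appreciated numerically. The sketch below is illustrative only and is not the code behind the interactive panel: it knocks out half of a simulated matrix at random and compares how much of an image is captured with a fixed gaze versus with a few small shifts of the image over the matrix.

```python
# An illustrative sketch (not the code behind the interactive panel): knock
# out half of a simulated matrix at random, then compare how much of an image
# is captured by a fixed gaze versus by a few small scanning shifts.
import random

SIZE, DEPLETION = 40, 0.5       # 40 x 40 matrix, 50% of units lost
random.seed(1)
alive = [[random.random() > DEPLETION for _ in range(SIZE)]
         for _ in range(SIZE)]

def coverage(shifts):
    """Fraction of image points seen by at least one live unit across
    the given set of (dx, dy) shifts of the image over the matrix."""
    seen = 0
    for y in range(SIZE):
        for x in range(SIZE):
            if any(alive[(y + dy) % SIZE][(x + dx) % SIZE] for dx, dy in shifts):
                seen += 1
    return seen / SIZE ** 2

print(coverage([(0, 0)]))                          # fixed gaze: about 0.5
print(coverage([(0, 0), (1, 0), (0, 1), (1, 1)]))  # small scan: above 0.9
```

With half of the units gone, a single fixation captures only about half of the image points, whereas four one-unit shifts together capture on the order of 90%, which is the numerical counterpart of the moving-frame observation above.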


The above display illustrates another feature that may help to improve the ability of central vision tests to detect small degrees of damage. Clearly, the visual system has considerable tolerance to fragmentation of optotypes. In other words, optotypes (and most other targets used in clinical vision testing) contain a vast excess of information. So, a second hint for improving sensitivity is to reduce the amount of available information. This latter approach is taken to its extreme in rarebit testing.

A bothersome aspect of all vision tests, or at least all vision tests that provide finely graduated scales, is the immense width of normal limits. However refined the test tool, and however refined the measuring protocol, different normal individuals will always present different results. Interindividual differences trace back to numerous factors, including true anatomical differences. For example, it is known that normal subjects differ by a factor of 3 in their numbers of cones in the macula [1]. Similarly, the surface area of the primary visual cortex is known to vary over a 3-fold range [2]. Add some well-known psychological factors, e g, maturity, degree of attention, and mental agility, and add effects of ageing, and the scene is set for a staggering interindividual variation. The diagram to the right captures the summed effects of these and other individual factors for a plain (but carefully executed) optotype acuity test [3]. Note that an average subject, say, a 35 year-old, has a visual acuity of about 1.4 decimal (20/14). If this subject progressively loses acuity, and previous measurements are lacking, he or she may go undiagnosed for a long while, until he or she passes through the lower normal bound. The situation is still worse for a subject who normally performs above average. Of course, this is an undesirable state of affairs. Is there a solution? Replacing the plain chart used to collect the data for the above diagram with a currently popular chart like the ETDRS, or using logMAR designations of results, would not change anything: normal variation simply will not go away. The ostrich solution, i e, insisting that normal acuity equals 1.0 (20/20), is not worth discussing. It is much like insisting that everyone must wear size 7 shoes.
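
Because the discussion moves freely between decimal, Snellen, and logMAR notations, the standard conversions may be worth spelling out. The helper names below are my own; the relations themselves are the conventional ones:

```python
# Standard conversions between the acuity notations used in the text
# (decimal acuity, Snellen fraction, logMAR); the function names are mine.
import math

def decimal_to_logmar(decimal_acuity: float) -> float:
    """logMAR = log10(MAR), with MAR in arcmin = 1 / decimal acuity."""
    return -math.log10(decimal_acuity)

def decimal_to_snellen(decimal_acuity: float, numerator: int = 20) -> str:
    """Snellen denominator = numerator / decimal acuity."""
    return f"{numerator}/{round(numerator / decimal_acuity)}"

# The average 35-year-old of the diagram, at about 1.4 decimal:
print(decimal_to_snellen(1.4))            # 20/14
print(round(decimal_to_logmar(1.4), 2))   # -0.15
```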

Obviously, a reduction of the width of normal limits is highly desirable. One way is to try to identify and remove undesired sources of variation, e g, putting a cap on presentation times. Adjusting results for "mental agility" could also be worthwhile but would require the addition of a second test. A completely different solution is to skip altogether the classical approach of grading performance and instead look directly for evidence of neural matrix abnormality. This is what rarebit testing aims to do: essentially, it probes for holes in the neural matrix. Normally, there should be none, or nearly so, meaning that normality is crisply defined.

There is another important aspect of measuring values, namely, their relation to the severity of any matrix damage. Deficits are commonly expressed in terms like a loss of X lines on an acuity chart, an inability to read Y plates in a pseudo-isochromatic test, or a loss of Z dB in perimetry. While such numbers are easy enough to provide, what do they really mean in terms of neural damage? And how should neural damage best be expressed? From a pathophysiological point of view, the number of matrix holes would seem ideal.

There are several indications that conventional measures are quite inept at reflecting neural channel losses. For example, it has been estimated that less than 60% of the normal complement of foveal neural channels suffices to uphold an acuity level of 1.0 (right, [4]). In conventional perimetry, deviations from average normal of less than 5 dB are usually held immaterial. This may be part of the reason why conventional perimetry is unable to reflect channel losses smaller than 25-50% [5]. How a reduced capacity to read pseudo-isochromatic plates reflects degrees of damage remains to be explored. Recent advances in photoreceptor imaging [6, 7] may help to fill in this blank in the near future.
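
To put the 5 dB figure in perspective, a rough conversion may help, assuming the usual perimetric convention that one decibel equals 0.1 log unit of stimulus attenuation. On that reckoning, a deviation commonly dismissed as immaterial already corresponds to about a 3-fold change in differential light sensitivity:

```python
# A back-of-the-envelope conversion, assuming the usual perimetric convention
# that 1 dB corresponds to 0.1 log unit of stimulus attenuation.
def db_loss_to_sensitivity_factor(db_loss: float) -> float:
    """Factor by which differential light sensitivity is reduced."""
    return 10 ** (db_loss / 10)

print(round(db_loss_to_sensitivity_factor(5), 1))   # 5 dB ~ a 3.2-fold change
```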


Acuity


Critical acuity testing should always begin with ordinary, standardized equipment [8], aiming to obtain a first indication of the level of acuity at distance with the best optical correction. It is debatable whether one type of equipment (chart, projector, monitor) is better than another, as long as it provides several sets of optotypes more difficult than 1.0 (20/20), at high contrast. The debate on optimum types of test targets has been going on for nearly two centuries and continues to polarize proponents of test letters (Snellen's, Sloan's, and others) against proponents of other targets like the Landolt C, with allegedly "purer" qualities. Arguments for and against various types of acuity tasks, including detection, recognition, and hyper-acuity, have acquired renewed attention with the increasing use of computer-controlled displays and with the advent of wavefront-guided refractive surgery and its tantalizing promise of delivering "super vision". An interesting development is a refined, freeware implementation of the Landolt C test, the Freiburg Visual Acuity Test. However, there is little to indicate that one specific type of test target or test procedure generates substantially narrower normal limits than do others, so an analysis falls outside the present scope.

As to actual testing, there is much to be said for right-to-left reading when the right eye is tested, and vice versa: such a procedure minimizes memory gains and maximizes the chances of identifying the spatial acuity gradients that are common with mid-chiasmal lesions. If the results fall within normal limits, the question arises whether minimal abnormality may still exist. Soft indications of minimal abnormality may already have been obtained from attending to the subject's manner of reading. A lack of fluency, a slow speed, or guessing are all suspicious features. A finding of a subtle asymmetry between the eyes may warrant a test of binocular acuity, to see whether binocular summation has the normal magnitude (about 10%, when measuring values are expressed in angular units [9]). Acuity may actually be worse using both eyes, indicating binocular inhibition. Other important clues as to minimal abnormality may be offered by the patient, who may observe subtle deformations of the test targets, or of the alignment of targets, as expressions of minimal matrix deformation. Some observations may be difficult to interpret, but the customer is always right, of course. The peculiar complaint of "visual snow" has been touched upon elsewhere on this site.

Another way to try to clinch a diagnosis of minimally defective acuity is to limit presentation time. The flashing acuity test presented here offers choices of presentation times and numbers of letters shown. The test opens with a non-flashing presentation. Left-click on the display to decrease letter size and right-click to increase letter size. Move the cursor outside the test frame to change the letter combination without changing letter size. Letters drawn from the HOTV series will be shown in ever-new, random combinations. The range of available sizes is small (six sizes) because the test is primarily meant for probing for minimal acuity loss. The step factor equals 0.1 log unit. With a screen pixel size of 0.27 mm, the smallest letter size corresponds to 2.5 decimal (Snellen 20/8) at 4 m viewing distance. The largest letter size corresponds to approximately 0.8 decimal (Snellen 20/25).
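
For readers who wish to check or adapt these figures, they follow from standard optotype geometry: a letter is five times as tall as its critical detail, and the detail subtends one minute of arc at decimal acuity 1.0. The sketch below reproduces the quoted sizes; the helper names are mine, the constants are those given above:

```python
# Optotype geometry behind the figures quoted above: a standard letter is
# five times its critical detail, and the detail subtends one minute of arc
# at decimal acuity 1.0. Helper names are illustrative.
import math

def letter_height_px(decimal_acuity: float, distance_m: float,
                     pixel_mm: float) -> float:
    """Letter height in screen pixels for a given decimal acuity."""
    mar_arcmin = 1.0 / decimal_acuity                 # minimum angle of resolution
    detail_mm = distance_m * 1000 * math.tan(math.radians(mar_arcmin / 60))
    return 5 * detail_mm / pixel_mm                   # optotype = 5 x detail

# Six sizes in 0.1 log-unit steps, from 2.5 down to about 0.8 decimal,
# viewed at 4 m on a display with 0.27 mm pixels:
for step in range(6):
    acuity = 2.5 / 10 ** (0.1 * step)
    print(round(acuity, 2), round(letter_height_px(acuity, 4.0, 0.27)))
```

On this reckoning, the smallest letters are only about nine pixels tall, so the pixel raster itself sets a practical limit at the high-acuity end.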

The settings menu allows selection of the number of letters shown and the flash duration. Suggested initial settings are 200 ms (which is short enough to prevent re-fixation) and three letters (which is enough to include crowding effects).



Whenever minor abnormality is encountered, the question arises whether it is attributable to neural matrix defects, or to optical faults, or to a combination of the two. Minor optical faults, and particularly combinations of two or more minor optical faults, may be very difficult to identify. An old-fashioned manual refractometer may be very helpful in such evaluations. An informal test for the Pulfrich phenomenon may also be informative; the Pulfrich phenomenon is fairly robust against optical faults other than opacity.


