

About ColorImage 1.60

Joseph Ayers
Marine Science Center
Northeastern University
East Point
Nahant, MA 01908
Internet: lobster@neu.edu
and
Garth Fletcher
President, JacqCAD International
288 Marcel Road, Mason, NH 03048-4704
What's new in ColorImage 1.60
ColorImage 1.60 (68K) Download (Major Update 9/20/2000)
ColorImage 1.60 (PPC) Download (Major Update 9/20/2000)

Online Reprint
Ayers, J. (1992) Desktop Motion Video for Scientific Image Analysis. Advanced Imaging 7: 52-55.


About NIH Image

NIH Image is a public domain digital image processing and analysis program for the Macintosh II, created by Wayne Rasband at NIMH. It can acquire, display, edit, enhance, analyze, print, and animate images. It reads and writes TIFF, PICT, and MacPaint files, providing compatibility with many other Macintosh applications. It supports many standard image processing functions, including histogram equalization, contrast enhancement, density profiling, smoothing, sharpening, edge detection, and noise reduction. Spatial convolutions, with user-defined kernels up to 63x63, are also supported.

Image can be used to measure the area, mean density, center of gravity, and angle of orientation of a user-defined region of interest. It also performs automated particle analysis and can be used to measure path lengths and angles. Measurement results can be printed, exported to text files, or copied to the Clipboard. Results can also be calibrated to provide real world values.
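As a rough illustration of these region measurements, the sketch below computes area, mean density, center of gravity and angle of orientation from a grayscale array and a binary region-of-interest mask using image moments. This is a minimal NumPy sketch with illustrative names, not NIH Image's own code.

    import numpy as np

    def measure_roi(gray, mask):
        # gray: 2-D array of pixel values; mask: boolean ROI of the same shape
        ys, xs = np.nonzero(mask)
        area = xs.size                          # pixel count inside the ROI
        mean_density = gray[mask].mean()        # mean gray value inside the ROI
        cx, cy = xs.mean(), ys.mean()           # center of gravity (pixels)
        # angle of the principal axis from the central second moments of the region
        mu20 = ((xs - cx) ** 2).mean()
        mu02 = ((ys - cy) ** 2).mean()
        mu11 = ((xs - cx) * (ys - cy)).mean()
        angle = 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))
        return area, mean_density, (cx, cy), angle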

It provides MacPaint-like editing of color and grayscale images, including the ability to draw lines, rectangles, ovals and text. It can flip, rotate, invert and scale selections. It supports multiple windows and 8 levels of magnification. All editing, filtering, and measurement functions operate at any level of magnification and are undoable. It uses digital halftoning to print images on PostScript printers and Floyd-Steinberg dithering for printing on non-PostScript printers. It supports either the Data Translation QuickCapture card or the Scion Image Capture 2 card for digitizing images using a TV camera.

Acquired images can be shading-corrected and frame averaged.

For full operation, Image requires a Mac II(x, cx, ci) with at least 2 megabytes of memory, but 4 megabytes, or more, is recommended for doing animation, for simultaneously displaying more than a handful of pictures, or for running under MultiFinder. NIH Image also requires an 8-bit video card capable of displaying 256 colors or shades of gray.


Go to the NIH Image Home Page

About ColorImage

ColorImage is an enhancement of version 1.60 of the NIH Image program. It adds procedures for color image segmentation, a video interface for frame-by-frame kinematic analysis from videotape, and support for the Varispec Tunable Filter. Bear in mind that this program has extensive memory requirements: it will work on the smallest of images with 1.5 MB of memory, but really requires at least 3 MB.

Image segmentation is supported by two new menus, Color and RGB Color, on the right of the menu bar. The only changes to the internal code of Image are in the LUT window, which now has a white stripe down the right side, the new color histogram display, and some slight modifications to camera acquisition, including the modules for acquiring RGB images. When the 8-bit LUT-based segmentation algorithms are in operation, the white stripe in the LUT window contains horizontal black stripes that indicate which image pixels have been segmented. We have modified Rasband's Histogram to display the color look-up table simultaneously with the pixel frequencies. This display is useful for determining whether hue segmentation using Thresholding will work on an 8-bit indexed color image.


The goal of our segmentation algorithms is the quantification of objects within biological images, so the analysis typically proceeds in three stages, as illustrated in the figure above:

(1) Generation of Color Images. ColorImage is designed to acquire images from RGB video sources. One first uses the Create RGB Files or Create Averaged RGB Files operations to sequentially acquire Red, Green and Blue plane files from the RGB source, and then Get RGB Files to read them into ColorImage. Get RGB Files will open files created by other programs as long as they are named File_Spec/RED, File_Spec/GREEN, File_Spec/BLUE, etc. Selecting the Make Composite RGB Image procedure will then create a 32x32x32 RGB LUT, an 8-bit indexed color LUT and an indexed composite color image. (A minimal sketch of the file-loading step appears after stage 3 below.)

(2) Segmentation of objects from ColorImages. At present we support two levels of segmentation. One operates on the 8-bit CLUT of an indexed composite image, while the other operates on the lower 5 bits of the Red, Green and Blue components of an RGB image. The latter analysis requires an RGB source and an 8-bit frame grabber that can be switched between the R, G and B planes, or the Red, Green and Blue files generated by such a source. The result of this analysis is a binary image in which the pixels within the segmented class are black and the rest of the image is white. The RGB region segmentation algorithms currently represent the segments of different classes in different "average" colors, but these can be converted into binary images with the RGB thresholding procedure described below.

(3) Segmentation and quantification of the individual region objects. We originally supported this function in a previous program (ImageSegmenter) as a variation on Papert's turtle edge-tracing algorithm, and Rasband has incorporated this function into the body of Image as the "Magic Wand" and "Analyze Particles" tools, which operate on points rather than image pixels. Version 1.60 now supports automatic segmentation of the different objects and deals with edge-bound objects, internal holes, etc.
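As a minimal illustration of the loading step in stage 1, the sketch below assumes the three plane files follow the File_Spec/RED, File_Spec/GREEN, File_Spec/BLUE naming convention and are stored as raw 8-bit grayscale planes of known dimensions (real plane files may be TIFF or PICT, and on the original Mac the "/" is simply part of the file name). The function and file names are illustrative, not ColorImage routines.

    import numpy as np

    def get_rgb_planes(file_spec, width, height):
        # read the plane files named File_Spec/RED, /GREEN and /BLUE, here assumed
        # to be raw 8-bit grayscale planes of width x height pixels
        planes = []
        for suffix in ("RED", "GREEN", "BLUE"):
            data = np.fromfile(f"{file_spec}/{suffix}", dtype=np.uint8,
                               count=width * height)
            planes.append(data.reshape(height, width))
        return np.stack(planes, axis=-1)        # height x width x 3 RGB array

    # e.g. rgb = get_rgb_planes("Transect_01", 640, 480); rgb >> 3 then gives the
    # 32 levels (5 bits) per channel used by the 3-D RGB LUT and RGB segmentation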

The functionality within the added menus is as follows:

Stacks Menu

ColorImage adds four functions to the bottom of the NIH Image Stacks menu.


ParticleTracksMOOV.gif (145k)



Particle Tracking

This function permits visualization of particle flow, as demonstrated in the animated GIF movie and the processed movie shown above (see also Breithaupt and Ayers, 1996). To perform particle tracking, small particles such as brine shrimp eggs are suspended in the water column. The field of interest is then illuminated with a plane of bright light at the focal plane, such that when the particles enter this plane they become brightly illuminated. If one makes a movie, the particles can be tracked through the individual frames. The particle tracking function is an enhancement of the Average Frames function: it identifies the brightest value that each pixel takes across the frames of a movie stack. Where moving particles are present they appear as strings of beads (see right panel above). If one clicks on each of the particle images in this superimposed movie, one can determine the velocity and direction of movement of the particles.
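Conceptually, the tracking step is a brightest-pixel projection over the stack. The sketch below shows that projection, plus a simple speed and heading calculation from two clicked bead positions; it is a minimal NumPy sketch with illustrative names, not the program's implementation.

    import numpy as np

    def particle_track_projection(stack):
        # stack: (n_frames, height, width); keep the brightest value each pixel
        # attains across the movie, so moving particles leave "strings of beads"
        return stack.max(axis=0)

    def bead_velocity(p0, p1, frame_interval_s, mm_per_pixel=1.0):
        # speed and heading between two bead positions (x, y) in successive frames
        dx = (p1[0] - p0[0]) * mm_per_pixel
        dy = (p1[1] - p0[1]) * mm_per_pixel
        return np.hypot(dx, dy) / frame_interval_s, np.degrees(np.arctan2(dy, dx))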



Interpolate Even Field

This function is useful for cleaning up video images acquired from the Sony decks in pause mode. Pause-mode images consist of only the odd video field. This filter performs a linear interpolation of the even field lines followed by a median filter. For stacks, the function steps sequentially through all the frames of a movie and performs the even-field interpolation described above.
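A minimal sketch of the idea, assuming the odd field occupies rows 1, 3, 5, ... and the frame has an even number of rows: each even row is rebuilt as the average of its odd neighbours and a 3x3 median filter is then applied. The details of the program's own filter may differ.

    import numpy as np
    from scipy.ndimage import median_filter

    def interpolate_even_field(frame):
        out = frame.astype(np.float32).copy()
        # even rows 2, 4, ... take the mean of the odd rows above and below them
        out[2:-1:2] = 0.5 * (out[1:-2:2] + out[3::2])
        out[0] = out[1]                         # top row has only one odd neighbour
        return median_filter(out, size=3).astype(frame.dtype)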

Control Menu

PArgus.gif (567k)

Interactive VideoTape-Based Acquisition and Control

ColorImage also supports motion analysis through interactive video tape control. This allows analysis in the laboratory of data recorded in the field with camcorders or tape decks. We have been quite successful importing images from Sony Hi-8 recorders and camcorders. The EVS-9000 recorder allows acquisition from interchangeable-lens cameras, including microscopes, but has an aliased video signal in pause mode. The CCD-V5000 camcorder has a digital frame buffer that provides a complete NTSC frame in pause mode and features an internal time base corrector that provides excellent synchronization with the frame grabber. The CCD-V5000 and the Sony video decks that feature time base correction are clearly the devices of choice when frame acquisition must be performed in pause mode, and the CCD-V5000 can be used to record from interchangeable-lens cameras when in VTR mode.

We have integrated the video tape control toolkit available from the developer of VideoToolkit (Videonics, 1 (800) 338-EDIT) both to catalog video tapes and to control the video deck from ColorImage. Video Toolkit consists of a HyperCard stack that can be used to index a tape, together with a cable that connects the Mac serial port to the Control-L (LANC) and Control-S ports of the Sony decks. To use this interface, you need to purchase the cable from Videonics. The video tape control toolkit implements a direct interface from the Mac serial port to the Control-L and Control-S editing controller interfaces of the Sony decks and camcorders. In addition, the toolkit implements a SMPTE time code reader for more precise synchronization. ColorImage provides two dialog boxes for controlling Sony editing decks through the Control-L and Control-S ports.



L Port Tape Controller

The Control-L interface is bidirectional: it can both transmit control commands from the application to the deck and receive status and counter information from the deck. Our implementation of the Control-L interface allows control of the play, stop, record, fast-forward, rewind and pause modes, as well as single-frame forward and backward motion from pause. The dialog allows selection of the three major counter/timer formats used in the Sony decks. The dialog continuously reports the deck time, counter values and the current deck status. The dialog also supports interactive searches for data segments at particular times or counter values. (The Control-L dialog is shown below.)



S Port Tape Controller
The Control-S interface is unidirectional: it allows the application to send control commands to the deck. Control-S devices can be daisy-chained, and up to three different devices can be controlled through the same port. Our implementation of the Control-S interface allows control of the play, stop, record, fast-forward, rewind and pause modes, as well as single-frame forward and backward motion from pause. The dialog allows selection of one of the three devices for control. The Control-S dialog is illustrated below.



Dynomeasure™
Under Construction

Dynomeasure™ implements a set of algorithms for time-based analysis of object parameters from movies. These functions generate specialized display lists or tab-delimited files of the selected parameters, where the first column is the frame time and subsequent columns are the measured parameter in each of the frames of the movie. The functions feature aural prompts for the features to be acquired from each of the frames of the movie. These files are useful for time series of motion (e.g. joint angle vs. time, growth of particles, changes in shape, etc.). The different forms of analysis are selected from the following dialog; a minimal sketch of the output file format follows the list of options below.



  • Coordinate List This procedure is specialized for the generation of display lists of the body shape of undulating organisms. To use it, make a movie from above with the swimming specimen in a pan with a white background, calibrate the image of one frame and select Digitize Undulations. The user will be prompted for a file header, a file name and for each of the frames of the movie, a frame descriptor, and a click on the position of the nose and the tail. The program then automatically traces two sets of coordinates that define the shape of the left and right sides of the body axis. The algorithm outputs a file that is the basis of a suite of programs for the quantitative analysis of swimming behavior (see Ayers, 1989 for a review).

  • Annunciate Checking this box causes a numerical annunciation of each measurement to be made
  • Dynomeasure Checking this box makes the image oscillate between the previous and following images just prior to making the measurement with the mouse. This allows one to employ apparent motion (the property that makes moving video so much clearer than stopped frames) to recognize image features.
  • Dynamic Scrolling Often one wants to make detailed measurements on a relatively small object which is moving rapidly through a large field. Checking this box allows dynamic scrolling. In other words, you can zoom in on a feature of an image which may be moving around in an otherwise larger image. Each time Dynomeasure skips to a new frame, the Grabber tool is invoked, allowing one to center the object in the display window. If one then hits the space bar, the object can be measured.
  • Undulations This procedure automatically traces the body shape of undulating organisms using the same workflow described under Coordinate List above: make a movie from above with the swimming specimen in a pan with a white background, calibrate the image of one frame, select Digitize Undulations, and supply a file header, a file name and, for each frame, a frame descriptor and clicks on the nose and tail. The program traces two sets of coordinates that define the left and right sides of the body axis and outputs a file that is the basis of a suite of programs for the quantitative analysis of swimming behavior (see Ayers, 1989 for a review).
  • Shapes This function is similar to the undulations algorithm except that it supports manual tracing of the shape of the specimen.
  • Positions This function allows one to enter the x and y coordinates of a point using the cursor for each of the frames of a movie. It generates a tab delimited file where the first column is the frame time, the second column is the x coordinate of the point and the third column is the corresponding y coordinate.
  • Distances This function allows one to enter the distance between two points using the ruler tool for each of the frames of a movie. It generates a tab delimited file where the first column is the frame time and the second column is the distance.
  • Angles This function allows one to enter a measured angle using the angle tool for each of the frames of a movie. It generates a tab delimited file where the first column is the frame time and the second column is the angle.
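As promised above, a minimal sketch of the tab-delimited output format (frame time in the first column, the measured parameter in the following columns). The file name, header and values are purely illustrative.

    import csv

    def write_dynomeasure_file(path, header, frame_times, measurements):
        # measurements may be scalars (Distances, Angles) or (x, y) pairs (Positions)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f, delimiter="\t")
            writer.writerow(["time (s)"] + list(header))
            for t, m in zip(frame_times, measurements):
                values = list(m) if isinstance(m, (tuple, list)) else [m]
                writer.writerow([t] + values)

    # e.g. joint angle vs. time for a movie digitized at 30 frames/s
    write_dynomeasure_file("joint_angle.txt", ["angle (deg)"],
                           [i / 30.0 for i in range(5)],
                           [12.0, 14.5, 17.1, 19.8, 22.0])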



  • Clipfinder

    We use a commercial product, VideoToolkit™, to index and assemble video tapes of behavioral and electrophysiological data. Video Toolkit™ has extensive features for cataloging and accessing video tapes, but it also uses the Mac serial port to control the deck, so it cannot be used simultaneously with ColorImage. We have added a module to ColorImage that allows it to read in video tape logs created by VideoToolkit™. The Clipfinder window above shows what the log looks like when read into ColorImage. If one clicks on one of the clip names, it is highlighted and the clip information is displayed at the bottom of the window. If one double-clicks on a highlighted name, the automatic search facility described above for the Control-L dialog searches out the clip and cues it up for the Make Movie feature.

    VideoTape-Based Movies

    Stepping Movie

    This function allows one to digitize sequential images from video tape on a frame-by-frame basis. The user pauses the deck on a frame of interest and selects a region for digitization. The program requests the number of frames to be digitized (limited only by free CPU memory) and the number of video frames to skip between each acquired frame. The program then opens a new window of the appropriate dimensions, grabs a frame, opens another new window, advances the deck to the next frame, and repeats this process until the specified number of frames has been acquired. This function only works well with the CCD-V5000 camcorder, which outputs a full frame in pause mode. If the Field Movie box is checked, the algorithm performs an even-field interpolation and median filter prior to stepping to each subsequent frame.
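    The acquisition loop can be summarized as in the sketch below. Here deck and grabber are hypothetical stand-ins for the Control-L deck interface and the frame grabber driver (their methods are not real ColorImage or driver calls), and interpolate_even_field refers to the sketch given under Interpolate Even Field above.

        def stepping_movie(deck, grabber, roi, n_frames, skip=0, field_movie=False):
            # deck.step_forward() advances one video frame from pause (Control-L);
            # grabber.grab(roi) digitizes the selected region of the current frame
            frames = []
            for _ in range(n_frames):
                frame = grabber.grab(roi)
                if field_movie:
                    frame = interpolate_even_field(frame)   # see the sketch above
                frames.append(frame)
                for _ in range(skip + 1):                   # skip frames between grabs
                    deck.step_forward()
            return frames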




    Moving Movie

    This algorithm is similar to Rasband's Make Movie function except that it allows timed acquisition from a moving video tape. To use this function, one pauses the deck on a frame of interest and selects a region for digitization. The program requests the number of frames to be digitized (limited only by free CPU memory) and the interval (seconds) between each acquired frame. The program then opens a set of new windows of the appropriate dimensions, rewinds the tape, places it in play mode, and waits for a trigger. The cycle of video acquisition can be triggered in one of two fashions. The program polls for the method of choice and, if counter value is selected, polls for the counter value. The standard fashion is to play until the Control-L interface receives the requested counter value. We have tested this with SMPTE-coded tapes and found it to be accurate to within 2 video frames. A more accurate method of synchronization is to dub a noise burst onto one of the analog channels of the tape. When this method is chosen, the A/D converter samples that audio channel until it detects the burst, then initiates acquisition after a 500 msec delay. Once the acquisition cycle is initiated, the frame grabber acquires frames at the specified interval (up to 20/sec for small selections) until the appropriate number of frames has been acquired. The algorithm then places the deck in pause mode.
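    The audio-burst trigger amounts to polling the A/D converter until the signal exceeds a threshold, then waiting the fixed delay before acquisition starts. In this sketch read_block is a hypothetical callable returning the next block of samples from the deck's audio channel; the threshold is an illustrative parameter rather than anything ColorImage exposes.

        import time
        import numpy as np

        def wait_for_noise_burst(read_block, threshold, delay_s=0.5):
            # poll the deck's audio channel until the dubbed burst is detected
            while True:
                block = np.asarray(read_block())
                if np.abs(block).max() > threshold:
                    break
            time.sleep(delay_s)                 # the 500 msec delay described above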

    Stereo Analog Acquisition

    We have recently added modules to ColorImage to allow acquisition of stereo analog information that is temporally synchronized with corresponding digital movies. We use this functionality to directly correlate electrophysiological and electromyographic recordings with kinematic analysis of animal behavior. Analog acquisition is supported with the GW Instruments MacAdios II board through the Turbodrivers libraries. To use these routines, one must connect the left channel of the video deck to the A0in channel of the break-out connector and the right channel to the A1in channel of the break-out connector.

    Moving Analog Acquire
    This procedure is the audio equivalent of the Moving High-8 Movie procedure and operates in a very similar fashion. The procedure requests the name of the video tape, an output file specification, the duration of the sample to be acquired, a sampling interval (µsec), the type of triggering to be performed and an optional counter value. The program then opens a set of new windows of the appropriate dimensions, rewinds the tape, opens a "digital oscilloscope" window, places the deck in play mode and then waits for a trigger to initiate acquisition. The acquired data is plotted, rescaled, replotted and annotated.

    Set MacAdios Slot
    The default slot setting for the MacAdios board is slot 5. This procedure lets one change the default. We will soon supersede this with an automatic slot finder.

    Change Frame Grabber
    ColorImage supports three frame grabbers: the Data Translation QuickCapture frame grabber, the RasterOps 24STV frame grabber, and any QuickTime VDIG. This menu selection brings up a dialog that allows you to switch between them if you have more than one installed in your machine.

    Varispec Filter Controller
    Under Construction

    Scan Spectrum
    Under Construction

    Spectral Movie
    Under Construction

    Spectrum of ROI
    Under Construction



    Color Menu

    To enter color information into Image from a video source, the user can use the Data Translation 8-bit frame grabber only if it is connected to an RGB source. The user must hook up the RGB source such that the Red output is connected to the #1 input of the QuickCapture card, the Green output to the #0 input, and the Blue output to the #2 input. ColorImage does not support the Scion board for color image acquisition. Color files can be acquired and merged as shown below under Grab RGB Selection and Merge Colors.

    The following algorithms require an RGB video source and/or files digitized separately from the Red, Green and Blue planes of any RGB source.

    Make Composite RGB Image
    This procedure links the Red, Green and Blue files for color segmentation. If three RGB files have been opened in the order red, green, blue, this function will create a 3-D 32x32x32 RGB CLUT as well as an 8-bit CLUT (by one of several derivatives of the median cut algorithm) and generate an 8-bit indexed color display of the pixel map in a window titled "Indexed Color". The Indexed Color window is subject to 8-bit CLUT segmentation by choices in the Control menu. Be sure to save this file to disk before trying any of the segmentation algorithms.
    The purpose of the 3D RGB CLUT is to allow adaptive region based segmentation in RGB Space. The 3D RGB CLUT can be displayed using the View RGB LUT selection described below while the RGB pixel frequency histogram can be displayed using the View RGB Histograms selection.
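    For reference, a generic median-cut quantizer is sketched below: repeatedly split the box whose widest channel range is largest at the median of that channel, then average each box to obtain the palette. This is a textbook version with illustrative names, not ColorImage's Middle Cut, Median Cut or Mean Cut implementations.

        import numpy as np

        def median_cut(pixels, n_colors=256):
            # pixels: (N, 3) array of RGB values; returns (palette, index per pixel)
            boxes = [np.arange(len(pixels))]
            while len(boxes) < n_colors:
                widths = [np.ptp(pixels[b], axis=0).max() if len(b) > 1 else -1
                          for b in boxes]
                i = int(np.argmax(widths))
                if widths[i] <= 0:
                    break                                      # nothing left to split
                box = boxes.pop(i)
                chan = np.ptp(pixels[box], axis=0).argmax()    # widest channel
                order = box[np.argsort(pixels[box, chan])]
                mid = len(order) // 2
                boxes += [order[:mid], order[mid:]]
            palette = np.array([pixels[b].mean(axis=0) for b in boxes], dtype=np.uint8)
            indices = np.empty(len(pixels), dtype=np.int32)
            for k, b in enumerate(boxes):
                indices[b] = k
            return palette, indices

        # stand-in data; in practice the pixels come from the Red, Green and Blue
        # plane files read by Get RGB Files
        rgb = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
        palette, idx = median_cut(rgb.reshape(-1, 3))
        indexed = idx.reshape(rgb.shape[:2])                   # the "Indexed Color" image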

    Remake Composite RGB Image
    This command will generate a new 3D RGB CLUT and Indexed Composite image based on the settings selected in the settings window.

    Open New RGB Color Segment
    RGB segmentation breaks up the image into RGB color segments that contain the pixel colors characteristic of a given object class. This command is utilized to start the compilation of a new color segment and to switch between multiple color segments on a random access basis.


    Add To RGB Color Segment
    The color segments are defined by the selection of regions of pixels with one of the "marching ants" selection tools. When the Add to RGB Color Segment function is selected, the pixel colors in the currently selected region are added to the Color Segment histogram.

    Subtract from Segment
    When the Subtract from Segment function is selected, the pixel colors in the currently selected region are eliminated from the currently selected Color Segment histogram.

    Subtract from Other Segments
    When the Subtract from Other Segments function is selected, the pixel colors in the currently selected region are eliminated from the RGB histograms of all Color Segments.

    RGB Color Segmentation
    This function generates a new window in which the pixels belonging to the different RGB Color Segments are displayed in their average colors. In this display, ambiguous pixels, i.e. pixels that are members of more than one class, are displayed in bright red, although we will probably animate these pixels in a future version.
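    One way to picture the RGB Color Segment machinery is as a boolean table per segment over the 5-bit (32x32x32) RGB space: Add To RGB Color Segment marks the colors of a selection, and segmentation labels each pixel by the tables that contain its color, flagging pixels claimed by more than one segment as ambiguous. The sketch below follows that interpretation and is not the program's internal representation; each channel is reduced to 5 bits by keeping its most significant bits.

        import numpy as np

        def new_segment():
            # one boolean membership table per RGB Color Segment
            return np.zeros((32, 32, 32), dtype=bool)

        def add_selection(segment, rgb, selection_mask):
            # mark the 5-bit colors of the selected pixels as members of the segment
            r, g, b = (rgb[selection_mask] >> 3).T
            segment[r, g, b] = True

        def segment_image(rgb, segments):
            # label each pixel by segment; -1 means unassigned or ambiguous
            r, g, b = (rgb.reshape(-1, 3) >> 3).T
            hits = np.stack([s[r, g, b] for s in segments])   # (n_segments, n_pixels)
            n_hits = hits.sum(axis=0)
            labels = np.where(n_hits == 1, hits.argmax(axis=0), -1)
            ambiguous = n_hits > 1                            # shown in bright red
            return labels.reshape(rgb.shape[:2]), ambiguous.reshape(rgb.shape[:2])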

    Reset Window Assignments
    This function resets the RGB Color Segment histograms, allowing further segmentation, additional images, etc. It must be called every time the Red, Green and Blue images making up a color set are changed. Failure to do this will lead to very unpredictable results.





    RGB Composite Options
    This selection brings up a dialog box that allows one to tailor the algorithms used to create the look-up tables. The selections affect general color separation parameters, the resolution of the 3-D look-up tables (0...255, 0...Max, Min...Max) and the linkage of the resolution between the Red, Green and Blue axes. In addition, they set the methods and parameters used in generating the 8-bit composite image, such as the color segmentation algorithm (Middle Cut, Median Cut, Mean Cut), the number of colors in the look-up table and the ordering of colors in the look-up table (Value: Vhs; Hue: Hiv; RGB; and pixel frequency).

    RGB LUT Histograms
    ColorImage supports two forms of 3-D RGB histograms of the image data and of the region classes segmented from the data. They will not operate on 8-bit indexed color images; they require color data in the form of three files that represent the Red, Green and Blue planes of the RGB source. As for the generation of Indexed Composite files, the files must be opened in that order and appear in that order in the Windows menu.
    The resultant histograms are a plot of red values (x axis) vs. green values (y axis), broken up into 32 slices of ascending blue values progressing along rows from the lower left corner to the upper right corner. The colors are segmented on the lower 5 bits of the color data, so there are 32 levels represented in each of the three planes.
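    The display described above amounts to a 32x32x32 count of 5-bit pixel values shown as 32 red-vs-green slices of ascending blue. The sketch below computes such a histogram and tiles the slices into a grid; the 8x4 montage layout used here is only for illustration and need not match the program's arrangement.

        import numpy as np

        def rgb_histogram_slices(rgb):
            # 32x32x32 histogram of the 5-bit (R, G, B) values of an H x W x 3 image
            q = rgb.reshape(-1, 3) >> 3
            hist, _ = np.histogramdd(q, bins=(32, 32, 32), range=((0, 32),) * 3)
            # tile the 32 blue slices (red vs. green planes) into an 8 x 4 montage
            rows = [np.hstack([hist[:, :, 4 * r + c] for c in range(4)])
                    for r in range(8)]
            return hist, np.vstack(rows[::-1])        # lowest blue toward the bottom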

    View 3-D Color LUT
    This procedure generates an RGB histogram of the RGB boxes that were utilized to generate the 8-bit CLUT. Each box indicates the distribution of pixel values and is displayed in its actual color. A submenu selection allows the user to view RGB color segments in their average color in RGB space.

    View 3-D RGB Histograms
    This function generates an RGB histogram of the image pixels. It is useful for determining whether RGB Threshold segmentation will work for a particular image and for establishing the 3-D RGB morphology of color classes.

    Binarize RGB Color Segments
    This module generates a binary image of each RGB color segment for binary manipulations, blob quantification, etc.


    Color Segmentation of Benthic Transects

    The Color menu supports the segmentation and generation of binarized segmented images from any indexed color file. It also allows the segmented images to be saved as MacPaint-style bitmaps, which compresses them and allows them to be imported directly into most word processors.

    CLUT Segments
    ColorImage supports the segmentation of objects from indexed color images on the basis of sets of entries from the 8-bit color look-up table (LUT). The LUT window has been modified from Image to indicate the selected entries by a black bar to the right of each entry. The selected segment can be constructed iteratively with a variety of segmentation algorithms supported under the 8-bit color window. On such files, RGB Threshold and Region segmentation will do a passable job of segmentation. The Sort Palette function under the Video menu allows thresholding on the basis of hue, saturation or brightness for files generated by sources other than ColorImage.

    The following algorithms work on 8-bit indexed color files only!

    Add Thresholded CLUT Segment
    This procedure creates a binary image of the image pixels that fall within a specified range of gray scale values or color LUT entries. The procedure functions similarly to Rasband's Make Binary operation but includes the logic necessary for blob quantification of CLUT color segments. The range of gray scale values is normally selected with the Threshold option in the Options menu. To use this algorithm, first select the range of pixel values with the CLUT tool in the color window and then select Add Thresholded CLUT Segment from the Color menu. The algorithm then creates a segmented binary image of all the pixels that fall within the LUT range. If the CLUT of a color image is sorted on the basis of hue, intensity or pixel frequency, Threshold Bitmap Segmentation can be used to segment on the basis of those features.

    Subtract Thresholded CLUT Segment
    This algorithm allows the user to subtract a range of CLUT values from a CLUT segment. It is the opposite of Add Thresholded CLUT Segment. It requires that thresholding be operational and generates a binary image of the resultant CLUT segment.

    Add RGB Cluster Segment

    This algorithm creates a binary image of all image pixels that fall within an RGB cube in RGB space, centered around a selected color. To use it, first select a pixel color from the image using the eyedropper tool. The color the user selects will appear in the paint brush icon on the tool palette. If the user then selects RGB Threshold Segmentation from the Color menu, they will be presented with a dialog box that requests an RGB radius. This is a distance in RGB space (0 to 64000 in the R, G and B planes) within which pixels will be selected for segmentation. For example, if the user selects a color (R, G, B), the algorithm will segment pixels with values within R ± RGB Radius, G ± RGB Radius and B ± RGB Radius. The algorithm then creates a segmented binary image of all the pixels that fall within that RGB cube.
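    The selection rule is simply a per-channel distance test, as in the minimal sketch below (names illustrative; the radius is expressed in whatever units the pixel values use).

        import numpy as np

        def rgb_cluster_mask(rgb, picked_color, rgb_radius):
            # True wherever every channel lies within ±rgb_radius of the picked color
            diff = np.abs(rgb.astype(np.int32) - np.asarray(picked_color, dtype=np.int32))
            return np.all(diff <= rgb_radius, axis=-1)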

    Add Region CLUT Segment

    Many objects in biological images are represented by heterogeneous colors. For example, a sponge image may be made up of orange, brown, yellow and green pixels. Region bitmap segmentation allows one to acquire and segment on the basis of heterogeneous color sets. This tool uses the lasso tool to select a region of pixels. The algorithm then determines which CLUT colors fall within the selected region and segments a binary image of the image pixels with the corresponding values. Be careful about selecting regions that contain extremely dark (i.e. black) pixels, as these are present in most objects. The user is better off selecting regions that contain pixels that appear unique to the objects to be segmented.
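    In effect the operation collects the set of CLUT entries inside the lassoed region and then selects every pixel in the image whose entry belongs to that set, as in this minimal sketch (names illustrative).

        import numpy as np

        def region_clut_mask(indexed, lasso_mask):
            # indexed: H x W array of 8-bit LUT indices; lasso_mask: boolean selection
            selected_entries = np.unique(indexed[lasso_mask])
            return np.isin(indexed, selected_entries)     # binary image of the segment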

    Subtract Region CLUT Segment

    Region difference segmentation allows the user to specify a region of pixels that are to be excluded from the segment. The resulting binary image will exclude the pixels whose LUT entries were selected. This selection is useful for removing "ambiguous" pixels that are common to two objects in an image.

    Reset CLUT Segment
    This function clears the entries in the currently selected CLUT segment.




    RGB Sources

    ColorImage was developed using both the Sony ProMavica (MVR-5500) electronic still video recorder/player and the Sony XC-711 CCD RGB color camera as RGB sources. The XC-711 has a broad range of wavelengths (~475-625 nm) over which at least two sources respond to the same pixel, giving it excellent color resolution capabilities. The ProMavica has the advantage that it allows one to use NTSC video as a source and has external sync capabilities. Both of these sources work well with both the Data Translation and Scion frame grabber boards.

    MAC IIfx Users

    Put the IIfx Serial Switch into your System Folder and put the IIfx into compatibility mode so that it thinks it has SCC chips, which is what the Sony commands hammer on.

    NIH Image

    Consult the NIH Image V1.59 manual for a description of the basic functionality of the version of Image in this version of ColorImage. Image was developed by Wayne Rasband, National Institutes of Health, Bldg. 36, Room 2A-03, Bethesda, MD 20892 (wsr@nihcu.bitnet). The current version of NIH Image, ColorImage and other versions of Image are available by anonymous FTP from the NIMH bulletin board (alw.nih.gov, 128.231.128.251, in the subdirectory /Pub/Image) as well as from the MacSciTech bulletin board (RA.NRL.NAVY.MIL, in the subdirectory /MacSciTech).

    References

  • Ayers, J. (1989) Recovery of oscillator function following spinal regeneration in the sea lamprey. In: Cellular and Neuronal Oscillators. J. Jacklet [ed]. Marcel Dekker, New York, pp. 349-383.
  • Maney, E.J., Ayers, J., Sebens, K.P., and Witman, J. (1990) Quantitative Techniques for Underwater Video Photography. In: Diving For Science. Am. Acad. Underwater Sci., St. Petersburg.
  • Ayers, J. and Fletcher, G. (1990) Color Segmentation and Motion Analysis of Biological Image Data on the Macintosh II. Advanced Imaging 5: 39-42.
  • Ayers, J. (1992) Desktop Motion Video for Scientific Image Analysis. Advanced Imaging 7: 52-55.
  • Breithaupt, T. and Ayers, J. (1996) Visualization and quantitative analysis of underwater biological flow fields using suspended particles. Marine Behavior and Physiology, in press.


    (Page last changed 9/20/2000)