Path: bloom-beacon.mit.edu!hookup!swrinde!sgiblab!a2i!flash.us.com!flash.us.com!not-for-mail
From: dclunie@flash.us.com (David A. Clunie)
Newsgroups: alt.image.medical,comp.protocols.dicom,sci.data.formats,alt.answers,comp.answers,sci.answers,news.answers
Subject: Medical Image Format FAQ, Part 1/2
Followup-To: alt.image.medical
Date: 8 Jul 1994 10:50:29 +0300
Organization: Her Master's Voice
Lines: 934
Approved: news-answers-request@MIT.EDU
Distribution: world
Message-ID: <2vj0g5$697@britt.ksapax>
Reply-To: dclunie@flash.us.com (David A. Clunie)
NNTP-Posting-Host: britt.ksapax
Summary: This posting contains answers to the most Frequently Asked
Question on alt.image.medical - how do I convert image
format X from vendor Y to something I can use ? In addition
it contains information about various standard formats.
Xref: bloom-beacon.mit.edu alt.image.medical:1162 comp.protocols.dicom:247 sci.data.formats:535 alt.answers:3508 comp.answers:6214 sci.answers:1357 news.answers:22232
Archive-name: medical-image-faq/part1
Posting-Frequency: monthly
Last-modified: Fri Jul 8 10:48:34 GMT+0300 1994
Version: 1.01
This message is automatically posted once a month to help readers looking
for information about medical image formats. If you don't want to see this
posting every month, please add the subject line to your kill file.
Contents:
part1 - contains index, general information & standard formats
part2 - contains information about proprietary formats & hosts
Changes this issue:
Changed archive name to 'medical-image-faq/partn' at the request of MIT.
Added qsh information.
Sparc floating point code bug fixed.
Data General network hardware/software.
Using the Vax/VMS DUMP utility to encode for ascii transfer.
More Siemens information.
Philips S5 MRI image data format.
Added lunis information.
Added mailserver section: ftpmail, interfile, medimagex, nucmed.
Changes last issue:
Split into two parts.
GE Genesis information extended.
GE Signa 3X/4X image data format included.
Siemens GBS II, Impact, DR information (limited).
Picker IQ/PQ CT information (limited).
Vax data layout description.
More anti-VMS vitriol added.
Sparc data layout description.
Many FAQs, including this Listing, are available on the archive site
rtfm.mit.edu in the directory pub/usenet/news.answers. The name under
which a FAQ is archived appears in the Archive-name line at the top of
the article.
There's a mail server on that machine. Send an e-mail message to
mail-server@rtfm.mit.edu containing the keyword "help" (without quotes!)
in the message body.
Note: this FAQ has been formatted as a digest. Many newsreaders
can skip to each of the major subsections by pressing ^G.
Please direct comments or questions and especially contributions to
"dclunie@flash.us.com"
or reply to this article.
START OF PART 1
--------
Subject: Index
1. Introduction
1.1 Objective
1.2 Types of Formats
1.3 In Desperation - Quick & Dirty Tricks
2. Standard Formats
2.1 ACR/NEMA 1.0 and 2.0
2.2 ACR/NEMA DICOM 3.0
2.3 Papyrus
2.4 Interfile V3.3
2.5 Qsh
3. Proprietary Formats
3.1 General
3.1.1 SPI (Standard Product Interconnect)
3.2 CT
3.2.1 General Electric
3.2.1.1 CT 9800
3.2.1.1.1 Image data
3.2.1.1.2 Tape format
3.2.1.1.3 Raw data
3.2.1.2 CT Advantage - Genesis
3.2.1.2.1 Image data
3.2.1.2.2 Archive format
3.2.1.2.3 Raw data
3.2.1.3 Scitec/Pace
3.2.2 Siemens
3.2.2.1 Somatom DR
3.2.2.2 Somatom Plus
3.2.2.3 Somatom AR
3.2.3 Philips
3.2.4 Picker
3.2.5 Toshiba
3.2.6 Hitachi
3.2.7 Shimadzu
3.2.8 Elscint
3.3 MR
3.3.1 General Electric
3.3.1.1 Signa 3X and 4X
3.3.1.1.1 Image data
3.3.1.1.2 Tape format
3.3.1.1.3 Raw data
3.3.1.2 Signa 5X - Genesis
3.3.1.2.1 Image data
3.3.1.2.2 Tape format
3.3.1.2.3 Raw data
3.3.1.3 Vectra
3.3.2 Siemens
3.3.2.1 GBS II
3.3.2.2 SP/Vision
3.3.2.3 Impact
3.3.3 Philips
3.3.3.1 S5
3.3.3.2 ACS
3.3.3.3 T5
3.3.3.4 NT5 & NT15
3.3.4 Picker
3.3.5 Toshiba
3.3.6 Hitachi
3.3.7 Shimadzu
3.3.8 Elscint
4. Host Machines
4.1 Data General
4.1.1 Data
4.1.1.1 Integers
4.1.1.2 Floating Point
4.1.2 Operating System
4.1.2.1 RDOS
4.1.2.2 AOS/VS
4.1.3 Network
4.2 Vax
4.2.1 Data
4.2.1.1 Integers
4.2.1.2 Floating Point
4.2.2 Operating System
4.2.2.1 VMS
4.2.2.2 ULTRIX
4.2.2.3 OSF
4.3 Sun4 - Sparc
4.3.1 Data
4.3.1.1 Integers
4.3.1.2 Floating Point
4.3.2 Operating System
5. Compression Schemes
5.1 Reversible
5.2 Irreversible
5.2.1 Perimeter Encoding
6. Getting Connected
6.1 Tapes
6.2 Ethernet
6.3 Serial Ports
7. Sources of Information
7.1 Vendor Contacts
7.2 Relevant FAQ's
7.3 Source Code
7.4 Commercial Offerings
7.5 FTP sites
7.6 Mailservers
7.7 References
8. Acknowledgements
--------
Subject: Introduction
1. Introduction
1.1 Objective
The goal of this FAQ is to facilitate access to medical images stored
on digital imaging modalities such as CT and MR scanners, and to their
accompanying descriptive information. It is written particularly for those who
do not have access to the necessary proprietary tools or descriptions,
especially in those moments when inspiration strikes and one just can't wait
for the local sales person to track down the necessary authority and go through
the cycle of correspondence needed to get a non-disclosure agreement in place,
by which time interest in the project has usually faded and another great
research opportunity has passed ! It may also be helpful for those keen to
experiment with home-grown PACS-like systems using their existing equipment,
and for those whose equipment is still useful but so old that even the host
computer vendor doesn't support it any more !
There is of course no substitute for the genuine tools or descriptions
from the equipment vendors themselves, and pointers to helpful individuals in
various organizations, as well as names and catalog numbers of various useful
documents, are included here where known.
In addition, there are several small, well-known companies with good
reputations that specialize in such connectivity problems. Contact information
is provided for them, though I personally have no experience with their
products and am not endorsing them.
Finally, great care has been taken not to include any information that
has been released under non-disclosure agreements. What is included here is the
result of either information freely released by vendors, handy hints from
others working in the field, or in many cases close scrutiny of hex dumps and
experimentation with scanner parameters and study of the effects on the image
files. The intent is to spread hard-earned knowledge gained over many years
amongst those new to the field or a particular piece of equipment, not to
threaten anyone's proprietary interests, or to substitute for the technical
support available from vendors that ranges from free to extortionate, and
excellent to abysmal, depending on who you are dealing with and where in the
world you are located !
Please use this information in the spirit in which it is intended, and
where possible contribute whatever you know in order to expand the information
to cover more vendors and equipment.
1.2 Types of Formats
Later sections will deal with the problems of getting the image files
from the modality to the workstation, but for the moment assume the files are
there and need to be deciphered.
Three types of information are generally present in these files:
- image data, which may be unmodified or compressed,
- patient identification and demographics,
- technique information about the exam, series, and slice/image.
Extracting the image information alone is usually straightforward and
is described in 1.3. Dealing with the descriptive information, for example to
make use of the data for dissemination in a PACS environment, or to extract
geometry details in order to combine images into 3D datasets, is more difficult
and requires deeper understanding of how the files are constructed.
There are three basic families of formats in popular use:
- fixed format, where layout is identical in each file,
- block format, where the header contains pointers to information,
- tag based format, where each item contains its own length.
The block format is one of the most popular. In most cases, though, the
early part of the header contains only a limited number of pointers to large
blocks, the blocks are almost always in the same place and of a constant length
(for standard rather than reformatted images at least), and if one doesn't know
the specifics of the layout one can get by assuming a fixed format. I presume
the block design reflects the intent of the designers to handle future
expansion and revision of the format.
The example par excellence of the tag based format is the ACR/NEMA
style of data stream which, though never intended as a file format per se, has
proven useful as a model. See for example the sections dealing with the
ACR/NEMA standards as well as DICOM (whose creators are about to vote on a
media interchange format after all this time) and Papyrus. ACR/NEMA style tags
are described in more detail elsewhere, but each is self-contained and
self-describing (at least if you have the appropriate data dictionary) and
contains its own length, so if you can't interpret it you can skip it ! Very
convenient. Most file formats based on this scheme are just concatenated series
of tags and, apart from having to guess the byte order, which is not specified
(unlike TIFF, which is a similar deal for those in the "real" imaging world),
and sometimes having to skip a short fixed-length header, are dead easy to
handle.
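By way of illustration only, here is a minimal sketch in C of walking such a
concatenated series of tags, assuming the old implicit-VR style layout (16 bit
group, 16 bit element, 32 bit value length) and guessing the byte order with
the usual heuristic that genuine group numbers are small. It is not a
validated parser, and it ignores niceties such as undefined lengths.

#include <stdio.h>

/* fetch 16 and 32 bit integers in either byte order */
static unsigned int get16(const unsigned char *p, int bigend)
{
        return bigend ? (unsigned int)((p[0] << 8) | p[1])
                      : (unsigned int)((p[1] << 8) | p[0]);
}

static unsigned long get32(const unsigned char *p, int bigend)
{
        return bigend
            ? ((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16)
              | ((unsigned long)p[2] << 8) | (unsigned long)p[3]
            : ((unsigned long)p[3] << 24) | ((unsigned long)p[2] << 16)
              | ((unsigned long)p[1] << 8) | (unsigned long)p[0];
}

int main(int argc, char *argv[])
{
        unsigned char hdr[8];
        FILE *f;
        int bigend;

        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        if ((f = fopen(argv[1], "rb")) == NULL) { perror(argv[1]); return 1; }

        /* guess the byte order ... whichever reading of the first group
           number is smaller is assumed to be the right one */
        if (fread(hdr, 1, 8, f) != 8) return 1;
        bigend = get16(hdr, 1) < get16(hdr, 0);
        rewind(f);

        while (fread(hdr, 1, 8, f) == 8) {
                unsigned int  group   = get16(hdr,     bigend);
                unsigned int  element = get16(hdr + 2, bigend);
                unsigned long length  = get32(hdr + 4, bigend);

                printf("(%04x,%04x) length %lu\n", group, element, length);

                /* can't interpret it ?  just skip it */
                if (fseek(f, (long)length, SEEK_CUR) != 0) break;
        }
        fclose(f);
        return 0;
}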
To identify such a file, just run "strings" over it and look for
recognizable text values such as patient and equipment names.
--------
Subject: Standard Formats
2. Standard Formats
2.1 ACR/NEMA 1.0 and 2.0
Bits Allocated = 16
Bits Stored = 12
High Bit = 11
               |<------------------ pixel ----------------->|
 ______________ ______________ ______________ ______________
|XXXXXXXXXXXXXX|              |              |              |
|______________|______________|______________|______________|
 15          12 11           8 7            4 3            0
---------------------------
Bits Allocated = 16
Bits Stored = 12
High Bit = 15
|<------------------ pixel ----------------->|
 ______________ ______________ ______________ ______________
|              |              |              |XXXXXXXXXXXXXX|
|______________|______________|______________|______________|
 15          12 11           8 7            4 3            0
---------------------------
Bits Allocated = 12
Bits Stored = 12
High Bit = 11
------ 2 ----->|<------------------ pixel 1 --------------->|
 ______________ ______________ ______________ ______________
|              |              |              |              |
|______________|______________|______________|______________|
 15          12 11           8 7            4 3            0
-------------- 3 ------------>|<------------ 2 --------------
 ______________ ______________ ______________ ______________
|              |              |              |              |
|______________|______________|______________|______________|
 15          12 11           8 7            4 3            0
|<------------------ pixel 4 --------------->|<----- 3 ------
 ______________ ______________ ______________ ______________
|              |              |              |              |
|______________|______________|______________|______________|
 15          12 11           8 7            4 3            0
---------------------------
And so on ... refer to the standard itself for more detail.
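To make the masking and shifting concrete, here is a small C fragment
illustrating the first two (Bits Allocated = 16) examples above; it is only a
sketch, ignores signed pixel representations, and the packed Bits Allocated =
12 case additionally requires reassembling pixel values across word boundaries
as in the last three diagrams.

/* extract the stored bits from a 16 bit allocated pixel word given the
   ACR/NEMA Bits Stored and High Bit values ... unused (overlay) bits are
   simply masked off, and no sign extension is performed */
static unsigned int stored_value(unsigned int word,
                                 unsigned int bits_stored,
                                 unsigned int high_bit)
{
        unsigned int shift = high_bit + 1 - bits_stored;  /* lowest stored bit */
        unsigned int mask  = (1u << bits_stored) - 1;

        return (word >> shift) & mask;
}

/* eg. stored_value(0xf123, 12, 11) is 0x123 (value in bits 11..0)
       stored_value(0xf123, 12, 15) is 0xf12 (value in bits 15..4) */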
2.2 ACR/NEMA DICOM 3.0
ACR/NEMA Standards Publications
No. PS 3.1-1992 <- DICOM 3 - Introduction & Overview
No. PS 3.8-1992 <- DICOM 3 - Network Communication Support
No. PS 3.2-1993 <- DICOM 3 - Conformance
No. PS 3.3-1993 <- DICOM 3 - Information Object Definitions
No. PS 3.4-1993 <- DICOM 3 - Service Class Specifications
No. PS 3.5-1993 <- DICOM 3 - Data Structures & Encoding
No. PS 3.6-1993 <- DICOM 3 - Data Dictionary
No. PS 3.7-1993 <- DICOM 3 - Message Exchange
No. PS 3.9-1993 <- DICOM 3 - Point-to-Point Communication
No. PS 3.10-???? <- DICOM 3 - Media Storage & File Format
No. PS 3.11-???? <- DICOM 3 - Media Storage Application Profiles
No. PS 3.12-???? <- DICOM 3 - Media Formats & Physical Media
DICOM (Digital Imaging and Communications in Medicine) standards are of
course the hot topic at every radiological trade show. Unlike previous attempts
at developing a standard, this one seems to have the potential to actually
achieve its objective, which in a nutshell, is to allow vendors to produce a
piece of equipment or software that has a high probability of communicating
with devices from other vendors.
Where DICOM differs substantially from other attempts is in defining
so called Service-Object Pairs. For instance if a vendor's MR DICOM conformance
statement says that it supports an MR Storage Class as a Service Class
Provider, and another vendor's workstation says that it supports an MR Storage
Class as a Service Class User, and both can connect via TCP/IP over Ethernet,
then the two devices will almost certainly be able to talk to each other once
they are set up with each other's network addresses and so on.
The keys to the success of DICOM are the use of standard network
facilities for interconnection (TCP/IP and ISO-OSI), a mechanism of association
establishment that allows for negotiation of how messages are to be
transferred, and an object-oriented specification of Information Objects (ie.
data sets) and Service Classes.
Of course all this makes for a huge and difficult to read standard, but
once the basic concepts are grasped, the standard itself just provides a
detailed reference. From the users' and equipment purchasers' points of view
the important thing is to be able to read and match up the Conformance
Statements from each vendor to see if two pieces of equipment will talk.
Just being able to communicate and transfer information is of course
not sufficient - these are only tools to help construct a total system with
useful functionality. Just because a workstation can pull an image off an MRI
scanner doesn't mean it knows when to do it, when the image has become
available, to which patient it belongs, or where it is subsequently archived,
not to mention notifying the Radiology or Hospital Information System (RIS/HIS)
when such a task has been performed. In other words DICOM Conformance does not
guarantee functionality, it only facilitates connectivity.
In other words, don't get too carried away with espousing the virtues of
DICOM, demanding it from vendors, and expecting it to be the panacea to create
a useful multi-vendor environment.
Fred Prior (prior@xray.hmc.psu.edu) has come up with the concept of a
User Conformance Statement to be generated by purchasers and to be satisfied by
vendors. The idea is that one describes what one expects and hence gives the
vendor a chance to realistically satisfy the buyer ! Of course each such
statement must be tailored to the user's needs, and simply stapling a copy of
Fred's statement to a Request For Proposals is not going to achieve the desired
objective. Caveat emptor.
To get more information about DICOM:
- Purchase the standards from NEMA (address below) when they
become available around July 1994.
- Ftp the final versions of the drafts in electronic form from
one of the sites described below.
- Follow the Usenet group comp.protocols.dicom.
- Get a copy of "Understanding DICOM 3.0" $12.50 from Kodak.
- Insist that your existing and potential vendors supply you
with DICOM conformance statements before you upgrade or
purchase, and don't buy until you know what they mean. Don't
take no for an answer !!!!
What is all this doing in an FAQ about medical image formats you ask ?
Well first of all, in many ways DICOM 3.0 will solve future connectivity
problems, if not provide functional solutions to common problems. Hence
actually getting the images from point A to B is going to be easier if everyone
conforms. Furthermore, for those of us with old equipment, interfacing it to
new DICOM conforming equipment is going to be a problem. In other words, old
network solutions and file formats are going to have to be transformed if they
are going to communicate unidirectionally or bidirectionally with DICOM 3.0
nodes. One is still faced with the same old questions of how does one move the
data and how does one interpret it.
The specifics of the DICOM message format are very similar to the
previous versions of ACR/NEMA on which it is based. The data dictionary is
greatly extended, and certain data elements have been "retired" but can be
ignored gracefully if present. The message itself can now be transmitted as a
byte stream over networks, rather than using a point-to-point paradigm
exclusively (though the old point-to-point interface is available). This message
can be encoded in various different Transfer Syntaxes for transmission. When
two devices ("Application Entities" or AE) begin to establish an "Association",
they negotiate an appropriate transfer syntax. They may choose an Explicit
Big-Endian Transfer Syntax in which integers are encoded as big-endian and
where each data element includes a specific field that says "I am an unsigned
16 bit integer" or "I am an ascii floating-point number", or alternatively they
can fall back on the default transfer syntax which every AE must support, the
Implicit Little-Endian Transfer Syntax which is just the same as an old
ACR/NEMA message with the byte order defined once and for all.
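To make the difference concrete, here is a sketch in C of reading a single
data element header under the two flavours, little-endian only, with no error
handling, and with the set of VRs taking the longer 32 bit length form
abbreviated to the common ones; consult PS 3.5 for the real rules.

#include <stdio.h>
#include <string.h>

struct elem { unsigned int group, element; unsigned long length; char vr[3]; };

static unsigned int le16(const unsigned char *p)
{
        return (unsigned int)((p[1] << 8) | p[0]);
}

static unsigned long le32(const unsigned char *p)
{
        return ((unsigned long)p[3] << 24) | ((unsigned long)p[2] << 16)
             | ((unsigned long)p[1] << 8) | (unsigned long)p[0];
}

/* implicit VR: tag then a 32 bit length; explicit VR: tag then a 2 byte
   VR, then either a 16 bit length, or (for VRs such as OB, OW, SQ, UN)
   2 reserved bytes and a 32 bit length */
static int read_header(FILE *f, int explicit_vr, struct elem *e)
{
        unsigned char b[8];

        if (fread(b, 1, 8, f) != 8) return 0;
        e->group   = le16(b);
        e->element = le16(b + 2);

        if (!explicit_vr) {
                strcpy(e->vr, "??");            /* look it up in the dictionary */
                e->length = le32(b + 4);
        } else {
                e->vr[0] = (char)b[4]; e->vr[1] = (char)b[5]; e->vr[2] = '\0';
                if (!strcmp(e->vr, "OB") || !strcmp(e->vr, "OW") ||
                    !strcmp(e->vr, "SQ") || !strcmp(e->vr, "UN")) {
                        if (fread(b, 1, 4, f) != 4) return 0;
                        e->length = le32(b);    /* 32 bit length after 2 reserved bytes */
                } else {
                        e->length = le16(b + 6);        /* 16 bit length */
                }
        }
        return 1;
}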
This is all very well if you are using DICOM as it was originally
envisaged - talking over a network, negotiating an association, and determining
what Transfer Syntax to use. What if one wants to store a DICOM message in a
file though ? Who is to say which transfer syntax one will use to encode it
offline ? One approach, used for example by the Central Test Node software
produced by Mallinckrodt and used in the RSNA InfoRAD demonstrations, is just to
store it in the default little-endian implicit syntax and be done with it. This
is obviously not good enough if one is going to be mailing tapes, floppies and
optical disks between sites and vendors though, and hence the DICOM group
decided to define a "Media Storage & File Format" part of the standard, the new
Part 10, which is about to be or has just been voted on.
Amongst other things, this new part defines a generic DICOM file format
that contains a brief header, the "DICOM File Meta Information Header" which
contains a 128 byte preamble (that the user can fill with anything), a 4 byte
DICOM prefix "DICM", then a short DICOM format message that contains newly
defined elements of group 0002 in the default Implicit Little Endian Transfer
Syntax, which uniquely identify the data set as well as specifying the Transfer
Syntax for the rest of the file. The rest of the message must specify a single
SOP instance which can of course contain multiple images as folders if
necessary. The length of the brief message in the Meta Header is specified in
the first data element as usual, the group length.
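A quick way to recognize such a file, then, is to check for the 128 byte
preamble and the "DICM" magic number; a minimal sketch in C follows (the
function name is mine, not anything from the standard).

#include <stdio.h>
#include <string.h>

int looks_like_dicom_part10(const char *path)
{
        unsigned char preamble[128], magic[4];
        FILE *f = fopen(path, "rb");
        int ok = 0;

        if (f == NULL) return 0;
        if (fread(preamble, 1, 128, f) == 128 &&
            fread(magic, 1, 4, f) == 4 &&
            memcmp(magic, "DICM", 4) == 0)
                ok = 1;         /* the group 0002 meta elements start here */
        fclose(f);
        return ok;
}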
So what choices of Transfer Syntax does one have and why all the fuss ?
Well the biggest distinction is between implicit and explicit value
representation, which allows for multiple possible representations of a single
element, in theory at least, and perhaps allows one to make more of an unknown
data element than one otherwise could. Some purists (and Interfile people) would
argue that the element should be identified descriptively, and there is nothing
to stop someone from defining their own private Transfer Syntax that does just
that (what a heretical thought, wash my mouth out with soap). With regard to
the little vs. big endian debate I can't see what the fuss is about, as it
can't really be a serious performance issue.
Perhaps more importantly in the long run, the Transfer Syntax mechanism
provides a means for encapsulating compressed data streams, without having to
deal with the vagaries and mechanics of compression in the standard itself. For
example, in DICOM version 3.0, in addition to the "normal" Transfer Syntaxes, a
series is defined to correspond to each of the Joint Photographic Experts
Group (JPEG) processes. Each one of these Transfer Syntaxes encodes data
elements in the normal way, except for the image pixel data, which is defined
to be encoded as a valid and self-contained JPEG byte stream. Both reversible
and irreversible processes of various types are provided for, without having to
mess with the intricacies of encoding the various tables and parameters that
JPEG processes require. Presumably a display application that supports such a
Transfer Syntax will just chop out the byte stream, pass it to the relevant
JPEG decoder, and get an uncompressed image back. More importantly, an archive
server can store the image and retrieve it without ever having to know anything
about how the image pixel data is encoded. Contrast this approach with that
taken by those defining the TIFF (Tagged Image File Format) for general imaging
and page layout applications. In their version 6.0 standard they attempted to
disassemble the JPEG stream into its various components and assign each to a
specific tag. Unfortunately this proved to be unworkable after the standard was
disseminated and they have gone back to the drawing board.
Now one may not like the JPEG standard, but one cannot argue with the
fact that the scheme is workable, and a readily available means of reversible
compression has been incorporated painlessly. How effective a compression
scheme this is remains to be determined, and whether or not the irreversible
modes gain wide acceptance will be dictated by the usual medico-legal paranoia
that prevails in the United States, but the option is there for those who want
to take it up. There is of course no reason why private compression schemes
cannot be readily incorporated using this "encapsulation" mechanism, and to
preserve bandwidth this will undoubtedly occur. This will not compromise
compatibility though, as one can always fall back to a default, uncompressed
Transfer Syntax. The DICOM Working Group on compression will undoubtedly bring
out new possibilities.
In order to identify all these various syntaxes, information objects,
and so on, DICOM has adopted the ISO concept of the Unique Identifier (UID)
which is a text string of numbers and periods with a unique root for each
organization that is registered with ISO and various organizations that in turn
register others in a hierarchical fashion. For example 1.2.840.10008.1.2 is
defined as the Implicit VR Little Endian Transfer Syntax. The 1 identifies ISO,
the 2 is the ISO member body branch, the 840 is the specific member body
country code, in this case ANSI, and the 10008 is registered by ANSI to NEMA
for DICOM. UIDs are also used to uniquely identify non-DICOM specific things,
such as information objects. These are constructed from a prefix registered to
the supplier or vendor or site, and a unique suffix that may be generated from
say a date and time stamp (which is not to be parsed). For example an instance
of a CT information object might have a UID of
1.2.840.123456.002.999999.940623.170717 where a (presumably US) vendor
registered 123456, and the modality generated a unique suffix based on its
device number, patient hospital id, date and time, which have no other
significance other than to create a unique suffix.
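Purely as an illustration of the scheme, here is a sketch in C that composes a
UID of this style from a registered root and some locally unique numbers; the
root and the device and patient numbers are the hypothetical ones from the
example above, and a real implementation must of course guarantee uniqueness
and keep the result within the 64 character limit on UIDs.

#include <stdio.h>
#include <time.h>

static void make_uid(char *buf, size_t buflen, const char *root,
                     int device, long patient)
{
        time_t now = time(NULL);
        struct tm *t = localtime(&now);

        /* eg. 1.2.840.123456.002.999999.940623.170717 */
        snprintf(buf, buflen, "%s.%03d.%ld.%02d%02d%02d.%02d%02d%02d",
                 root, device, patient,
                 t->tm_year % 100, t->tm_mon + 1, t->tm_mday,
                 t->tm_hour, t->tm_min, t->tm_sec);
}

int main(void)
{
        char uid[65];

        make_uid(uid, sizeof uid, "1.2.840.123456", 2, 999999L);
        printf("%s\n", uid);
        return 0;
}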
The other important new idea that DICOM introduced was the concept
of Information Objects. In the previous ACR/NEMA standard, though modalities
were identified by a specific data element, and though there were rules about
which data elements were mandatory, conditional or optional in certain
settings, the concept was relatively loosely defined. Presumably in order to
provide a mechanism to allow conformance to be specified and hence ensure
interoperability, various Information Objects are defined that are composed of
sets of Modules, each module containing a specific set of data elements that
are present or absent according to specific rules. For example, a CT Image
Information Object contains amongst others, a Patient module, a General
Equipment module, a CT Image module, and an Image Pixel module. An MR Image
Information Object would contain all of these except the CT Image module, which
would be replaced by an MR Image module. Clearly one needs descriptive
information about a CT image that is different from an MR image, yet the
commonality of the image pixel data and the patient information is recognized
by this model.
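A rough structural sketch in C may make the composition clearer; the members
shown are drastically abbreviated and purely illustrative, not copied from the
standard's module tables.

struct patient_module        { char name[64]; char id[64]; };
struct general_equipment     { char manufacturer[64]; char model[64]; };
struct image_pixel_module    { unsigned short rows, columns;
                               unsigned short bits_allocated, bits_stored;
                               unsigned char *pixel_data; };
struct ct_image_module       { double kvp; double slice_thickness; };
struct mr_image_module       { double repetition_time; double echo_time; };

/* a CT Image Information Object and an MR Image Information Object share
   everything except the modality specific module */
struct ct_image_iod {
        struct patient_module     patient;
        struct general_equipment  equipment;
        struct ct_image_module    ct;
        struct image_pixel_module pixels;
};

struct mr_image_iod {
        struct patient_module     patient;
        struct general_equipment  equipment;
        struct mr_image_module    mr;
        struct image_pixel_module pixels;
};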
Hence, as described earlier, one can define pairs of Information
Objects and Services that operate on such objects (Storage, Query/Retrieve,
etc.) and one gets SOP classes and instances. All very object oriented and
initially confusing perhaps, but it provides a mechanism for specifying
conformance. From the point of view of an interpreter of a DICOM compatible
data stream this means that for a certain instance of an Information Object,
certain information is guaranteed to be in there, which is nice. As a creator
of such a data stream, one must ensure that one follows all the rules to make
sure that all the data elements from all the necessary modules are present.
Having done so one then just throws all the data elements together, sorts them
into ascending order by group and element number, and pumps them out. It is a
shame that the data stream itself doesn't reflect the underlying order in the
Information Objects, but I guess they had to maintain backward compatibility,
hence this little bit of ugliness. This gets worse when one considers how to
put more than one object in a folder inside another object.
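In code, that last step amounts to nothing more than a sort on the (group,
element) pair before the elements are written out; a sketch, with a
hypothetical element structure:

#include <stdlib.h>

struct element {
        unsigned int group, number;
        unsigned long length;
        unsigned char *value;
};

/* qsort comparator: ascending group, then ascending element number */
static int by_tag(const void *a, const void *b)
{
        const struct element *x = a, *y = b;

        if (x->group != y->group)
                return x->group < y->group ? -1 : 1;
        if (x->number != y->number)
                return x->number < y->number ? -1 : 1;
        return 0;
}

/* ... then: qsort(elements, count, sizeof(struct element), by_tag); */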
At this point I am tempted to include more details of various different
modules, data elements and transfer syntaxes, as well as the TCP/IP mechanism
for connection. However all this information is in the standard itself which is
readily available electronically from the ftp sites, and in the interests of
brevity I will not succumb to temptation at this time.
2.3 Papyrus
Papyrus is an image file format based on ACR/NEMA version 2.0. I don't
have much information about it yet, but what I do know, gleaned from Usenet and
a presentation at SCAR 94 is:
- it is from Switzerland,
- there is a library of tools available for handling it,
- it allows multiple images/file,
- it has something to do with the European RACE Telemed project,
- it stores 16 bit integers as big-endian,
and that is all for the moment ! Someone is sending me more information
Real Soon Now so stay tuned.
2.4 Interfile V3.3
Interfile is a "file format for the exchange of nuclear medicine image
data" created I gather under the auspices of the American Association of
Physicists in Medicine (AAPM) for the purpose of transfer of images of quality
control phantoms, and has been subsequently used for clinical work (please
correct me if I am wrong Trevor).
It specifies a file format composed of ascii "key-value" pairs and a
data dictionary of keys. The binary image data may be contained in the same
file as the "administrative information", or in a separate file pointed to by a
"name of data file" key. Image data may be binary integers, IEEE floating point
values, or ascii, and the byte order is specified by the key "imagedata byte
order". The order of keys is defined by the Interfile syntax which is more
sophisticated than a simple list of keys, allowing for groups, conditionals and
loops to dictate the order of key-value pairs.
Conformance to the Interfile standard is informally described in terms
of which image data types, pixel types, multiple windows and special Interfile
features (including curves) are supported, and of restriction to various
maximum recommended limits.
Interfile is specifically NOT a communications protocol and strictly
deals with offline files. There are efforts to extend Interfile to include
modalities other than nuclear medicine, as well as to keep ACR/NEMA and
Interfile data dictionaries in some kind of harmony.
A sample list of Interfile 3.3 key-value pairs is shown here to give
you some idea of the flavor of the format. The example is culled from part of a
Static study in the Interfile standard document and is not complete:
!INTERFILE :=
!imaging modality :=nucmed
!version of keys :=3.3
data description :=static
patient name :=joe doe
!patient ID :=12345
patient dob :=1968:08:21
patient sex :=M
!study ID :=test
exam type :=test
data compression :=none
!image number :=1
!matrix size [1] :=64
!matrix size [2] :=64
!number format :=signed integer
!number of bytes per pixel :=2
!image duration (sec) :=100
image start time :=10:20: 0
total counts :=8512
!END OF INTERFILE :=
One can see how easy such a format would be to extend, as well as how
it is readable and almost useable without reference to any standard document or
data dictionary.
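As an illustration of just how simple the format is to read, here is a minimal
sketch in C that splits such lines into keys and values; it knows nothing of
the data dictionary, grouping, conditionals or loops, and simply strips the
leading "!" from keys that carry one.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* strip a leading '!' and surrounding blanks */
static char *trim(char *s)
{
        char *end;

        while (*s == '!' || isspace((unsigned char)*s)) s++;
        end = s + strlen(s);
        while (end > s && isspace((unsigned char)end[-1])) *--end = '\0';
        return s;
}

int main(int argc, char *argv[])
{
        char line[256];
        FILE *f;

        if (argc < 2 || (f = fopen(argv[1], "r")) == NULL) return 1;
        while (fgets(line, sizeof line, f) != NULL) {
                char *sep = strstr(line, ":=");

                if (sep == NULL) continue;      /* not a key-value line */
                *sep = '\0';
                printf("key [%s] value [%s]\n", trim(line), trim(sep + 2));
        }
        fclose(f);
        return 0;
}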
Undoubtedly ACR/NEMA DICOM 3.0 to Interfile translators will soon
proliferate in view of the fact that many Nuclear Medicine vendors supply
Interfile translators at present.
To get hold of the Interfile 3.3 standard by ftp, see the sources and
contacts listed later in this document.
2.5 Qsh
Qsh is a family of programs for manipulating images, and it defines an
intermediate file format. The following information was derived with the help
of one of the authors (Chip Maguire, maguire@it.kth.se):
Qsh uses an ASCII key-value-pair (KVP) system, based on the AAPM
Report #10 proposal. This format influenced both Interfile and ACR-NEMA
(DICOM). The file format is referred to as "IMAGE" in some of their articles
(see references). The header and the image data are stored as two separate
files with extensions *.qhd and *.qim respectively.
Qsh is available by anonymous ftp (see Sources section). This is a
seriously large tar file, including as it does some sample images, and lots of
source code, as well as some PostScript documents. Subtrees are available as
separate tar files.
QSH's Motif-based menu system (qmenu) will work with OpenWindows 3.0 if
SUN patch number 100444-54 for SUNOS 4.1.3 rev. A is applied. The patch is
available from sunsolve1.sun.com (192.9.9.24).
The image access subroutines take the same parameters as the older
/usr/image package from UNC; however, the actual subroutines support the qsh
KVP and image data files.
The frame buffer access subroutines take the same parameters as the
Univ. of Utah software (of the mid-1970s). The design is based on the use of
a virtual frame buffer which is then implemented via a library for a specific
frame buffer. There exists a version of the display routines for X11.
Conversions are no longer supported; instead there is a commercial
product called Interformat. Interformat includes a qsh to Interfile conversion,
along with DICOM to qsh, and many others. Information is available from David
Reddy (reddy@nucmed.med.nyu.edu) (see Sources section).
[Editorial note: this seems a bit of a shame to me - hopefully the
current distribution still includes the old conversion stuff even if it is not
supported as there were lots of handy bits of information there, particularly
on driving tape drives. DAC.]
The authors of the qsh package are:
Gerald Q. (Chip) Maguire (maguire@it.kth.se)
Marilyn E Noz (noz@nucmed.NYU.EDU)
The following references are helpful in understanding the philosophy
behind the file format, and are included in postscript form in the qsh ftp
distribution:
@Article[noz88b,
Key=<noz88b>,
Author=<Noz, M.E. and Maguire, G.Q. Jr.>,
Title=<QSH: A Minimal but Highly Portable Image Display and Handling Toolkit>,
Journal=<Computer Methods and Programs in Biomedicine>,
volume=<27>,
Year=<1988>,
Pages=<229-240>
]
@Article[maguire89e,
Key=<maguire89e>,
Author=<Maguire, G.Q. Jr. and Noz, M.E.>,
Title=<Image Formats: Five Years After the AAPM Standard Format for Digital
Image Interchange>,
Journal=<Medical Physics>,
volume=<16>,
year=<1989>,
pages=<818-823>
]
END OF PART 1
--
David A. Clunie (dclunie@flash.us.com)
In sunny Riyadh, Saudi Arabia.
"I must see your DICOM 3 conformance statement before I buy."