IEEE Transactions on Nuclear Science, Vol. NS-26, No. 4, August 1979
The Crystal Ball Data Acquisition System
R. Chestnut, C. Kiesling, E. Bloom, F. Bulos, J. Gaiser, G. Godfrey, M. Oreglia
SLAC, Stanford University

R. Partridge, C. Peck, F. Porter
California Institute of Technology

D. Aschman, M. Cavalli-Sforza, D. Coyne, H. Sadrozinski
Princeton University

W. Kollmann, M. Richardson, K. Strauch
Harvard University

R. Hofstadter, I. Kirkbride, H. Kolanoski, A. Liberman, J. O'Reilly, J. Tompkins
Stanford University, HEPL

T. Burnett
University of Washington, Seattle
ABSTRACT

The data acquisition system for the Crystal Ball project at SLAC is described. A PDP-11/55 running RSX-11M, connected to the SLAC Triplex, is the basis of the system. A "physics pipeline" allows physicists to write their own equipment-monitoring or physics tasks which require event sampling. As well, an interactive analysis package (MULTI) is in the pipeline. Histogram collection and display on the PDP are implemented using the Triplex histogramming package. Various interactive event displays are also implemented.
INTRODUCTION

The Crystal Ball is a non-magnetic detector system with a large solid angle acceptance. It emphasizes the complete detection and precise measurement of energy depositions of all particles produced in positron-electron annihilation events. The detector has five major components:

1. A compact 672-segment NaI detector of 16 radiation lengths which is almost spherical and covers 94% of the total solid angle.
2. A set of cylindrical magneto-strictive spark chambers and multi-wire proportional chambers around the beam pipe to define charged particle trajectories.
3. End caps of magneto-strictive spark chambers, and mildly segmented (60 segments) NaI crystals closing the solid angle to 98% of the full sphere.
4. A multicounter luminosity monitor to measure the beam luminosity to about 2%.
5. A muon-hadron selector consisting of proportional tube arrays sandwiched between several layers of steel plates, covering about 15% of the solid angle.
The detector system was designed and built by the Caltech, Harvard, Princeton, Stanford, SLAC collaboration and is now running this experiment at SPEAR (Stanford Positron Electron Asymmetric Rings). An online computer system was planned as an integral part of the experiment. The design goals of the data acquisition system were:

- flexible computer access to the hardware using CAMAC;
- read-out and logging of data onto tape;
- a diagnostic tool so that any physicist in the group could easily monitor some portion of the equipment;
- an accurate overview of the status of the detector and electronics at any time;
- as much physics analysis online as possible, to efficiently monitor the experiment's progress.
HARDWARE

A PDP-11/55 computer with the full complement of core, including 64 kBytes of fast core, and a fast floating-point arithmetic unit was chosen to meet these goals. This configuration is ten percent faster than the similarly equipped PDP-11/70, according to Digital Equipment Corporation benchmarks. Peripheral devices include two 1600 b.p.i. magnetic tape drives, an RK07 (28 Megabyte), an RK06 (14 Megabyte) and two RK05 disk drives, a Versatec printer/plotter, three Tektronix 4013 displays, and an IBM System/7 which serves as a link to the
SLAC Triplex (two IBM 370/168 and one IBM 360/91 computers, loosely coupled). The external electronics are connected to the online computer via CAMAC.
The CAMAC controllers are mounted in a Unicrate (a modified CAMAC crate with a DEC Unibus backplane), connected to the UNIBUS of the PDP-11, and communicate with the 13 CAMAC crates embedded in the electronics. The controllers in use are ARBOLA (6), which allows all legal CAMAC commands, LABRI (1), which services the SLAC priority encoder module (2), and MAESTRO, which services the special spark chamber readouts. As well, the SPEAR CRT CAMAC memory is used for graphics.
SOFTWARE

System

The DEC system RSX-11M v3.1 (4) is the basis of the data acquisition system. This is a priority-driven, task-oriented system. In our system, all external devices are accessed via loadable drivers, including the sophisticated CAMAC interface and the System/7 fast data link. Memory partitions have been defined so that the RSX-11M system itself, the Fortran resident library, all drivers, the necessary system partitions and common blocks occupy the lower 64K words of memory. Fixed data-acquisition tasks (those always waiting for an interrupt) occupy a further 4K, leaving 56K for active data acquisition tasks. The lower 32K words of memory are fast core; here reside the system, the Fortran resident library, the CAMAC driver, and the data I/O task. Deadtime minimization was the primary concern when the decision was made on how to parcel out the fast core.
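As a quick cross-check of the layout quoted above, the following sketch (in C, purely illustrative and not part of CBOLS) simply tabulates the partition sizes from the text; the lower 32K words of the first region are the fast core.

    #include <stdio.h>

    /* Illustrative tabulation of the CBOLS memory layout described in the
     * text; sizes are in K words (1K = 1024 16-bit words).                */
    struct partition { const char *contents; int kwords; };

    static const struct partition layout[] = {
        { "RSX-11M, Fortran resident library, drivers, system partitions, commons", 64 },
        { "fixed data-acquisition tasks (interrupt waiters)",                         4 },
        { "active data-acquisition tasks",                                           56 },
    };

    int main(void)
    {
        int total = 0;
        for (unsigned i = 0; i < sizeof layout / sizeof layout[0]; i++) {
            printf("%3dK words  %s\n", layout[i].kwords, layout[i].contents);
            total += layout[i].kwords;
        }
        printf("total: %dK words\n", total);   /* 124K words of 16-bit memory */
        return 0;
    }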
With few exceptions, all data acquisition tasks are written in Fortran. The DEC Fortran-4 Plus compiler generates nicely optimized code, so that further optimization becomes unprofitable. As well, program compatibility with subroutines developed for off-line analysis on the SLAC Triplex is desirable. Program development can take place during data acquisition and degrades the effectiveness of the latter only marginally if the priority of the system utilities lies below that of the data acquisition system.
CAMAC Driver

All CAMAC functions are included in the driver, which adheres religiously to DEC specifications for an I/O driver. The QIO operations included are classified as follows:
1) Non-interrupt data and command transfer.
   Programmed Data Transfer (PDT) and Direct Memory Access (DMA) I/O packets are queued and the task waits on completion;
   The task is generally checkpointable;
   The PDT may be a long list of CAMAC commands.

2) Connect to Interrupt (CCI)
   Issue one I/O packet to get permanently connected to an interrupt;
   Task must henceforth only wait for a flag;
   Task is non-checkpointable.

3) Connect to Interrupt (ICN)
   Issue one packet to get permanently connected to an interrupt;
   Task must henceforth only wait for a flag;
   Task is checkpointable (IOC=0);
   (slower than CCI)

4) Connect to Interrupt and CAMAC data and/or command transfer (CPT, CDM)
   Perform action on interrupt, set flag;
   Task is non-checkpointable;
   Two-packet technique.

5) Special Calibration QIO (burst QIO)
   One I/O packet performs 'n' DMAs;
   Can service a 2 kHz interrupt rate.

6) Unsolicited interrupt killer
   Clear and disable goofy LAMs (needed because of hardware latch/encoder priority).
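To make the first class above concrete: a CAMAC command is addressed by crate, station (N), subaddress (A) and function code (F). The sketch below (in C, purely illustrative) shows how a task might assemble a PDT command list and hand it to the driver as a single request. The structure layout, the camac_qio_list entry point and the module addresses are assumptions; the real driver is reached from Fortran through RSX-11M QIO directives.

    #include <stdint.h>
    #include <stddef.h>

    /* One CAMAC command: crate, station N, subaddress A, function F,
     * plus a data word for write functions (F = 16..23).              */
    struct camac_cmd {
        uint8_t  crate;   /* crate number on the branch          */
        uint8_t  n;       /* station, 1..23                      */
        uint8_t  a;       /* subaddress, 0..15                   */
        uint8_t  f;       /* function code, 0..31                */
        uint16_t data;    /* outgoing data for write functions   */
    };

    /* Hypothetical entry point standing in for the driver's QIO request:
     * queue the whole list as one packet and block the calling task
     * until completion, returning read data in 'readback'.             */
    extern int camac_qio_list(const struct camac_cmd *list, size_t ncmds,
                              uint16_t *readback);

    int read_adc_bank(uint16_t *out)
    {
        /* An example PDT list: clear an ADC module, then read four channels. */
        struct camac_cmd list[] = {
            { 1, 5, 0,  9, 0 },   /* F9 A0: clear the module        */
            { 1, 5, 0,  0, 0 },   /* F0 A0..A3: read four channels  */
            { 1, 5, 1,  0, 0 },
            { 1, 5, 2,  0, 0 },
            { 1, 5, 3,  0, 0 },
        };
        return camac_qio_list(list, sizeof list / sizeof list[0], out);
    }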
The 'Connect to Interrupt' functions are necessary as well as cute. The mean time from interrupt to service after having connected to an interrupt is 150 microseconds, as opposed to 1.5 milliseconds for an I/O packet to be issued. A given task may connect to up to 12 interrupts at one time.
CBOLS System Structure.

The data acquisition system itself (CBOLS, the Crystal Ball OnLine System) is, in effect, a multi-level, dispatcher-oriented system. The Input/Output task runs at very high priority (245 of a maximum 250), using a double buffer to overlap input and output. This task reads the approximately 1500 16-bit parameters from CAMAC and writes them to tape and/or the System/7. Large blocks of data are passed via common blocks; global flags are used for task synchronization.
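The double-buffered hand-off between the CAMAC read-out and the tape/link output can be pictured with a small sketch. This is a generic ping-pong buffer in C, not the CBOLS code; the buffer size, routine names and spin-wait synchronization are assumptions (the real tasks would suspend on RSX-11M event flags rather than busy-wait).

    #include <stdint.h>

    #define NPARAM 1500               /* ~1500 16-bit parameters per event */

    static uint16_t buf[2][NPARAM];   /* ping-pong event buffers           */
    static volatile int full[2];      /* "global flags": buffer i is full  */

    /* Stand-ins for the real CAMAC read-out and tape/System-7 output. */
    extern void camac_read_event(uint16_t *dst, int nwords);
    extern void write_tape_and_link(const uint16_t *src, int nwords);

    /* High-priority input side: always fill the buffer that the output
     * side is not currently draining.                                   */
    void input_task(void)
    {
        int i = 0;
        for (;;) {
            while (full[i])            /* wait until output has drained it */
                ;
            camac_read_event(buf[i], NPARAM);
            full[i] = 1;               /* hand it over to the output side  */
            i ^= 1;                    /* switch to the other buffer       */
        }
    }

    /* Output side: drain whichever buffer is full, overlapping the write
     * with the next CAMAC read-out.                                      */
    void output_task(void)
    {
        int i = 0;
        for (;;) {
            while (!full[i])
                ;
            write_tape_and_link(buf[i], NPARAM);
            full[i] = 0;
            i ^= 1;
        }
    }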
User Interface.

A 16-word 'button board', consisting of a collection of switches, thumb wheels, and LAM-producing buttons, is used to control the experiment. Here the experimental configuration, I/O configuration, pipeline configuration, type of event display and histogram display selection are determined, as well as functions such as START RUN, END RUN, STOP, CONTINUE, etc.
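As an illustration of how one such control word might be turned into run-control actions, here is a hypothetical decode in C. Only the function names come from the text; the bit assignments are invented.

    #include <stdint.h>

    /* Run-control functions selected from the button board (names from the
     * text; the bit encoding below is invented for illustration only).     */
    enum run_cmd { CMD_NONE, CMD_START_RUN, CMD_END_RUN, CMD_STOP, CMD_CONTINUE };

    /* Hypothetical decode of one LAM-producing button word. */
    enum run_cmd decode_button_word(uint16_t w)
    {
        if (w & 0x0001) return CMD_START_RUN;
        if (w & 0x0002) return CMD_END_RUN;
        if (w & 0x0004) return CMD_STOP;
        if (w & 0x0008) return CMD_CONTINUE;
        return CMD_NONE;
    }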
Pipeline.

The pipeline operates on a sampling basis and consists of a pipeline driver and a series of analysis tasks. The first pipeline task converts the digital data to energies; then, if the data represents a physics event (as opposed to a Xenon-flasher calibration event or an event triggered by the luminosity counters), it finds connected energy regions and tags tracks as charged or uncharged.
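Finding 'connected energy regions' amounts to grouping crystals above an energy threshold into clusters of neighbouring modules. The sketch below shows the generic idea as a flood fill over an adjacency table in C; the neighbour table, the threshold and the assumed maximum of 12 neighbours per crystal stand in for the real Crystal Ball geometry and are not taken from the paper.

    #define NCRYSTAL 672              /* main NaI ball segmentation         */
    #define MAXNEIGH 12               /* assumed bound on neighbours/crystal */

    /* neighbours[i][k] lists the crystals adjacent to crystal i; filled
     * elsewhere from the detector geometry (not reproduced here).        */
    extern int neighbours[NCRYSTAL][MAXNEIGH];
    extern int nneigh[NCRYSTAL];

    /* Assign a region number to every crystal above threshold so that
     * crystals in the same connected region share a number; returns the
     * number of regions found.  region[i] = -1 for crystals below the cut. */
    int find_regions(const float energy[NCRYSTAL], float threshold,
                     int region[NCRYSTAL])
    {
        int stack[NCRYSTAL];
        int nregions = 0;

        for (int i = 0; i < NCRYSTAL; i++)
            region[i] = -1;

        for (int seed = 0; seed < NCRYSTAL; seed++) {
            if (energy[seed] < threshold || region[seed] >= 0)
                continue;
            /* Flood fill a new region starting from this seed crystal. */
            int top = 0;
            stack[top++] = seed;
            region[seed] = nregions;
            while (top > 0) {
                int c = stack[--top];
                for (int k = 0; k < nneigh[c]; k++) {
                    int nb = neighbours[c][k];
                    if (energy[nb] >= threshold && region[nb] < 0) {
                        region[nb] = nregions;
                        stack[top++] = nb;
                    }
                }
            }
            nregions++;
        }
        return nregions;
    }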
The pipeline driver then sends non-physics data to the proper monitoring task (if running), and physics data sequentially to selectable tasks (PHYS01, PHYS02 ...), which are physicist-written and can be diagnostic tasks, test