Figures

==============================================================================

  Specifications
 -----------------

   Crucial numbers we must plan for or achieve

   a)number of detectors
   b)number of veto elements
   c)sampling rate, resolution
     (see also  Ray Bunker's Digitizer Page)
   d)`data' event size
   e)number of `slow control/monitoring' bytes per time interval
   f)trigger rate (including calibrations)
   g)deadtime
   h)filter reductions at major rejection points
   i)MC generation times
   j)data production times
   k)disk space and CPU
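The numbers in this list combine into simple capacity estimates. As an illustration, a back-of-envelope daily-data-volume calculation, with all input values made up (they are not measured or agreed experiment parameters):

```python
# Back-of-envelope estimate of daily data volume from the numbers above.
# All inputs are placeholder values, not measured experiment parameters.

def daily_volume_bytes(trigger_rate_hz, event_size_bytes,
                       livetime_fraction, slow_control_bytes_per_s):
    """Rough bytes/day written to disk from triggers plus slow control."""
    seconds_per_day = 86400
    trigger_bytes = (trigger_rate_hz * event_size_bytes
                     * livetime_fraction * seconds_per_day)
    monitor_bytes = slow_control_bytes_per_s * seconds_per_day
    return trigger_bytes + monitor_bytes

# Example with made-up numbers: 1 Hz triggers, 100 kB events, 90% livetime,
# 1 kB/s of slow control -- about 7.9 GB/day.
volume = daily_volume_bytes(1.0, 100_000, 0.9, 1_000)
print(volume / 1e9, "GB/day")
```

The same arithmetic, run over the planned run modes, gives first-order disk and transfer-rate requirements for item k).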

  Command and Control Vision
 -----------------------------

     z)Simple big picture statement of what we are trying to do.

     By `define' we generally mean a practical definition, plus a list of
 software requirements, such as which record blocks are `guaranteed', the
 amount of space guaranteed for their contents, database requirements, etc.

     (see also EPICS, and some  EPICS Documentation)

     a)Define Run and `Super-run'
          i)Practical Definition
         ii)Definition in terms of constants and record structure
        iii)Offline processing controlled in a philosophically identical
            way, but with the option of batch control and with different
            designations of input and output streams than online
     b)Define event 
     c)Define `record'
     d)Run Control Panel - Remotely Available, with one Master and
                           Multiple Spies
          i)Configure Run
         ii)Start Run
        iii)Reconfigure Run
         iv)Pause Run
          v)Clear Various Run Hangs Without Boot
         vi)Boot Run Control - want this to be fast (panic button) 
        vii)Continue Run
       viii)End Run
     e)Allow easy configuration of alternate run types, such as
          i)Random Trigger runs
         ii)Asynchronous Calibration runs
        iii)Partial Detector Configuration Runs
         iv)Runs with a few special detectors
          v)offline processing of MC and data with different constant files
            and algorithms
     f)Allow but log configuration changes, ultimately from simple ASCII files
     g)Version Control and Code Management similar to CVS, with
       ability to recover the code as of a given time
     h)Allow easy tracing of which events, detector configurations, constants
       files, and simulation algorithms correspond
       to a given date and time, or run and event number
     i)some kind of database system
     j)some kind of GUI (TCL? Netscape? Java?)
     k)some kind of data analysis system (Matlab? PAW? ROOT?)
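The run-control panel in item d) is essentially a state machine. A minimal sketch of the states and transitions implied by i)-viii), where the state names and allowed transitions are illustrative assumptions, not a fixed design:

```python
# Minimal sketch of the run-control states and panel commands listed above.
# State names and allowed transitions are illustrative, not a fixed design.

TRANSITIONS = {
    ("idle",       "configure"):   "configured",
    ("configured", "start"):       "running",
    ("configured", "reconfigure"): "configured",
    ("running",    "pause"):       "paused",
    ("paused",     "continue"):    "running",
    ("running",    "end"):         "idle",
    ("paused",     "end"):         "idle",
}

class RunControl:
    def __init__(self):
        self.state = "idle"

    def command(self, cmd):
        """Apply a panel command; `boot' is the panic button from any state."""
        if cmd == "boot":          # fast reset, clears hangs without a reboot
            self.state = "idle"
            return self.state
        key = (self.state, cmd)
        if key not in TRANSITIONS:
            raise ValueError(f"cannot '{cmd}' while {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

rc = RunControl()
for cmd in ["configure", "start", "pause", "continue", "end"]:
    print(cmd, "->", rc.command(cmd))
```

The master/spy distinction would then amount to whether a remote panel may issue commands or only observe the state.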

===============================================================================

  What is it that we are commanding and controlling

    Streams of Data Flow
        a)From the active experiment at SUF or Soudan
        b)From the test facilities
        c)From MC generation

    Capacities of Data Flow
        a)bottlenecks
        b)livetime
        c)transfer rates
        d)MC generation times

    Filters of Data - filters are influenced by the Run Mode
        a)Low level triggers
        b)Crate level
        c)Event Level - online
        d)Production Level - offline
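These filter stages chain together, and the `filter reductions at major rejection points' from the Specifications list fall out of bookkeeping the pass counts. A sketch, where the predicates and event fields are placeholders:

```python
# Sketch of chained filter stages with pass-count bookkeeping, so the
# reduction at each rejection point can be monitored.  Stage names follow
# the list above; the predicates and event fields are placeholders.

def run_filters(events, stages):
    """Pass events through (name, predicate) stages; return survivors, counts."""
    counts = {}
    survivors = events
    for name, keep in stages:
        survivors = [ev for ev in survivors if keep(ev)]
        counts[name] = len(survivors)
    return survivors, counts

stages = [
    ("low_level_trigger", lambda ev: ev["amplitude"] > 10),
    ("crate_level",       lambda ev: ev["n_channels"] >= 2),
    ("event_level",       lambda ev: not ev["veto"]),
]

events = [
    {"amplitude": 50, "n_channels": 3, "veto": False},
    {"amplitude":  5, "n_channels": 4, "veto": False},
    {"amplitude": 30, "n_channels": 1, "veto": False},
    {"amplitude": 40, "n_channels": 2, "veto": True},
]
kept, counts = run_filters(events, stages)
print(counts)   # survivors after each stage
```

Since the filters depend on the Run Mode, the `stages` list itself would be part of the run configuration.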

    Event Types - may be dependent on what filters are satisfied
        a)orderly record structure
        b)`data events' with variable records associated
        c)monitor events
        d)remote site events
        e)MC `God' Records
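One way to satisfy a) and b) together is to make every event an ordered container of typed records, so `data events' can carry a variable record list while monitor events and MC `God' records reuse the same structure. A sketch, with illustrative record and event type names:

```python
# Sketch of an event as an ordered collection of typed records, so that
# `data events' can carry a variable record list while monitor events and
# MC `God' records use the same container.  Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Record:
    rec_type: str          # e.g. "trigger", "pulse", "veto", "god"
    payload: bytes

@dataclass
class Event:
    event_number: int
    event_type: str        # e.g. "data", "monitor", "remote", "mc"
    records: list = field(default_factory=list)

    def add(self, rec_type, payload=b""):
        """Append a record; record order is preserved."""
        self.records.append(Record(rec_type, payload))

ev = Event(1, "data")
ev.add("trigger")
ev.add("pulse", b"\x00\x01")
print([r.rec_type for r in ev.records])
```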

    Reservoirs of Data
        a)raw & reduced data for publication
        b)raw & reduced data for calibration
        c)databases of slow control information
        d)databases that describe MC generation information

    Monitors of the Data Flow/Filtering
        a)Scoreboards
        b)Slow control display
        c)Production status
        d)MC generation status

    Finished Products from Data
        a)plots
        b)calibration constants

===============================================================================

  The programs that do all of the above

  a)Those that maintain descriptions of the configurations of stuff
  b)Those that take the descriptions and do `traffic control', like
     searching for the addresses of crates, begin-of-run records, etc.
  c)Those that grab data and move it around
  d)Those that take data and reconstruct something from it
  e)The algorithms that actually do the reconstruction
  f)Those that analyze the reconstructed stuff and decide whether or not
    to alter the status of an event or run (like, trashcan the event,
    change status of the event to `only trigger record retained', etc)
  g)Those that calibrate the detector
  h)Those that do the physics analysis

===============================================================================

  From Steve Eichblatt ----
  --------------------------

  Need a `User Document' that summarizes our goals/needs/
targets for the DAQ.

   0)What are the run modes?

   1)Dead Time of X at Y trigger rate 
     a)Normal Data Taking
     b)Asynchronous Calibration

   2)Remote Control

   3)Version Control & Recovery  (Code Management)

   4)DAQ for ..
      i)Tunnel
     ii)Mine
    iii)Test Sites  ?

   5)Slow Control Scripting Language, plus Batch

   6)GUI... our own, or what?

   7)Documentation
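The dead-time target in item 1 can be sanity-checked with the standard formula for a non-paralyzable DAQ: at trigger rate r with per-event dead time tau, the dead fraction is r*tau/(1 + r*tau). The numbers below are placeholders, not agreed targets:

```python
# Sanity check for item 1: for a non-paralyzable DAQ with per-event
# dead time tau, the dead fraction at trigger rate r is r*tau/(1 + r*tau).
# The numbers below are placeholders, not agreed targets.

def dead_fraction(rate_hz, tau_s):
    """Dead-time fraction for a non-paralyzable system."""
    return rate_hz * tau_s / (1.0 + rate_hz * tau_s)

# e.g. 10 Hz triggers with 10 ms readout -> about 9% dead time
print(dead_fraction(10.0, 0.010))
```

Inverting the formula for each run mode (normal data taking vs. asynchronous calibration) turns a dead-time goal X into a maximum allowed per-event readout time at rate Y.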


  It's straightforward and natural for Steve to
generate a list of wires that run into the DAQ; he
suggests a similar kind of routing and organization
for the entire DAQ.