EPICS V4, FACE-TO-FACE MEETING, 11-MAR-2014, DAY 1

===============================================

AGENDA

========

areaDetector/NTNDArray related topics day

-------------

9:00 - find room, welcome, etc.

9:30 - areaDetector intro - Marty & David

10:00 - areaDetector integration/plugins - David

10:45 - areaDetector V4 demo - Marty

11:00 - application of NTNDArray for two use cases - Daron Chabot

            - "batch", "multi-frame", dark frame operations,

frame-rate decimation?

            - parallel (forked) plugin chains?

            - performance?

11:45 - NTNDArray - David

12:30 - lunch

13:15 - NTNDArray cont'd and revision of (other) NT types,

        union application, fixed-size arrays (*David*, Matej)

        Error and units (*Greg*)

        Proposal for a new type for static named values (*Greg*)

16:30 - areaDetector related work planning, establish priorities

17:00 - (end)

MINUTES

========

Present: TK, GW, MK, DH, MS, MK, RL, DC, BD

Chair: BD/GW

Scribe: GW

Observers: Suzanne Gysin (ESS), Gary Trahern (ESS), Kevin Meyer (Cosylab), Miha Vitorovič (Cosylab), Miha Reščič (Cosylab), Vasu Vuppala (MSU), et al.

9:30 - areaDetector intro - Marty & David

********

Talk: Marty on AreaDetector

********

V4 support is directed at “layer 3” of MR’s 6-layer model: that is, the plugins layer.

asynDriver allows device drivers to communicate via standard interfaces.

Action item (AI) on MK: put the pvAsyn areaDetector support into the EPICS V4 repository, together with the talk.

********

Talk: David Hickin, areaDetector processing pipeline and EPICS V4 work and status

********

A number of AD/V4 integration matters were brought up. Priority may have to be given to distributing NTNDArray over pvAccess, perhaps for the message service in particular.

DH: Would like to use pvDatabase as basis of processor.

[Discussion of the requirement, and possible solutions, for getting sizes and other metadata of parts of an NTNDArray. The suggestion is that pvRequest should support requests for metadata only.]
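For illustration, such a request could reuse the existing pvRequest field-selection syntax to ask for only the metadata fields of an NTNDArray. The field names below are taken from the NTNDArray layout under discussion and are illustrative rather than settled:

    field(dimension,uniqueId,codec)

A client passing a request string like this (for example via pvget's -r option) would receive the listed fields but not the bulk pixel data.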

MS: Notes that all but one of the attributes are largely static-valued; maybe it would be better to distinguish the fast-moving from the largely static [presumably to optimize transfer efficiency by not transferring slow-moving fields].

DH/UP/GW: At the network bandwidth limit, the client-server exchange is not robust. We do not yet know why, but DH is looking at it, and based on the results we will know whether direct support for a robust configuration is required in pvAccess.

Performance of compression: with “blosc” + LZ compression, DH gets 140% of the throughput achieved without compression. Note, though, that one needs a many-core processor to avoid being CPU bound.

RESOLUTION: We shall add bitset support for arrays to the pvAccess specification, so that changes to individual array elements can be flagged. It is acceptable for an implementation not to support this yet, other than the trivial case of an all-ones bitset (the whole array marked as changed).

*******

Afternoon session. Marty’s presentation about pvAsyn and pvDatabase

*******

- connecting to asyn ports to generate V4 records

- demo of connecting to an image server via pvAsyn, using different ports

- direct connection to asyn; no EPICS V3 records involved

Question: is the setting of parameters guaranteed to be atomic (e.g., changing the ROI in the middle of an acquisition)?

MK: Yes, within the limits of an AD plugin. What a camera vendor’s driver does is a different story.

Question (DC): what makes this a record?

MK: this fits the definition of a pvRecord.

GW: are pvDatabase and pvIOC different?

MK: pvIOC is not involved here. pvIOC is gone…

MK: pvDatabase is a full implementation of a pvAccess provider (e.g., it implements monitors, etc.).

Discussion about the advantages and disadvantages versus V3 records, e.g., having 2000 fields instead of 2000 records.

MK: all of the information could be given to a client that knows just the port name.

GW: this might enable generic image handling

DC: what about the huge base of existing support? Is it possible to hook into existing asyn support?

RL: this is like “plug-and-play” or zeroconf for devices.

(Discussion) Are there clients for this? No, but they could be generated.

BD: Issue to resolve: formalizing the interface to devices. Attribute groups for metadata, standard sets of attributes for reciprocal space, histogram data, etc.

RL: looks like an NT type for asyn parameters is needed.

BD: NTNDArray is fine except that it is flat. We need a way to structure it. Like tags in ChannelFinder.

BD: change “name, value” to “name, value, tag”? (NTNameValue)

MS: NTNameValue has the problem that all values must be of the same type.
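For reference, a rough sketch of the NTNameValue layout (optional fields omitted), which illustrates the point: the value field is a single scalar array, so every value must be of the same scalar type:

structure NTNameValue
    string[]   name
    scalar_t[] value      (one scalar array; all values share its type)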

MS: can the attributes (list) change?

Suggestion:

structure NTAttribute
    string name
    any value
    opt string[] tags
    opt string description

DH: how to put in source types? (EPICS PV, function, constant, etc.)

DH: two questions: do we use NTAttribute? And how do we attach the attributes:

- as top-level structures

- as a structure below the top level

- by replacing the scalar array with a structure array

Arrays of structures vs. structure of arrays.
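A sketch of the two layouts being contrasted, using the proposed NTAttribute fields (names illustrative, not a final design):

Array of structures (one element per attribute):
    NTAttribute[] attribute

Structure of arrays (parallel arrays, indexed together):
    structure attribute
        string[] name
        any[]    value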

UP: prefers an array of structures. You can hide the low-level implementation behind a public API (the low level can be done either way).

BD: what are the source types used for?

UP, DH: areaDetector uses and needs them.

UP,GW: select the one which has better performance

BD: we know that NDImage is wrong, but we do not know yet what is right. Let us take the time to learn what is the right combination of data, and fix it when we know. To be able to do this, we need the tags.

Tags are typically just strings. DH suggests coding tags as “tag=value” (e.g., “sourceType=drv”).
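An illustrative instance of a tagged attribute under this convention (the name, value, and tag shown are hypothetical, loosely modelled on a typical areaDetector attribute):

NTAttribute
    string   name          "ColorMode"
    any      value         (int) 0
    string[] tags          ["sourceType=driver"]
    string   description   "Color mode of the image"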

MS: we can use only NTAttribute (drop “Tagged”)

---

RESOLUTION: use the NTAttribute structure (as above) to attach attributes to NTNDArray.

For NTNDArray, each instance of NTAttribute must have additional fields named “sourceType” (int) and “source” (string), and must have a description field.
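Putting the proposal and the resolution together, the per-attribute structure used inside NTNDArray would look roughly like this (a sketch, not a final specification):

structure (NTNDArray attribute)
    string       name
    any          value
    string       description
    int          sourceType
    string       source
    opt string[] tags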

********

Talk: Daron - Data Management

********

Daron will meet with Matej to discuss writing this service.