The BaBar Long Term Data Preservation and Computing Infrastructure
Marcus Ebert (mebert@uvic.ca)
CHEP 2024, October 21
The BaBar Experiment
BaBar collaboration: 1993 - ?
BaBar Status
What is needed for the new system?
Analysis Environment - Overview
Isolation
login node -> interactive BaBar VM (no access to it from anywhere else)
BaBar-To-Go is an alternative to using the UVic system.
Batch System
very limited access to anything outside, no public IP address
login node and batch system head node
OpenStack cloud providing worker nodes using the BaBar VM image
OpenStack worker-node VMs are started on demand by cloudscheduler (https://csv2.heprc.uvic.ca/public/), as sketched below.
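For illustration only, booting one such worker node by hand with the standard OpenStack CLI might look like the following; the image, flavor, network, and key names are placeholders, and in production cloudscheduler issues the equivalent API calls automatically:

    # start a single BaBar worker node from the prepared VM image
    # (all names are examples, not the actual UVic configuration)
    openstack server create \
        --image babar-worker \
        --flavor c8-16gb \
        --network babar-private \
        --key-name babar-admin \
        babar-worker-001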
Shared File System
NFS mounts using AFS paths, shared by the login node, the batch system head node, the OpenStack worker nodes, and the interactive BaBar VM.
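As a sketch of the idea, each node can mount the NFS exports at the original AFS-style paths so that the legacy BaBar software finds its files where it expects them; the server names, export paths, and mount points below are purely illustrative:

    # /etc/fstab sketch for worker nodes and the interactive VM (names illustrative)
    nfs-home.heprc.example:/export/home        /afs/slac.stanford.edu/u        nfs  rw,hard  0 0
    nfs-frame.heprc.example:/export/framework  /afs/slac.stanford.edu/package  nfs  ro,hard  0 0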
User Accounts and Management
NIS server providing user accounts and management for the login node, the batch system head node, the OpenStack worker nodes, and the interactive BaBar VM.
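A minimal sketch of the client-side configuration that binds a node to such a NIS server; the NIS domain and server names are placeholders:

    # /etc/yp.conf on each client node (placeholder names)
    domain babar.heprc server nis.heprc.example

    # /etc/nsswitch.conf entries so accounts resolve from local files first, then NIS
    passwd: files nis
    shadow: files nis
    group:  files nis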
Data Access
XRootD redirector and XRootD server providing data access for the login node, the batch system head node, the OpenStack worker nodes, and the interactive BaBar VM.
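A minimal sketch of how an XRootD redirector and a data server are typically paired; host names, port, and export path are examples, not the actual UVic configuration:

    # redirector (example host: xrd-mgr.heprc.example)
    all.role manager
    all.manager xrd-mgr.heprc.example:1213
    all.export /babar

    # data server
    all.role server
    all.manager xrd-mgr.heprc.example:1213
    all.export /babar
    xrd.port 1094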
Data
Data Access
Data at GridKa, accessed via streaming...
Doable, but processing is very slow
---> use an XRootD proxy system
Data Access
GridKa XRootD access point
XRootD redirector with four XRootD disk proxies
interactive BaBar VM and worker nodes on OpenStack
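A minimal, XCache-style configuration sketch for one of the disk proxies, fetching from the GridKa access point and caching on local disk; host names, port, cache path, and watermarks are illustrative assumptions:

    # XRootD caching proxy sketch (one of the disk proxies; names illustrative)
    all.role server
    ofs.osslib    libXrdPss.so
    pss.cachelib  libXrdFileCache.so
    pss.origin    gridka-xrootd.example:1094
    oss.localroot /data/xcache
    pfc.diskusage 0.90 0.95
    all.export /babar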
Documentation
maintain content for historic purposes
main BaBar documentation
Collaboration Tools
---> switched to CERN Indico
---> moved HyperNews to UVic, made it read-only, and removed all mailing features
-> still readable, as an archive of all past communication
---> replacement: CERN e-groups
Redundancy/Reliability
Hardware overview:
Redundancy/Reliability:
Redundancy/Reliability
login machine VM, NIS server VM, interactive VM, XRootD redirector VM:
    hardware RAID1 for OS, ZFS mirror for data disks
XRootD proxy server:
    hardware RAID1 for OS, ZFS raidz3 for data disks
Web documentation VM, Wiki VM, HyperNews (HN) VM:
    hardware RAID1 for OS, ZFS raidz3 for data disks
4 NFS servers ($HOME, job output, framework, documentation):
    hardware RAID1 for OS, ZFS raidz2/3 for data disks
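For illustration, creating the kinds of ZFS pools described above could look like the following; pool and device names are examples only:

    # raidz3 pool for a proxy or NFS server's data disks (device names are examples)
    zpool create data raidz3 sdb sdc sdd sde sdf sdg sdh sdi

    # mirrored pool for a VM host's data disks
    zpool create vmdata mirror sdb sdc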
Summary
To run an old and outdated analysis environment on current infrastructure:
Conclusion
Running analyses in an old and outdated environment is possible, and it can be done safely and well using current infrastructure solutions such as clouds.
Big thanks to the GridKa, CERN, IN2P3, INSPIRE, Caltech, and UVic HEP-RC groups!
Other Collaboration Tools
new system for active analyses and management:
Open Data
‘BaBar Associates’ open-access:
Access to the BaBar framework: the analysis system at UVic, or BaBar-To-Go (VM) at home