Ceph EC Profiles In Production
| # | K+M Values | Failure Domain | Pool Type (RBD, FS, etc.) | EC Plugin | Cache Tier in Use? | Additional Comments |
|---|---|---|---|---|---|---|
| 1 | 2+1 | Host | FS | jerasure | Removed with 12.2.5 | This is my small home cluster. |
| 2 | 4+2 | Host | RGW | jerasure | No | I inherited these clusters and don't know what these do: jerasure-per-chunk-alignment=false; technique=reed_sol_van; w=8. |
| 3 | 8+3 | Host | RGW | jerasure | No | I inherited these clusters and don't know what these do: jerasure-per-chunk-alignment=false; technique=reed_sol_van; w=8. |
| 4 | 3+2 | Host | FS | jerasure | No | EC pool only assigned to the CephFS "backups" subtree. |
| 6 | 10+4 | Host | RGW | jerasure | No | It seemed like a good idea, but was not. Also using SMR, which is also a poor idea; OSDs use tons of RAM during recovery, 100 GB and more at times. |
| 7 | 5+3 | Host | RBD | jerasure | No | Works very well; lots of SSDs. |
| 8 | 22+2 | Host | CephFS | jerasure | Yes | Seen in the wild; does not really perform. Suggested to change to a sane config. |
| 9 | 4+2 | Host | CephFS | jerasure | No | |
| 10 | 5+2 | Host | CephFS | jerasure | No | For backups. |
| 11 | 6+3 | Host | CephFS | jerasure | No | Works well. HPC storage, 240 OSDs, 10 OSD servers. |
| 12 | 6+2 | Host | RBD | jerasure | Yes | With cache removal planned, may be changed to 8+3. |
| 13 | 4+2 | Host | RBD | jerasure | No | |
| 14 | 8+2 | Rack | RBD + CephFS | jerasure | No | We're migrating from 3 replicas and seem to be happy. 450+ OSDs, 24 OSDs per node, 40 Gb/s Ethernet fabric. Mix of pure HDD and NVMe WAL+DB with HDD; NVMe use determined by performance requirements. |
| 15 | 4+2 | Host | FS | jerasure | No | Works well. HPC storage, 196 OSDs, 6 OSD servers, with one more server of 32 OSDs to be added soon-ish; will soon move the metadata pool and the OSDs' WAL+DB to NVMe (currently on SSDs) for performance. |
| 16 | 6+3 | Rack | librados | jerasure | No | Has worked well so far for archival storage. 540 OSDs over 9 racks/hosts, all HDD with BlueStore on Luminous. |
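For what it's worth, the jerasure settings quoted in the 4+2 and 8+3 RGW entries above (jerasure-per-chunk-alignment=false, technique=reed_sol_van, w=8) are simply the jerasure plugin's defaults, so those profiles are effectively stock apart from k and m. Below is a minimal sketch of how a profile like the 8+3 one could be created and attached to a pool; the profile name, pool name, and PG count are placeholders, not values taken from any cluster in the table.

```sh
# Sketch: create an 8+3 jerasure profile with a host failure domain.
# technique=reed_sol_van and w=8 are the plugin defaults and could be omitted.
ceph osd erasure-code-profile set ec-8-3 \
    k=8 m=3 \
    plugin=jerasure \
    technique=reed_sol_van \
    crush-failure-domain=host

# Inspect what the profile ended up containing.
ceph osd erasure-code-profile get ec-8-3

# Create an erasure-coded pool from the profile (PG count is a placeholder).
ceph osd pool create default.rgw.buckets.data 256 256 erasure ec-8-3
```

Keep in mind that with a host failure domain each of the k+m chunks needs its own host, and very wide stripes (such as the 22+2 entry) also amplify recovery traffic and small-write overhead, which is likely why that configuration does not perform well.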
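For the CephFS entries that restrict erasure coding to part of the tree, such as the 3+2 "backups" entry, the usual pattern is to add the EC pool as an extra data pool and pin a directory to it with a file-layout attribute. A sketch, assuming hypothetical pool, filesystem, and mount-point names:

```sh
# EC pools need overwrites enabled before CephFS (or RBD) can use them
# for data (BlueStore OSDs, Luminous or later).
ceph osd pool set cephfs_backups_ec allow_ec_overwrites true

# Add the EC pool as an additional data pool of the filesystem.
ceph fs add_data_pool cephfs cephfs_backups_ec

# Pin the "backups" directory to the EC pool; files created under it
# from now on will place their data objects in that pool.
setfattr -n ceph.dir.layout.pool -v cephfs_backups_ec /mnt/cephfs/backups
```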
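The RBD entries (5+3, 6+2, 4+2, 8+2) generally follow the same pattern: image metadata stays in a small replicated pool and only the data objects land on the EC pool via --data-pool. A sketch with placeholder pool and image names:

```sh
# Overwrites must be enabled on the EC pool for RBD as well.
ceph osd pool set rbd_ec_data allow_ec_overwrites true

# The image header and metadata live in the replicated 'rbd' pool;
# data objects are written to the erasure-coded data pool.
rbd create --size 100G --data-pool rbd_ec_data rbd/vm-disk-01
```

The entries that still list a cache tier reflect the older, pre-Luminous approach of putting a replicated cache pool in front of the EC pool; allow_ec_overwrites plus --data-pool has largely made that unnecessary, which matches the 6+2 entry's plan to remove its cache.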