When you’re designing a memetic system, a memeplex, to get people to behave a certain way, and to police one another into behaving in that certain way, the bugaboo is the fraction of people who don’t do what you want, who don’t internalize the memetic system.
Again, it’s the non-coöperators, the ones who don’t do what they’re supposed to unless forced, who don’t bully other people who fail to participate, or who even, horror of horrors, undo the work of the people who are on board with the system.
At some level, say around 5%, you have to employ a police force to manage the rebels: actively patrolling, locating instances of bad meme deployment, jailing the offenders, or otherwise relegating them away from the good people and the productive efforts. If the fraction of rebels gets high enough, they effectively turn into an opposing faction. Depending upon how badly split up the cohort is, you can start to calculate how efficient your memeplex is going to be: how much you have to spend on management and policing, how much the “good” people have to work to compensate for the non-work or reverse work of the “bad” people, how much the bad people cost the system just by existing, average costs and productivity per person, whether it is realistic at all that the system produce the desired output, or whether it may only be produced with force-majeure practices that take too much of a toll on the rest of the society in general, and so on.
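This accounting can be sketched in a few lines. The following is a hypothetical illustration, not a measured model: every parameter (output per coöperator, the sabotage and policing costs, the 5% threshold) is an invented placeholder you would replace with observed figures.

```python
# Back-of-the-envelope memeplex efficiency, per the accounting above.
# All numeric parameters are illustrative assumptions, not measurements.

def memeplex_efficiency(cohort_size, rebel_fraction,
                        output_per_cooperator=1.0,   # work a "good" agent produces
                        sabotage_per_rebel=0.5,      # reverse work per rebel
                        policing_cost_per_rebel=0.3, # management/policing overhead
                        policing_threshold=0.05):    # ~5%: when policing kicks in
    """Estimate average net productivity per person for a polarized cohort."""
    rebels = cohort_size * rebel_fraction
    cooperators = cohort_size - rebels
    gross = cooperators * output_per_cooperator
    # Rebels produce nothing and may undo work ("reverse work").
    losses = rebels * sabotage_per_rebel
    # Above the threshold you must pay for an active police force.
    policing = (rebels * policing_cost_per_rebel
                if rebel_fraction > policing_threshold else 0.0)
    return (gross - losses - policing) / cohort_size

print(memeplex_efficiency(100, 0.05))  # right at the threshold, no policing cost yet
print(memeplex_efficiency(100, 0.20))  # a fifth of the cohort rebels: policing kicks in
```

Even this toy version makes the qualitative point: past the policing threshold, each additional rebel costs the system twice, once in lost or reversed work and once in overhead.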
This is perhaps one of the first quantifiable memetic indices, by the way. What fraction of people espouse, are infected with, a given submemeplex? How infected one is seems a bit of a fuzzy concept, but one could easily define a submemeplex, i.e., a collection of memes, e.g., the “Keeping the Kitchen Tidy Memetic Inventory Sample A,” and then observe which of these memes a given person enacts over some specified period of time. One could go on splitting hairs, but for instance one might observe, in some of the population, all of the behaviors except one or two, say, cleaning the coffee machine or spontaneously doing the dishes, and call these subsets “Keeping the Kitchen Tidy Inventory Sample A-1B” and “KtKTIS A-1C,” for example.
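The index itself is simple to compute once you have the observations. In this sketch, the inventory names and the three housemates’ observed behaviors are invented examples standing in for real observation data:

```python
# A minimal "memetic index": for each meme in a named inventory, the fraction
# of the cohort observed enacting it over the observation period.
# Inventory and observations below are hypothetical examples.

KITCHEN_INVENTORY_A = [
    "put away items",
    "wipe up small messes",
    "rinse own dishes",
    "do the washing up",
    "clean the coffee machine",
]

observations = {
    "alice": {"put away items", "wipe up small messes", "rinse own dishes"},
    "bob":   {"put away items", "wipe up small messes"},
    "carol": {"put away items", "wipe up small messes", "rinse own dishes",
              "do the washing up"},
}

def memetic_index(inventory, observations):
    """Fraction of the cohort observed enacting each meme in the inventory."""
    n = len(observations)
    return {meme: sum(meme in enacted for enacted in observations.values()) / n
            for meme in inventory}

for meme, fraction in memetic_index(KITCHEN_INVENTORY_A, observations).items():
    print(f"{meme}: {fraction:.2f}")
```

Splitting off sub-inventories like “A-1B” is then just selecting a subset of the inventory list and recomputing.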
In practice, specifying a super-memeplex of desired behaviors, then observing which are actually deployed and by whom, and then naming sub-memeplexes for those subsets of behaviors is probably a very useful activity. Take a memetic inventory first, and then determine which things one wishes to add to the memeplex, e.g., more people doing the washing up, cleaning the coffee machine at some interval or other, and so forth. One also identifies which groups, or sub-cohorts, exist in the population, or, you could also say, in the memetic fabric.
This is what I mean by memetic polarization, i.e., how the cohort is divided into sub-cohorts according to subsets of the memetic inventory. This is entirely empirical — you have to observe and measure it, it cannot be designed or forced or specified or derived. For instance, let’s say we observe the following behaviors in our cohort, a group of people living in a house together, in terms of their kitchen activities.
We then make the observation that there are collections of behaviors that are enacted by certain individuals. By the same token, there are no individuals enacting other combinations of behaviors.
Here we see that there are individuals who will put away items they get out to use, and wipe up little messes they make, but will not rinse out their own dishes or do the washing up, etc. This is a subcohort, let’s call it Subcohort A, and its members are inured of a submemeplex, say, Submemeplex A.
Another group of individuals is observed to do all of Submemeplex A, in addition to washing up and rinsing out their own dishes. This is a supermemeplex to A, we can call it Submemeplex B. One question we ask here is whether we count those infected with Submemeplex B as also being in the cohort of Submemeplex A. It seems so, but there is a whole body of theory yet to be developed, and we may find it to be more convenient to do otherwise.
Finally we have a group who do what Subcohort A do, i.e., put away and wipe up their own messes, but they also throw out containers when they use up the last scoop or item, as opposed to leaving the peanut butter jar in the cupboard with only a tiny bit of peanut butter left, or the last half-pickle in the jar.
Notice that nobody actually ever cleans the coffee machine. In a sense we could consider this to be a kind of “virtual meme” since we have not ever observed it. In this case we can easily imagine what it might look like, although in other memetic design situations this may not be true.
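The kitchen example above can be written down directly as sets, which makes the supermemeplex relation and the virtual meme mechanical to check. The behavior names and groupings here are just the hypothetical housemates from the text:

```python
# Submemeplexes as sets of behaviors, per the kitchen example above.
# All behavior names are the hypothetical ones from the text.

SUBMEMEPLEX_A = {"put away items", "wipe up small messes"}

# Subcohort B does everything A does, plus washing up and rinsing:
# B is a supermemeplex of A.
SUBMEMEPLEX_B = SUBMEMEPLEX_A | {"rinse own dishes", "do the washing up"}

# Subcohort C does what A does, but also throws out used-up containers.
SUBMEMEPLEX_C = SUBMEMEPLEX_A | {"throw out empty containers"}

# The designed inventory includes a behavior nobody was ever observed doing.
DESIGNED_INVENTORY = (SUBMEMEPLEX_B | SUBMEMEPLEX_C
                      | {"clean the coffee machine"})

# The supermemeplex relation is just set containment.
assert SUBMEMEPLEX_A < SUBMEMEPLEX_B

# "Virtual memes": designed behaviors never observed in any subcohort.
observed = SUBMEMEPLEX_A | SUBMEMEPLEX_B | SUBMEMEPLEX_C
virtual_memes = DESIGNED_INVENTORY - observed
print(virtual_memes)  # nobody ever cleans the coffee machine
```

Whether the B-infected agents also count as members of Subcohort A then becomes the question of whether you index cohort membership by `SUBMEMEPLEX_A <= enacted` (containment) or by exact equality of behavior sets.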
I promised to talk about designing the revolution, and I’ve perhaps strayed a bit afield. One problem we see above is that there are a couple of subcohorts in which people don’t rinse out their own dishes. A kitchen is a bit of a problematic example since it lacks marking and engagement; that is, who did what and what belongs to whom is often hard to nail down (who left this dirty dish out?), and one is often alone in the kitchen, so there is little opportunity for one person to respond directly to the kitchen-related actions of another, i.e., to deploy memes in response.
There are a couple of things we might try.
First, if possible, each person should have at least some of their own dishes, e.g., a personal coffee mug, maybe a bowl, and so on. This gets us some marking. Next, we might keep the regular dish drainer and add a second “dish drainer of shame.” There could be some “rules” along the lines of: if you have to wash somebody else’s dishes because they left them out, you can put them into the drainer of shame. This gives a bit more marking, and also some engagement. You could also have a “penalty box of shame” for items left out (and one in the refrigerator, so things don’t spoil).
This is not really what I was talking about in terms of “designing the revolution,” but it can work that way. A “moral” memetic agent will wash her own stuff, put things away, and (grudgingly) do the dishes if the sink fills up or the counter gets covered with stuff. If she wants to shame others, named or unnamed, who don’t do their bit, she can put their items in the penalty box, and wash up their dishes and put them in the drainer of shame. Then others can see this, and have the opportunity to bully the offenders who failed to wash up or put their things away. Naturally the bullying memes (what to say, etc.) can be richly designed.
Now, a rebel may come along and see the same situation, i.e., a mess in the kitchen, and know that the moral agent will sooner or later come along and set everybody up for bullying. The rebel can then decide to do the washing up and not put the offending dishes in the drainer of shame, putting them in the regular drainer instead, or selectively do so, thereby depriving some of the cohort of their bullying opportunities.
The next step is to create “non-bullying,” but still immunomemetic, memes for these rebel actions, e.g., “bogarting the washing-up” or such, whereby the moral agent becomes the target of these rebel immunomemes. The result is that no matter what anybody does, some cohort (or subcohort) is now enabled to deploy memes in a specific way, toward a specific group, as a direct consequence. Marking and engagement are the keys to this design.
We have not looked at submemeplexes of “bad” behaviors as such. Obviously, leaving items out is the opposite of putting them away, so we can define a bad behavior in terms of its opposite. But there are an unlimited number of random bad things one might do that are not the opposite of any good thing, for example, tracking mud into the kitchen. The point is that you don’t have to track in mud in order to perform any of the kitchen functions, so the act of “mopping up the mud on the floor” is in most cases (as in a Japanese kitchen) a “virtual meme” which only really exists if it is observed to exist. At that point you need to design to eliminate such memes, or design new memes to alleviate their ill effects. This may be the subject of a future essay, since it is certainly relevant.
Polarization is the division of a cohort into multiple subcohorts. A subcohort is defined by some submemeplex of which all of its member agents are inured, and vice-versa. Cohorts are polarized to one degree or another in terms of a given memeplex (is a given agent infected with it or not), which means that a memetic engineer faces the problem of some fraction of the target population not accepting and enacting the designed behavior.
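One could put a number on this. The grouping below (agents bucketed by the exact set of behaviors they enact) follows the definition just given, but the index itself, the fraction of agents outside the largest subcohort, is an invented illustration rather than an established metric, and the household data is hypothetical:

```python
# One possible polarization measure: group agents into subcohorts by the
# exact submemeplex they enact, then see how badly the cohort fragments.
# The index and the sample household below are illustrative assumptions.
from collections import Counter

def subcohorts(observations):
    """Group agents into subcohorts by their enacted behavior set."""
    return Counter(frozenset(enacted) for enacted in observations.values())

def polarization(observations):
    """0.0 when everyone enacts the same submemeplex; approaches 1.0 as
    the cohort fragments into many small subcohorts."""
    largest = max(subcohorts(observations).values())
    return 1 - largest / len(observations)

house_observations = {
    "alice": {"put away", "wipe up"},
    "bob":   {"put away", "wipe up"},
    "carol": {"put away", "wipe up", "wash up"},
    "dave":  {"put away", "throw out empties"},
}
print(polarization(house_observations))  # 0.5: half the house sits outside the largest subcohort
```

As stressed above, the inputs here must be empirical: the grouping falls out of observation, not design.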
Marking and engagement are key to memetic design. The challenge is to successfully engage the “rebels,” or the people who don’t accept and deploy the primary memes as designed. The trick is to design additional memes for those people to use, and for others to deploy in response to them, in order to engage with one another and also to connect back to the “primary” cohort who are deploying the primary memes.
 Such as the Japanese mado-giwa-zoku 窓際族 practice of putting unproductive employees off to the sides by the windows so the productive workers could work together in the middle.
 I discovered this concept during my first Blue Shirt Tuesday (Doughnut Day) experiments.
 Remember, a cohort is a collection of people who interact with one another memetically. The “ideal” cohort is a small town where everybody can run into one another and all speak the same language, without major class or ethnic disparities. However, a population spread all over the globe, speaking a collection of languages, but connected by the Internet and possibly by shared interests, is another.