
"Talent Distillation" Policy

These are some of my observations in the context of switching from independent research to running an R&D lab.

  1. Recruiting is inherently noisy. When recruiting, a one-page resume, a couple of hours of interviews, and a take-home project rarely provide enough signal on whether the applicant is an excellent day-to-day fit for the team. When being recruited, a one-page job description, a couple of hours of interviews, and some “about” copy rarely provide enough signal on whether the team is an excellent day-to-day fit for the applicant. Team dynamics are non-trivial to predict ahead of time, as people are complex.
  2. Talent density is critical. Let talent mass be the total amount of expertise on a team, and talent density be talent mass relative to team size. The two are orthogonal: one can assemble a team with low talent mass but high density, or vice versa. The claim here is that density is the more important one to optimize for. “What one programmer can do in one month, two programmers can do in two months,” goes the saying. Integrating information happens faster inside one mind than across a few. Coordination happens faster across a few minds than across many.
  3. Independent research disfavors network-building. Independent research can provide valuable space for deliberately practicing conceptual work and cultivating technical expertise. However, it rarely provides a natural context for building rapport with other individual contributors through daily interactions, which is the flip side of the invaluable mental space for exploring odd framings. This means that transitioning from such a starting point to building a team is likely to take longer. Self-pity is pointless, though; might as well reflect on the situation and move forward rationally.

The above observations led me to entertain a “talent distillation policy.” In practice, this means the following.

  1. Contract-to-hire as the norm. We onboard impressive new team members for an initial three-month period. This helps both parties get a better sense of whether there’s a match. The organization is actively incentivized to encourage the new hire to independently assess the work environment, since both sides benefit only when things feel right both ways. How exactly the legal context is set up depends on what best fits the team member’s circumstances; we’ve explored both freelance-style contracting and fixed-term employment. If both parties are content with the arrangement at the end of the initial period, they naturally move to a longer-term commitment.
  2. Cautious long-term scaling. Stepping back from specific hiring rounds, talent mass should grow, but modestly. “You’re dealing with a fundamentally different organization with each doubling of the team,” goes the saying. Unsustainable scaling of talent mass is perilous. First, because it can dilute density. Second, because it induces inertia in organizational structure at a time when the ability to iterate on novel human-AI arrangements is expected to grow significantly in importance. Third, because it can reduce cost-effectiveness at a time when resources scoped for cautious AI development are at a premium.
  3. Ample space for affirming oneself. In general, but especially during the initial period, it’s important to give team members generous room to prove themselves. Generous room need not mean vague expectations; it should mean well-scoped whitespace. Things should be set up so that they can accommodate learning by trying things out “loudly” (e.g., prod/dev environment segregation, as well as personal dev-alice environments). Staff should be expected to invest time in facilitating orientation, though a solid sediment of documentation should in turn increase the value of sync time.