Friday, May 15, 2015

Why Hire? Underemployment, Autonomy, and Corporate Culture.

Following up on my last post about subordinating compromises, I will share some thoughts about what motivates people to make hiring choices. A lot of this is just my interpretation of experiences that I have had, coupled with some ideas from principled studies of bureaucracy, such as Moral Mazes. It comes from my perspective as a job candidate with high aptitude in some hotly demanded skill areas, as well as education and work experience that is generally considered "prestigious." At the end, I add an explanation that attempts to bring all of these alternatives (except for number 6) under a single umbrella.

So why hire?

1. You are merely a fan of my skill set. You see that in some potential future, such a skill set could possibly add value. You see this skill set as a credential, or possibly as a mark of domain-specific trendiness, and you want your team or organization to be viewed as "with it." Though you don't have any work for me to do that will exercise this skill set, you like thinking of me as a "latent" resource, waiting to spring forth with all sorts of innovative value creation the moment that changing political tides or market conditions allow it (a moment that predictably never comes).

2. You have a halo bias about all of the soft skills that this role will require. Because you are a fan of my skill set, or otherwise view my credentials and interview performance as impressive, and maybe even like me, you will make biased inferences when you simulate my behaviors and reactions to certain aspects of the job. You will infer that I will not become frustrated. You will infer that, of course, I will "just do what I am told" with no regard for how my aptitudes and goals match up with what I am told to do. You will infer that no matter how wildly inappropriate a task might be for my skill set, I will happily just "find a way to get it done." You will infer that the rampant political issues won't bother me, either because you think I will happily accept "junior" status and will somehow lobotomize away any critical thinking skills as they might apply to the political situation I walk into, or because you think that I value wages more highly than dysfunctionality-avoidance, again, due to the halo effect.

3. You are desperate to fill a seat. You're fighting your own political battles and many of them are based on attrition and headcount. Perhaps some of your yearly compensation incentives are based on building a team and the clock is ticking. Pretty much anyone will do as long as they meet some bare bones requirements that make the employment offer appear defensible on paper. You haven't given any thought at all to the impedance mismatch between what I am capable of and what the job will actually require. Nor do you care. You need to say whatever it takes to get my ass in the chair.

4. You are not knowledgeable about the domain-specific requirements of the position. Mostly you evaluate "personal fit" and "culture" or "team player" attributes in a candidate. You look for signs of impressiveness and credentials on a CV. You have no idea whether my skills will be used in the job, and it may even come as a surprise to you that there is variation among people in my skill area, that we aren't all fungible, and that we may resent being placed in a job that is fundamentally different from what we set out to accomplish. Any discontent I display after being hired will be a surprise to you. You won't be equipped to evaluate the domain-based merit of my claims, so you will default to believing that the problem is with me -- that I am "not a team player" or "not a good fit" or some other HR-approved catch-all buzzword escape valve that lets you continue living in a snow globe of misunderstanding about the makeup of a domain expert.

5. You are fully aware of what you're doing and have ulterior motives, or just plain don't care. You plan to bait-and-switch me by selling me a job completely different from the real work I'll be asked to do. You hope that in the meantime you can gain some leverage over me that requires me to stay in the job. You might even look for this in my personal characteristics: do I have children, student loans, or a mortgage that might imply financial needs, and thereby a need to endure workplace bullshit to service those needs? If I don't, you may very well not hire me, because you don't forecast an ability to get leverage. You may try to see if I am motivated by prestige, by eventual high-level promotions, by level-grinding my way to a private office, by attending annual conferences, or whatever other carrots you might be able to dangle.

6. Rarest of all: you have an adequate understanding of the domain expertise demanded by the duties of the job and you are trying to locate a candidate with specifically the right skills and experiences to meet the demands. You're not looking to play games. You're not looking to task me with unrelated or menial work. You know what work needs to be done and you have a real plan for mapping that work onto a candidate's qualifications. You hope that I will be a good cultural fit and you are prepared to make sacrifices, change policies, or provide accommodations if necessary. But you also realize that because you are hiring for the purposes of matching up a domain-expertise need with a candidate's domain knowledge, you don't get to be picky about enforcing your fluffy HR-approved notions of "cultural fit" or "team player" -- you have to collaborate with me to determine if those things will work out. You don't get to dictate them and because you actually care about solving the domain-specific problem, you don't want to dictate them either.

I admit that items 1-5 are written with an angry tone, yet they are accurate depictions of motivations out in the wild. In any real hiring scenario, the motives are likely to be combinations of the various options above. Even in good scenarios where number 6 mostly dominates, there can still be elements of the other items, and it's not always bad or irrational that this is so.

Yet when items 1-5 dominate the picture, which from my own experience, from the testimony of others, and from academic studies of this sort of thing is overwhelmingly the common case, it creates a very toxic environment, one that is truly only survivable in the long run if you are happy to engage in those subordinating compromises I mentioned earlier.

So it might be useful to try to understand the confluence of items 1-5 more systematically, and I believe that the concepts of overqualification and underemployment (specifically underutilization of skill) can help with exactly that.

Basically, if you strip away all of my loaded language and try to see it not as malicious, ignorant, or political, these kinds of hiring problems are at root an issue of underemployment, except possibly in the case where any employee will do to fill a seat, as in item 3. In the other cases, a hiring manager is seeking someone overqualified for the specific duties that await them in the role.

Why should this be a bad thing? In fact, some argue it is not. Even a cursory Google search for overqualification brings up a link to a prominent Harvard Business Review article, The Myth of the Overqualified Worker. The article is pretty weak, but it illustrates a pervasive kind of rationalization that managers really want to make. Basing its conclusions on some shallow and poorly controlled research publications, the article says things like, "In addition to achieving higher performance, these cognitively overqualified employees were less likely than others to quit. The researchers point out that many overqualified workers stay put for lifestyle reasons, such as the hours or the company’s values."

It perpetuates the idea that managers want to hear: overqualified candidates will more assuredly produce the baseline amount of labor output necessary for the role. The worry that they will become discontented with the lack of learning or growth opportunity in the role is soothed away by arguing that these folks are motivated by other factors: exactly the subordinating compromises that I keep incessantly bringing up.

The HBR article goes on to give a perfunctory nod to a factor that I believe plays a huge role in this issue: autonomy. For instance, the article continues,

"Berrin Erdogan and Talya N. Bauer of Portland State University in Oregon found that overqualified workers’ feelings of dissatisfaction can be dissipated by giving them autonomy in decision making. At stores where employees didn’t feel empowered, “overeducated” workers expressed greater dissatisfaction than their colleagues did and were more likely to state an intention to quit. But that difference vanished where self-reported autonomy was high."

This is backed up by some heavier research too. Generally, this type of work has focused on studying heteronomous goals (goals expected of you from others) versus autonomous goals (goals you choose for yourself). One branch of this theory is called Self-determination Theory (SDT) and one research paper from this approach is On Happiness and Human Potentials: A Review of Research on Hedonic and Eudaimonic Well-Being, by Ryan and Deci, 2001.

Here are some select quotes (see the original paper for the citations; SWB stands for subjective well-being):

"Another actively researched issue concerns how autonomous one is in pursuing goals. SDT in particular has taken a strong stand on this by proposing that only self-endorsed goals will enhance well-being, so pursuit of heteronomous goals, even when done efficaciously, will not. The relative autonomy of personal goals has, accordingly, been shown repeatedly to be predictive of well-being outcomes controlling for goal efficacy at both between-person and within-person levels of analysis (Ryan & Deci 2000). Interestingly this pattern of findings has been supported in cross-cultural research, suggesting that the relative autonomy of one’s pursuits matters whether one is collectivistic or individualistic, male or female (e.g. V Chirkov & RM Ryan 2001; Hayamizu 1997, Vallerand 1997)."

"Sheldon & Elliot (1999) developed a self-concordance model of how autonomy relates to well-being. Self-concordant goals are those that fulfill basic needs and are aligned with one’s true self. These goals are well-internalized and therefore autonomous, and they emanate from intrinsic or identified motivations. Goals that are not self-concordant encompass external or introjected motivation, and are either unrelated or indirectly related to need fulfillment. Sheldon & Elliot found that, although goal attainment in itself was associated with greater well-being, this effect was significantly weaker when the attained goals were not self-concordant. People who attained more self-concordant goals had more need-satisfying experiences, and this greater need satisfaction was predictive of greater SWB. Similarly, Sheldon & Kasser (1998) studied progress toward goals in a longitudinal design, finding that goal progress was associated with enhanced SWB and lower symptoms of depression. However, the impact of goal progress was again moderated by goal concordance. Goals that were poorly integrated to the self, whose focus was not related to basic psychological needs, conveyed less SWB benefits, even when achieved."

Another research paper, If money does not make you happy, consider time, by Aaker, Rudd, and Mogilner, 2011, puts it like this:

"... having spare time and perceiving control over how to spend that time (i.e. discretionary time) has been shown to have a strong and consistent effect on life satisfaction and happiness, even controlling for the actual amount of free time one has (Eriksson, Rice, & Goodin, 2007; Goodin, Rice, Parpo, & Eriksson, 2008)."

"Therefore, increase your discretionary time, even if it requires monetary resources. And if you can't afford to, focus on the present moment, breathe more slowly, and spend the little time that you have in meaningful ways."

(Both of these are part of a much larger review article at LessWrong, covered in the section on the relationship between work and happiness. That whole article is highly worthwhile.)

This can be a disaster in highly specialized jobs, however, because such jobs tend to be extremely demanding of both personal time sacrifices and on-the-job autonomy sacrifices. My experiences have been in the technology and financial sectors, and in those sectors it's bad. It's arguably even worse in start-ups, unless you are sitting at the top and personally feel that all of the tasks necessary for growing the business are aligned with your autonomous goals. This is why some start-ups obsess over locating employees who deeply resonate with the company's ethos and purpose. It's not because they want to create a cult of their company (although that does happen), and it's not purely because they want to rip off unsuspecting employees who incorrectly forecast that their enjoyment of the company will compensate for the reduced salary that the start-up will pay them. It's also because it would be death for the company to hire a lot of highly skilled people who need autonomous goals or lots of personal time in order to be happy, and then be unable to provide them with either. Some start-ups have begun going in the other direction, trying out things like unlimited (or even mandatory) vacation, since the supply of workers who just so happen to deeply resonate with a particular business idea is necessarily scarce. The success of these discretionary-time approaches seems mixed.

In the end, this is why these underemployment traps are so debilitating and why they often entail above-market wages, bonuses, or other compensation benefits: the company believes it is obtaining less volatile, surplus labor, but it has little freedom to grant the worker autonomy, and the nature of the job requires long working hours without much personal time. The job itself often leaves an employee exhausted, without the energy needed to use their limited personal time for the restorative autonomous goal achievement they need to be healthy.

Prolonged states of this surely lead to burnout.

Saddest of all, as with many such things, there is a blame-the-victim culture around this issue. Since not everyone is underemployed or overqualified, since some workers happen to have jobs that afford them adequate free time and energy to pursue autonomous goals outside of work, and since higher-level decision makers in a firm often have the most freedom to pursue work-based autonomous goals, a very dangerous in-group versus out-group mentality develops.

On one side, you have the higher-ups who can access freedom at work, together with the workers who are happy making subordinating compromises to obey heteronomous goals while at work because they are satisfied with autonomous goals outside of work. Together they form a large group that characterizes itself by "being able to get shit done" and "just doing what needs to be done" at work. They view their fortunate ability not to feel cognitively distressed by the lack of work autonomy as their own virtue, earned through their efforts to endure work, rather than considering whether it could just be a lucky coincidence that they have other ways of obtaining the autonomous goal achievement needed to be happy.

On the other side, you have overqualified / underemployed people who for whatever reasons are not able to engage in autonomous goals at work, whose jobs place such a strain on their discretionary time that they also cannot get autonomous goal satisfaction outside of work, and for whom whatever compensation increases are paid for this arrangement don't provide replacement satisfaction sufficient to sustain their cognitive health. Take me, for example. The autonomous goals that I want to achieve are all about writing quality-focused scientific software to solve worthwhile applied problems. If I have to write crappy software to solve worthless problems while at work, in a demanding and long-hour job, then I will not have the time, energy, or impetus to even try to pursue the necessary autonomous goals in my personal time. So there is nothing that any workplace can do for me to help with my cognitive health and job satisfaction except provide me with opportunities to write the sort of scientific software that my autonomous goals draw me towards. Raises, bonuses, promotions, lots of vacation, etc., all won't work. Which makes me a villain (or perhaps a whiny, entitled brat) in the eyes of most bureaucratic managers.

As with so many other majority/minority issues, especially when stigmas about cognitive health are involved, the maligned minority group is used as a scapegoat and vilified for the suffering it must endure. The problem is offloaded from the majority group, so that they need not feel any stress about helping to find a solution, and HR codewords are created, such as "not a good fit" or "not a team player," that let tightly-wound business managers wrap the issue up neatly in some foil and place it in the trash can like the Anal Retentive Chef :)

The introvert / extrovert spectrum is another great example of this divide, manifested in the prevalence of open-plan offices and the vilification of naturally introverted folks who cannot function normally in such offices. It's not enough to merely fail to provide reasonable accommodations, even productivity-boosting accommodations that are in the business's own interest: the in-group has to go further and label the vocal minority as whiny, complacent, or entitled. It can often result in unhealthy workplace gaslighting, where you are made to feel like you are the crazy, problem person for having a sane reaction to insane conditions.

I can't draw any useful conclusions other than to point out what a destructive long-term force this type of phenomenon is. Over time, it drives organizations to monoculture. People who express very natural and healthy tendencies, such as a desire either to work on autonomous goals while at work or to have enough discretionary time to satisfy autonomous goals outside of work, and people with natural inclinations, such as an introvert's inclination to be more productive in a highly private environment, are punished and weeded out over time. The corporate population converges to a large, dominant in-group made up of people who are willing to subordinate their own urges for the sake of the company, with all kinds of unpleasant side effects for their career motivations, their aptitudes in the actual domain-specific business area, and the prevailing culture of the workplace.

That is the state of affairs in modern first-world employment. A hiring process that seeks to underemploy people tends to produce cultural environments where only those who are happy to find another way to satisfy autonomous needs, or who can compromise those needs away, can achieve the corporate, HR-approved definitions of success. If you are so arranged internally that you cannot get rid of your itch for autonomous goals, and if your job doesn't leave you with enough discretionary time or energy to pursue them outside of work, then you are a Bad Guy, a toxic, uncooperative whiner whom the bureaucratic system will not attempt to accommodate. Your labor productivity, however great it may be, just doesn't matter next to your organizational fealty.


Tuesday, May 12, 2015

Subordinating Compromises and Workplace Cognitive Health

Filed under things I wish I knew when I was younger:
"Honesty was never a profit center on Wall Street but the brokers used to keep up appearances. Now they have stopped pretending. More than ever, securities research, as it is called, is a branch of sales. Investor beware." James Grant, 1999
One of the most depressing things is that this is not unique to finance. Whether in defense labs, general businesses, technology start-ups, consulting, grad school / academia, or finance, I've never experienced or heard of any organization that actually engages in research. I've only met people who want to consume prestige for marketing and sales purposes, to prioritize demoware and chase grant money, and to baselessly assert both that domain-specific first principles cannot possibly be outperformed by algorithmic modeling in any domain and that no resources of any kind (time, money, intelligence) will be dedicated to discovering whether, or in what circumstances, that assertion is true.

What I find most distressing is that the more I learn about myself, the more sure I am that intellectual prosperity is much more important to my health than financial prosperity -- doctors have even agreed with me about this purely from the point of view of how to avoid depression. I'm just not biochemically capable of subordinating myself to short-term business priorities that are not driven by scientific curiosity to learn data-driven answers to questions about some facet of the world [1].

There appear to be literally zero such modes of employment, regardless of one's willingness to trade off salary to have one. There are places which claim to be focused on performing research, for sure, but none that really are. There are only transient projects bounded severely by business jargon, and in order to function in a healthy way in such positions, a person must have other motivators that drive them to accept, and even be happy with, a compromise in which they agree to subordinate themselves to otherwise uninteresting business goals on a purely instrumentally rational basis, as a mode of achieving goals in other aspects of life.

But if you are so arranged mentally that you don't have other such goals, and no other aspect of achievement in life prompts any willingness or desire to make that subordinating compromise, then there is just no feasible way to be cognitively healthy in the context of first-world employment.

Life becomes an attempt to avoid burnout and depression in every boring daily moment, rather than an attempt to achieve happiness despite challenging exogenous events that must be overcome or dealt with. I think this inversion is something not a lot of people can relate to. It's not as if there are "things in the way" of being happy which must be overcome. It is more that, in the context of how I have come to know myself internally, contrasted against the context of basal existence in reality, the idea of even trying to be happy is laughably far away, because the immediate problem is just how not to burn out spectacularly in every daily moment.


I plan to write a follow-up to this to describe how the issue is also related to some theories I have about underemployment.

[1]  Even if I could take a magic pill that would help me become more biochemically capable of making that kind of subordinating compromise, I would not want to. The imagined outcome, moving myself biochemically with the aid of a pill towards a brain state in which I am capable of the compromise, is a worse state of existence than the constant torment and suffering that occurs from being unable to make the compromise in the first place. I find that doctors often struggle to understand this perspective. Perhaps it's because they are trained to pragmatically focus on what they can do to help (mend a bone, prescribe a drug, etc.) and they are not necessarily qualified to speak with you about questions of utilitarian value and your personal way of constructing subjective meaning from your sensory input stream of external stimuli. But that is indeed the base of the problem: if you value a certain way of constructing meaning, then it's not a simple proposition to change it. Anything that might alter your mind might also disrupt your ability to construct meaning in your preferred way. 

Counter-intuitively, this type of depression has caused me to withdraw from typical problems that sometimes affect people with depression: for example, I feel markedly less desire to drink alcohol, because the root of the problem is that on the one hand I have and prefer one mode of constructing value from my experiences, and on the other hand first-world societal norms seem to claim that my way of constructing value is not legitimate and will not be accepted, supported, or validated. Something like alcohol (or prescription medicine) has the potential to alter me, chemically, in such a way as to change my preferences. Yet I don't want to change my way of constructing value (e.g. my inability to be cognitively healthy while making a subordinating compromise to spend my labor on worthless business problems). If that were to change, that is, if I were to become capable of spending my labor on worthless business problems without thereby suffering poor cognitive health, it might be better for me in material terms (I could sustain higher-paying jobs and avoid quitting them), but it would be worse for me overall, for the very reason that my way of constructing meaning from experience (which depends heavily on my inability to make that very compromise) would be morphed away from what I value and into some other state of existence that I don't value.

This is a very hard problem to explain, even to experienced professionals or academic philosophers. Most people just have an understandably animalistic reaction: they become angry or annoyed, say cliché things like "we all have to do things we don't like," roll their eyes, and dismiss the problem. Sometimes burnout is a real worry just from the sheer exhaustion that comes with trying to explain this and help people really understand that this issue is nothing at all like the cliché problem of reconciling yourself to the fact that you can't spend all of your time doing what you happen to prefer in life.

Thursday, April 2, 2015

The Osmosis Lie: Crowded Trading Floors

From a recent job ad for analytics positions in a Canadian hedge fund:
 
"... no remote workers. We all sit on the trading floor."

Having worked in quant finance, I now use this as my litmus test. There is absolutely no good reason why researchers and programmers need to be physically co-located on a loud, crowded trading floor, in a bullpen, or in a cubicle bank, especially given modern workplace chat programs and convenient issue-tracking tools. Any organization that says programmers or quant researchers need to sit on the trading floor so that they can absorb information, learn the tradecraft osmotically, or whatever, is blowing smoke.

It's business-speak for "our company is mired in legacy tech, bureaucracy, and outdated standards." Such jobs have to pay above market rates, award bonuses, and give disproportionate raises each year, because that's the only way people who are thoroughly burnt out from the Sisyphean task of overcoming the daily noise to focus will even bother coming to work. For better or worse, it seems that the only people who stay in these jobs for long do it for the money; there is a huge culture of self-selling and an extreme willingness to bottomlessly compromise on tech culture standards, like a proud martyr. If you join the firm as a technologist with actual skill and some common-sense opinions about software culture and best practices, you just get chewed up by the politics and nothing changes. As a result, the steady state of most of these organizations is one in which all of the transient signals from good engineers and culturally progressive technologists have been beaten down to noise, either because such folks only last a short while before quitting, or because they ultimately decide that keeping their heads down and collecting the paycheck is better than fighting endless culture battles.


Another red flag is a requirement that a programmer be "comfortable in any language" (what the hell does that even mean?) or possibly a false promise that "you can use whatever language you want." It speaks of bespoke reinventions of wheels all over and no modular, tested, central library of common tools. I once interviewed with a firm that highlighted my interest in Haskell. They said one of the perks of the job would be that I would be personally in charge of maintaining my own software tools for rapid prototyping, so if I wanted to write them in Haskell I would be free to do so. At first blush this sounds interesting: being paid well to write and learn Haskell! But on closer inspection, it's total poison. Wait until you're forced to port some of your nifty Haskell constructs into R or Java and interoperate with some legacy system. Then you'll badly wish there had been enforced standards from the beginning, even if those standards had removed your so-called freedom to homebrew all of your own Haskell gadgets in the first place. This is the kind of dubious ploy that is attractive to inexperienced programmers who haven't lived through a shitstorm of work surrounding migration and integration headaches. Progressive technology culture isn't about draconian, bureaucratic standards (the things that led us to enterprise Java), but it also isn't about having zero standards and zero foresight into the modularity of the larger system.

Basically, these jobs make you sit physically tethered to a trader or portfolio manager, beholden to that person's real-time whims about what needs to be computed. It's literally retail data exploration. You become one of those caricatures of scientists and engineers cynically mocked in Michael Bay movies, butchering tech jargon while showing the Joint Chiefs a monitor full of gibberish that will save us from the Russian alien robots. You don't have time to blink; you just type! If it means hacking something together in an ungodly amalgamation of Excel, C++, Python, and R, and slapping some duct tape on it so the trader can use it before lunchtime, then by god that's what you must do. Documentation? Who has time? There was a standard library module that already did everything you just spent 4 hours re-inventing? Too bad there was no time to research a rational approach to the system's design. Someone needs to reproduce a result you calculated 6 months ago? Good luck, buddy: here's my 5000 lines of undocumented R code.

The kicker is that there is such a macho attitude of "this is just how finance is" that the company is usually proud of being like this, instead of recognizing that it's the very definition of developer hell and, more importantly, that it is not at all required or implied as a domain constraint of finance. Instead of allowing traders to have beholden computation assistants, you could simply require traders to know how to do real programming. Or you could actually trust data scientists, statisticians, and other inference domain experts who can program to also develop trading strategies, instead of underemploying them as data secretaries. You can organize teams around common tools and common analysis motifs, and build with re-use, modularity, and QA in mind, so that writing best-practices-compliant, tested software is not incompatible with meeting intense intra-day deadlines.
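To make that less abstract, here is a minimal sketch of the "common tools built with QA in mind" idea, in Python. Everything here is hypothetical: the module layout and function names are invented for illustration, not taken from any real firm's codebase.

```python
# shared/returns.py -- a hypothetical shared analytics module.
# The function is documented and tested once, in one versioned place,
# so that no desk script ever re-derives it under deadline pressure.
import numpy as np

def log_returns(prices):
    """Compute log returns from a 1-D price series."""
    prices = np.asarray(prices, dtype=float)
    if prices.ndim != 1 or prices.size < 2:
        raise ValueError("need a 1-D price series with at least 2 points")
    return np.diff(np.log(prices))

# test_returns.py -- the unit test travels with the module.
def test_log_returns():
    assert np.allclose(log_returns([100.0, 110.0]), [np.log(1.1)])
```

The trader-facing script written before lunchtime then shrinks to a few lines calling into the shared module, and the result is still reproducible six months later, because the logic lives in exactly one tested place.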

I wouldn't feel so much resentment about this sort of thing if it weren't for the macho pride, the attitude that "this is just how finance is": crappy hacks, zero unit tests, slinging Excel sheets around with zero data provenance, hacking everything onto 20-year-old legacy C++ and legacy Java, and allowing traders and economic domain experts infinite freedom to pick their computing tools (while letting them push the needless headache of integration onto someone else).

It's something the industry should be ashamed of and should be working exceptionally hard to change. But it's not even admitted as a problem; it's lauded as if it were heroic. The attitude is, "We don't need your wimpy best practices here; we just roll up our sleeves and do whatever it takes to make the system work." It's almost by definition a culture of mediocrity, since the only definition of success is whether the system as-is is making money. Very little thought is given to the counterfactual money being left on the table by not doing things in a fundamentally better way. That a thing is good doesn't mean it couldn't be better.

Sadly there are all sorts of political and market inefficiencies in the industry. Clients don't do a good enough job of holding investment managers accountable for returns, research, or technological best practices. When large institutional clients are involved, the deals made between them and the respective investment managers are often borne out of bizarre political alliances, nepotism, internal career politics, and so on. Firms spend inordinate amounts of money constructing marketing apparatuses to spin every market outcome as though it either could not possibly have been avoided or else proves the investment acumen of the firm, and institutional clients sadly buy it.

It's sad to say, but the most likely way for this nonsense to be fixed is outsourcing. Cloud computing frameworks will be created that present simple interfaces to the domain experts, and the start-ups that create such interfaces will be separate and free to employ whatever technological best practices they see fit. Simply by shifting the burden of the problem outside of the stodgy, politically dysfunctional finance firm, it can suddenly be solved. It could easily imply significant layoffs for this sort of commoditized retail data analyst role. It reminds me of the Max Planck quote: "Science progresses one funeral at a time."

Thursday, March 12, 2015

¬ Agile

Some of my thoughts on Agile got some Twitter press recently. I reproduced them below, with some light editing to make the series of comments into a single essay.


One complaint I have with Agile is that it takes the avoidance of a long-term plan too far — to the cartoonish extreme that user feedback is basically hooked up as something of a control system. If user feedback were coherent, this might not be a problem, but in general user feedback is all over the map, contradicts itself, and calls for feature implementations or bug fixes that are highly speculative from an engineering investment point of view. If you are lucky enough to work on an Agile team where engineers are actually consulted about the feasibility of this kind of thing, then it might be OK, and Agile might really be more of a set of bookkeeping conventions for you. But that’s not at all how it plays out in practice. In practice you’ve got sales-facing and business-facing managers in the loop, pushing various agendas and always looking for an angle to play based on the work in the backlog and pointless metrics. The metrics are also a pretty substantial detractor. Burndown is a joke, particularly when compared against a linear ideal, and especially when compared across teams and projects that vary wildly. Again, I suppose in some theoretical vacuum it might be OK if managers actually understood that you can’t just look at half a dozen burndown graphs and generalize that into any sort of meaningful management decision. But that’s not what many middle managers do: instead they warp, twist, and misrepresent Agile metrics to suit their changing political circumstances and to lobby for their self-promotional agendas. It’s really depressing as an engineer to literally watch in real time (as story points are being partitioned out) how some political agenda is being lobbied for with useless metrics and how it directly affects your personal work. I’ve always found this article about estimation games particularly enlightening about all that.
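For readers who have never lived with burndown charts, the underlying computation is trivially simple, which is part of why the graphs are so easy to over-interpret. A minimal sketch with invented numbers, comparing "actual" remaining story points against the linear ideal:

```python
# A hypothetical 10-day sprint with 40 committed story points.
committed, days = 40, 10

# The "ideal" linear burndown assumes perfectly uniform daily progress.
ideal = [committed - committed * d / days for d in range(days + 1)]

# Invented actuals: real work tends to complete in lumps, because
# stories close all at once rather than burning down smoothly.
actual = [40, 40, 38, 38, 38, 30, 30, 22, 22, 12, 0]

for d, (i, a) in enumerate(zip(ideal, actual)):
    status = "behind" if a > i else "on/ahead"
    print(f"day {d:2d}: ideal {i:5.1f}  actual {a:3d}  ({status})")
```

Nothing in that behind/ahead comparison distinguishes a struggling team from one whose stories merely close in lumps, and nothing makes story points commensurable across teams — which is exactly the over-generalization described above.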


Agile is also extremely costly and wasteful, particularly of time. Take an average entry-level developer salary, which is not trivial these days, and tabulate what 15 minutes for a stand-up every day equates to in actual dollars [1]. The false assumption is that such planning was going to have to happen anyway, but that’s not really true. For all the clamoring that Agile is more streamlined with planning, I’ve always found rapid iterations to result in more total planning time over the long run, kind of like the fractal behavior of measuring the coast of Britain or whatever. It artificially feels like mini-planning every few weeks keeps you lean and focused on just the important stuff, but it’s a mistake. You aren’t aware of your global context or of what the whole user-story feedback loop is steering you towards, and so the rapid planning is frequently needed to pivot away from a bad development path that would have been avoided entirely with more long-term, up-front planning. I’m not saying that there should be 3+ months of Ivory Tower planning before you start a project, but the extreme micro-planning alternative is also too much. It ebbs and flows, changing with personnel and the nature of the project. No framework will solve every problem, so why not be open to occasionally approaching something with a longer-term, research-focused type of planning period? Why always preclude it by the very nature of tightly maintained iteration schedules?


From a philosophical point of view, the lack of global context is the dealbreaker for me. Software is a fundamentally creative enterprise, and the whole reason it’s worthwhile to incur the insane cost of software’s complexity is that you can design it: you can use intelligent optimization power to consider the vast space of possible designs, and you can intentionally, deterministically, and predictably steer your work into a pocket of design space that meets your many needs.


When you agree to be steered by a feedback loop essentially made of a lightly business-filtered stream of customer feedback, you throw this optimization power out the window. You throw away something deterministic and efficient, like gradient descent, and you replace it with something random and inefficient, like simulated annealing. Quite literally, you are making somewhat random tiny changes and sitting back to witness whether customers randomly accept or reject them. You are evolving your product in a manner more like Darwinian evolution, which is not sensible when the alternative is direct, creative optimization. If the nature of the design problem is so challenging that you are forced to optimize in this way, that should be lamented, not celebrated as if it were a superior business strategy. And you should be vigilant in searching for ways to rescope the problems so that they are amenable to direct engineering rather than control-system steering from user feedback.
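To make the analogy concrete, here is a minimal sketch on a toy one-dimensional objective (invented for illustration). The deterministic method exploits the known gradient and walks straight to the optimum; the annealer gropes around with random proposals, which is only the sensible choice when no gradient information exists:

```python
import math
import random

f = lambda x: (x - 3.0) ** 2       # toy objective; its minimum is at x = 3
df = lambda x: 2.0 * (x - 3.0)     # the known gradient: direct design info

# Gradient descent: deterministic, exploits the structure we know.
x = 0.0
for _ in range(50):
    x -= 0.1 * df(x)
print(f"gradient descent:    x = {x:.4f}")

# Simulated annealing: random proposals, occasional uphill acceptances.
random.seed(0)
x, temp = 0.0, 1.0
for _ in range(50):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.95                   # cool gradually
print(f"simulated annealing: x = {x:.4f}")
```

Annealing isn’t useless; it’s what you reach for when there is no gradient. The complaint is about choosing the blind method when direct design information is sitting right there.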


There’s also a lot of the whole “the road to Hell is paved with good intentions” in Agile. Earnest tech managers may really want to prioritize refactoring workflows, re-designs, optimizations, etc. But all it takes is some slippage, some important business deadline for Q2 that has all the sales people in a nervous mood, and suddenly, no matter how much you had been promising to take that big refactoring item off the backlog and turn it into legitimate user stories for the upcoming sprint, you’ll have to break the bad news to everyone again: you’ve got to hack even more new features on top of the bad infrastructure. This is not necessarily Agile’s fault, but Agile also doesn’t set up any kind of framework that actively makes this entropy growth less likely. Part of the reason is that doing so — creating a brand of QA that explicitly makes it hard for short-term business concerns to override refactoring concerns — is antithetical both to the Agile metrics (which is not good for the middle managers who require such stats for their own advancement) and to the time frame of Agile workflows.


Lastly, and this one is just my own personal gripe: Agile is infantile. Agile as a system oppresses me: it doesn’t care if I am clever, or if I have thought of a great new way to implement something, or if I have thought of a clever hack that I want to try out, or a new gadget that I can code up and share with my team. It discourages tinkering and discourages creativity big time, especially big-picture creativity about the global optimization of the project. Agile infantilizes you because it says: don’t be a whole person. Don’t bring all of your skills to bear on your work; only bring the minimal set needed to solve the specific user stories shoveled onto your plate. In fact, leave every other part of your professional self at home, so as to avoid burning out, feeling bored, feeling shunted away from progress, or otherwise becoming unhappy with the cog-like way you are expected to churn through discretized, measured progress. This might be necessary if you manage 10,000 programmers, most of whom suck, but it also means you will never, ever be able to get those few really good programmers who can make all the difference.


In response to the inevitable contortions made to defend whatever “pure Agile” might mean apart from Scrum and things like Scrum, I feel this captures my sentiment: “whenever a majority of users tend to misuse a tool, beyond a certain point it becomes the tool’s fault.”


If Agile (or any given framework) can be easily subverted by politics, then regardless of how good its theoretical intentions are, the framework is a failure. Nothing in the basic Agile principles explicitly encodes a mechanism that makes it hard for short-term business needs or political concerns to supersede legitimate engineering concerns; in fact, the Agile tenet of “Responding to change over Following a plan” can be viewed as an endorsement of exactly the opposite in many cases. So it’s not particularly surprising or even interesting to point out what a failure Agile is, and it’s neither disingenuous nor extreme to flatly say that Agile does not add anything to our communal wheelhouse that common sense had not already put there. Yet it does subtract a lot, by way of all of the previous criticisms.


To restate it: there is nothing disingenuous about flatly saying that Agile fails, that Agile adds nothing worthwhile, that Agile is very easy to subvert, and that as a result Agile is not worth defending (certainly not in contrast to the likewise idiotic doctrine of Waterfall), and I don’t feel the least bit one-sided or extreme in saying so. I don’t give Agile any kudos. It didn’t fix anything, and its creation does not deserve credit or praise. Young developers should boycott jobs that will debase them with mandatory adherence to Agile or any other fixed, reactive, prescriptive framework that short-circuits their ability to learn by tinkering on the job. The opportunity cost of accepting such jobs is often too high. Society at large should be pissed off over all of the aggregate value that has been frittered away due to Agile. That some value has nonetheless been created from within Agile implementations (indeed, in spite of Agile) is no credit to Agile.


To me, the many responses that faithfully recite the so-called Agile principles as a defense of the desirability of Agile always read as a No True Scotsman fallacy. If you want to define the term “Agile” to mean “only good things and never bad things” then “Agile” is vacuously always good and never bad.


All of these principles that Agile tries to appropriate for itself are desirable. I don’t know of any formal software development process, whether it is Waterfall or Agile or any other, that would ever claim to explicitly deny any of those points. Where in Waterfall does it say to deliver late, crappy, bad software that doesn’t meet the client’s new specs? Yet some development processes are better than others at meeting these goals.


Agile doesn’t own the copyright on what good, quality software is. The idea of delighting your customer, planning when you need to plan, requiring specs when you need to require specs, using common sense, collaborating with business domain experts, … these things have been around, in one form or another, since the dawn of commercial software itself. They weren’t just dreamt up in 2001.


If someone wishes to define “Agile” as == all good software quality standards, more power to them. It becomes a useless definition and reminds me of the Yudkowsky quote, “If you are equally good at explaining any outcome, you have zero knowledge.”


I choose to define Agile by whatever common business practices emerge in its presence and I am confident that if Agile is defined this way, it removes the mask and reveals the ugly truth that as a system it merely pays lip service to quality while being all too easy to politically subvert.


[1] Here's a quick and dirty estimate of the cost of just the daily standup meetings prescribed in most implementations of Agile. Let's suppose conservatively that you've got about 5 engineers on a single team (in practice this is all over the map, ranging upwards of 30 in one team I was a part of, though we never officially tried to have all 30 in the same standup). Junior to medium-level developer salaries in most major metro areas are going to be between $85k and $120k; let's call the average $100,000. And let's say that you've got two more senior developers, with average salaries maybe in the $150,000 range. Again, these are conservative estimates.


If we assume roughly 250 working days per year and 8 hours of useful productivity per day (again, conservative), this means the junior devs are paid $12.50 for their time in a 15-minute meeting. The senior devs are paid $18.75. This amounts to $3,125.00 per year per junior developer and $4,687.50 per year per senior developer. With 5 junior and 2 senior developers on a team, the team's total annual cost for the daily standup meetings alone is $25,000.00. That's 25 grand per Scrum team per year just for one daily meeting. And we're not even factoring in the costs of providing insurance, commuter benefits, etc., which will make the actual annual cost considerably higher.


If we assume this ratio of 5 junior devs to 2 senior devs is roughly constant throughout an organization, it means that on average every 7 developers cost you at least $25k per year just for Scrum standups. If you employ 70 developers, that becomes $250k. If you employ 700 developers, it's $2.5 million. If you employ 10,000 developers, you're talking about more than $35 million. Just. for. standups. (And that's not even counting insurance or other benefit costs, nor whatever productivity drag there might be just from having the meetings.)
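For anyone who wants to re-run this estimate with their own numbers, here is a minimal sketch encoding the same assumptions (the salaries, working days, and team shape are the estimate's inputs, not facts about any particular firm):

```python
# Assumptions carried over from the estimate above.
WORK_DAYS = 250        # working days per year
HOURS_PER_DAY = 8      # hours of useful productivity per day
MEETING_HOURS = 0.25   # one 15-minute standup

def annual_standup_cost(salary, n_devs):
    """Yearly standup cost for n_devs at a given salary."""
    hourly = salary / (WORK_DAYS * HOURS_PER_DAY)   # e.g. $50/hr at $100k
    return hourly * MEETING_HOURS * WORK_DAYS * n_devs

# 5 junior devs at $100k plus 2 senior devs at $150k.
team = annual_standup_cost(100_000, 5) + annual_standup_cost(150_000, 2)
print(f"per 7-dev team:  ${team:,.2f}")                 # $25,000.00
print(f"70 developers:   ${team * 10:,.0f}")            # $250,000
print(f"700 developers:  ${team * 100:,.0f}")           # $2,500,000
print(f"10,000 devs:     ${team * 10_000 / 7:,.0f}")    # ~$35.7 million
```

Swapping in local salary data or a different meeting length is a one-line change, and the order of magnitude survives any reasonable choice of inputs.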

How to Write a Technology Job Ad

We are seeking an idealistic, quality-driven technologist to think creatively about unstructured, high-level problems and the trade-offs needed for their pragmatic solutions. If you have the maturity level to avoid considering yourself a ninja or guru of anything, read on!

You'll love working in our private, quiet offices from 8am to 4pm Monday through Friday, where we don't stock any beers or gourmet coffee!

You (an adult human mammal) are expected to figure out your own meals, but don't worry: we pay you enough money to eat and also to afford a structure to protect you and your family from the weather while you eat.

We're looking for someone who gets things done, such as the things in the job description that we mutually agree on and not other, surprise things.

Do you shop at Target instead of Wal-Mart? Well then you probably have more than enough higher education for us.

We don't want to know which buzzwords you selected for your resume. We just want to talk to you about your skills and experience and see if we can collaborate like amicable adult colleagues.

Benefits: actual insurance, actual vacation time, actual help for retirement.

Because it's 2015, please submit your resume online and don't fill out 6 pages of info that we could just read from your resume... we'll do that.

Thanks!