Early in the pandemic, I received an e-mail from a reader who embraced my writing about the importance of deep work and the need to minimize distractions, but was thrown by my use of the term “productivity” to describe these efforts: “The productivity language is an impediment for me.” Intrigued, I posted a short essay on my Web site that reacted to her message, proposing that the term “productive” could be salvaged if we define it more carefully. There were, I wrote, positive aspects to the idea of productivity. For example, by better organizing administrative tasks that cannot be ignored—paying taxes, filing forms—you can reduce how much time you spend on such drudgery. On a larger scale, the structured “productive” pursuit of important projects, far from being soulless, can be an important source of meaning.

My readers didn’t buy my defense. The comments were filled with a growing distaste for the many implications and exhortations that had become associated with productivity culture. “The productivity terminology encodes not only getting things done, but doing them at all costs,” one reader wrote. Another commenter pushed back against the proliferation of early-pandemic business articles that encouraged workers to stay “productive” even as they were thrown unexpectedly into remote environments: “The true message behind these posts is clear: ignore your growing sense of existential dread, ignore your children, and produce value for our shareholders—or else!” Others advocated for alternative terms, such as “alive time,” or “productive creativity”—anything to cleave the relationship between “productivity” the signifier and all that it had come to signify.

Some of these reactions were amplified because of the unique stresses of the early pandemic, but that alone cannot explain their stridency. A growing portion of my audience was clearly fed up with “productivity,” and they are not alone. The past few years have seen many popular books that elaborate this same point. In 2019, the artist and writer Jenny Odell helped start this trend when she published “How to Do Nothing: Resisting the Attention Economy,” which became a Times best-seller and was selected by Barack Obama as one of his favorite books of 2019. This was followed, the next spring, by Celeste Headlee’s “Do Nothing: How to Break Away from Overworking, Overdoing, and Underliving,” then Anne Helen Petersen’s “Can’t Even: How Millennials Became the Burnout Generation,” and, earlier this year, Devon Price’s “Laziness Does Not Exist.” Though these books ultimately present a diverse collection of arguments, they are unified by a defiant rebuke of productivity culture.

A striking element of these books is the degree to which their message is born out of personal experience. Not long after Headlee’s book was published, I interviewed her and asked why she decided to write about this topic. She told me of a TED talk she had given about having better conversations that went unexpectedly viral, gathering over twenty-five million views. “I was inundated with requests for writing and speaking,” she said. She tried to say “no” more often, but found that “the offers got harder and harder to turn down.” She was soon overwhelmed. “I was more stressed out, and more busy, and sick,” she said, describing two prolonged illnesses that laid her low during this period. “That’s what made me realize I was in crisis: I rarely get sick.” Headlee concluded that humans were not wired to maximize activity—she argued that we’re pushed into this unnatural and unhealthy state by cultural influences that aren’t aligned with our best interests, citing “a combination of capitalist propaganda with religious propaganda that makes us feel guilty if we’re not feeling productive.”

It’s understandable that authors such as Headlee, or the commenters on my essay, have become frustrated with the lionization of “productivity”: we’re exhausted and are fed up with the forces that pushed us into this state. But, before we decide whether we need to dispense with the term altogether, we should briefly revisit its history. The use of the word “productive” in an economic context dates back to at least the time of Adam Smith, who used it in “The Wealth of Nations” to describe labor that added value to materials. According to Smith, a carpenter transforming a pile of boards into a cabinet is engaging in productive labor, as the cabinet is worth more than what the original boards cost. As the formal study of economics solidified, “productivity” gained a more precise formulation: output produced per unit of input. From a macroeconomic perspective, this metric is important, because increasing it produces surplus value, which in turn grows the economy and generally improves the standard of living. On long timescales, improvements in productivity can be greatly positive. Writing in 1999, the management theorist Peter Drucker noted that the productivity of the manual worker had grown fiftyfold during the last century. “On this achievement rest all of the economic and social gains of the 20th century,” Drucker concluded. In other words, the increase in productivity is why today most Americans own a smartphone, while a century ago they didn’t have indoor plumbing.

If you accept that increased productivity helps the common good, the question becomes how to reliably achieve these increases. Until recently, the answer to this largely involved optimizing systems. In the eighteenth century, agricultural productivity was increased by the introduction of the Norfolk four-course system, which avoided the need to leave fields periodically fallow. Similarly, the productivity of early-twentieth-century car manufacturing leaped forward with the replacement of the craft method (in which workers moved around a stationary chassis) with Henry Ford’s continuous-motion assembly line (in which the chassis moved past the stationary workers). The relationship between these optimized systems and the people who toiled in them was complicated and often quite dark. The introduction of the industrial assembly line, for example, accelerated the de-skilling of manual labor, and made workers’ tasks more monotonous. Most relevant to this discussion, however, is how these optimization efforts were developed largely outside the scope of the individual employees included in the systems. If you worked on a Ford automotive assembly line, you didn’t need to read about the habits of highly effective people to do your job well.

Then came the rise of knowledge work. By the time this term was first introduced, in 1959, the center of gravity for the American economy had begun moving from fields and factories toward offices, and many of these office-based efforts evolved from rote clerical tasks to more creative and skilled initiatives. The importance of increasing macro-level productivity remained, but the way we pursued these increases changed. Instead of continuing to focus on optimizing systems, the knowledge sector, for various complicated reasons, began to shift onto the individual worker the burden of improving output produced per unit of input. Productivity, for the first time in modern economic history, became personal.

We should not underestimate the radical nature of this shift. Historically, optimizing systems to increase productivity was exceedingly difficult. The assembly line didn’t arrive in a flash of self-evident insight. Ford suffered through numerous false starts and incremental experiments. He had to invest significant amounts of money and develop new tools, including one particularly ingenious mechanism, which could simultaneously drill forty-five holes into an engine block. Now we casually ask individual knowledge workers to undertake similarly complex optimizations of their own proverbial factories, and to do it concurrently with actually executing all the work they’re attempting to streamline. Even more troubling is the psychological impact of individualizing these improvements. In classic productivity, there’s no upper limit to the amount of output you seek to produce: more is always better. When you ask individuals to optimize productivity, this more-is-more reality pits the professional part of their life against the personal. More output is possible if you’re willing to steal hours from other parts of your day—from family dinners, or relaxing bike rides—so the imperative to optimize devolves into a game of internal brinkmanship. This is an impossibly daunting and fraught request, and yet we pretend that it’s natural and straightforward. It’s hard enough to optimize a factory, and a factory doesn’t have to worry about getting home in time for school pickups.
