Mario Caropreso

"Impossible is a word to be found only in the dictionary of fools" (Napoleon)

Enable your organization to operate with less information

“Confronted with a task, and having less information available than is needed to perform that task, an organization may react in either of two ways. One is to increase its information-processing capacity, the other to design the organization, and indeed the task itself, in such a way as to enable it to operate on the basis of less information”

(Van Creveld, Command in War)

In order to thrive and maintain a competitive edge in environments characterized by rapid change, the ability to move faster than competitors is fundamental. Movements like the Lean Startup and the various Agile methodologies have popularized the concepts of being nimble and staying lean as a way to lower the cost of change and react faster than competitors. Most of these approaches are based on the concept of speed of decision making enabled by less mass: the less massive your organization is, the less energy is required to change its direction.

While this is certainly true, I’d like to show in this post how focusing just on speed of decision making gives you only half the picture: if you take into account the whole decision cycle, the quantity of information you need before making a decision is actually more limiting than the absolute speed of decision making. Hence the thesis of this article: if you want to achieve superior performance, enable your organization to work with less information.

The decision cycle

We can describe the decision cycle that happens inside any organization using the OODA loop concept developed by John Boyd. According to this model, decision making is a recurring cycle of four steps: observation, where the entity tries to make sense of all the details connected with the execution of the task at hand; orientation, in which the entity modifies and evolves its mental patterns to match the changing world; decision, where the entity recognizes the occurrence of a pattern and decides to react to it; and action, where the response for the selected pattern is executed.

I now want to show that in order to win over your competitors, it’s not absolute speed that counts, but your relative tempo or rhythm: the entity that can go through more cycles has the greater chance to win.

The central step of the OODA loop is orientation. Orientation is the way we interact with our environment. It is the set of all our patterns and beliefs about the world. Orientation is the element that builds effectiveness into our actions: fast decisions and heroic actions are useless if they are based on wrong information produced by an inadequate orientation. Since the organization that has the best repertoire of orientation patterns, and the ability to select the correct one for the situation at hand, has an increased chance to win, an organization has to keep its orientation updated in order to remain effective; it must validate its beliefs and modify them to incorporate new ones, as circumstances demand. This means, to use Boyd’s words, that “he who can handle the quickest rate of change survives”. Since survival is all about the capability to evolve, adapt and learn, speed matters only insofar as it enables you to evolve and adapt better than competitors. Victory goes to the side that can complete more cycles from observation to action.

How information can slow you down

Having framed the “move faster” mantra in the context of tempo of the decision cycle and rate of change, the next question to answer is: how can you increase your tempo?

It’s easy to see that, before an organization can actually move, it first has to gather all the information relevant to the task at hand and then process it. Thus, performing a task generates a demand for information. One might be tempted to say that in order to move faster, you need to get better at gathering and processing information. I want to show why this is impractical.

Gathering and processing all the information is a time-consuming task. First of all, one has to gather all the relevant information, and this poses the first obstacle. Getting the right information is not easy: information can be insufficient, contradictory, superabundant or plainly wrong, and you can’t get information about what you can’t observe, for example your competitors’ intentions. Even if one manages to gather all the relevant information, it still needs to be processed. The more information you have, the longer it takes to process and the harder it becomes to distinguish signal from noise. And information, in a competitive setting, has a time value: it is perishable. As General Patton said, “Information is like eggs. The fresher, the better”. As you spend time processing information, the information you have already collected is becoming obsolete. This means you need to collect new information! This creates a vicious cycle in which the information you have is useless and you are constantly looking for more.

One way to escape this vicious cycle is to realize that one will never be able to achieve certainty. You have to accept that uncertainty is the natural condition of any human endeavor. So, instead of trying to gather and process more information, you structure your task and your organization to work with less of it. Instead of reducing the uncertainty, we reduce the number of things we need to be certain of.

Enable your organization to operate with less information

According to Van Creveld, there are four main factors that influence the quantity of information your organization needs in order to perform a task. By acting on these factors, you can modify your information need and increase your potential performance.

Degree of specialization
The more specialized the people in your organization are, the more information they require to coordinate their performance. The cost of coordination grows not linearly but geometrically as the number of specialties increases. One way to mitigate the issue is to assemble cross-functional, self-contained teams capable of independent action.
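To make the geometric growth concrete, here is a back-of-the-envelope sketch (my own illustration, not Van Creveld’s): if every pair of specialties needs its own communication channel, the number of channels grows quadratically while headcount grows linearly.

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n specialties."""
    return n * (n - 1) // 2

# Headcount grows 10x (2 -> 20), coordination channels grow almost 200x.
print([channels(n) for n in (2, 5, 10, 20)])  # → [1, 10, 45, 190]
```

This is why splitting a large group into small self-contained teams pays off: each team coordinates internally over a handful of channels instead of everyone coordinating with everyone.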

Common frame of reference
When people share a common frame of reference, ideas, experiences and trust, each one knows what to do and what to expect of others, and implicit communication suffices. In order to let this common frame establish itself, you need to provide a certain stability and homogeneity in the organizational structure.

Size and complexity of the task
Everything else being equal, a larger and more complex task demands more information to carry out. One way to avoid the multiplication of communication channels is to divide the task into parts and assign each of them to a force that is able to deal with it separately and on a semi-independent basis.

Degree of centralization
Centralized resources increase the need for planning, coordination and internal communication. The degree of specialization can also have an impact on centralization: the more specialized the people, the less capable they are, separately, of making independent decisions, hence requiring a higher decision threshold.

Real world examples from Yammer

I want to provide some real examples of how you can work with less information. I’ll give you two examples drawn from my experience at Yammer, since our development methodology is inspired by the same principles. The examples are the way product teams are organized and the concept of product specification.

Teams at Yammer are organized in a cross-functional way. Each engineer is a member of one of the five functional teams: Frontend, APIs, Core Services, Apps and Infrastructure. But this is not the way we work. When a project is ready for implementation, a cross-functional team with representatives from each functional area is assembled and assigned to that project. The size of the team is kept small, between 2 and 10 engineers. This has several benefits. First of all, by grouping together all the specialists involved in implementing a project, we ensure that the team has no external dependencies and is thus fully autonomous and independent. The decision threshold is very low, since the team is empowered to make decisions across all the functional areas represented. The communication burden is reduced because all communication happens inside the small cross-functional team.

The following example was provided to me by my colleague Marco Rogers, manager of the Frontend Team. The Product Specification describes the product your company will build. Its purpose is to clearly and unambiguously articulate the product’s purpose, features, functionality and behavior. How much information do you need to include in a specification? An obvious goal would be for it to be complete enough to give the development team the information they need to do their jobs. But how much is “complete information”? The way Yammer tackles this is that we don’t expect the specification to be the single source of truth. We don’t expect everything to be specified in depth in it. Instead, we design the task of implementing a feature to require less information. The Product Specification roughly defines the scope of the project, but everyone on the project team is responsible for building their own understanding of that scope. The Product Manager is an integral part of the product team. QA people talk to the Product Manager to clarify gaps. Developers talk to the Product Manager to negotiate changes in scope if the work is too expensive. Since people work together on the project team and we make collaboration easy, a detailed spec that specifies all the requirements up front is not needed.

Obviously, it is important to find the right balance between too much and too little information. Too little information, and you risk going in the wrong direction. Too much, and you risk remaining in the same position forever. How much is too much is a judgement call. It can’t be taught, and it is what makes our job so interesting.

Originally posted on the Yammer Engineering Blog: 

Nonlinearities and Success

"A lot of would-be founders believe that startups either take off or don’t. You build something, make it available, and if you’ve made a better mousetrap, people beat a path to your door as promised. Or they don’t, in which case the market must not exist. Actually startups take off because the founders make them take off."

(Paul Graham)

In his “Do things that don’t scale” essay, Paul Graham suggests that a startup’s success is not automatic; rather, startups take off because their founders make them take off. This made me think about the general properties of success, both at the individual and the startup level, and wonder why, even though we have case after case of successful individuals and companies, it is impossible to devise any rules or models of success.

Indeed, many people pretend they can explain success: usually they attribute success to specific character traits or to an infallible process or try to build a narrative to retrospectively fit it and explain how it was achieved. In this article I’d like to first show why it’s difficult to build a model to explain success and second how models can actually hurt you.

The nature of success

When we talk about success, we usually mean the accomplishment of an aim or purpose. The accomplishment may be temporary or permanent and the aim may be the original or a new one devised while pursuing the former - nonetheless, we mean success when we achieve something that we consider worthwhile to achieve.

Success is an outcome: it’s the result of a combination of factors that at a given time led to a positive outcome for the agent achieving it. Given that success is the end product of a process, it is obvious that if we want to explain it, we need to know the inputs to that process and the transformation the process operates on those inputs to generate the successful output. We can call this process the generator of success. So, the problem of understanding success can be reframed in terms of understanding its generator.

The generator of success is nonlinear

But our problems begin here. The generator of success is clearly a nonlinear process. 

A process is said to be linear if its output is proportional to the stimulus given to it. For example, if you double the amount of money in your savings account, you will receive double the interest. In a nonlinear process this doesn’t happen: the output is not directly proportional to the input, and a small change in one variable can result in a much bigger change in another. If you work ten percent more hours, will you accomplish ten percent more work? And if you worked twice as many hours?
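The contrast can be sketched in a few lines of Python (a toy illustration of my own; the diminishing-returns curve for “useful work” is made up, not real data):

```python
# Linear process: doubling the input exactly doubles the output.
def interest(principal: float, rate: float = 0.05) -> float:
    return principal * rate

# Nonlinear process (hypothetical): useful work per day tapers off as
# fatigue sets in, so doubling hours does not double the output.
def useful_work(hours: float) -> float:
    return hours - 0.05 * hours ** 2

print(interest(200) == 2 * interest(100))        # True: proportional
print(useful_work(16) < 2 * useful_work(8))      # True: not proportional
```

In the toy model, sixteen hours of work actually produce less than eight hours do, which is the whole point: in nonlinear processes, intuition built on proportionality fails.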

Life is full of nonlinearities. I can safely say that all the interesting things in life happen because of nonlinearities. You start studying a foreign language, stare for months at people uttering what sound like random noises, and all of a sudden you listen to a song and forget it was not in your native language. You spend days looking at a problem, and when you decide to step back and give up, the solution pops up in your mind. Reid Hoffman gives the example of George Clooney who, despite possessing all the traits generally attributed to successful actors, spent twelve years auditioning before landing the ER role that made him a star in Hollywood.

Nonlinearities in the generator make success a path-dependent outcome. Path dependency means that small differences in the initial conditions, amplified by positive feedback loops, determine success or failure. This means that success isn’t always the victory of the best (best product, best technology, best skills), because being the best is not linearly correlated with success. Take as an example the success of Microsoft. Nobody claimed that Bill Gates had the best software at the time. But nonlinearities, in the form of network externalities and bandwagon effects, amplified a small initial advantage and made Microsoft dominant. And history is full of examples like that.
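A classic toy model of this amplification is the Polya urn, sketched below (my own illustration, not from the sources quoted here): each new adopter picks a product with probability equal to that product’s current share, so an early random lead tends to be reinforced rather than averaged away.

```python
import random

def final_share(steps: int = 1000, seed: int = 0) -> float:
    """Polya-urn sketch of positive feedback: product A's probability
    of winning the next adopter equals A's current market share."""
    rng = random.Random(seed)
    a, b = 1, 1  # both products start perfectly equal
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Identical starting conditions, different random histories (seeds),
# very different final market shares: the path is what decides.
shares = [final_share(seed=s) for s in range(5)]
print(shares)
```

No run of this simulation tells you which product was “better”; both are identical. Only the path differs, which is exactly what path dependency means.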

One could ask: given the strong evidence against linearity in explaining success, why do we keep using linear models? I think this happens for two reasons:

  1. First, linear models are much easier to understand and the mathematics for them is relatively straightforward; under certain hypotheses, they can adequately represent the behavior of many realistic processes;
  2. Secondly, there is a natural fallacy in our brain that makes us simplify the world into a chain of cause-effect relationships, and that is something that can’t be avoided. Our brain evolved to give us the means to survive and preserve our genes. In front of a tiger, the simple link between it and the possibility of being eaten was enough to trigger the fight-or-flight response that made it possible for the hunter-gatherer to pass on his genes. But when we use that same brain to try to understand reality and draw a link between being agile and selling your company for a billion dollars, then problems begin.

Why is using the wrong model worse than using no model at all? 

Many people may object that having a bad model is better than having no model at all. It isn’t, and let me explain why.

Would you go on a trip to an unknown place in Borneo with a map of New York City? While the answer to this question is straightforward, it is incredible how many people fail to apply the same logic when dealing with the same problem in other domains like product development or startups.

Using the wrong model is bad for two reasons. First, you don’t know beforehand where the model will be wrong. The most dangerous position to find ourselves in is when the gap between what we know and what we think we know becomes wide. Predictions are an example. Nassim Taleb shows that in the presence of nonlinearities and uncertainty, our ability to predict from the past is deeply flawed. The most extreme outcomes are the ones that really matter; yet precisely because they are extreme, they are mostly underrepresented in the past. Hence we have no idea where our model will be mistaken.

Second, a wrong model can lead us to ignore opportunities for success or disrupt our efforts. Let me explain why. Given the nonlinearity of success, we can’t know in advance whether a course of action will lead to a positive outcome or not. But we can say for sure that in some cases an action will bring us gains, and in others it will harm us. This means that performing an action is equivalent to buying an option: our actions today buy us the right to enjoy success in the future, should it happen. The value of such an option depends on the shape of the payoff function and the presence of asymmetric returns. Nassim Taleb calls this property convexity. Every opportunity has an associated payoff function. When there is an asymmetry in the payoff function between the gains, which need to be large, and the errors, which need to be small and harmless, the opportunity is valuable for us.
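A numeric sketch of such an asymmetric payoff (my own toy numbers, not Taleb’s): an option-like bet that usually loses a small stake but occasionally pays off big has a positive expected value even though it fails 95% of the time.

```python
import random

STAKE = 1.0  # the small, capped downside of trying

def payoff(outcome: float) -> float:
    """Option-like payoff: the loss is capped at the stake, the gain is not."""
    return max(outcome, 0.0) - STAKE

rng = random.Random(42)
# 5% of attempts succeed and return 100; the rest return nothing.
outcomes = [100.0 if rng.random() < 0.05 else 0.0 for _ in range(100_000)]
mean_payoff = sum(payoff(x) for x in outcomes) / len(outcomes)
# Expected value ≈ 0.05 * 100 - 1 = +4 per attempt, despite a ~95% failure rate.
print(mean_payoff)
```

A model that only looks at the failure rate would tell you to avoid this bet; a model that looks at the shape of the payoff tells you to take it as often as you can.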


The presence of real options on the road to success means that variability and flexibility have tremendous value for us, but knowledge, or what we think we know, may force us into variability-limiting behaviors. We may discard a certain course of action simply because the information in our possession suggests so, without realizing that the information is at best partial or at worst completely wrong. We may limit our options because our knowledge says that something will work, and we don’t defer the decision to the last responsible moment. This is why in the past I wrote an article stating that you have to forget in order to be able to do and try things. Path dependency means that small things play a big role and can make a big difference: an unexpected meeting, a conversation with a customer, an opportunity discovered at a party. Indeed, you can get that initial edge through sheer luck. History is dominated by low-probability events.


Explaining success is difficult. Pretending to find a scientific model able to predict it is foolish. And sometimes, behind a great success you can find errors you would have never expected:

"Thus Napoleon at Jena had known nothing about the main action that took place on that day; had forgotten all about two of his corps; did not issue orders to a third, and possibly to a fourth; was taken by surprise by the action of a fifth; and, to cap it all, had  one of his principal subordinates display the kind of disobedience that would have brought a lesser mortal before a firing squad. Despite all these faults in command, Napoleon won what was probably the greatest single triumph in his entire career."

M. Van Creveld

Complexity is path dependent

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

(Gall’s Law)

Last week my colleague Matt Duncan posted this interesting quote about complex systems, and it made me think a lot about the nature of complexity and about what qualifies as complex. If “a complex system that works is invariably found to have evolved from a simple system that worked”, then I see complexity as a path-dependent outcome: you can’t judge the complexity of a system without knowing how the system ended up in that state. How does this path-dependent outcome differ between bottom-up and top-down systems? Why is the complexity different?

What is complexity?

Before diving into the analysis of this path dependency, I think it would be useful to clarify what we mean by complexity. I’ll adopt Nassim Taleb’s definition from Antifragile.

A complex system is a system in which the interdependencies between its constituents are so severe that it’s impossible to predict the cascade of effects any intervention on the system may cause. Complex systems are characterized by causal opacity: it’s nearly impossible to see the arrow from cause to consequence. Complex does not mean complicated. A complicated system like the engine of a car can still have a predictable response to a given set of inputs. Unpredictability and causal opacity are the real differentiators.

Why is complexity path dependent?

Back to Gall’s law: there is a difference between top-down and bottom-up complex systems. Let’s see why.

Bottom-up systems are survivors, they are proved-to-work. Top-down systems are brainchildren, they are supposed-to-work. I’ll call the first kind “natural system” and the second kind “artificial system”.

The first difference between a natural and an artificial system lies in the knowledge they carry with them. Natural systems incorporate the knowledge of time and experience, of what has made the system survive. A natural system learns. Artificial systems incorporate the knowledge of what their creator thinks will make the system survive. They don’t learn. The reality is that what we know and what is true are two very different things, because of the causal opacity we talked about before.

Also, artificial systems are optimized: the creator started from a set of constraints and designed the best solution for them. Natural systems are adaptable: they have managed to survive in a world of changing conditions. Artificial systems are optimized to work within a set range of conditions: any deviation from this range generates cascading and unpredictable effects that bring the system to collapse. Natural systems learn from complexity: unexpected events bring new information that the system uses to adapt and evolve.

Take as an example the human body. It is a very complex system. Now take a 50-story skyscraper. Its structural system responds very differently to static and dynamic loads. A free fall of the upper floors can make the whole structure collapse (look at the WTC collapse), because this case was not considered by the designer. Now take the human skeleton. It has evolved in response to external stressors and has gained new knowledge in the process. Everybody knows that the human body has the amazing property of healing fractured bones, but another astonishing property of bone is that “bone in a healthy person or animal will adapt to the loads under which it is placed. If loading on a particular bone increases, the bone will remodel itself over time to become stronger to resist that sort of loading” (Wolff’s law). This property was not designed: it emerged among several other properties that were less useful for survival.


How we work in Engineering - Yammer Engineering - If you are interested in how we deal with complexity at Yammer, you might be interested in our thoughts on trust, autonomy and decentralization. And if you like it, come join us!


Don’t Trust Cause-Effect Relationships

Causality is the relationship between an event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first.

A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty. Uncertainty is the essence of a startup. A startup operates in an uncertain environment: both the problem to be solved and the product to build are unknown. This is why learning is the vital function of a startup. 

Learning means updating our theories about reality, and these theories are expressed as cause-effect relationships. The problem is that our understanding of causality is often broken, and since our decisions are all influenced by our assumptions about causality, a broken understanding has the potential to sway them.

In this post I investigate the problems of cause-effect relationships and show how we can overcome this obstacle.

The problem with causality

Learning is all about understanding why things happen and why some events lead to other events. Learning is about understanding the causal link between things.

But this causal link doesn’t exist in reality; it isn’t a property of things. We cannot perceive cause and effect; rather, as the Scottish philosopher David Hume once argued, we develop a habit of mind in which we come to associate two types of objects and events, always contiguous and occurring one after the other. Contiguity and succession are what we call causality.

Now, we infer causality from sensory information, prior experience and innate knowledge. And here come our problems:

Limited information - Only limited, often unreliable, information is available; in complex situations, many interdependent factors affect outcomes. Sometimes we don’t know how factors are linked to each other, sometimes we don’t know which factors affect a situation: we usually deal with partial information;

Limited capacity - The human mind has only limited capacity to evaluate and process the information that is available. Usually, our prior experience and knowledge teach us what to look at;

Attribution errors - Human beings have a tendency to over-value dispositional explanations for the observed behaviors of others while under-valuing situational explanations for those behaviors. For example, we are likely to attribute our success to our insights and skills while dismissing others’ success as good luck or other external factors.

How to overcome these obstacles?

David Hume suggested what he calls mitigated skepticism: since we all have limited experience, our conclusions should always be tentative, modest, reserved, cautious. Nonetheless, anyone “sensible of the strange infirmities of human understanding” should also strive for overcoming these obstacles, and here is my advice. 

Data-informed - Make data-informed, not data-driven decisions. Data give you only half the picture: they measure the past and cannot predict the future. Treat data with caution: you might have measured the wrong thing, or you might have measured the right thing in the wrong way.

Test your assumptions - When too much uncertainty prevents you from a full understanding of your problem, you have only one choice: test your hypotheses. In science, this is known as the scientific method. If you think you’re doing right, test your assumptions. Do it.

Stay flexible - When there is too much uncertainty, flexibility is the key to success. Don’t make long-term commitments that harm your ability to adapt to a fast-changing environment. Napoleon would order his corps to stay at most one day’s march from each other, so that they were in mutually supporting positions and able to come to each other’s aid in the event of a concentration for battle, or to ward off superior forces. Since the battle plan was formulated only when the enemy’s intentions were clear, this formation gave the French the flexibility to gain a strategic advantage over their enemies.

I don’t want to encourage reckless risk-taking. In 218 BC, Hannibal invaded Italy by land across the Alps. The task was daunting, to say the least. Nobody thought it would be possible. But Hannibal knew it was possible, and did it. Intuition is a highly developed pattern-recognition process. It has to be trained, and nothing trains intuition like experience.


Have you ever followed your gut instinct when odds were against you? Let me know your experience with a comment.

Forgetfulness is a Property of All Actions

If the man of action, in Goethe’s phrase, is without conscience, he is also without knowledge: he forgets most things in order to do one, he is unjust to what is behind him, and only recognises one law, the law of that which is to be. So he loves his work infinitely more, than it deserves to be loved; and the best works are produced in such an ecstasy of love that they must always be unworthy of it, however great their worth otherwise.

(F. Nietzsche - “On the Use and Abuse of History for Life”)

Recently, I read an article by Neal Stephenson on innovation starvation. The author writes about the inability of Western societies to execute on big things. One point that made me think is the thesis that too much information is killing innovation. It is something I have been thinking about lately, so I want to share my thoughts with you.

Can information kill creativity?

We live in a world where anyone can easily access information, and better access to information has surely improved our lives by increasing our efficiency. But excited by the power information gives us, we haven’t paid any attention to the unintended consequences of having too much information available.

My thesis is that too much information, when not properly managed, can kill creativity, and consequently, innovation, by affecting our ability to take risks.

What is creativity?

According to the Merriam-Webster dictionary, creativity refers to the ability to make or bring into existence something new. There is a lot of psychological research about what drives creativity – and also about the exact definition of what creativity is – but I don’t want to bother you, so we will stick to what common sense tells us.

If we define creativity as the ability to bring into existence something new, we must hold that if an individual wants to create something new, she must accept the risk that she might fail. Newness is intrinsically risky, because if something is really new, we have nothing telling us that we will succeed. It’s quite obvious that if we don’t accept such a risk, we can’t do creative things. We can make small improvements to existing things (which isn’t so bad, by the way), but we can’t make big, disruptive innovations.

So, creativity is deeply linked to the ability of individuals to accept the risk of failure. I want you to focus on this point because it is the keystone of my thesis. Since creativity is deeply linked to the ability to accept risk, it goes without saying that anything which modifies our risk propensity also modifies our creativity.

So far we have seen that when our risk propensity drops, we shy away from the unknown road in favor of the known one; but how does this affect creativity?

This problem is what optimization theory calls the local maximum problem (or, for the computer scientists among us, the hill climbing problem). Simply put (dear mathematicians, don’t hold it against me), a local maximum occurs when you try to solve a problem step by step and find a solution that seems optimal, but is optimal only because you are considering a small subset of the original problem. When something holds you back from looking for alternative solutions, your creativity surely suffers.
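For the computer scientists, here is a minimal sketch of the trap (the landscape below is a toy function of my own): a greedy hill climber that only ever takes improving steps parks itself on the nearest small hill and never discovers the mountain next door.

```python
import math

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill climbing: move to a neighboring point only if it
    improves f, so the search can get stuck on a local maximum."""
    for _ in range(iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break  # no improving neighbor: we are on a (local) peak
        x = best
    return x

def landscape(x):
    # A small hill near x = 1 and a much taller mountain near x = 4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

# Starting near the small hill, the climber stops around x = 1 and
# never reaches the global maximum near x = 4.
x_found = hill_climb(landscape, 0.0)
```

Risk-free, step-by-step improvement is exactly this algorithm: every individual step looks rational, and the overall result is that you never leave the small hill you started on.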

How can information modify our risk propensity?

Information can modify our risk propensity in several ways; in the following I will discuss two ways information can affect our risk propensity: partial information and social proof.

The available information depicts the effects at the expense of the causes. Most of the time, you only know the effects of decisions, while the causes remain obscure to you. This is what I call partial information: you have only partial knowledge of the problem space.

Now, a wise man would acknowledge this situation and say: “Hey, this is what I have. I will use it, but with the awareness that it is only partial knowledge”. A wiser man would go further and say: “I don’t mind; I will follow my instinct”.

But what we typically do is believe in ineluctable certainty and transform that partial knowledge into full, absolute knowledge. This is where we start avoiding risks: since there is no certainty, we build the illusion of a risk-free world. We start taking small steps, the risk-free ones, climbing the hill and forgetting the mountain, shaping our world as a risk-free one. We simply negate risk.

The availability of partial information generates the illusion that risk doesn’t exist. And if risk doesn’t exist, what do you call people who take pointless risks? (Pick the word yourself.) This is where social proof comes in. As social beings, we are constantly responsive to the activities of others. When we face a situation in which we don’t know what to do, we look for clues in others’ behavior to make our choice. With all the social media available nowadays, we are constantly flooded with the opinions of others. All of this can discourage us from continuing with big, bold projects. As Nietzsche put it:

“He has lost or destroyed his instinct; he can no longer trust the “divine animal” and let the reins hang loose, when his understanding fails him and his way lies through the desert. His individuality is shaken, and left without any sure belief in itself; it sinks into its own inner being, which only means here the disordered chaos of what it has learned, which will never express itself externally, being mere dogma that cannot turn to life”

How can we defend ourselves from too much information?

Information is not bad at all; it is bad only insofar as it starts affecting our risk propensity. Here are some simple tips you can use to control how information affects your behavior:

1. Understand that information is, at the best, partial

It is impossible to get complete knowledge about an event. You only get partial information. What is worse, you don’t know which information you are missing. So, learn what you can after a critical assessment, and leave a small space for your gut instinct.

2. Follow your gut instinct

Innovation requires you to fail. If you have a hunch that the gamble might pay off in the long run, place that bet. Life is too short to brood over things; trying is the best way to learn new things (of course, you must also minimize the negative effects of failing and set your mind to learning mode).

3. Know thyself

Organize the chaos inside you, start thinking back to yourself and to your own true necessities, and let all the sham necessities go.

I want to conclude with the opening quote from Nietzsche. Remember that every man of action is without conscience, and a bit of oblivion is needed to get things done.

And you? Have you ever been in situations where too much information was holding you back? How did you overcome them? Let me know in the comments.

Don’t Kill the Fun

Programmers are in a race with the Universe to create bigger and better idiot-proof programs, while the Universe is trying to create bigger and better idiots.  So far the Universe is winning.

(Rich Cook)

Computer programming is fun. This is the only truth. Don’t trust anyone who says it is boring. Or better, just feel sorry for her, because there are only two possible reasons: either she let others mortify her passion, or she got into the job just for the money. Since this blog is all about passion, I will now speak about the latter.

Programming pays. In Italy it is one of the few industries in which a fresh graduate can still find a job. This means that tons of people are coming into the job just to escape the brutal sickle of unemployment. There is nothing wrong with that, to my mind, but I would advise those people that programming is only for those who love it. Certain tasks are really boring; much of today’s software development is really gluing, that is, linking together libraries that do the work for you. Then your company gives you a database, and you have to write those silly pieces of code that do Create, Read, Update, Delete. If you are lucky. Otherwise, you will likely be forced to use a CRUD generator and wire up the code it produces. Stop. There is nothing inherently wrong with this: we live in a world of limited resources, and we must work efficiently and effectively. This is the raison d’être of Corporate IT. But if you don’t love programming, this will likely kill the fun. So, where is the fun?

We could ask several thousand true programmers, and all of them would say that the fun is in the creation. A programmer starts from nothing and creates something. He is the agent that transforms not-being into being. He received a gift, the gift of God:

Why is programming fun? What delights may its practitioner expect as his reward? First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God’s delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.

(Frederick P. Brooks, Jr.)

It is that sense of creation that makes programming meaningful, and we should never forget this truth: we create. So, the antidote to the sense of boredom that kills the fun of programming is simply this: have passion.