Table of Contents
- Brooks’s Law
- Clarke’s Three Laws
- Conway’s Law
- Gall’s Law
- Hofstadter’s Law
- Metcalfe’s Law
- Pareto Principle
- Peter Principle
- Robustness Principle (Postel’s Law)
Brooks’s Law

Brooks’s Law is a claim about software project management according to which “adding manpower to a late software project makes it later”. It was coined by Fred Brooks in his 1975 book The Mythical Man-Month. According to Brooks, there is an incremental person who, when added to a project, makes it take more, not less, time. He attributes this to three factors:
- Ramp-up time: people added to a project need time to become productive, and educating them consumes the effort of existing team members.
- Communication overheads increase as the number of people increases.
- Limited divisibility of tasks. Adding more people to a highly divisible task such as reaping a field by hand decreases the overall task duration (up to the point where additional workers get in each other’s way). Some tasks are less divisible; Brooks points out that while it takes one woman nine months to make one baby, “nine women can’t make a baby in one month”.
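The communication-overhead point can be made concrete: a fully connected team of n people has n(n−1)/2 pairwise channels, so coordination cost grows quadratically while hands grow only linearly. A minimal, illustrative sketch:

```python
def communication_channels(team_size: int) -> int:
    """Pairwise communication channels in a fully connected team: n(n-1)/2."""
    return team_size * (team_size - 1) // 2

# Doubling the team from 5 to 10 people more than quadruples the channels.
for n in (2, 5, 10, 20):
    print(f"{n:2d} people -> {communication_channels(n):3d} channels")
```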
Clarke’s Three Laws
- When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
Conway’s Law

Conway’s Law is an adage named after computer programmer Melvin Conway, who introduced the idea in 1967; it was first dubbed Conway’s law by participants at the 1968 National Symposium on Modular Programming. It states:
“Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”
Gall’s Law

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – John Gall (1975, p. 71)
This law is essentially an argument in favour of underspecification: it can be used to explain the success of systems like the World Wide Web and Blogosphere, which grew from simple to complex systems incrementally, and the failure of systems like CORBA, which began with complex specifications. Gall’s Law has strong affinities to the practice of agile software development.
Although the quote may seem to validate the merits of simple systems, it is preceded by the qualifier “A simple system may or may not work.” (p. 70). This philosophy can also be attributed to extreme programming, which encourages doing the simplest thing first and adding features later.
One of the first systems designers to cite Gall’s Law was Ken Orr, in 1981. Grady Booch has also quoted it repeatedly since 1991, and his quotations have been noted in multiple sources.
The law appears in Gall’s book Systemantics (a change in typography and underlining in the original indicate that the title is better rendered as “SystemANTICS”), a commentary on systems theory and on general-semantics writings by thinkers such as Ludwig von Bertalanffy and Alfred Korzybski.
Hofstadter’s Law

Hofstadter’s Law is a self-referential, time-related adage, coined by Douglas Hofstadter in his 1979 book Gödel, Escher, Bach and named after himself:

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
Metcalfe’s Law

Metcalfe’s law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²).
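The quadratic relationship means that doubling the user count quadruples the network’s value. A small sketch (the proportionality constant k is a hypothetical placeholder):

```python
def network_value(users: int, k: float = 1.0) -> float:
    """Metcalfe's law: value proportional to the square of the user count."""
    return k * users ** 2

# Doubling users from 100 to 200 quadruples the value.
assert network_value(200) / network_value(100) == 4.0
```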
Pareto Principle

The Pareto principle (also known as the 80–20 rule, the law of the vital few, and the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes. Management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who, while at the University of Lausanne in 1896, showed in his first work, Cours d’économie politique, that approximately 80% of the land in Italy was owned by 20% of the population. Pareto developed the principle by observing that 20% of the peapods in his garden contained 80% of the peas.
- In load testing, it is common practice to estimate that 80% of the traffic occurs during 20% of the time.
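The principle is easy to check against real data: sort contributions in descending order and measure what fraction of causes accounts for 80% of the total effect. A sketch with made-up numbers:

```python
def vital_few_share(values, target: float = 0.8) -> float:
    """Smallest fraction of items (largest first) whose sum reaches
    `target` of the grand total."""
    ordered = sorted(values, reverse=True)
    total = sum(ordered)
    running = 0.0
    for count, v in enumerate(ordered, start=1):
        running += v
        if running >= target * total:
            return count / len(ordered)
    return 1.0

# Hypothetical per-customer revenue: one heavy hitter dominates.
revenue = [80, 5, 5, 5, 5]
print(vital_few_share(revenue))  # 0.2 -> 20% of customers yield 80% of revenue
```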
Peter Principle

The Peter principle is a concept in management theory, formulated by Laurence J. Peter and published in 1969, in which the selection of a candidate for a position is based on the candidate’s performance in their current role rather than on abilities relevant to the intended role. Thus, employees stop being promoted only once they can no longer perform effectively, and “managers rise to the level of their incompetence.”
The Peter principle is a special case of a ubiquitous observation: anything that works will be used in progressively more challenging applications until it fails. This is the “generalized Peter principle”. There is a strong temptation to use what has worked before, even when it may not be appropriate for the current situation.
Software Peter Principle
The software Peter principle is used in software engineering to describe a dying project which has become too complex to be understood even by its own developers.
It is well known in the industry as a silent killer of projects, but by the time the symptoms arise it is often too late to do anything about it. Good managers can avoid this disaster by establishing clear coding practices in which unnecessarily complicated code and design are avoided.
The name is used in the book C++ FAQs and is derived from the Peter Principle – a theory about incompetence in hierarchical organizations. Commonly cited causes include:
- Loss of conceptual integrity
- Programmer incompetence
- Programmer inexperience
Robustness Principle (Postel’s Law)
In computing, the robustness principle is a general design guideline for software:
Be conservative in what you do, be liberal in what you accept from others (often reworded as “Be conservative in what you send, be liberal in what you accept”).
The principle was formulated by Jon Postel in an early specification of TCP (RFC 761, 1980): “TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others.”
In other words, code that sends commands or data to other machines (or to other programs on the same machine) should conform completely to the specifications, but code that receives input should accept non-conformant input as long as the meaning is clear.
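A small sketch of the idea (the flag format and the accepted spellings are invented for illustration): the parser is liberal about spelling, case, and whitespace as long as the meaning is clear, while the emitter always produces the one canonical form.

```python
def parse_flag(raw: str) -> bool:
    """Liberal receiver: accept common spellings when the meaning is clear."""
    text = raw.strip().lower()
    if text in {"true", "yes", "on", "1"}:
        return True
    if text in {"false", "no", "off", "0"}:
        return False
    # When the meaning is NOT clear, robustness does not require guessing.
    raise ValueError(f"meaning unclear, rejecting: {raw!r}")

def emit_flag(value: bool) -> str:
    """Conservative sender: always emit the canonical spelling."""
    return "true" if value else "false"

assert emit_flag(parse_flag("  YES ")) == "true"   # tolerated on input
assert emit_flag(parse_flag("0")) == "false"       # canonical on output
```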
Among programmers, the principle is often restated in type-theoretic terms for producing compatible functions: be contravariant in the input type and covariant in the output type.
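That slogan can be read as the usual function-subtyping rule: a replacement function may accept a more general input type and return a more specific output type. A hypothetical sketch (class names invented for illustration):

```python
from typing import Callable

class Animal:
    pass

class Dog(Animal):
    pass

def care_for_any_animal(a: Animal) -> Dog:
    """Accepts any Animal (liberal in what it receives) and returns
    a Dog (conservative, specific, in what it produces)."""
    return Dog()

# Wherever a Callable[[Dog], Animal] is required, care_for_any_animal is a
# safe substitute: contravariant in the input, covariant in the output.
handler: Callable[[Dog], Animal] = care_for_any_animal
assert isinstance(handler(Dog()), Animal)
```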