The original article by Dijkstra can be found here (PDF). These notes summarize my read of the article for future reference.

There's an audio recording of his Turing Award lecture available on YouTube.

Highlights

Paraphrasing in the quotes below is in italic.

[van Wijngaarden, who was then my boss at the Mathematical Centre in Amsterdam] went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come?

What about the poor programmer?

He was hardly noticed; the machines were so bulky that most of the focus was on running them; you could show off the machines, but not the software.

Most important of all, the programmer himself had a very modest view of his own work: his work derived all its significance from the existence of that wonderful machine. Because that was a unique machine, he knew only too well that his programs had only local significance, and also because it was patently obvious that this machine would have a limited lifetime, he knew that very little of his work would have a lasting value.

A software crisis emerged

Demand for programming activities increased, and so did the realization of how important reliability is.

As the power of available machines grew by a factor of more than a thousand, society's ambition to apply these machines grew in proportion.

"The turning point was the Conference on Software Engineering in Garmisch, October 1968", the first admission of the crisis and of a greater need of reliability in software.

Googling that, it's the NATO Software Engineering Conferences. Wikipedia links to this hefty hundred-page PDF report of that conference. Should be interesting to skim through, down the historical rabbit hole.

The result of the conferences were two reports, one for the 1968 conference and the other for the 1969 conference, that defined how software should be developed. The conferences played a major role in gaining general acceptance for the term software engineering. – Wikipedia

Programming languages

LISP has jokingly been described as "the most intelligent way to misuse a computer". I think that description a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.

I guess "our most gifted fellow humans" probably refers to John McCarthy, among others.

Some of the previously impossible thoughts may be the concepts pioneered by LISP, from wiki: "Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, the self-hosting compiler, and the read–eval–print loop".

Paradigms

Dijkstra highlights a succession of concepts in programming language development: FORTRAN programmers understand their programs in terms of the specific implementation (octal/hex dumps), LISP mixes "what the language means and how the mechanism works", and finally ALGOL 60 defines the programming language in an implementation-independent way.

What does it mean to define a language in an implementation-independent way? The syntax is defined by the language grammar, e.g. in BNF notation. What's left is defining the semantics of the language. I guess one of the initial methods to define semantics was via syntax-directed translation? A toy sketch of that idea follows.
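To make "syntax-directed" concrete, here's a hypothetical example of mine (not from the article): a tiny arithmetic grammar where each production carries a semantic rule, so the meaning of an expression is fixed by its syntax tree alone, with no reference to any particular machine.

```python
# Toy syntax-directed evaluator (hypothetical example, not from the article).
# Each grammar production carries a semantic rule; evaluating along the
# parse is the meaning of the expression, independent of any machine.
#
# Grammar (BNF-ish):
#   expr   ::= term   (('+' | '-') term)*
#   term   ::= factor (('*' | '/') factor)*
#   factor ::= NUMBER | '(' expr ')'

import re

def tokenize(src: str) -> list[str]:
    return re.findall(r"\d+|[+\-*/()]", src)

def parse_expr(tokens: list[str]) -> int:
    value = parse_term(tokens)
    while tokens and tokens[0] in "+-":   # rule: left-associative + and -
        op = tokens.pop(0)
        rhs = parse_term(tokens)
        value = value + rhs if op == "+" else value - rhs
    return value

def parse_term(tokens: list[str]) -> int:
    value = parse_factor(tokens)
    while tokens and tokens[0] in "*/":   # rule: left-associative * and /
        op = tokens.pop(0)
        rhs = parse_factor(tokens)
        value = value * rhs if op == "*" else value // rhs
    return value

def parse_factor(tokens: list[str]) -> int:
    tok = tokens.pop(0)
    if tok == "(":                        # rule: parentheses group
        value = parse_expr(tokens)
        tokens.pop(0)                     # consume ')'
        return value
    return int(tok)                       # rule: a literal denotes its value

assert parse_expr(tokenize("2*(3+4)")) == 14
```

Later on, operational and denotational semantics gave more rigorous ways to pin down meaning; attaching rules to productions like this was an early, practical form.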

Software products vs. usual products

Software seems to be different from most other products, where, as a rule, higher quality implies a higher cost.

This means that lowering the cost of producing software can be achieved by increasing software quality. Fewer bugs mean less debugging, so faster building and iteration. Correctness underlies reliability.

Thinking

The cultural tradition, which in all probability has its roots in the Renaissance, to regard the human mind as the supreme and autonomous master of its artifacts.

One of the most important aspects of any computing tool is its influence on the thinking habits of those who try to use it.

The tools we are trying to use and the language or notation we are using to express or record our thoughts are the major factors determining what we can think or express at all!

The best way to live with our limitations is to know them.

Software correctness: testing vs. proving

Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.

I think the zeitgeist among the first generation of computer scientists was the ideal of formally proving programs correct. Of course, that still occupies important research space today, but it's not the usual approach a software crafter takes when writing software. We do test-driven development; we don't write formal specs in TLA+ and model-check them. 99.(9)% of the time.
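A toy illustration of the quote (mine, not from the article): a test suite that passes happily while the bug sits just outside it. The function is hypothetical.

```python
# Hypothetical example: all of these tests pass, yet the function is wrong.

def is_leap_year(year: int) -> bool:
    return year % 4 == 0          # bug: ignores the 100- and 400-year rules

assert is_leap_year(2024)         # passes
assert not is_leap_year(2023)     # passes
assert is_leap_year(2000)         # passes, but only by accident
# is_leap_year(1900) returns True; 1900 was not a leap year.
```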

Dijkstra gives some guidance on how to do "correct by construction" programming (if I may borrow the term from CBC Casper).

The programmer should let correctness proof and program grow hand in hand. If one first asks oneself what the structure of a convincing proof would be and, having found this, then constructs a program satisfying this proof's requirements, then these correctness concerns turn out to be a very effective heuristic guidance.
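A minimal sketch of what this can look like in practice (my example, not Dijkstra's): pick the loop invariant first, then let the loop body be whatever preserves it while making progress toward termination. The asserts record the proof obligations.

```python
# Invariant-first programming, in miniature (my example, not Dijkstra's).

def power(x: int, n: int) -> int:
    """Compute x**n for n >= 0 by repeated squaring."""
    assert n >= 0
    result, base, exp = 1, x, n
    # Invariant: result * base**exp == x**n
    while exp > 0:
        if exp % 2 == 1:
            result *= base                  # odd exp: move one factor into result
        base, exp = base * base, exp // 2   # squaring preserves the invariant
        assert result * base ** exp == x ** n   # the invariant, checked at run time
    # Loop exit: exp == 0, so the invariant reduces to result == x**n.
    return result

assert power(3, 5) == 243
```

The invariant was chosen before the body; the body then almost writes itself, which is the "proof and program grow hand in hand" idea in miniature.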

Banter

Increased numbers of named registers in 3rd gen computers complicates subroutine mechanisms … we can only pray that the mutation won't prove to be hereditary.

This refers to third-generation computers, starting around 1966, which replaced discrete transistors with integrated circuits, before the advent of the microprocessor.

Not sure what that's about, but I guess more named registers means more "ceremony" to save their values when calling a subroutine? Is that how those old architectures worked? I guess so, looking at the "subroutine structure" of the IBM S/360 – "S/360 programmers typically follow a callee save convention using a save area in memory."

FORTRAN's tragic fate has been its wide acceptance, mentally chaining thousands and thousands of programmers to our past mistakes.

More about "one-liners" and "baroque monstrosities" in the article.

Education

The first effect of teaching a methodology - rather than disseminating knowledge - is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence.

Conclusion

Automatic computers have been with us for a quarter of a century. Their influence has been a ripple on the surface of our culture compared with the much more profound influence they will have in their capacity of intellectual challenge, which will be without precedent in the cultural history of mankind.

Again, more in the article. Loved the description of the "depth" of the "abstraction hierarchy". In computers it appears to span so many levels.