Dev meeting – 25th Jan – TCR, NativeScript, Tentai Show and XQuery for IDEA

Chris talked about “Test-Commit-Revert”, an idea from Kent Beck building on TDD. Kent Beck was using a process of “test, then commit every time that all the tests pass”, so you are never left in a state where the tests used to work and you’re not quite sure why they stopped working. The extreme extension of that is “Test-Commit-Revert” – commit when the tests pass, but revert all your changes if the tests fail! This forces you to have a very short commit cycle, making tiny incremental changes each time. It’s even more interesting if you’re doing it in a team, with everyone doing it simultaneously, as you each make tiny commits and then build on top of each other’s changes. Chris (and Kent Beck) aren’t suggesting that we do this all the time, but it’s an interesting idea that makes you think about how your development cycle works.
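
A minimal sketch of what a TCR loop could look like as a script, assuming a Node project with an npm “test” script and a git working copy (the commands here are illustrative, not a prescribed workflow):

```typescript
// test && commit || revert – run the tests; commit if they pass, throw the change away if not.
import { execSync } from "node:child_process";

function run(cmd: string): boolean {
  try {
    execSync(cmd, { stdio: "inherit" });
    return true;
  } catch {
    return false;
  }
}

if (run("npm test")) {
  run('git commit -am "TCR: tests green"');
} else {
  run("git reset --hard"); // revert: lose the change and try a smaller step next time
}
```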

Alex talked about NativeScript – a way of writing code for mobile devices using VueJS. It isn’t just a web-browser shell – it’s very fast. It doesn’t use HTML – instead, it uses “NativeXML”, which gives a lot more control over Android-style layout. There is a very good short introductory project, which is a great place to start and walks through all the basics. Alex showed us a very simple chunk of Vue code and NativeXML that produced a simple application. There is code in his GitHub account at https://github.com/xmakina/CrisisMH. He recommends it highly for mobile development (the only issues are with the general complexity of the Android ecosystem). We discussed how easy it would be to convert an existing VueJS app to NativeScript – not that easy, since it doesn’t use HTML.

Chris also talked about some code that he wrote over Christmas to solve a Japanese puzzle called “Tentai Show” – the name is apparently a pun around astronomy and rotational symmetry. There is a grid of squares containing stars, each sitting at a cell corner or centre; and you need to divide the grid into jigsaw pieces that each contain one star and have rotational symmetry about it. There is an example at http://www.nikoli.co.jp/en/puzzles/astronomical_show.html. Chris wrote a solver for this in Scala; the code is on his GitHub at https://github.com/nespera/tentaishow.
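
Chris’s solver is in Scala, but the core check is easy to sketch in TypeScript: a candidate piece is valid only if rotating each of its cells 180° about its star maps the piece onto itself. The doubled-coordinate convention below is an assumption of this sketch, used so that stars on corners and edges still have integer coordinates:

```typescript
// Cells and stars use doubled coordinates, so a star on a cell corner or edge is still an integer point.
type Point = { x: number; y: number };

function hasRotationalSymmetry(cells: Point[], star: Point): boolean {
  const key = (p: Point) => `${p.x},${p.y}`;
  const cellSet = new Set(cells.map(key));
  // 180° rotation about the star: (x, y) -> (2*sx - x, 2*sy - y)
  return cells.every(c =>
    cellSet.has(key({ x: 2 * star.x - c.x, y: 2 * star.y - c.y }))
  );
}
```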

Reece then talked about his XQuery plugin for IntelliJ IDEA. This is available from the JetBrains plugin repository at https://plugins.jetbrains.com/plugin/8612-xquery-intellij-plugin. It supports XQuery 1.0, 3.0, and 3.1, with support for the MarkLogic and BaseX extensions. At present, he’s working on splitting out the XPath support from XQuery, so the features of his plugin can also be used in XSLT. He has already got Run Configurations working for a range of processors including Saxon and MarkLogic, so you can specify which query processor to use. It’s already the best XQuery plugin available – and there is a huge set of additional features that he is working on!

Nullable references in C#

Tim B went to see Jon Skeet speak at a recent Oxford .NET talk. Jon Skeet is the most prolific and highest-voted Stack Overflow contributor ever. He was talking about the plans for C# 8.

Jon Skeet mostly talked about nullable reference types – what will happen, and his opinions about them. C# has value types and reference types, similarly to Java. C# currently has nullable value types – so you can have int? to represent a nullable integer. In C# 8, there will be “nullable reference types”. This seems a little odd, since reference types are already nullable – but the point is to change the default behaviour of reference types, so that the compiler warns if null is assigned to an ordinary (non-nullable) reference type. This is only a language-level change, not a CLR change.

The compiler will also attempt to identify when a possibly-null reference is dereferenced – and it tries to do this cleverly, using flow analysis to recognise guards like “xx != null”, so it only warns where a null dereference is actually possible.
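
C# isn’t shown here, but the behaviour is close in spirit to TypeScript’s strictNullChecks, which gives a rough feel for what this kind of flow analysis does (this is purely an analogy, not C# 8 itself):

```typescript
// With strictNullChecks on, the compiler tracks nullability through control flow.
function greet(name: string | null): void {
  // console.log(name.length);   // compile error: 'name' is possibly 'null'
  if (name !== null) {
    console.log(name.length);    // fine: the null check narrows 'name' to 'string'
  }
}
```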

There are other changes coming around pattern matching in switch statements – similar to the way that this works in Scala.
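
As a rough illustration of the direction of travel – using a TypeScript discriminated union rather than C# or Scala syntax – switching on the shape of a value and getting type-aware access to its fields looks like this:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rectangle"; width: number; height: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius * shape.radius; // 'shape' is narrowed to the circle case
    case "rectangle":
      return shape.width * shape.height;            // and to the rectangle case here
  }
}
```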

We discussed the way that the C# language is evolving compared to the way that Java is evolving. We agreed that the C# language maintainers were much more willing to make significant language changes, whereas Java has been much more focussed on backwards compatibility and a much slower evolution.

Following this, we discussed Kubernetes, and our experience with it.

Testing Test Driven Development

In our dev meeting, we discussed Test Driven Development. So, of course, we did it TDD-style: we started off by asking whether everyone knew what TDD is; and then testing that; and then asking more questions about TDD until we got a failing test – i.e. several people who didn’t know. Then we updated our knowledge of TDD until we didn’t have a failing test, and iterated…

We agreed that there are many benefits to using TDD, of which “ending up with a working test suite” is only the most obvious:

  • Allows incremental development, so you are able to run the code as you’re going along
  • Provides documentation for how the code works
  • Helps you to get a feel for how the code will work as you’re writing the test
  • Provides safety, confidence, and a greater ability to refactor
  • Writing a test beforehand means that you are forced to see a failing test first, so you know the test actually tests something
  • Forces a different mindset – because you’re thinking about your design and API as you write
  • Helps you concentrate on what’s relevant to the task at hand, rather than doing unnecessary work
  • Encapsulates features nicely – encourages better organization
  • Helps you keep focussed on the problem and keep the solution simple, rather than solving unimportant problems

But, we concluded that while many of us used TDD for some of the time, it wasn’t true that all of us were using TDD for all of the time. So, given all these great things about TDD, why aren’t we using TDD for everything?

  • We’re lazy
  • It takes longer to get to a result – longer term benefits but a short term cost
  • Not as good if you’re exploring
  • Some types of test are hard to write
  • Slow feedback loop for some types of test – e.g. some integration tests
  • Limited framework support
  • Working in an existing codebase that has been developed without TDD
  • Working with frameworks that have been written without testing in mind – particularly older frameworks
  • Working in very data-heavy projects, based on mutable source data
  • TDD is a bit less useful when the language is focussed more on the problem domain (e.g. C#) than on technical issues (e.g. C++)

We discussed the level to which we were “pure” in our application of TDD – whether we were always using a strict, Kent Beck-approved “write the most minimal test, write a dumb implementation that just passes that test, then add more tests and refactor” cycle. We concluded that we mostly weren’t – the TDD-like approaches that we use include:

  • Pure
  • Use a hybrid approach – write some minimal tests, write a real implementation, and extend the tests to cover edge cases
  • Write TDD-style when it will actively speed up implementing a feature due to better turnaround time
  • Write out the specs first, and then implement the code based on them
  • Write tests to help the code reviewer understand what’s been written: sometimes before writing the code, and sometimes after

We discussed a potential pitfall we had sometimes hit with TDD – that sometimes it is tempting to keep hammering at the code until the tests pass, rather than to step back from it to reconsider the overall strategy.

In conclusion – we use TDD a lot of the time, but we tend to use hybrid approaches rather than a strict pure approach.

Nothing to fear except hackfear itself

Nikolay talked about the #HACKFEAR hackathon that he had recently been to. It was organized by Karen Palmer, a film maker and parkour practitioner. She is interested in fear – parkour is, at its essence, about fear management. She discussed a future in which technology has gone bad – for example, if you are wanted by the police and you get into an automated car, it will take you straight to the police station – and instead wants to use technology to help guide and empower people.

She has an art piece called “Riot” – a webcam watches you while you watch a video, and a neural network attempts to identify the emotion you are showing. If you show “appropriate” responses for the situation, then the video progresses; if you show inappropriate responses, then the video ends, and you have as many attempts as you need.

At the hack, most of the hacking was about concepts rather than fully working products. Nikolay’s group looked at fear of public speaking, taking the technology from Riot to analyze your posture and speech (e.g. how often you say ‘um’) to help provide feedback on your speaking.

Another team used a VR system to analyze your emotions and show you “scary” things as exposure therapy. Other teams tackled management of memory loss, fear of self-expression, fear management through journals, and improvement of communication for early school-leavers.

We also discussed other types of fear, such as writers’ block and acrophobia, and the difference between climbing a ladder versus jumping off a cliff with a paragliding harness attached: while the latter should be scarier, it isn’t, because you know that you have a harness.

Karen Palmer has TED talks about this topic.

Lead Developer Conference 2018

I attended the Lead Developer conference in London a couple of weeks ago. I enjoyed it and came back with lots of ideas buzzing around in my head. It’s a single track conference, which is good because you don’t have to make decisions about what to see and what to miss, but also you get to see some things you might not have chosen just based on the title. Many of the speakers have given longer versions of the talks elsewhere, or have written articles on the subject, so if particular topics are of interest it is possible to go and dig in further. You can think of it like a taster menu at a fancy restaurant.

Photo by White October Events

I talked about some of the talks I had seen at our developer meeting on Friday. I couldn’t cover all of them (23 in total, I think), so I concentrated on a few that had particularly resonated. The full set of conference videos is available to view on YouTube, so go and check them out. Here are some details of the handful of talks I discussed with the team:

Alex Hill – Giving and receiving code reviews gracefully

Alex has written up a longer form in this blog post while the video of her talk is on YouTube.

This talk was about the psychology of code reviews and how to take that into account to get the best outcomes. People sometimes feel defensive about code reviews as it feels as if they are being criticized rather than the code under review.

She talks about dividing code review comments into four quadrants along two axes: High vs Low Conflict, and High vs Low Reward. The Low Reward, High Conflict things tend to be preferences like where to put brackets and so on. The best way to handle these is to agree code format standards and automate them away. The Low Conflict things don’t cause problems between team members because they are non-contentious – things like obvious bugs (in the High Reward area) and leftover debug statements (in the Low Reward area). It’s the High Reward, High Conflict things that are tricky. She suggests considering Conflict Resolution Archetypes – Avoiding vs Yielding vs Competing vs Collaborating. We are aiming for collaboration, and she has some suggestions on how to achieve that.

These include: doing more pair programming, and having more discussion before implementing a feature; ensuring everyone reviews and is reviewed, so there is a level playing field; using “we” rather than “you” or the passive voice, to keep the whole tone of the review more neutral; asking questions rather than making demands; and simply being positive rather than negative or confrontational.

As the receiver of the review, say thank you – and also think about how someone else might respond to the same comments.

Adrian Howard – Points don’t mean prizes

There is a longer version of this talk in video form from the ACE conference, while the short version from Lead Dev is on YouTube.

Adrian works at the intersection of development, UX, and product, helping companies build the right things. This talk was about various dysfunctions he sees in the way people think about Scrum, Agile, and requirements.

The default scrum model that people use is kind of broken. Someone comes up with the vision that everyone is heading to. Someone comes up with the user journeys to get to that place, that gets split up into stories. Those stories are given to the developers and everyone lives happily ever after. But that’s a lie.

Problems arise because the different stories are different sizes, so it’s hard to put them into fixed sprint-sized boxes or to get flow in a Kanban approach. So break them up into smaller ones and we get smoother flow. Give those to the developers and we’re done. Again, that’s a lie.

Stories focus on size and effort, not on actual value. So we may have split up the story and actually delivered little value. So think about:

  • Bin – can we discard or postpone a story?
  • Thin – can we deliver less and still get value?
  • Split – can we break up a story and still get value from the pieces?

Once that is done, give the stories to the development team and we are done. Once again, it’s a lie.

The problem is that often the people who want to follow this approach don’t have the authority to make it happen.

Adrian recommends User Story Mapping as a way to get good stories and keep the big picture in mind. He particularly likes the book that describes it, because if you give someone a book, it has much more weight than just, “hey try this technique”. The output is a map rather than a flat backlog. People tend to do this at the start, but it’s best to keep refining. Some of the ideas of this approach are described in Jeff Patton’s blog post that predates the book.

Nickolas Means – Who destroyed Three Mile Island?

The final talk I discussed from Day One of the conference was about the nuclear reactor meltdown at Three Mile Island. I recommend watching the video of this, as he is a good storyteller and I am not going to retell it in detail here.

He first outlined the events that led to the partial meltdown occurring, and then discussed the ideas of the “first story” and “second story” as described in Sidney Dekker’s book “The Field Guide to Understanding Human Error”. The “first story” is written with hindsight and outcome bias, and generally seeks to blame someone for the results. The “second story” seeks to look at what happened through the eyes of those who were there, and what they knew at the time. The idea is to start with the assumption that everyone was doing the best they could with the information they had at the time, so human error is never the cause of the event. This leads into the idea of blame-free post-mortems as a way to discover and fix systemic problems, rather than seeking someone to blame.

Uberto Barbini – Legacy Code – Big Rewrite or Progressive Rejuvenation?

The first talk I discussed from the second day of the conference was this one about legacy systems. The video of this talk is on YouTube.

A legacy system is old, but it works and usually makes money for the company, or it would have been retired.  One of the options for dealing with such a system is to just keep patching it as changes are required. The downside to this is that the system slowly degrades as more and more changes are added.

Another option is the big rewrite. This rarely works out: the thing you are replacing was successful, so it is not as simple to replace as you might think. The old system contains quite a bit of knowledge that can be lost in the transition. Finally, data migration is nearly always harder than expected.

The best approach seems to be the “Strangler” pattern as described by Martin Fowler whereby the new application wraps the old one and then slowly replaces it over time. This has the advantage of showing results quickly and not requiring a risky “big bang” switchover.
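
A rough sketch of the idea, in TypeScript with hypothetical ports and paths: a thin facade receives all traffic and routes migrated paths to the new service, with everything else still handled by the legacy application. As more routes are migrated, the list grows until the old system can be switched off.

```typescript
import * as http from "http";

// Routes that have already been reimplemented in the new application.
const migratedPrefixes = ["/api/orders", "/api/customers"];

http.createServer((req, res) => {
  const target = migratedPrefixes.some(p => req.url?.startsWith(p))
    ? { host: "localhost", port: 8081 }   // new service
    : { host: "localhost", port: 8080 };  // legacy application

  // Forward the request unchanged and stream the response back to the caller.
  const upstream = http.request(
    { ...target, path: req.url, method: req.method, headers: req.headers },
    upstreamRes => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(3000);
```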

Uberto Barbini has a similar technique which he calls “Alchemical Rejuvenation” – turning legacy code into gold. It has the following steps:

    • Seal with external tests. First of all, you need some high-level assurance that the system is still working after you make changes (there is a sketch of such a test after this list). These tests may be discarded later, once there is better testing in place.
    • Split into modules. Start improving the internal architecture to separate into logical pieces.
    • Clean the module you need to work in, adding tests as you go.
    • Repeat as needed
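
The “seal with external tests” step is about coarse, end-to-end checks against the running system rather than unit tests. A hypothetical example using Node’s built-in test runner, with the URL and assertions purely illustrative:

```typescript
import assert from "node:assert";
import { test } from "node:test";

test("legacy search endpoint still answers sensibly", async () => {
  const res = await fetch("http://localhost:8080/search?q=widget"); // hypothetical endpoint
  assert.equal(res.status, 200);
  const body = await res.json();
  assert.ok(Array.isArray(body.results)); // coarse check: the shape we depend on is still there
});
```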

He had an interesting take on code quality – it’s not clean code, TDD, or patterns, etc.; those are just tools for getting code quality. The real test is whether, after your application has been running for 10 years, you can still add features and fix bugs quickly – if you can, you have high code quality.

Kevin Goldsmith – Using Agile to Build Inclusive Teams

The final talk I discussed was about using agile techniques to improve the way teams are run. The video for this talk is also available on YouTube.

He talked about using post-its to work with one of his reports to work out what they each expected of each other – similar to the idea of a “Manager README”.

In a similar theme, he talked about mentoring a new lead – again, working out where the different responsibilities lie. For each responsibility: is the manager keeping it, does the manager approve the lead’s decisions, does the new lead just inform the manager of their decisions, or does the new lead take full responsibility?

He also talked about improving team meetings. When it comes to making a decision he has two approaches: polling – everyone gives their opinion, but in the end the manager decides; and voting – everyone votes, and the manager has to accept and defend the outcome. He talked about having a collaborative team meeting agenda in a shared Google Doc. For larger groups he recommends the Lean Coffee approach.

Finally, he talked about having more inclusive meetings. The lead needs to resist talking, as other people will yield to them. He also suggested having an observer who points out interruptions, people not getting credit, and so on. This role should be rotated, though, so the same person isn’t always prevented from contributing.

Release It – 2nd edition – part 2!

Chris talked again about Release It 2nd edition.

Last time, Chris talked about “Creating Stability” – things that can go wrong, and how to prevent that.

The next section, “Living in Production”, is about how a system works in production. Part of this is physical (networks, IPs, etc.); there can be clock problems, particularly with VMs. It covers “12-factor apps” – which we’ve discussed before in the context of microservices – the idea being to make the app not depend on things on the box.

We discussed “Stucco apps” – where if you install subsequent versions 1, 2, 3 of an app on a box, then there will be bits of version 1 and 2 left over – so the app isn’t exactly any of those versions. Instead, you should rebuild from scratch each time (you could use Nix and NixOS for this…). We also discussed configuration – getting environment-appropriate configuration onto each box.

We had a digression about ambient sound from services – like putting microphones in the JET torus – so you can tell whether the system is running normally. Because humans are good at recognizing unusual noises or unusual changes in noise patterns, this can let you pick up on patterns of behaviour that aren’t otherwise obvious.

We talked about setting up logging to demonstrate that the high-level goals of the system are being met. For example, in some systems, it might be really important if page loads have become slow, or users cannot log in, or if the number of purchases per hour has significantly dropped; and these are the important business needs rather than just whether a box is up.

We briefly discussed the merits of the Unix pipeline “sort | uniq -c” for monitoring services – for example, to find the counts of unique IP addresses. This is very useful for spotting patterns in your logs.

When upgrading data in SQL databases, the upgrade path is typically straightforward – you migrate it all via a migration, or the apps don’t work. In NoSQL databases, there is no schema, there may be multiple clients using the data imposing their own restrictions, and so it’s not so straightforward. The author suggests a “trickle then batch” approach of first converting the high priority items, then after a while, converting all the other items.
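
A sketch of what “trickle then batch” might look like for a schemaless document store – the document shapes and the store interface here are hypothetical:

```typescript
interface DocV1 { version: 1; fullName: string }
interface DocV2 { version: 2; firstName: string; lastName: string }
type Doc = DocV1 | DocV2;

interface Store {
  get(id: string): Promise<Doc>;
  put(id: string, doc: Doc): Promise<void>;
  allIds(): Promise<string[]>;
}

function upgrade(doc: Doc): DocV2 {
  if (doc.version === 2) return doc;
  const [firstName, ...rest] = doc.fullName.split(" ");
  return { version: 2, firstName, lastName: rest.join(" ") };
}

// Trickle: high-priority documents are upgraded lazily as they are read, and written back.
async function readDoc(db: Store, id: string): Promise<DocV2> {
  const doc = upgrade(await db.get(id));
  await db.put(id, doc);
  return doc;
}

// Batch: later, a background job sweeps up everything still in the old shape.
async function migrateRemaining(db: Store): Promise<void> {
  for (const id of await db.allIds()) {
    const doc = await db.get(id);
    if (doc.version === 1) await db.put(id, upgrade(doc));
  }
}
```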

We talked about API changes, and contract tests created by consumers, and versioning of APIs.

The final part of the book is about systemic problems – a grab bag of issues that didn’t come up elsewhere. Load testing scripts can be overly polite and well behaved, and then sites break when hit with real users that aren’t well behaved – so the load testing scripts should be more impolite. We talked about chaos – we’ve discussed chaos monkeys before, but there are various refinements to the idea. For example, making chaos the default, with the ability to opt out if your service cannot tolerate it. Also, a “zombie apocalypse” – you send home a bunch of people, and see whether any of them turn out to be indispensable.

Managing state with VueX

Tim talked about Flux, Redux and VueX, for managing state in JavaScript applications.

The big change between Flux and Redux was that in Flux there was one store per feature; in Redux the entire application state is stored in one central object. This makes it easy to serialize and debug. Changes to the store are made via a “reducer” – a function that takes the current state and transforms it into a new state – similar to the way that state is managed in Elm. To help organize the state, you can break it down into a set of smaller views of that state. Because you’re abstracting all the data into one place, the components themselves can be simpler and dumber, which makes them easier to test.
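
A minimal reducer sketch in TypeScript – the state shape and actions are illustrative:

```typescript
interface State { todos: string[] }

type Action =
  | { type: "ADD_TODO"; text: string }
  | { type: "CLEAR_TODOS" };

// A reducer is a pure function: given the current state and an action, return the next state.
function todosReducer(state: State = { todos: [] }, action: Action): State {
  switch (action.type) {
    case "ADD_TODO":
      return { ...state, todos: [...state.todos, action.text] };
    case "CLEAR_TODOS":
      return { ...state, todos: [] };
    default:
      return state; // unknown actions leave the state untouched
  }
}
```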

VueX is a reimplementation of Redux, but focussed on Vue. While you can use Redux with Vue, it makes more sense to use VueX since it integrates better with Vue’s properties and events. The Vue dev tools have built-in support for VueX: they list the actions that are triggered at any point, show when the state changes, and allow you to step back to an earlier state. Also, there are libraries to serialize your entire state to local storage whenever there is a change, which makes it harder to lose state if you close your browser.
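
For comparison, a minimal VueX store sketch (Vuex 3-era API; the state, mutation, and action names are illustrative):

```typescript
import Vue from "vue";
import Vuex from "vuex";

Vue.use(Vuex);

export const store = new Vuex.Store({
  state: { count: 0 },
  // Mutations are the only place the state changes, so the dev tools can record every step.
  mutations: {
    increment(state, amount: number) {
      state.count += amount;
    },
  },
  // Actions may be asynchronous; they commit mutations when their work completes.
  actions: {
    async incrementLater({ commit }, amount: number) {
      await new Promise(resolve => setTimeout(resolve, 100));
      commit("increment", amount);
    },
  },
});

// In a component: this.$store.commit("increment", 1) or this.$store.dispatch("incrementLater", 1).
```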

We discussed exactly what “state” means, and how much you want to have stored in the VueX state. It comes down to what is useful to you – you don’t necessarily need to store everything, just the things you care about.

Rich talked about a current project that’s using VueX; with a wizard that collects data from the client across multiple pages. This uses VueX to manage state.

We discussed when VueX is worth using. It adds flexibility, but it also adds complexity, and means that you are distributing the code that manages state across more places. The recommendation from the GitLab blog is to always use VueX. Tim doesn’t completely agree – it does add some extra boilerplate, but on the other hand it’s good discipline that makes what your application does clearer. He’d use VueX for a new single-page app, whatever the size. Rich, on the other hand, thought that it was more important to consider whether there is state on the client side that needs to be remembered between pages – so the wizard app needs it, but an app where the state is always synchronized with the server doesn’t.

Rich also mentioned a prefetch option in Vue Router, which can potentially make apps using the router perform more quickly.

Release It!

This week, in our dev meeting, Chris talked about Release It! (second edition) by Michael Nygard. He’d read the original years ago, before “dev-ops” existed, and it was the first time he’d really thought about putting applications into production. It was an eye-opener 10 years ago, so he was excited by the second edition.

The book discusses stability patterns and antipatterns; then about things to consider in production; then about deployment; then about evolution of applications. So far, Chris has only read the first two parts of the book.

It discusses “Designing for Production” rather than “Designing for QA”, starting with a case study about how a small problem escalated into a huge one. Some of the issues it covers are:

  • “Testing for Longevity” – identifying things that will fail after X continuous days running, which might not be picked up by standard CI approaches
  • limiting chain reactions between servers if there’s a failure causing domino effects (often by blocked threads building up)
  • self-denial attacks like sending out mass emails that trigger everyone to look at your site simultaneously
  • dog-piling, which is a bit like the thundering herd problem – e.g. if every server is updated on a schedule exactly on the hour, it causes peaks in load
  • automated tools going out of control, so applying limits to them is good

Positive approaches to avoid these are:

  • timeouts
  • circuit breakers – isolate your system from the remote system when the remote system is down, providing an immediate “fail” response rather than blocking and running out of threads (see the sketch after this list)
  • bulkheads – e.g. reserving a few threads for the admin tool to fix the problems, or the operating system saving some disk space so the root user can fix out-of-space problems
  • steady state – e.g. log files shouldn’t become infinite in size, they should be tidied up; temporary files should be deleted
  • fail fast – do validation as soon as possible, if there are expensive operations that require data, then get that data upfront
  • for websites, distinguish between 4xx errors and 5xx errors – a user entering bad data shouldn’t trigger a circuit breaker
  • let it crash – if something goes wrong, let it crash and then restart it, but this requires other architectural choices like supervisors
  • handshaking – typical REST services don’t have any handshake involved; sometimes it would be useful if they could return an HTTP 503 to their caller to tell it to slow down
  • asynch approaches – a pull system can choose the rate at which it receives content
  • shedding load – dropping load if it can’t take any more
  • back pressure – telling your caller “back off, I can’t take any more work”
  • governor – applying limits to the automation tool, so it can’t fire up 10,000 boxes immediately – if it’s outside the standard range of what it does, then it will do it more slowly
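
As a concrete example of one of these, here is a hand-rolled circuit breaker sketch in TypeScript; the thresholds and timings are arbitrary, and in practice you would normally lean on a library or framework implementation rather than rolling your own:

```typescript
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures = 5, private resetAfterMs = 30_000) {}

  async call<T>(operation: () => Promise<T>): Promise<T> {
    const open =
      this.failures >= this.maxFailures &&
      Date.now() - this.openedAt < this.resetAfterMs;
    if (open) {
      // Fail immediately rather than tying up a thread or connection on a remote system that is down.
      throw new Error("circuit open: failing fast");
    }
    try {
      const result = await operation();
      this.failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: const breaker = new CircuitBreaker(); await breaker.call(() => fetch("https://example.com/api"));
```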

When building a system using an Agile approach, you need to decide when to start applying these techniques – since they may not be expressed directly in user stories, and they may require significant architectural choices that are hard to refactor into an existing application. We discussed that some of these patterns are directly supported in newer frameworks, such as Akka HTTP, which has direct support for back pressure, circuit breakers, and asynch methods. This makes the decision about when to implement them easier.

Dev meeting – Data intensive applications; and Rust

At our dev meeting this week, Rich talked about ‘Designing Data Intensive Applications’, which he’d been reading, and Inigo talked about ‘Talking to your TV using Rust’.

‘Designing Data Intensive Applications’, from O’Reilly, describes various tools and approaches for storing and processing data. It’s split into four sections. Section 1 discusses the basics of databases and how they store content – for SQL databases, NoSQL databases, and flat-file stores. It covers in-memory stores, B-trees, indexing structures, and the best approaches to use in various situations. Section 2 talks about distributed databases – clustering, transactions, serializing commits at a point in time so shared logs can work, and so on. Section 3 discusses distributed batch processing, using tools like Spark. The first three sections were all very interesting and informative. The fourth section was not as good – it covered the future of data, and was a lot woollier than the previous sections. On the whole, Rich found the book good.

To talk to his TV using Rust, Inigo used a Google Home Mini, an RM 3 Mini IR blaster, an open source library to interface with the IR blaster, and some Rust on a Raspberry Pi. The Google Home part was set up as a Google Home App via https://developers.google.com/actions/building-your-apps – you configure a set of actions and intents online, and then assign a ‘fulfillment’ to it, which is a web service you make available via HTTPS. This part is fairly straightforward – it requires filling in various web forms, and it’s easy to publish a test version of an app that’s only available via your own Google account.

To actually use this for a home application, you need to have SSL, so you need a domain name and a certificate. Inigo used Amazon Route 53 for the domain name, and Let’s Encrypt for the certificate. There are various other services that will provide a domain name for a dynamic IP address for free, although the original dyndns is no longer free.

The ‘fulfillment’ part is a web service that Inigo wrote that accepts JSON from Google, and then calls out to a Python library that interacts with the IR blaster. He wrote the first version in Scala, to get something simple working fast, and then rewrote it in Rust. He learned from O’Reilly’s Programming Rust book, and used IDEA’s Rust plugin. The experience was positive, although it was a bit tricky to get some of the memory management working correctly as a first-timer using Rust. The code is on GitHub at https://github.com/inigo/googlehome-remotecontrol.

How to read more interesting books

In our dev meeting, we talked about interesting development related books we had read, and how to read more interesting books. Various people each talked about a book that they had read recently or recommended.

Chris talked about “The Nature of Software Development“, by Ron Jeffries.

It’s by Ron Jeffries, one of the signatories of the Agile Manifesto. It’s a good book to give to people who don’t know about Agile, and who might be scared of Agile jargon – it’s entirely jargon-free. It explains the fundamentals, and the reasons why software development is better done the Agile way. It cuts through the cargo-culting that is common in Agile implementations.

Tim talked about “What to look for in a code review“, by Trisha Gee.

The author is from JetBrains, and there is a little undercover marketing for JetBrains tools, but it’s not excessive. It’s written pragmatically (although not published by the Pragmatic Press!). It’s very short, which is a plus, particularly if you have a young child and limited reading time. It is split into chapters based on things to look for in code reviews, like tests, security, performance, and good principles. Although it is fairly Java-focussed, it is still useful for any language. The tips are fairly obvious to most experienced developers, but there is some good stuff in there.

Pasquale talked about “Clean Coder” by Bob Martin.

This is about “saying no” and “saying yes” – the language used when committing to a task – you should say what you’re going to do and then do it. It recommends and explains Test Driven Development. It recommends coding practice, using katas. It discusses time management and managing your own time; and doing estimation. It is good for intermediate developers.

Loic talked about “Designing the user interface“.

This is a classic of Human-Computer Interaction. It’s primarily aimed at students. It covers a little of everything you need to know about HCI; usability, mice, collaboration, etc. It is useful for all developers to learn a bit about usability. It’s 20 years old, and a bit dated – but still relevant.

Reece talked about “The Design and Evolution of C++“.

This book describes how the early C++ compiler was created, how various features were designed, and the compromises made to overcome problems. It goes up to C++98. It is good for anyone interested in the history of programming languages and how they are created and evolve.

Simon talked about “My job went to India” (now renamed to “The Passionate Programmer“).

Simon read this about 10 years ago, and still recommends it. It is organized into six sections, with 52 tips across them. It is useful for thinking about how to do more than just what’s needed right now in the short term. Some particular tips are: “Be a generalist” – don’t get bogged down in particular technologies; “Be the worst” – have better people around you; “Be a mind reader” – think about the software being developed, spike out future requirements that haven’t yet been considered, and think about what a customer might need but isn’t saying; “Be where you’re at” – do your job well; “How much are you worth?” – was the work I did today worth what I am paid? The book could be read by anyone working in tech, but it’s aimed more at programming than other tech jobs. The ‘outsourcing’ part of the title is just a marketing vehicle.

Stephen talked about the “Mythical Man Month”.

It was written in 1975, but is still true, although a bit dated. Its core message is that a job that takes two months for two people does not take one month for four people – because of communication, training, etc. It famously coined “Brooks’ Law” – adding more people to a late project makes it later. That’s still true, although perhaps less so nowadays, since it’s easier to split up tasks and make them independent. Sometimes you think “I could have done that in an hour” – but it actually takes a day, because it takes nine times longer to write good quality, deployable, shareable code than it does to write something for yourself only. A modern insight on this is that reuse of code, better tools, and TDD can all help.

We had technology problems talking to Bart, but he passed on his notes on “Algorithms To Live By”.

  • It is a book on algorithms from which both technical and non-technical people can get an insight into the most important algorithms in computer science. Each algorithm is presented with a brief history and its application in daily life. For people with a computing background, it is a great supplement to knowledge already gained, e.g. at university. For non-computing people, it explains that computer science is not only about installing and fixing Windows. Some of the chapters most interesting to Bart are:
  • Optimal stopping – you will learn the most efficient methods to employ a secretary, find a wife or a parking space, or sell a house
  • Explore and exploit – you will learn how medical trials with unknown drugs are conducted, when to stop looking for a good restaurant, and how the founder of Amazon explains Regret Optimism
  • Caching – how to store data, how to clean up the house, and human memory issues
  • Scheduling – the importance of time, various methods to resolve problems with task dependencies, and how Pathfinder stopped responding and was fixed while on Mars
  • Overfitting – how not to over-learn (for both people and machines)
  • Randomness – how to calculate pi by dropping a pin, how playing cards helped in building the atomic bomb, and how to find gigantic prime numbers
  • Game theory – recursion in human decisions, and the “Prisoner’s Dilemma” explained

Rhys also wasn’t present, but passed on his recommendation for “Functional Programming in Scala”.

This discusses how to solve problems in a functional way, and how to think about combining functions algebraically. It includes exercises too. Particularly when coming from a Java or procedural background, it’s quite a mind shift.

We also briefly discussed why those particular books:

  • Chris had seen it on PragProg.com, and read about it on the mailing list, and liked Ron Jeffries as an author.
  • Tim has a young child and wanted to read a short book.
  • Pasquale realized that human processes around coding are important, and had already read and liked Clean Code.
  • Loic studied HCI at university, and it’s a heavily recommended book on the ACM curriculum.
  • Reece is interested in language design, and this was written by the creator of C++. It was interesting to see thought processes of language designers.
  • Simon was doing a lot of Ruby development at the time and reading blog posts by the author of the book, and found his attitude interesting – and the book was unexpectedly good.
  • Stephen read Mythical Man Month because it was famous.