Posted on

Martin Luther King Jr. Day


As I write this, I sit in an apartment in Atlanta’s Old Fourth Ward, just a few blocks from where the Rev. Dr. Martin Luther King Jr. preached to his congregation. I cannot express in words the gratitude I feel toward Dr. King and all the thousands of people who marched with him to demand equal rights for all Americans. Since its birth, America has been a nation that aspired to high ideals of equality, and since its birth, America has struggled and failed to live up to those ideals. People like Dr. King are the most important people in America, people who serve as a national conscience, who remind us of the ideals we aspire to, and insist that we try harder to live up to them. Dr. King made us better as a nation.

As the events of 2014 made painfully evident, although segregation and racism are no longer the law of the land, they often remain ingrained in the structure of our society. Too many people of color continue to struggle for fair treatment and fair opportunities. We have a long way to go, and a lot of work to do, before we can say we are living up to our American ideals.

But growing up, as I did, in the South during the 1970s, attending elementary school in the recently desegregated school system, I learned what a huge impact a few dedicated people can have on society and culture. The cultural difference between the older generation segregationists and the children who were educated in integrated schools was nothing short of stunning to me. While racial prejudice has not disappeared from our culture, that early experience gives me hope that it really can.

The Martin Luther King Jr. Holiday was set aside to “serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr.”

More than a mere day of reflection, the King holiday has evolved into a national day of service toward the realization of his great dream. The video below explains the King legacy of service, and how you can honor his memory and your community through service.

Dr. King was a great orator, and although his written words are powerful, I don’t believe you can truly understand the power of those words unless you have heard them as he spoke them. It was not merely the words, but the passion of his presentation, that motivated everyday people to extraordinary action during the civil rights era. I believe everyone should take the time on Martin Luther King Jr. Day to listen to the man speak.

If you have children, you owe it to them to teach them about Dr. King, about the struggle for racial equality, and about the nonviolent methods he used to create such great change. If we are going to make this world a better place for all of us, we need Dr. King’s leadership and ideals to live on in future generations.

Here are some places to hear Dr. King speak.


Component Contracts in Service Oriented Systems

PRINCIPLE: Relationships must be governed by contracts that are monitored for performance.

In order to build a reliable system that is composed of many services, we need to have some guidelines for making the services reliable, both in the technical sense, and in the more psychological sense of people having confidence that things will work.

In a system of services, just like in a society, business relationships should be governed by contracts that are monitored for performance. Wherever a dependency exists between services, components, or teams, a contract needs to exist to govern that dependency. That contract comprises an agreement that defines the scope of responsibility of the service provider and the service consumer. Here’s a description of the contracts each service should provide to its customers.

Interface Contract

Every service must guarantee that its interface will remain consistent. Assuming the service is delivered over HTTP, the interface includes:

  • Names and meanings of query string parameters.
  • Definitions of what HTTP headers are used or ignored.
  • Format of any document body submitted in the request.
  • Format of the response body.
  • Use of HTTP methods.

Note that in this context, “consistent” does not have to mean unchanging. It only means that no backwards-incompatible changes can be made. If your service is designed on the same RESTful hypermedia principles as the web itself, your interface can remain consistent while growing over time.
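One way a client can cooperate with a growing interface is the “tolerant reader” pattern: extract only the fields the contract defines, and ignore anything new. A minimal sketch in Python (the field names and payloads here are hypothetical, not from any real service):

```python
import json

def parse_article(payload: str) -> dict:
    """Tolerant reader: extract only the fields the contract defines,
    and ignore anything the server has added since."""
    doc = json.loads(payload)
    return {
        "id": doc["id"],              # required by the contract
        "title": doc["title"],        # required by the contract
        "tags": doc.get("tags", []),  # optional, with a default
    }

# An early response, and a later one that has grown new fields:
v1 = '{"id": 1, "title": "Hello"}'
v2 = '{"id": 1, "title": "Hello", "tags": ["news"], "author": "kc"}'
```

Both payloads parse identically as far as the client is concerned; the provider added a field without breaking anyone.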

The Interface Contract must be documented and available to both your customers and your delivery team. In fact, I would strongly recommend that the Interface Contract be created and delivered before you begin writing code for your service. It serves not only as documentation, but as the specification for developers to work from, and as the starting point for your test plan.

If changes require breaking compatibility, the best policy is to expose a new version of your service at a different endpoint. You must then establish a deprecation cycle to ensure clients have time to move to the new version. Only after all clients have migrated to the new version can you stop providing the old version. Such deprecation cycles can be very long, depending on the complexity of the service and the velocity of client development. Avoid backwards-incompatible changes in your interface if at all possible.
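The side-by-side versioning described above can be sketched as a simple router that serves both versions during the deprecation window. The endpoint paths and payload shapes below are invented for illustration:

```python
def handle_v1(article):
    # original contract: the title was exposed as "name"
    return {"id": article["id"], "name": article["title"]}

def handle_v2(article):
    # renamed field -- a breaking change, hence a new endpoint
    return {"id": article["id"], "title": article["title"]}

ROUTES = {
    "/v1/articles": handle_v1,  # deprecated; removed once all clients migrate
    "/v2/articles": handle_v2,
}

def dispatch(path, article):
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": 404}
    return handler(article)

article = {"id": 7, "title": "Contracts"}
```

Only when traffic to the v1 route drops to zero can the old handler be retired.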

Service Level Agreement

Where your Interface Contract defines what your service will deliver, the Service Level Agreement (SLA) governs how it will be delivered (or how much). Things that need to be documented in your SLA include:

  • Availability: Uptime guarantees, scheduled maintenance windows, and communication policies around downtime.
  • Response time: What is the target for acceptable response times? What is the limit beyond which you will consider the service unavailable?
  • Throughput: How many requests is the service expected to handle? How many is the client allowed to send in a given time window?
  • Service classes: Are there certain kinds of requests that have non-standard response time or throughput requirements? Document them explicitly.
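As one illustration of how a throughput limit from an SLA might be enforced, here is a minimal token-bucket sketch in Python. The rate and capacity values are placeholders; in practice they would come directly from the agreement:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # refill tokens for the time elapsed since the last request
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
verdicts = [bucket.allow() for _ in range(4)]  # burst of 4 rapid requests
```

A client that exceeds its allowance gets a clean refusal rather than degrading the service for everyone else.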

Your SLA should also describe how you monitor and report on conformance with the agreement. Measurements of these aspects of performance are usually called Key Performance Indicators (KPIs), and those measurements should be made available to your customers as well as your delivery team. These might be circulated in a regular email, or made available as a web-based dashboard.

If there is a financial arrangement involved in using the service, your SLA should also include remedies for non-conformance. However, even for services designed for internal consumption only, the SLA should be explicitly documented and agreed on by the service provider and the service consumer.

Internally, you should also monitor the error rate of your application and subtract it from your availability. A server that throws a 500 Internal Server Error was not available to the customer who received the error. If a high percentage of requests result in errors, you have an availability problem.
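The arithmetic is simple enough to sketch; the figures below are invented for illustration:

```python
def effective_availability(total_requests: int, server_errors: int,
                           uptime_fraction: float) -> float:
    """Availability as the customer experienced it: requests that
    ended in a server error count against measured uptime."""
    error_rate = server_errors / total_requests
    return uptime_fraction - error_rate

# 99.9% measured uptime, but 2% of requests returned 500 errors:
# the service was effectively only 97.9% available.
result = effective_availability(10000, 200, 0.999)
```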

Communication and Escalation Policy

The key to any relationship is communication. When you provide a service, you must have a communication plan around delivering that service to customers. Some of that communication is discussed above. Issues to cover in your communication plan include:

  • Notification of changes and new service features.
  • Notification of deprecation cycles.
  • Reporting on service level performance.
  • Notification of incidents and how problems that affect customers are being managed.

In addition to these important communications from you to your consumers, it is also important to establish how your customers will communicate to you.

  • How can your customers contact you with questions or concerns?
  • How do they report problems?
  • What are the business hours for normal communications, and what is the policy for after-hours emergencies?

Establishing these policies up front will help people remain calm when an emergency does occur. A clear communication plan can ensure that you can focus on solving problems rather than fielding complaints. It also ensures that the customer feels confident that you have things well in hand.

Conclusion

At any point where dependencies exist between systems (or teams), that relationship must be governed by a contract. That contract comprises an agreement that defines the scope of responsibility of the service provider, including the interface for the service, a Service Level Agreement that establishes Key Performance Indicators along with targets and limits, and a Communication and Escalation Policy to ensure good support for the running service.

With these parameters defined and clearly communicated, all parties should have confidence in the reliability of the service (or at least a clear path to getting there).


Toward a Reusable Content Repository

There is a plethora of web-based content management systems and website publishing systems in the world. Almost all of them are what you might call “full-stack solutions,” meaning that they try to cover everything you need to cook up a full publishing system, from content editing to theming. WordPress is the most obvious example, but there are hundreds of such systems varying in complexity, cost, and implementation platform.

So many of the available products are full-stack solutions that the market seems to have forgotten the possibility of anything else. What would it look like if you could assemble a CMS from ready-made components? What might those components be, and how would they interoperate?

Every web CMS that I have seen can be divided into three major components. They are:

  • Content Repository
  • Publishing Tools
  • Site Presentation

Each of those major components could further be described with a feature set that might be implemented with sub-components. The Site Presentation component might provide Themes or Sidebar Modules. The Publishing Tools might be as simple as a bare textarea, or might include WYSIWYG with spell checking and media embedding. The Content Repository is, almost universally, a relational database.

The Content Repository, I believe, is the reason that so many systems ship as full-stack solutions. There is no reusable Content Repository component that meets the general needs of content management systems. Without that central component, implementors are forced to bind both their Publishing Tools and their Site Presentation systems tightly to their own custom repository.

I would suggest the following feature set for a reusable Content Repository.

  • Flexible and extensible information architecture, with a sensible default that will work out of the box for most users.
  • Web API for content storage and retrieval (not just a native language API).
  • Fielded search and full-text search over stored objects.
  • Optional version history for content objects.
  • Optional explicit relationships between content objects.
  • Pluggable backends, allowing for implementations at different scales.

Most internal repositories are quite weak in this feature set. For example, very few embedded repositories implement full-text search. Of those that do, the implementation is often naive (SQL LIKE '%term%' queries), leading to poor performance and poor scalability.
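The alternative to scanning every row with LIKE is an inverted index: a map from each token to the documents containing it. A toy sketch in Python (real search engines add tokenization rules, stemming, and ranking, but the core structure is this simple):

```python
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Ids of documents containing every query token (AND semantics)."""
    hits = [index.get(token, set()) for token in query.lower().split()]
    return set.intersection(*hits) if hits else set()

docs = {1: "the quick brown fox", 2: "a lazy brown dog"}
idx = build_index(docs)
```

Lookups touch only the index entries for the query terms, not every stored document, which is what makes this approach scale where LIKE scans do not.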

Most embedded repositories implement only a native-language API, not a web API, which prevents access to the content unless you also have access to the code (some see this as a feature rather than a bug). 

Relational databases are notoriously bad at flexible information architecture, so it has taken a lot of time and effort for content management systems to add flexibility. Tools like Drupal’s Content Construction Kit and WordPress Custom Post Types are getting there, but without a common base architecture to build on, every implementation is custom and incompatible with the next.

Beyond the features listed above, there are two key requisites that a reusable Content Repository must fulfill:

  • A published (and preferably simple) protocol for accessing its features.
  • A common base information architecture for content objects.

A Content Repository with these features would serve as a good backing store for Publishing Tools and Site Presentation systems alike, and would be agnostic to both. Any tool that understood the information architecture and protocol used by the Content Repository could build on it easily. Tools could ship with an embedded Content Repository, or connect to an external one that might be hosted on a provider’s servers. Most importantly, your Site Presentation would no longer need to be bundled with your Publishing Tools.

Content Repository Protocol

There are very few contenders for content repository protocols. I only know of two that might be reasonable to build on: AtomPub, and CMIS. To my mind, neither of these is a solution, but examining them might help us develop a solution.

CMIS, despite its name, is geared more toward Document Management than Content Management (IMHO). 

CMIS is far from being simple, and it makes some assumptions about information architecture that make it awkward to use in many cases. For example, it assumes a distinction between documents and folders, and assumes that there is a single folder hierarchy for all content. This is a restrictive and unnecessary constraint that does not fit all use cases. 

It also requires repositories to implement a SQL-like query language, forcing them to map content to a relational model even when it is not stored that way. This makes implementations expensive, and makes certain kinds of queries difficult to craft.

AtomPub, on the other hand, is simple and well-crafted, but defines only a fraction of the feature set we would want in a protocol. For example, it has no defined search protocol at all. Some implementations extend the protocol to mimic Google’s GData search protocol, but it is not a standard, and Google is no longer using it.

Implementers of a reusable Content Repository will have to face the challenge of a new protocol for accessing it. That’s a pretty high barrier.

Common Information Architecture

When I talk about a common information architecture, I am referring to standardizing the shape of content objects in terms of the semantics of their field structure. We need a commonly understood set of metadata, so that tools can share content in a sensible way. Some metadata will be required by Publishing Tools, other metadata will be useful to Site Presentation systems, and some will be needed internally by the Content Repository itself.

Atom and RSS are format standards, but each also defines a base information architecture, and they are mostly compatible with each other. Neither is sufficient for a full Content Repository, but any information architecture incompatible with these formats is a non-starter.

The IPTC has done a huge amount of work in developing interoperability standards for the news industry, which is all about content management. Their G2 News Architecture is documented implicitly in the specifications for their XML exchange formats, and in their rNews metadata format for HTML. I think the G2 News Architecture is a great start on a common information architecture, but a reusable Content Repository would need to define a simpler useful subset of it if it wanted to gain wide adoption.

Conclusion: A Hole in the Market? or No Market?

There are no real conclusions to draw from this, only questions to ask. Namely, is there an under-served market for a reusable Content Repository out there? Perhaps everyone is content with their vertically integrated solutions, and no one is interested in mixing and matching their presentation layer with different publishing tools.

I suspect, however, that the market for a reusable Content Repository will emerge as a result of the proliferation of Internet-accessible devices. As people want access to their CMS across desktops, tablets, smartphones, and other devices, the utility of separating the presentation from the repository will become obvious.

Of course, the only way to know is to put in the hard work to build it, and see who bites.


CASTED: Cooperative Agents, Single Threaded, Event Driven

The past looked like this: A User logs into a Computer, launches a Program, and interacts with it.

The future looks like this: The Computer on your desk runs a Program (in the background) that collaborates with a Program running on the Computer in your pocket and another Program running on a Computer in the Cloud, operating on your behalf without the need to interact.

In the past, a Program and an Application were the same thing. More and more, the Applications of today and tomorrow are made up of multiple Programs running on multiple Computers but cooperating with each other to achieve some utility for You (formerly the User).

The web development community has lately been very excited about single-threaded event-driven servers like Node.js. These processes are very good at maintaining a large number of connections, each of which requires only a small amount of work. (These servers are not very good at the inverse case, a small number of clients asking for very hard work to be done. For that, you want a different model.)

This paradigm of a large number of connections and small amounts of work fits neatly into the world where large numbers of processes collaborate to create a useful result. Each process does a relatively small amount of work, but the value emerges from the coordination of the processes through their communication.
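As a sketch of that model, here is a single Python thread (using asyncio rather than Node.js) interleaving a thousand small tasks. Each “connection” does a tiny amount of work and yields control while it waits, which in real life would be network I/O:

```python
import asyncio

async def handle(conn_id: int) -> int:
    # each "connection" needs only a small amount of work,
    # with a wait in the middle (network I/O in a real server)
    await asyncio.sleep(0.001)
    return conn_id * 2

async def main(n: int):
    # one thread, one event loop, n cooperative tasks
    return await asyncio.gather(*(handle(i) for i in range(n)))

results = asyncio.run(main(1000))
```

All thousand tasks complete in roughly the time of one, because the event loop spends the waiting time of each task running the others.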

Example: There is a process on your phone that displays emails. There is a process on the mail server that sends the messages to your phone. There is a process that examines messages as they arrive at the server to filter out junk mail. There is another process that examines the messages to rank them by importance and places some in your Priority Inbox. These processes are constantly running, on multiple servers, operating on your behalf in the background.

Years ago, Tim O’Reilly was writing about software above the level of a single device as part of his Web 2.0 concept. Tim’s classic example is the iPod-iTunes-iStore triumvirate. You have servers on the Internet, a desktop or laptop computer, and a small handheld device all coordinating your data for you.

As more devices have computers embedded into them, there are more opportunities for cross-device applications. And as more such applications emerge, users will expect applications to coordinate across devices like this. If you are designing a new application today, you’d better be thinking about it as a distributed system of cooperating processes.


Evolving Systems vs Design Consultants – A Recurring Pattern

I often think of systems architecture as analogous to a word game I played as a child. I don’t know if the game has a name, but it begins by selecting two words, say “cat” and “dog”. The goal is to start with one word and end with the other. The rules: you may change only one letter per turn, and at the end of every turn you must be left with a real word. Hence, one way the game might play out is CAT -> COT -> COG -> DOG. You might also get there through CAT -> COT -> DOT -> DOG. Either path is valid, but there is no direct “upgrade” from CAT to DOG.
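(For the curious: the game is usually called a word ladder, and finding a path is a textbook breadth-first search. A small sketch, with a mini-dictionary of my own invention:)

```python
from collections import deque

def word_ladder(start, goal, dictionary):
    """Breadth-first search for a path from start to goal, changing
    one letter per turn; every intermediate step must be a real word."""
    words = set(dictionary) | {goal}
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for c in "abcdefghijklmnopqrstuvwxyz":
                candidate = word[:i] + c + word[i + 1:]
                if candidate in words and candidate not in seen:
                    seen.add(candidate)
                    queue.append(path + [candidate])
    return None  # no valid ladder exists

path = word_ladder("cat", "dog", {"cot", "cog", "dot"})
```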

This is an apt analogy for the problem of systems architecture when dealing with an operational system. The constraints of the system’s operation almost always prevent you from changing more than one component at a time, and every change to any component must result in a system that continues to operate. Real-life systems also tend to have far more components than a three-letter word; they are sentences, paragraphs, even whole novels.

In my work, I have occasionally had the good fortune to work with some great outside consultants. To date, I have always found these interactions to be productive and educational on multiple levels. It is a remarkable luxury to pick the brain of someone who is truly an expert in their field, and I try to take advantage of such opportunities whenever I can. In those interactions, I have noticed a curious recurring pattern.

Because of my role, I am often dealing with a consultant who is a systems designer. This expert comes in to help us improve the design of our systems. Unfortunately for her (or me), evolving operational systems tend to be more organically grown than designed, and the consultant must infer a design intent from examining the system as built, because the original design intent is lost in the mists of time.

Invariably, a conversation will occur that goes something like this.

“I see that you are using a COT in this part of the system,” the consultant will say, attempting to hide a smirk. “A DOG would be much more appropriate. Why don’t you try using a DOG?”

Of course, the consultant is being tactful here. No person in his right mind would use a COT as a replacement for a DOG. We, who built the system, are embarrassed even to be showing anyone this particular mangled part of our system. My response, when I have sufficient presence of mind to compose a rational one, always has a similar pattern.

“Well yes, ideally you want a DOG there, but when we were building this aspect of the system, we didn’t have enough budget left for a pre-built DOG component. It would have taken us several months to build a custom DOG, which would have caused us to miss our launch deadline. But we had a well-tested CAT component we had built for a different system, and that mostly did the job. We found we could use that if we made some adjustments to the FOOD component to accommodate the CAT, and we could do that faster than building a whole new DOG.”

Pause for a breath. Here’s where the explanation gets messy. “After we launched, we wanted to come back and fix this to use a DOG, as originally designed, but of course we couldn’t switch from a CAT to a DOG without changing the FOOD component again. Since we can only change one component at a time, during the upgrade process either the CAT or the DOG would get the wrong FOOD at some point, breaking the system.” Remember that constraint about changing only one component at a time?

“We can’t afford to break the system, we have live customers to support now.” Here’s that other constraint, every change must result in an operational system. Paying customers enforce that pretty strictly. It’s hard to say you’re lucky if you don’t have paying customers, but sometimes it feels that way.

“So instead, we have migrated to using a COT. It’s obviously not very efficient, but it fits, and it eliminates the dependency on the FOOD component (a COT does not eat). We’re planning to replace the COT with a COG in a future release, which should be a smooth transition, and free up some system resources. Once that’s done, we can use those resources to re-engineer the FOOD component to support a DOG, assuming management signs off on the additional cost.”

By this time, depending on the consultant’s level of experience, she will either be staring at me like I’m a lunatic, or shaking her head with a sympathetic grimace (usually the latter). In either case, the response is usually some variant of “I see.” And the final report will advise, “Upgrade from COT to DOG ASAP.”

Sigh.

There is no aspect of an organically grown system that could not be better designed in retrospect. But the shape of the completed system is not governed solely by the appropriateness of the design or architecture. It is largely shaped by convenience, by the availability of specific tools and components, and by the cost-benefit trade-offs and time constraints imposed externally on the design process.

The line between sense and nonsense is squiggly, because it must be drawn through the whole history of the system. And it’s not always obvious which side of the line you are on.


Take heed, managers: your “best practices” are killing your company

If you are a manager, you need to understand the ideas of W. Edwards Deming. Deming wrote several books about management, in which he chastised American business schools and American corporate management for perpetuating a failed philosophy and failed management techniques.

Deming proposed a new philosophy of management motivated by quality and grounded in systems theory. The Deming philosophy is too deep, too broad, and too rich to be explained in a mere blog post. Volumes have been written about it, and as I read those volumes I am sharing my thoughts through this venue (with apologies to Mr. Deming if I misrepresent anything; I am still learning).

Probably the best introduction to Deming and his theories is his Red Bead Experiment. The experiment is detailed in Chapter 7 of his book, The New Economics for Industry, Government, Education. The experiment is extremely educational, and I highly recommend you watch it play out in the video below (you’ll need about an hour).

Deming’s Red Bead Experiment

In case you haven’t the time to watch the video version, here is the one paragraph summary of the Red Bead Experiment.

The experiment simulates a company, the White Bead Corporation, whose job is to ship white beads to its customers. Several employees are recruited from the audience, including line workers and quality control workers. Workers are presented with a box containing 3,200 small white beads, and 800 red beads of the same size. They are given a tool to extract beads from the box 50 at a time, and strict instructions on how to carry out their task of “making” white beads. They use the tool as instructed, then report each batch to quality control for inspection, where the number of defects (red beads) is recorded. The foreman (the instructor) tries several management techniques to improve the performance of his workers: he puts up motivational posters, sets numerical goals, introduces pay incentives, conducts individual performance reviews, and finally lays off the poorest performers. In the end, the company goes out of business because it cannot meet customer demand for defect-free white beads.

Now, observers of this experiment can see that the game is rigged. The workers are destined to fail, and that is precisely the point that Deming is trying to make to the managers.

“Apparent performance is actually attributable mostly to the system that the individual works in, not to the individual himself,” Deming wrote.

Despite the fact that there were observable differences between the output of individual workers, those differences were entirely the result of common causes; that is, the variation is inherent in the system itself. All the effort spent trying to improve the individual performance of the workers is wasted, because the flaw is not in their performance but in the system under which they work.
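The rigged game is easy to reproduce in code. The sketch below (my own illustration, not Deming’s) draws 24 paddles of 50 beads from a box that is 20 percent red; every bit of worker-to-worker “performance” difference comes from the sampling alone:

```python
import random

def red_bead_draws(workers=6, days=4, seed=1):
    """Draw a paddle of 50 beads per worker per day from a box of
    800 red and 3,200 white beads (20 percent red)."""
    rng = random.Random(seed)
    box = ["red"] * 800 + ["white"] * 3200
    return {
        w: [sum(1 for bead in rng.sample(box, 50) if bead == "red")
            for _ in range(days)]
        for w in range(workers)
    }

draws = red_bead_draws()
```

Every worker averages about ten defects per paddle, yet on any given day some look like stars and some like slackers. Ranking, rewarding, or firing them changes nothing, because the variation belongs to the box, not to the workers.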

Deming warns, “Instead of setting numerical quotas, management should work on improvement of the process.” As a manager, it is your job to understand the difference between special causes that should be remedied individually, and common causes that can only be eliminated with a change to the system itself. And as a manager, the system is your responsibility, not to be delegated.

Deming describes how numerical quotas and incentives are not only useless, but actually counter-productive. He gives several examples where workers may report misleading figures, or make poor business decisions that game the system so that the numbers work out. A grocery manager accountable for inventory pulls cashiers to audit a delivery while paying customers wait in line. He stops stocking certain items that move slowly and might spoil on the shelf, forcing customers to shop elsewhere for those items. “He knows 55 other ways to help to meet his allowance of 1 percent shrinkage, all of which hurt the business. Can anybody blame him for living within his allowance?”

Listening to Deming, you have to conclude that those annual performance reviews you conduct as a manager are aimed in the wrong direction. Performance of your employees is not a result of the employee’s competence, but of the manager’s competence to build a system in which they can be productive. If your employees are not performing up to standards, you as a manager need to ask yourself what you are doing wrong. What changes must be made to the process of your business to make these employees productive? Ask them, they can probably tell you several, because they actually want to achieve, and they can see what parts of the system are holding them back.

Incentive programs and performance reviews are considered management “best practices,” and this is why Deming chastised American managers. These practices simply don’t produce the results managers are looking for. Have you ever had a performance review or bonus program result in an order-of-magnitude increase in productivity? Never. At best, you’ll squeeze out a few percentage points. At worst, you make your employees feel micro-managed and powerless, removing whatever desire they may have to improve.

To move the needle on organizational productivity, you need to focus on the process by which your company produces value, and constantly improve that process.


Systems vs Habits: Why GTD Often Fails

In my previous post, I wrote about David Allen’s Getting Things Done book and productivity system. If GTD has a weakness, it is that, although the book describes the system very well, it does a poor job of describing the changes to your daily habits you’ll have to make if you really want to implement the system. The major reason people fail at implementing a GTD-style productivity system in their lives is that, no matter how simple the system may be, it’s a big change from what they are used to.

Leo Babauta is a self-made expert in changing and forming habits. His Zen Habits blog has changed the lives of many of its readers. So when I decided to try getting organized once again, there were two books on my reading list: David Allen’s (the System), and Leo Babauta’s Zen To Done, Leo’s personal take on productivity.

On Habits and Willpower

I like to think of habits as irrigation ditches. 

If you are a farmer who wants your fields watered, the obvious thing to do is to go get some water. But carrying buckets of water from the well to your field is inefficient and places a low upper limit on the amount of crop you can grow effectively. The effective farmer instead spends his effort digging irrigation ditches. It’s exhausting work, and at first it seems to generate no benefit at all. But once the ditch is complete, the water flows naturally into your fields on its own, without effort.

Good habits are a way to automate your behavior the way irrigation ditches automate watering. They allow you to accomplish work without effort. But if you don’t have them already, good habits can be hard to form.

As humans, we have a natural aversion to change. The world and activities that we are comfortable with got us this far, so they must be good, right? Change might make things worse. So if your bar for success is mere survival, aversion to change is probably a good thing. That’s why change makes us uncomfortable. It’s instinctive.

Each of us has a limited ability to tolerate change. Too much change makes us too uncomfortable, and we start to squirm, trying to avoid the change and get back into our comfort zone. The uncomfortable feeling we get from too much change we call “stress”. When we’re trying to effect change, we call the ability to tolerate it “willpower”. But this is misleading, because willpower must also be expended to tolerate change that comes from the outside, change that we don’t want.

Remember Steve McCroskey from Airplane!, the guy who picked the wrong week to quit smoking? Too much change, he ran out of willpower.

Habits are a way to acclimate yourself to a new condition or activity, so that you stop seeing it as stressful change and start seeing it as normal.

Habits and Productivity

GTD asks you to master five classes of activity:

  • Collect (capture everything into an inbox, as few inboxes as possible)
  • Process (empty the inbox, deciding where each item goes)
  • Organize (file and schedule items and tasks)
  • Review
  • Do

But the GTD system itself doesn’t tell you how to master these activities. For most people, mastering these activities means forming at least four new habits; for others it may require dozens. But forming habits requires willpower, and we only have so much of that. The result is that many people trying to implement the system as a whole feel overwhelmed by the change, and stop.

Zen To Done is a short ebook (there’s also a paperback) that describes ten habits you can adopt to become fully productive. If even ten habits sounds daunting and unachievable to you, don’t worry: Babauta has you covered. He describes a minimalist system of just four habits that will still yield major improvements in productivity: Collect, Process, Plan, and Do.

Babauta’s approach to productivity is the same as his approach to self-improvement. Break down the desired change into a set of behaviors or habits, and tackle each habit one at a time before moving on to the next. He has some quick tips and tricks in the book to help you form these habits, but if you want to go deeper, you should probably read his other book, The Power of Less, or page through the great free content on his blog Zen Habits.

You should buy and read Zen To Done. It’s cheap, it’s an easy read, and it may help you to make the changes you want in your life. But if you don’t, here’s some friendly advice inspired by Babauta.

Give yourself permission to move slowly. Focus on just one habit, until that one habit is mastered. This means there are several other habits in your queue that are not “done” yet. You have to be okay with that. You have to accept that first things come first, and trust that those other things will get done. But for now, you must focus on the one thing you have chosen. Remember, you aren’t watering the fields quite yet. You are still digging ditches.

Posted on

Getting Things Done — Productivity System

GTD workflow diagram

David Allen’s Getting Things Done: The Art of Stress-Free Productivity is a phenomenon in the tech community. If you’re reading this blog, you’ve probably already read the book, or at least know something about the productivity system that it defines. I read it years ago, but like many readers never put into practice more than a tiny portion of the system.

As 2012 drew to a close and I looked back on all the things I meant to accomplish, I decided that I should give this productivity bible another look, in the hopes of getting more things done in 2013. I won’t bother to summarize the system that David Allen defines. The book is very readable and does a much better job than I could. Instead, I’m just going to note how I decided to apply the principles of his system in my own life, especially given the changes in technology and lifestyle since the book was originally published a dozen years ago.

Some essential elements of the GTD system include:

  • one or more inboxes (as few as you can get away with),
  • a calendar,
  • a “tickler” file to remind yourself of tasks that can only start at some future date,
  • a place to organize your reference material, and
  • a tool for organizing lists of projects and tasks.

The original GTD system was developed in a paper-focused world, before cloud-based calendars and Internet-connected phones became the norm. I’ve worked really hard to eliminate the mountains of paper in my life, so I have no interest in buying filing cabinets and manila folders.

The vast majority of things I need to manage in my own life are actually digital. Email, digital music, more email, PDF downloads, email again, digital pictures, and did I mention the email? Digital references take up far less space, are easier to move, and are full-text searchable. I quickly resolved that my organization system would be digital, not physical.

As I had already begun using Evernote to scribble notes I wanted to keep track of, I decided to do the simplest digital thing that could work for staying organized, and elected it as my default tool.

Evernote gives me myriad advantages over a paper-based system.

  • It’s a great place to jot down random notes (replacing a physical notepad). It can even create notes from pictures or voice recordings, making it even easier to capture everything.
  • The option to use notebooks and/or tags to organize notes, and its full-text search capability, make it an excellent reference database (replacing the filing cabinet).
  • It runs on my phone, so it’s always with me (no need to carry a paper organizer or notepad).
  • It magically synchronizes with both my work computer and my home computer, so I never have to switch contexts to access it.

Setting up Evernote to work with my system was dead simple. I renamed the default notebook to “INBOX.” Any random notes I capture are there, waiting to be processed when next I process my inboxes. I created a Projects stack, containing a notebook for each large project, and a “Miscellaneous” notebook with a separate note for each smallish project. A Reference stack contains various notebooks organized by topic where I can file informational notes, PDF or Word documents, photos of the whiteboard scribbles from a brainstorming session, or any other assets I need to keep around.
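The notebook layout above can be sketched as a simple nested structure. This is just an illustration of the hierarchy using nested Python dictionaries; the stack and notebook names mirror the post, but the project names are hypothetical, and none of this is actual Evernote data or API.

```python
# A sketch of the Evernote layout described above, modeled as nested
# dictionaries mapping notebook names to lists of notes. Project names
# are made up for illustration.
evernote = {
    "INBOX": [],                  # renamed default notebook; unprocessed notes
    "Projects": {                 # stack: one notebook per large project
        "Big Project": [],
        "Miscellaneous": [],      # one note per smallish project
    },
    "Reference": {                # stack: notebooks organized by topic
        "Whiteboard Photos": [],
        "Documents": [],
    },
}

# Any random capture lands in the inbox, waiting to be processed.
evernote["INBOX"].append("Idea: write a follow-up post")
print(evernote["INBOX"])  # → ['Idea: write a follow-up post']
```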

With a physical inbox, Allen recommends dealing with things that won’t fit into it by writing a reminder of them on a piece of paper and placing the paper in the inbox. Since my inbox is digital, physical things won’t fit into it. So if I need a reminder of a physical thing, I snap a picture of it with my phone and add it to my Evernote inbox. I also file away papers by scanning them or taking a picture with my phone (and then trashing/recycling the physical paper).

Allen recommends keeping a list with all your “next actions” on it. I have become accustomed to visualizing work using a kanban system, so instead of a “next actions” list, I have a notebook called Backlog containing a note for each task, and another called WIP that contains notes for the tasks I am currently working on. When a task is completed, I move its note to the Done notebook. During my weekly review of open projects, I determine the next actions for each project and add a note for each one to the Backlog.
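The Backlog/WIP/Done flow can be sketched in a few lines of Python. This is a toy model that treats each notebook as a list of task titles; the helper functions and the sample task are hypothetical, not an Evernote API.

```python
# A minimal sketch of the kanban flow described above: tasks move from
# Backlog to WIP when work starts, and from WIP to Done when it finishes.
notebooks = {"Backlog": [], "WIP": [], "Done": []}

def add_task(title):
    """During the weekly review, a next action lands in the Backlog."""
    notebooks["Backlog"].append(title)

def start_task(title):
    """Pull a task into WIP when you begin working on it."""
    notebooks["Backlog"].remove(title)
    notebooks["WIP"].append(title)

def finish_task(title):
    """Completed work moves from WIP to Done."""
    notebooks["WIP"].remove(title)
    notebooks["Done"].append(title)

add_task("Draft blog post")
start_task("Draft blog post")
finish_task("Draft blog post")
print(notebooks["Done"])  # → ['Draft blog post']
```

The point of the structure is that a task lives in exactly one notebook at a time, so a glance at WIP shows current work without consulting a master list.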

GTD recommends keeping an agenda list for every regular meeting you have, so that you never have to be embarrassed that you forgot to ask Bob about that one thing when you spoke with him this morning. I keep an Agenda notebook in Evernote, with a separate note for each person or group I speak to regularly. If I run into someone in the hallway, I can whip out my phone and access their agenda immediately. Any notes generated from the meeting go back into the inbox to be processed.

Since most of my reading is also digital, I use Pocket (formerly Read It Later) as my Reading list. I do a lot of my reading in the moment as a less-than-two-minute task, but when I need to queue something up, I toss it into Pocket. I am finding, however, that my appetite for reading later is a bit more ambitious than the amount of time “later” actually affords me. Perhaps I need to work on this.

GTD Patterns I Don’t Apply

GTD recommends keeping your Next Action list organized by Context: things you can do at home, at work, at the phone, etc. I found organizing by context to be almost useless, because almost all my tasks can be performed in any context. My work is all digital, and my work computer is a laptop I bring home with me at night. In a pinch, most of my work could be performed on my phone. I always have a phone in my pocket, so there’s no need for a “calls” context; I can make calls from anywhere. Most of my home activities are habits rather than tasks (take out trash, wash clothes, etc.) and therefore do not need to be tracked.

I don’t have a “Waiting For” notebook for tracking delegated tasks. Instead, I place a reminder on my calendar to follow up on a certain date if the awaited item has not arrived in my inbox by then. I also make “appointments” blocking out time to complete important tasks; otherwise there is a risk that my schedule will fill up and leave no time, or that I will get distracted by the in-the-moment work and leave important things until too late.

My calendar is already digital and synchronized across my devices. I use the Exchange calendar provided by my company, but I could just as easily use a synchronized iCloud or Google Calendar.

Finally, I decided that the tickler file was really an artifact of the paper world, where a calendar is a sheet of paper with little boxes drawn on it. You can’t file papers in those little boxes, so you need those 43 folders to store date-specific items. In my all-digital world, if I need to be reminded of something on a certain date, I can just drop it onto my digital calendar and store it there, or at worst store a link to some other repository. So I don’t have a tickler notebook in Evernote; instead, I use my calendar directly to fill that role.

I’m just getting started using and tweaking this system, and I’m sure it will evolve over time. Perhaps I will write a follow-up post in a few months to record how it has changed and how effective it has been.

Posted on

How to set up a new PC in 12 Steps, or How I spent my evening renewing my disgust with Windows

Step 1: Spend 30 minutes unpacking boxes, peeling plastic, and connecting cables.

Step 2: In breathless anticipation, press the power button.

Step 3: Spend another 30 minutes hunting for the Windows Product Key so you can access the computer you just bought. Find it, finally, on an indelible sticker on the far side of the computer’s case.

Step 4: Enlist an assistant to type the Windows Product Key while you hang upside down under the desk using a flashlight to read it out.

Step 5: Insert CD to install hardware drivers, because Windows does not know how to use the network card in your PC. Try to convince Windows that you know what you are doing and yes, you really want to run that program from the CD.

Step 5b (optional): Wonder at how Windows has not only failed to improve, but has actually gotten worse in the 10 years since you last bought a PC.

Step 6: Using a clunky-looking “wizard” from the CD, attempt to connect to wireless network. Be unable to find your wireless access point in the list because you live in a crowded apartment building, and the list is sorted randomly rather than by signal strength or even alphabetically. Notice that the list has multiple pages, and advance to page two. There it is.

Step 7: Enter password for wireless access point. Curse in frustration when it fails to connect. Blush with embarrassment when you realize CAPSLOCK is on. Turn CAPSLOCK off and try again.

Step 7b (optional): Curse the inventor of the CAPSLOCK key.

Step 8: Start Internet Explorer. Type “google.com/chrome” into the location bar to download a real browser. Try to convince Windows that you know what you are doing and yes, you really want to run that program.

Step 9: Sign into Google Chrome. All extensions and bookmarks are automatically synced. Awesome.

Step 10: Using Google Chrome, visit www.ubuntu.com and download the Windows Installer to install a real operating system. Try to convince Windows that you know what you are doing and yes, you really want to run that program.

Step 11: Let the installer reboot into Linux. Be amazed at how all the hardware is recognized immediately, including the wireless card. Feel like Ubuntu just gave you a warm hug when the wireless network manager pops up on the screen and offers to connect you to your very own wireless access point if you will be so kind as to enter the password.
Check to ensure CAPSLOCK is off. Enter password.

Step 12: Click “Install Updates” when the update manager offers to do so. Wait.

Step 12a (optional): Write a blog post about your experience while waiting for updates to download. Feel sorry for people who have not yet discovered Linux.

Posted on

A Framework for Innovation

How does a large company create an environment that encourages and leverages internal innovation? Here is my checklist of prerequisites for “enterprise” innovation:

Great people. You may think this goes without saying, but it cannot be emphasized enough. You cannot hire drones who put in 8 hours for a paycheck and then head out the door. You need passionate, creative people, people who love their work, people who are impatient with “getting by” and want to be the best at what they do. These are the Innovators. Without them, innovation does not happen.

A clear vision. Innovation happens at the edges. It is not a top-down directed process; it is an organic, bottom-up growth. In order for the innovators at the edge to produce innovations that are relevant to the business, top management must articulate and communicate a clear vision for the direction of the company. If the innovators can see the direction, they will innovate in that direction and get you there faster. If not, they will innovate in random directions, and you won’t get the full benefit of innovation. A clear vision is the difference between innovation and distraction.

Spare capacity. Innovation is experimentation. Innovators need time to experiment, and they won’t have that if 100% of their time is allocated to executing your current plan. This is the hardest thing for top managers to accept, but it is absolutely essential. You need slack time, or there simply will not be any innovation. Allocate one slice of your capacity for executing the plan. Reserve a second slice for unplanned work and process improvements. Allocate a third slice explicitly to innovation. The relative size of the slices will be entirely dependent on your own business and your desired outcome. My personal preference is 50/30/20.
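The 50/30/20 split is easy to sanity-check with a little arithmetic. A minimal sketch, assuming a hypothetical team of ten engineers (the ratios come from the post; the team size and function are my own illustration):

```python
# Divide total capacity into planned / unplanned / innovation slices
# using the 50/30/20 split suggested above.
def split_capacity(total, ratios=(0.5, 0.3, 0.2)):
    names = ("planned", "unplanned", "innovation")
    return {name: total * r for name, r in zip(names, ratios)}

slices = split_capacity(10)
print(slices)  # → {'planned': 5.0, 'unplanned': 3.0, 'innovation': 2.0}
```

Adjust the ratios to suit your own business; the point is that the innovation slice is reserved explicitly rather than hoped for.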

Freedom to make decisions. Innovators by definition have to make decisions, make changes, form partnerships, and allocate resources from the pool of spare capacity. If permission is required to accomplish these things, then innovation will be quashed before it can succeed.

Accessible Business Intelligence. If you are going to give innovators permission to make decisions, you must give them the information and tools they need to fuel decision making. Innovators need transparent access to customer data, product data, sales data, cost data. Without it, they are shooting in the dark, and the chances of success are low. Innovators also need easy access to tools for gathering their own data, for evaluating experiments and measuring success vs. failure.

Freedom to fail. Innovation is experimentation, and experiments, by their nature, do not always have the expected outcome. When Innovators exercise their power to make decisions, some of the decisions will be wrong ones. Innovators need to feel secure that they will not be punished for taking a chance if it doesn’t work out. Remember, these folks are corporate employees, not risk-taking entrepreneurs. They don’t stand to make millions if their innovation succeeds, so they shouldn’t have to give up their health coverage and pension if it doesn’t. Make it clear that failure is a learning opportunity, not a firing offense.

The above are a few requirements for fostering innovation in large companies. Ultimately, innovation only happens where the culture supports it. Managers at all levels build company culture through their hiring and firing practices first, and management styles second. If your managers fear the new and different, your culture will never innovate. Ensuring the above factors at all levels of the organization should help to unchain your hidden innovation potential.

What’s missing from this list? How does your company encourage (or discourage) innovation? Drop me a note in the comments.