There is a lot that task management can learn from computer programming. Computer programming is about automating tasks, after all, and task management is about assisting workflow by tracking it and making facts about it explicit. Where tasks might have sub-tasks, computer programs might have subroutines, and so on. As a consequence, task management theory should borrow heavily from programming language and operating system theory.
Before looking at the similarities, however, let's have a look at some of the differences, so that we get a feel for the limits of the analogy.
Task management is massively heterogeneous. This problem arises in distributed computer systems as well, but it's even more pronounced than usual in the context of task management. Most of the actors in the task management context are human beings, but computers and other automata can also get in on the act, mechanically raising or processing tasks. As a consequence, we need to be particularly flexible about data representation. Sometimes it will be read and modified by people, and sometimes by machines.
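To make the point concrete, here is a minimal sketch of a task record in Python, serialised as JSON. The field names are purely illustrative assumptions, not a prescribed schema; the point is that one representation can serve both human readers and machine processors.

```python
import json

# A task record as a plain dictionary (field names are illustrative).
# A person can read this directly; an automated actor can parse it.
task = {
    "id": "T-1001",
    "title": "Review quarterly report",
    "raised_by": "automated-monitor",  # could equally be a person
    "assigned_to": "alice",
    "status": "pending",
}

# Serialise for transport or storage between heterogeneous actors...
wire_form = json.dumps(task)

# ...and restore it without loss on the other side.
restored = json.loads(wire_form)
```

JSON is only one candidate here; the essential property is a representation that both people and machines can read and modify.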
Task management is massively parallel. Parallelism is a tricky issue in programming, but it's even more pronounced in task management. In a computer, an operating system will normally perform the scheduling work, and allocate particular executable tasks to available processors. In the task management context, the task manager handles the queueing, but the "processors" are external to the system, consisting of people and other computers. Those external processors deal with the tasks in the relevant queues as they can, then update the task for re-queueing.
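The queueing arrangement described above can be sketched in a few lines of Python. This is a single-threaded simulation under assumed names: the task manager only holds the queues, while the "processor" (a person or machine, external to the system) pulls tasks at its own pace, works on them, and re-queues the results.

```python
from queue import Queue

# The task manager's side: it queues work but does no processing itself.
pending = Queue()
done = Queue()

pending.put({"id": 1, "action": "approve invoice", "status": "pending"})
pending.put({"id": 2, "action": "file report", "status": "pending"})

def external_processor(inbox, outbox):
    """Simulates one external processor draining its queue as it can."""
    while not inbox.empty():
        task = inbox.get()
        task["status"] = "complete"  # the actual work would happen here
        outbox.put(task)             # update the task and re-queue it

external_processor(pending, done)
```

In a real system the processors would run concurrently and the queues would be shared safely between them; the sketch only captures the pull-process-requeue shape of the model.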
Task management is massively event-driven. Event-driven programming is a known pattern in computing, but it's not something that programming languages handle well, in my view. It's closely related to the concept of parallelism, since there are many things which need to be ready for action at the same time, any one of which could be next. This contrasts markedly with the tried and true patterns of structured programming, in which the flow of control proceeds in a continuous manner through loops, branches, and subroutines. Most programming languages adapt to the event-driven model using call-back functions, but there seems to be something fundamentally inappropriate about that approach. Further analysis of this issue will definitely be required. Whatever the case, tasks in the workflow management context need to be broken up into independently executable sub-parts as much as possible, while keeping tabs on the individual parts to make sure that the whole still progresses.
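The "independently executable sub-parts" idea can be sketched as follows, using an invented task breakdown. Completion events may arrive in any order, and a simple pending set keeps tabs on the individual parts so we know when the whole still progresses to completion.

```python
# The breakdown of a task into sub-parts (names invented for illustration).
subtasks = {"draft text", "prepare figures", "get sign-off"}
pending = set(subtasks)

def on_complete(part):
    """Handle a completion event for one sub-part.

    Returns True once every sub-part has reported in,
    i.e. the whole task is done.
    """
    pending.discard(part)
    return not pending

# Events arrive in no particular order -- the hallmark of the
# event-driven model -- yet the whole is still tracked correctly.
whole_done = [on_complete(p)
              for p in ("prepare figures", "draft text", "get sign-off")]
```

The handler here plays the role of a callback, which illustrates the pattern's awkwardness as much as its utility: the flow of control lives in whatever delivers the events, not in the structured program text.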
Task management is all about message passing. Coordination between sub-parts of a computer program is sometimes modelled as "message passing", but the analogy is usually not taken too literally (unless necessary, as in distributed systems), since there is a lot of overhead in a literal message. In the case of workflow management, however, the messages are quite literal, and the coordination process is all about message passing. Not only that, but the messages are preserved as history. In a computer program, data is generally thrown away (the memory freed) as soon as it is no longer necessary. Here, the data is being stored in a separate system, and the history is kept for its contextual, auditing, and performance-measuring value.
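A minimal sketch of this history-preserving message passing, with class and field names assumed for illustration: consuming a message removes it from the inbox, but never from the permanent record.

```python
from dataclasses import dataclass, field

@dataclass
class Mailbox:
    """A recipient's message store. Unlike program memory, which is
    freed once a value is no longer needed, every message delivered
    here is appended to a permanent history for later contextual,
    auditing, and performance-measuring use."""
    inbox: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def deliver(self, message):
        self.inbox.append(message)
        self.history.append(message)  # never discarded

    def read(self):
        return self.inbox.pop(0)      # consumes from the inbox only

alice = Mailbox()
alice.deliver({"from": "bob", "body": "please review task T-7"})
msg = alice.read()
# The inbox is now empty, but the history still holds the message.
```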
In short, the similarities are very real, but the differences are also important. Task management can be viewed as a kind of computer programming problem in a massively parallel, heterogeneous environment, in which the processing elements (mostly people) have relatively high latency. As a consequence, we need a loosely coupled processing model, in which work is performed asynchronously as much as possible. Rather than the usual structured programming model of subroutine calls (in which the calling routine temporarily suspends work while the subroutine does its thing), we need an asynchronous, parallel model, in which work requests are dispatched, and we immediately get on with something else that we could be doing (if anything), rather than wait for the response.
Due to the lack of tight synchronisation between the parallel components, we also need to model sub-tasks of this sort as queues of pending requests and responses. Due to the independent nature of the requester and the responder, one party may be busy doing something else when the other is ready to communicate, so the messages are passed by queueing rather than a synchronised "pass the baton" type of manoeuvre. This kind of thing is not unknown in programming, but it's not a pattern that I can illustrate using, say, language-level constructs in Perl (where subroutine calls are very much synchronous affairs). Analysis will be necessary.
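The asynchronous, queue-mediated exchange described above can be sketched in Python (all names and message texts here are illustrative). Neither party waits on the other: the requester dispatches and immediately gets on with something else, and each side drains its own queue when it gets around to it, rather than performing a synchronised "pass the baton" handoff.

```python
from collections import deque

# Two queues decouple the parties: one for pending requests,
# one for pending responses.
requests = deque()
responses = deque()

# The requester dispatches a work request and moves straight on.
requests.append("please approve purchase order 42")
other_work = "requester got on with something else in the meantime"

# Later, at its own pace, the responder drains its request queue.
while requests:
    req = requests.popleft()
    responses.append(f"done: {req}")

# Later still, the requester collects whatever responses have arrived.
result = responses.popleft()
```

A real implementation would need the queues to be shared and durable, and the parties genuinely concurrent; the sketch only shows that queueing removes the need for the two sides to be ready to communicate at the same moment.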
_________________
The Famous Brett Watson -- brett.watson@gmail.com