|On The Ball (Brainstorming)
|Page 1 of 1|
|Author:||TFBW [ Sun Oct 14, 2012 6:29 am ]|
|Post subject:||On The Ball (Brainstorming)|
I'm not satisfied with the state of workflow management tools. I haven't used that many, but it strikes me that there is room for something simpler and more general than any I've seen so far. The ones I've used tend to target a particular problem (like software defect tracking, a la Bugzilla), or implement every piece of jargon in the field, making for a highly buzzword-compliant system with an excessively complex interface that the average person can't understand.
As a consequence of this, I propose a new project, which I tentatively name "On The Ball". This is to be a bare-bones task-tracking service, with the intention that it should be designed for extensibility, so that it can fill the more particular roles as need be. The core should be simple and generic, yet useful: a foundation upon which richer implementations can be built.
Extensibility needs to be done right: everything is extensible in principle, but not all extension models are created equal. I think that Perl provides us with a good example of extensibility done right, more or less: some constructs are built in (implemented at the language level); some are implemented in core modules (always available, but they must be explicitly imported); some are available in CPAN. In the same way, very little of On The Ball should be contained in the core program itself, but there should be enough useful extensions included with it to suit most basic needs. Consideration of where to draw the line between "core" and "module" will be a recurring theme.
The name, "On The Ball", comes from the sporting maxim, "keep your eye on the ball". Someone who is "on the ball" is alert, attentive to his environment, aware of what action is being taken around him, and anticipates what he needs to do to aid in the situation. Task tracking is easy to explain in terms of such sporting metaphors: to pass the ball, to drop the ball, the ball being in one's court, and so on. If there is a core problem statement for On The Ball, it is to identify all the various "balls" in the system, to ensure it is clear which court each ball is in, and to prevent any ball from being dropped.
The purpose of this forum topic is to brainstorm the design of On The Ball: to come up with initial ideas for the core data model and behaviour of the system. Due to the desire for a small but practically extensible core, this will also involve anticipating the various specialised roles into which we may want to extend the system, and determining which of the aspects are justifiably considered "core", versus "extension".
|Author:||TFBW [ Tue Oct 16, 2012 7:56 am ]|
|Post subject:||Tasks per Ball (Data Model)|
A first question to ask about the data model is whether the "task" is the fundamental unit of things being tracked. Or, to put it another way, if we are tracking metaphorical "balls", is there a one-to-one correspondence between tasks and balls, or can there be more than one ball per task?
As an example, suppose person A raises a task to set up a new web server. The person who sets up web servers (B) might in turn need new hardware for the server, and so raise a task with the hardware wranglers (C). Do we represent this as a ball passing from A to B to C, or two separate relationships, A-B, and B-C? What if the new web server not only requires new hardware, but also special network changes which must be handled by a separate party, D? In this case, we don't want to limit the model to the A-B-C style of ball-passing, because we want C and D to work independently and in parallel.
In order to achieve this parallel operation, we need to introduce more than one ball into the task, or create separate tasks and model inter-task dependencies. Inter-task dependencies seem intuitively simpler: a task can be blocked pending the completion of other tasks. If we model this as several balls within a single task, the model is less clear: B, as the middle man, must keep track of the number of balls in play, and not consider his task ready to proceed until such time as all the dispatched balls have been returned.
Exactly what data would we associate with each object in these alternative cases? For inter-task dependencies, we have a problem description, the person who raised the task (the Owner), the person to whom the task is assigned (the Assignee), and a set of tasks which block this one. In the multi-ball case, each ball has an Owner and an Assignee, but also a task with which it is associated. In addition, each ball needs a list of other balls upon which it is waiting, just like inter-task dependencies, otherwise there is no way to tell what's holding up progress. Further, each ball really needs its own problem description, so that we understand the specific issue associated with this ball.
By this point, there's apparently nothing left for the "parent task" to hold -- it's just a grouping mechanism. It does not seem to be a particularly useful one, either, since it acts as a limit on the possible relationships between activities in the system: inter-task dependencies are not possible.
Thus, initial analysis suggests that the task is indeed the fundamental unit being tracked -- that the task is identical with the metaphorical "ball".
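To make the inter-task-dependency model concrete, here is a minimal sketch in Python. The field names (owner, assignee, blocked_by) and the `ready` check are illustrative assumptions, not a settled schema; they just encode the web-server example from above, where B's task waits on C (hardware) and D (network).

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    owner: str          # role that raised the task
    assignee: str       # role asked to perform it
    blocked_by: set = field(default_factory=set)  # ids of blocking tasks
    done: bool = False

def ready(task, tasks):
    """A task is ready to proceed when every task blocking it is done."""
    return all(tasks[tid].done for tid in task.blocked_by)

# The web-server example: B's task waits on C's (hardware) and D's (network),
# and C and D can work in parallel.
tasks = {
    "hw":  Task("provision hardware", owner="B", assignee="C"),
    "net": Task("network changes", owner="B", assignee="D"),
    "web": Task("set up web server", owner="A", assignee="B",
                blocked_by={"hw", "net"}),
}
print(ready(tasks["web"], tasks))   # False: both blockers outstanding
tasks["hw"].done = True
tasks["net"].done = True
print(ready(tasks["web"], tasks))   # True: C and D have finished
```

Note that nothing here requires a "parent task": the grouping falls out of the dependency links themselves.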
|Author:||TFBW [ Wed Oct 17, 2012 1:16 pm ]|
|Post subject:||Roles (Data Model)|
My previous post identified two active roles associated with a task: the Owner (who requests that the task be performed), and the Assignee (who is requested to perform the work). Just as we analysed the relationship between "tasks" and "balls", we will now consider the relationship between these roles and the system users. Flexibility with regard to these roles is likely to have a major impact on the usability of the system.
The simple approach is to have each user of the system assigned an identity, and then have one of these identities in each role. When a task is initially created, the creator can occupy both the Owner and Assignee roles by default, then reassign the task to someone else as need be. This is about as basic as it gets, and is tentatively our minimum requirement.
One consideration is whether the roles are always singular, or whether we might need more than one of each. Singularity makes for simplicity, but there are situations in which team effort is involved. How do we model these situations?
One example is where a problem is raised against a group, such as a department or team. A bug report might be raised as a task for a software development team, for example. The appropriate action in this case would be to raise the task against either a team leader, or a virtual role such as "bug reports" or "software development team". The role is virtual in the sense that it does not correspond to an actual person, or even to a role that a person fills (although we might consider "software development team" to be a collective role). Even if these virtual roles have no other relationship with actual users, they can still act as named queues into which work can be placed, and a particular group of people can deal with tasks in particular named queues as a simple matter of convention.
For added utility, it may help to model such a queue as a group with explicit membership: people who are members of the group can have that group's "task inbox" appear in their default view of the system (in addition to their personal one). Explicit group membership can also assist a team leader when reassigning tasks from the group identity to the individual members, since the member list can be provided as candidate assignees. Using a group identity in this way is better than simply using the team leader's address for team work, since it also allows us to distinguish clearly between work intended for the team, and work intended for the team leader in particular.
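The "task inbox" idea above can be sketched briefly: a user's default view is the union of their personal queue and the queues of the groups they belong to. All names here are illustrative assumptions.

```python
# Hypothetical group membership and task data, for illustration only.
groups = {"dev-team": {"alice", "bob"}}

tasks = [
    {"id": 1, "assignee": "alice"},
    {"id": 2, "assignee": "dev-team"},
    {"id": 3, "assignee": "bob"},
]

def inbox(user, tasks, groups):
    """Tasks assigned to the user, plus tasks assigned to any of their groups."""
    visible = {user} | {g for g, members in groups.items() if user in members}
    return [t for t in tasks if t["assignee"] in visible]

print([t["id"] for t in inbox("alice", tasks, groups)])   # [1, 2]
print([t["id"] for t in inbox("bob", tasks, groups)])     # [2, 3]
```

Both members see the group's work without it being addressed to either of them personally, which is exactly the distinction between team work and team-leader work noted above.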
Our first observation, then, is that not all roles correspond to individual users. While it seems reasonable that every user should be addressable as a role, there will be cause to assign tasks to virtual identities. Indeed, roles don't have to be anything more than an arbitrary label, although they can be made more useful by adding relevant metadata (such as a list of related roles, being likely choices in the case that the Owner or Assignee is changed).
Note also that people aren't necessarily restricted to a single role. It may be handy to distinguish between a person's participation in various groups by giving that person group-specific roles, rather than simply adding their personal identity to a list of group members. In that way, tasks assigned to them in their group role can be distinguished from other tasks. This arrangement has the advantage that when a person leaves a group, the system can facilitate reallocation of any affected tasks by reassigning them back to the group as a whole.
What this means for our data model is that "roles" are a first-class entity in the system. The Owner and Assignee are designated by roles, and roles may be filled by individual people, or by virtual entities such as groups. It is apparent that ease of role management is going to contribute significantly to the usability of the system as a whole.
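As a sketch of roles as first-class entities, the following treats every role as a named record with a kind; the particular kinds ("person", "group", "queue") and fields are assumptions for illustration, not a committed design.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    kind: str = "person"                         # "person", "group", or "queue"
    members: set = field(default_factory=set)    # member role names, for groups
    description: str = ""

roles = {
    "alice": Role("alice"),
    "bob": Role("bob"),
    "dev-team": Role("dev-team", kind="group", members={"alice", "bob"},
                     description="Handles bug reports and feature work."),
    "bug-reports": Role("bug-reports", kind="queue"),
}

def candidate_assignees(role_name, roles):
    """When reassigning a group's task, offer the members as candidates."""
    role = roles[role_name]
    return sorted(role.members) if role.kind == "group" else [role_name]

print(candidate_assignees("dev-team", roles))   # ['alice', 'bob']
```

A plain queue role like "bug-reports" has no members at all, yet is addressable in exactly the same way as a person, which is the key property the post argues for.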
|Author:||TFBW [ Mon Oct 22, 2012 1:23 pm ]|
|Post subject:||Re: Roles (Data Model)|
I want to briefly emphasise and reiterate some of the points I've made about roles in the previous post.
A person can have many roles. In trivial systems, it is sufficient to simply address a task to a person, but, in more complex systems, it is important that work be assigned to roles rather than individuals. Role management is thus an important extension path. As a user, I want to be able to see my roles. When a user relinquishes a role, we want the system to do something sensible with the affected tasks, which may mean reassigning the tasks to a fall-back role (e.g. the group that the user is leaving), or closing the associated task (if the departure actually warrants it).
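The fall-back behaviour can be sketched as follows: when a user relinquishes a role, every open task held in that role is reassigned to a fall-back role, such as the group being left. The role-naming convention here is a hypothetical one, chosen only for the example.

```python
def relinquish(role, fallback, tasks):
    """Reassign every open task held by `role` to the `fallback` role."""
    for task in tasks:
        if task["assignee"] == role and not task["done"]:
            task["assignee"] = fallback

# "alice@dev-team" stands for Alice's group-specific role (a hypothetical
# naming convention); on departure, her open work returns to the group.
tasks = [
    {"id": 1, "assignee": "alice@dev-team", "done": False},
    {"id": 2, "assignee": "alice@dev-team", "done": True},
    {"id": 3, "assignee": "bob@dev-team", "done": False},
]
relinquish("alice@dev-team", "dev-team", tasks)
print([t["assignee"] for t in tasks])
# ['dev-team', 'alice@dev-team', 'bob@dev-team']
```

Completed tasks keep their original assignee, preserving the history of who actually did the work.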
There can be many kinds of roles. A person can have an individual identity role, any number of group membership roles, and possibly categorised sub-roles within those groups. Roles can also be arbitrary named queues, not associated with any person, but dealt with on an informal basis. This may be simpler than managing groups in small organisations, where things are done on a more ad hoc basis. For example, rather than have a formal "receptionists" group with members, there might just be a "receptionist" role, and whoever is performing those duties at the time can deal with those tasks, without any formal connection between the people and the roles. The purpose of roles is to act as a management aid: they should make it easy to find tasks, and easy to reassign tasks in the face of change.
Large systems require navigation aids. Where there are a lot of roles, such as in a large organisation, it is important that they have a navigable structure. Where possible, the system should have some clue about the likely assignees for a task (such as members of a group), and aid the user by offering those as candidates. Where the search for an assignee needs to go further afield, the user should be able to drill down through various paths, such as a division/department hierarchy, or search by name. It will be helpful in most cases if the user can maintain a personal "contacts list", which is really just a cache of frequently or recently used roles.
Roles have semantic content as well as names. The name of a role should convey its purpose, but this will often be a slightly jargon-laden title, and may not be meaningful to all intended users. It may be a good idea to have longer descriptions associated with roles, to inform people who are considering assigning a task to the role as to what kind of tasks are handled by the role. Clearly, this description is going to be common across group roles where members perform the same task, so an appropriate form of inheritance will be desirable in that case. This description data becomes a good candidate for searching, to aid in choosing an assignee for a task.
|Author:||TFBW [ Wed Oct 24, 2012 2:08 pm ]|
|Post subject:||Task Management, Programming Analogies|
There is a lot that task management can learn from computer programming. Computer programming is about automating tasks, after all, and task management is about assisting workflow by tracking it and making facts about it explicit. Where tasks might have sub-tasks, computer programs might have subroutines, and so on. Task management theory should borrow heavily from programming language and operating system theory as a consequence of this.
Before looking at the similarities, however, let's have a look at some of the differences, so that we get a feel for the limits of the analogy.
Task management is massively heterogeneous. This problem arises in distributed computer systems as well, but it's even more pronounced than usual in the context of task management. Most of the actors in the task management context are human beings, but computers and other automata can also get in on the act, mechanically raising or processing tasks. As a consequence, we need to be particularly flexible about data representation. Sometimes it will be read and modified by people, and sometimes by machines.
Task management is massively parallel. Parallelism is a tricky issue in programming, but it's even more pronounced in task management. In a computer, an operating system will normally perform the scheduling work, and allocate particular executable tasks to available processors. In the task management context, the task manager handles the queueing, but the "processors" are external to the system, consisting of people and other computers. Those external processors deal with the tasks in the relevant queues as they can, then update the task for re-queueing.
Task management is massively event-driven. Event-driven programming is a known pattern in computing, but it's not something that programming languages handle well, in my view. It's closely related to the concept of parallelism, since there are many things which need to be ready for action at the same time, any one of which could be next. This contrasts markedly with the tried and true patterns of structured programming, in which the flow of control proceeds in a continuous manner through loops, branches, and subroutines. Most programming languages adapt to the event-driven model using call-back functions, but there seems to be something fundamentally inappropriate about that approach. Further analysis of this issue will definitely be required. Whatever the case, tasks in the workflow management context need to be broken up into independently executable sub-parts as much as possible, while keeping tabs on the individual parts to make sure that the whole still progresses.
Task management is all about message passing. Coordination between sub-parts of a computer program is sometimes modelled as "message passing", but the analogy is usually not taken too literally (unless necessary, as in distributed systems), since there is a lot of overhead in a literal message. In the case of workflow management, however, the messages are quite literal, and the coordination process is all about message passing. Not only that, but the messages are preserved as history. In a computer program, data is generally thrown away (the memory freed) as soon as it is no longer necessary. Here, the data is being stored in a separate system, and the history is kept for its contextual, auditing, and performance-measuring value.
In short, the similarities are very real, but the differences are also important. Task management can be viewed as a kind of computer programming problem in a massively parallel, heterogeneous environment, in which the processing elements (mostly people) have relatively high latency. As a consequence, we need a loosely coupled processing model, in which work is performed asynchronously as much as possible. Rather than the usual structured programming model of subroutine calls (in which the calling routine temporarily suspends work while the subroutine does its thing), we need an asynchronous, parallel model, in which work requests are dispatched, and we immediately get on with something else that we could be doing (if anything), rather than wait for the response.
Due to the lack of tight synchronisation between the parallel components, we also need to model sub-tasks of this sort as queues of pending requests and responses. Due to the independent nature of the requester and the responder, one party may be busy doing something else when the other is ready to communicate, so the messages are passed by queueing rather than a synchronised "pass the baton" type of manoeuvre. This kind of thing is not unknown in programming, but it's not a pattern that I can illustrate using, say, language-level constructs in Perl (where subroutine calls are very much synchronous affairs). Analysis will be necessary.
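The queueing model above can be illustrated with per-role inboxes: a request is appended to the recipient's queue and the sender carries on immediately, with responses queued the same way. This is a purely illustrative sketch of the asynchronous pattern, not a proposed implementation.

```python
from collections import deque

# Each role gets an inbox; dispatching never blocks the sender.
inboxes = {"B": deque(), "C": deque(), "D": deque()}

def dispatch(sender, recipient, message):
    """Queue a message in the recipient's inbox; the sender does not wait."""
    inboxes[recipient].append((sender, message))

# B dispatches both sub-requests, then is free to do other work.
dispatch("B", "C", "provision hardware")
dispatch("B", "D", "network changes")

# Later, C and D each drain their own queues independently and reply.
for worker in ("C", "D"):
    while inboxes[worker]:
        sender, msg = inboxes[worker].popleft()
        dispatch(worker, sender, f"done: {msg}")

print(len(inboxes["B"]))   # 2: both responses now queued for B
```

At no point does B suspend waiting on C or D, which is the contrast with synchronous subroutine calls drawn above.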
|Author:||TFBW [ Sun Oct 28, 2012 11:03 am ]|
|Post subject:||Tasks and Subtasks (Data Model)|
|Author:||TFBW [ Wed Nov 07, 2012 1:23 pm ]|
|Post subject:||Further Analysis of the Task Data Model|