TFBW's Forum

On The Ball (Brainstorming)

Author:  TFBW [ Sun Oct 14, 2012 6:29 am ]
Post subject:  On The Ball (Brainstorming)

I'm not satisfied with the state of workflow management tools. I haven't used that many, but it strikes me that there is room for something simpler and more general than any I've seen so far. The ones I've used tend to target a particular problem (like software defect tracking, a la Bugzilla), or implement every piece of jargon in ITIL, making for a highly buzzword-compliant system with an excessively complex interface that the average person can't understand.

As a consequence of this, I propose a new project, which I tentatively name "On The Ball". This is to be a bare-bones task-tracking service, with the intention that it should be designed for extensibility, so that it can fill the more particular roles as needs be. The core should be simple and generic, yet useful: a foundation upon which richer implementations can be built.

Extensibility needs to be done right: everything is extensible in principle, but not all extension models are created equal. I think that Perl provides us with a good example of extensibility done right, more or less: some constructs are built in (implemented at the language level); some are implemented in core modules (always available, but they must be explicitly imported); some are available in CPAN. In the same way, very little of On The Ball should be contained in the core program itself, but there should be enough useful extensions included with it to suit most basic needs. Consideration of where to draw the line between "core" and "module" will be a recurring theme.

The name, "On The Ball", comes from the sporting maxim, "keep your eye on the ball". Someone who is "on the ball" is alert, attentive to his environment, aware of what action is being taken around him, and anticipates what he needs to do to aid in the situation. Task tracking is easy to explain in terms of such sporting metaphors: to pass the ball, to drop the ball, the ball being in one's court, and so on. If there is a core problem statement for On The Ball, it is to identify all the various "balls" in the system, to ensure it is clear which court each ball is in, and to prevent any ball from being dropped.

The purpose of this forum topic is to brainstorm the design of On The Ball: to come up with initial ideas for the core data model and behaviour of the system. Due to the desire for a small but practically extensible core, this will also involve anticipating the various specialised roles into which we may want to extend the system, and determining which of the aspects are justifiably considered "core", versus "extension".

Author:  TFBW [ Tue Oct 16, 2012 7:56 am ]
Post subject:  Tasks per Ball (Data Model)

A first question to ask about the data model is whether the "task" is the fundamental unit of things being tracked. Or, to put it another way, if we are tracking metaphorical "balls", is there a one-to-one correspondence between tasks and balls, or can there be more than one ball per task?

As an example, suppose person A raises a task to set up a new web server. The person who sets up web servers (B) might in turn need new hardware for the server, and so raise a task with the hardware wranglers (C). Do we represent this as a ball passing from A to B to C, or two separate relationships, A-B, and B-C? What if the new web server not only requires new hardware, but also special network changes which must be handled by a separate party, D? In this case, we don't want to limit the model to the A-B-C style of ball-passing, because we want C and D to work independently and in parallel.

In order to achieve this parallel operation, we need to introduce more than one ball into the task, or create separate tasks and model inter-task dependencies. Inter-task dependencies seem intuitively simpler: a task can be blocked pending the completion of other tasks. If we model this as several balls within a single task, the model is less clear: B, as the middle man, must keep track of the number of balls in play, and not consider his task ready to proceed until such time as all the dispatched balls have been returned.

Exactly what data would we associate with each object in these alternative cases? For inter-task dependencies, we have a problem description, the person who raised the task (the Owner), the person to whom the task is assigned (the Assignee), and a set of tasks which block this one. In the multi-ball case, each ball has an Owner and an Assignee, but also a task with which it is associated. In addition, each ball needs a list of other balls upon which it is waiting, just like inter-task dependencies, otherwise there is no way to tell what's holding up progress. Further, each ball really needs its own problem description, so that we understand the specific issue associated with this ball.
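To make the comparison concrete, here is a rough Python sketch of the two alternatives. All field names are illustrative only, not a committed schema; the point is just to see where the data ends up in each model.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Inter-task-dependency model: the task itself is the tracked unit."""
    description: str
    owner: str                      # role that raised the task
    assignee: str                   # role asked to perform the work
    done: bool = False
    blocked_by: list = field(default_factory=list)  # other Task objects

    def is_actionable(self):
        """A task is ready to proceed once everything blocking it is done."""
        return all(t.done for t in self.blocked_by)

@dataclass
class Ball:
    """Multi-ball model: several balls may live inside a single task."""
    description: str                # each ball still needs its own description
    owner: str
    assignee: str
    task_id: str                    # the parent task it belongs to
    waiting_on: list = field(default_factory=list)  # other Ball objects

# The A/B/C/D web server example, in the inter-task-dependency model:
server = Task("set up a new web server", owner="A", assignee="B")
hw = Task("provide new hardware", owner="B", assignee="C")
net = Task("make network changes", owner="B", assignee="D")
server.blocked_by = [hw, net]       # C and D work in parallel
```

Note that the `Ball` ends up carrying every field the `Task` does, plus a pointer back to its parent, which is precisely the observation made below.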

By this point, there's apparently nothing left for the "parent task" to hold -- it's just a grouping mechanism. It does not seem to be a particularly useful one, either, since it acts as a limit on the possible relationships between activities in the system: inter-task dependencies are not possible.

Thus, initial analysis suggests that the task is indeed the fundamental unit being tracked -- that the task is identical with the metaphorical "ball".

Author:  TFBW [ Wed Oct 17, 2012 1:16 pm ]
Post subject:  Roles (Data Model)

My previous post identified two active roles associated with a task: the Owner (who requests that the task be performed), and the Assignee (who is requested to perform the work). Just as we analysed the relationship between "tasks" and "balls", we will now consider the relationship between these roles and the system users. Flexibility with regard to these roles is likely to have a major impact on the usability of the system.

The simple approach is to have each user of the system assigned an identity, and then have one of these identities in each role. When a task is initially created, the creator can occupy both the Owner and Assignee roles by default, then reassign the task to someone else as needs be. This is about as basic as it gets, and is tentatively our minimum requirement.

One consideration is whether the roles are always singular, or whether we might need more than one of each. Singularity makes for simplicity, but there are situations in which team effort is involved. How do we model these situations?

One example is where a problem is raised against a group, such as a department or team. A bug report might be raised as a task for a software development team, for example. The appropriate action in this case would be to raise the task against either a team leader, or a virtual role such as "bug reports" or "software development team". The role is virtual in the sense that it does not correspond to an actual person, or even to a role that a person fills (although we might consider "software development team" to be a collective role). Even if these virtual roles have no other relationship with actual users, they can still act as named queues into which work can be placed, and a particular group of people can deal with tasks in particular named queues as a simple matter of convention.

For added utility, it may help to model such a queue as a group with explicit membership: people who are members of the group can have that group's "task inbox" appear in their default view of the system (in addition to their personal one). Explicit group membership can also assist a team leader when reassigning tasks from the group identity to the individual members, since the member list can be provided as candidate assignees. Using a group identity in this way is better than simply using the team leader's address for team work, since it also allows us to distinguish clearly between work intended for the team, and work intended for the team leader in particular.

Our first observation, then, is that not all roles correspond to individual users. While it seems reasonable that every user should be addressable as a role, there will be cause to assign tasks to virtual identities. Indeed, roles don't have to be anything more than an arbitrary label, although they can be made more useful by adding relevant metadata (such as a list of related roles, being likely choices in the case that the Owner or Assignee is changed).

Note also that people aren't necessarily restricted to a single role. It may be handy to distinguish between a person's participation in various groups by giving that person group-specific roles, rather than simply adding their personal identity to a list of group members. In that way, tasks assigned to them in their group role can be distinguished from other tasks. This arrangement has the advantage that when a person leaves a group, the system can facilitate reallocation of any affected tasks by reassigning them back to the group as a whole.

What this means for our data model is that "roles" are a first-class entity in the system. The Owner and Assignee are designated by roles, and roles may be filled by individual people, or by virtual entities such as groups. It is apparent that ease of role management is going to contribute significantly to the usability of the system as a whole.
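As a sketch of roles as a first-class entity, something like the following might do. The class and field names are hypothetical; the key points are that a role need be nothing more than a named label, that groups are roles with an explicit member list, and that the member list doubles as a source of candidate assignees.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role is just a named label; metadata makes it more useful."""
    name: str
    description: str = ""           # longer purpose text, a good search target
    members: list = field(default_factory=list)  # empty for individual roles

    def candidate_assignees(self):
        """Group members are offered as likely choices when reassigning."""
        return [m.name for m in self.members]

alice = Role("alice")
bob = Role("bob")
dev_team = Role("software development team",
                "Handles bug reports and feature requests",
                members=[alice, bob])
bug_queue = Role("bug reports")     # a bare named queue, no members at all
```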

Author:  TFBW [ Mon Oct 22, 2012 1:23 pm ]
Post subject:  Re: Roles (Data Model)

I want to briefly emphasise and reiterate some of the points I've made about roles in the previous post.

A person can have many roles. In trivial systems, it is sufficient to simply address a task to a person, but, in more complex systems, it is important that work be assigned to roles rather than individuals. Role management is thus an important extension path. As a user, I want to be able to see my roles. When a user relinquishes a role, we want the system to do something sensible with the affected tasks, which may mean reassigning the tasks to a fall-back role (e.g. the group that the user is leaving), or closing the associated task (if the departure actually warrants it).

There can be many kinds of roles. A person can have an individual identity role, any number of group membership roles, and possibly categorised sub-roles within those groups. Roles can also be arbitrary named queues, not associated with any person, but dealt with on an informal basis. This may be simpler than managing groups in small organisations, where things are done on a more ad hoc basis. For example, rather than have a formal "receptionists" group with members, there might just be a "receptionist" role, and whoever is performing those duties at the time can deal with those tasks, without any formal connection between the people and the roles. The purpose of roles is to act as a management aid: they should make it easy to find tasks, and easy to reassign tasks in the face of change.

Large systems require navigation aids. Where there are a lot of roles, such as in a large organisation, it is important that they have a navigable structure. Where possible, the system should have some clue about the likely assignees for a task (such as members of a group), and aid the user by offering those as candidates. Where the search for an assignee needs to go further afield, the user should be able to drill down through various paths, such as a division/department hierarchy, or search by name. It will be helpful in most cases if the user can maintain a personal "contacts list", which is really just a cache of frequently or recently used roles.

Roles have semantic content as well as names. The name of a role should convey its purpose, but this will often be a slightly jargon-laden title, and may not be meaningful to all intended users. It may be a good idea to have longer descriptions associated with roles, to inform people who are considering assigning a task to the role as to what kind of tasks are handled by the role. Clearly, this description is going to be common across group roles where members perform the same task, so an appropriate form of inheritance will be desirable in that case. This description data becomes a good candidate for searching, to aid in choosing an assignee for a task.

Author:  TFBW [ Wed Oct 24, 2012 2:08 pm ]
Post subject:  Task Management, Programming Analogies

There is a lot that task management can learn from computer programming. Computer programming is about automating tasks, after all, and task management is about assisting workflow by tracking it and making facts about it explicit. Where tasks might have sub-tasks, computer programs might have subroutines, and so on. Task management theory should borrow heavily from programming language and operating system theory as a consequence of this.

Before looking at the similarities, however, let's have a look at some of the differences, so that we get a feel for the limits of the analogy.

Task management is massively heterogeneous. This problem arises in distributed computer systems as well, but it's even more pronounced than usual in the context of task management. Most of the actors in the task management context are human beings, but computers and other automata can also get in on the act, mechanically raising or processing tasks. As a consequence, we need to be particularly flexible about data representation. Sometimes it will be read and modified by people, and sometimes by machines.

Task management is massively parallel. Parallelism is a tricky issue in programming, but it's even more pronounced in task management. In a computer, an operating system will normally perform the scheduling work, and allocate particular executable tasks to available processors. In the task management context, the task manager handles the queueing, but the "processors" are external to the system, consisting of people and other computers. Those external processors deal with the tasks in the relevant queues as they can, then update the task for re-queueing.

Task management is massively event-driven. Event-driven programming is a known pattern in computing, but it's not something that programming languages handle well, in my view. It's closely related to the concept of parallelism, since there are many things which need to be ready for action at the same time, any one of which could be next. This contrasts markedly with the tried and true patterns of structured programming, in which the flow of control proceeds in a continuous manner through loops, branches, and subroutines. Most programming languages adapt to the event-driven model using call-back functions, but there seems to be something fundamentally inappropriate about that approach. Further analysis of this issue will definitely be required. Whatever the case, tasks in the workflow management context need to be broken up into independently executable sub-parts as much as possible, while keeping tabs on the individual parts to make sure that the whole still progresses.

Task management is all about message passing. Coordination between sub-parts of a computer program is sometimes modelled as "message passing", but the analogy is usually not taken too literally (unless necessary, as in distributed systems), since there is a lot of overhead in a literal message. In the case of workflow management, however, the messages are quite literal, and the coordination process is all about message passing. Not only that, but the messages are preserved as history. In a computer program, data is generally thrown away (the memory freed) as soon as it is no longer necessary. Here, the data is being stored in a separate system, and the history is kept for its contextual, auditing, and performance-measuring value.

In short, the similarities are very real, but the differences are also important. Task management can be viewed as a kind of computer programming problem in a massively parallel, heterogeneous environment, in which the processing elements (mostly people) have relatively high latency. As a consequence, we need a loosely coupled processing model, in which work is performed asynchronously as much as possible. Rather than the usual structured programming model of subroutine calls (in which the calling routine temporarily suspends work while the subroutine does its thing), we need an asynchronous, parallel model, in which work requests are dispatched, and we immediately get on with something else that we could be doing (if anything), rather than wait for the response.

Due to the lack of tight synchronisation between the parallel components, we also need to model sub-tasks of this sort as queues of pending requests and responses. Due to the independent nature of the requester and the responder, one party may be busy doing something else when the other is ready to communicate, so the messages are passed by queueing rather than a synchronised "pass the baton" type of manoeuvre. This kind of thing is not unknown in programming, but it's not a pattern that I can illustrate using, say, language-level constructs in Perl (where subroutine calls are very much synchronous affairs). Analysis will be necessary.
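The queueing model described above can be illustrated with a minimal Python sketch: each party owns an inbox, and a work request is dispatched by queueing rather than by a synchronous call, so the requester returns immediately. The names and message shape are illustrative only.

```python
import queue

# Each party has its own inbox of pending requests.
inboxes = {"A": queue.Queue(), "B": queue.Queue()}

def send(to, message):
    """Dispatch a request and return immediately; no waiting on the recipient."""
    inboxes[to].put(message)

# A raises a request with B, then gets on with something else at once.
send("B", {"from": "A", "task": "set up a new web server"})
# ... A is free to do unrelated work here; there is no blocking wait ...

# Later, whenever B is ready, B drains its inbox and acts on the request.
request = inboxes["B"].get_nowait()
```

Contrast this with a subroutine call, where A would be suspended until B returned.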

Author:  TFBW [ Sun Oct 28, 2012 11:03 am ]
Post subject:  Tasks and Subtasks (Data Model)

One of the key issues for managing workflow is coordination between tasks and subtasks. If parallelism is only available as a matter of creating subtasks, then it is important that we coordinate well between tasks, since we need to exploit parallelism at every available opportunity. This raises a question as to what models we have for that coordination.

Computer programming offers some relevant ideas, but, as we have seen, these tend to be tailored to a tightly coupled, synchronous environment. Although some of the ideas will be very helpful, we need to watch out for those aspects which assume tight coupling, and adjust accordingly.

One aspect of the problem is communication between subtasks. The simplest and most basic kind of communication involves one task signalling another that it is complete. In more complex cases, this completion will involve some data which is made available as a product of the subtask. How do tasks block each other, and how do they handle data?

Another aspect is that of why and when a part of a task becomes a subtask. At one extreme, we could keep reassigning a single task as we reach different parts of the larger task which require action by different people. There is no strict need to spawn a sub-task until parallel operation is required. Call this the "reassignment model". At the other extreme, we might consider the act of reassigning a task to be a last resort, and spawn sub-tasks for every sub-step in the larger task. Call this the "explicit sub-task model". We need to consider the implications of these choices, and figure out what is going to be most practical.

Let's consider a real-world scenario, and how task tracking might be of most benefit to it. The scenario is a large business with a data centre. Person A wants the use of an additional computer in the data centre, and so raises a task with the group (B), who deals with that sort of thing. Initially, this might be an informal request, along the lines of, "I need a new server", but there is a formal procedure in place for fulfilling this request, as follows.

  1. Get the person who is asking for the server to fill out a request form, describing the server requirements.
  2. If the person requesting the server is not on the list of people who can approve such a request, require approval for the request.
  3. If there is no suitable server available in the spare pool, raise a task to add new hardware in the data centre (on which this task blocks).
  4. Allocate one of the available servers to this request. This involves updating the records of server allocations.
  5. Configure the server with the requested Standard Operating Environment.
  6. Install keys on the system to grant administrative access to the team that requested the server.
  7. Notify the team that the server is now available for use.
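The formal procedure above can be sketched as a set of step tasks with "blocked by" relationships. The numbering matches the list above; in practice the conditional steps (2 and 3) would only be spawned when their conditions hold, but they are included here to show the dependency shape.

```python
steps = {
    1: "fill out the request form",
    2: "obtain approval",            # only if the requester cannot self-approve
    3: "add new hardware",           # only if the spare pool is empty
    4: "allocate a server",
    5: "configure the Standard Operating Environment",
    6: "install administrative access keys",
    7: "notify the requesting team",
}

# Which steps block which: allocation waits on approval and hardware, etc.
blocked_by = {2: [1], 3: [1], 4: [2, 3], 5: [4], 6: [5], 7: [6]}

def ready(step, done):
    """A step is actionable once every step it blocks on is complete."""
    return all(d in done for d in blocked_by.get(step, []))
```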

It's clear enough that the process starts with A raising a task against B, but where does it go from there? The next step is for B to throw it back at A with a request to fill in the form. If this is an explicit sub-task, then it's raised by B against A. The original task is then blocked by the new sub-task, so the original task is "off the boil", so to speak -- meaning that it is no longer immediately actionable by B.

That "off the boil" status is something that needs to be set explicitly, since there's nothing to tell the task manager that nothing else can be done in the main task until that particular sub-task is complete. It might be useful for the assignee to be able to set a state of "awaiting subtasks" for this condition. It's not essential, however: assuming that tasks are highlighted in a manner similar to email, the task will maintain an "already read" type of status unless something else has added an event to the task more recently than the Assignee. It may be useful, even so, since there is a difference between a task where progress is possible and one where progress is blocked.

If, on the other hand, we model the step as a reassignment of the original task (since there is no immediate need to parallelise), then we need a way to remember previous assignees. If we reassign the task to A, such that A is both the Owner and the Assignee, we would prefer to do it in such a way that the situation is clearly distinguished from the similar situation where the task is assigned back to the Owner because it is complete. When A receives the task to fill out the form, we want him to be able to simply fill it out and press "done", then have the task automatically reassigned to B for further progress.

A possible way to achieve this is to model the Assignee as a stack. In fact, the Owner could also form part of this stack: the party that creates the task is initially the only identity on the stack, then pushes a new identity on the stack to reassign it. The Owner is thus simply the role at the bottom of the stack, and the Assignee is the one on top. To temporarily reassign a task, a new role is pushed on the top of the stack. The task is declared finished by popping the Assignee role off the top, letting it fall back to the previous Assignee (or possibly the Owner).
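A rough sketch of that stack model, with hypothetical method names, looks like this. The Owner sits at the bottom of the stack, the current Assignee on top; reassignment is a push, and declaring the current leg finished is a pop.

```python
class StackTask:
    def __init__(self, creator, description):
        self.description = description
        self.stack = [creator]      # creator is Owner and initial Assignee

    @property
    def owner(self):
        return self.stack[0]        # bottom of the stack

    @property
    def assignee(self):
        return self.stack[-1]       # top of the stack

    def reassign(self, role):
        """Temporarily pass the ball by pushing a new role on top."""
        self.stack.append(role)

    def finish(self):
        """Pop the Assignee; the ball falls back to the previous holder."""
        if len(self.stack) > 1:
            self.stack.pop()

# The server example: A raises the task with B; B bounces it back to A
# for the request form; A finishes the form and the ball returns to B.
t = StackTask("A", "set up a new web server")
t.reassign("B")
t.reassign("A")
t.finish()
```

The strictly linear nature of the stack is visible here: there is no way to push two parallel roles at once, which is the drawback discussed below.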

The stack model is very nice in this limited context, but it doesn't parallelise well, which is a serious drawback, given the needs we have identified. It's a strictly linear construct: there's no way to fork it and push parallel entities onto the stack, so explicit sub-tasks would still be necessary, and it's not entirely obvious how they would interact with the stack. The explicit sub-task model is more general: it allows multiple tasks to block another, and forms a tree-like structure, which is stack-like, but amenable to parallelism.

Another argument against the stack-based reassignment model is that we constantly redefine the task as we reassign it. In the case of our current example, A assigns B the task of providing a new server, and then B assigns A the task of filling out a request form as a part of that task. In the stack model, this involves stacking and unstacking the task description as well as the Assignee. In the explicit sub-task model, we have two distinct tasks with distinct descriptions, plus a parent-child relationship between them. The latter model more closely resembles the shape of the problem.

The parent-child relationship between tasks is of some significance in our view of the tasks. The immediate children of a task should be visible to some extent in the parent task, and vice versa. It may be sensible to use a hierarchical identifier for tasks, so that task "X" has children identified as "X.Y". Note that other relationships between tasks should be possible, but parent-child relationships will be more common. This is an area for additional future analysis.

Despite its numerous drawbacks, the stack-based reassignment model has one distinct advantage that we would like to realise in the explicit sub-task model, if at all possible: the stack-like "push and pop" mode of operation, which is simple and straightforward. Can we bring this advantage to the explicit sub-task model? The major difference with the explicit sub-task model at the moment is that each task has its own explicit Owner-Assignee pair, whereas these concepts were implied by relative position in the stack under the reassignment model.

We can more or less recreate the stack model in the explicit sub-task model by reducing the two fields, Owner and Assignee, down to one. For the sake of the current discussion, let's say that we keep the Assignee role, and drop the Owner role. Thus, from the perspective of a sub-task, the chain of parent tasks can be viewed as a stack of Assignees, starting at this task, and proceeding back through the line of parents to the root task. Similarly, from the perspective of a root task, there is a tree of sub-tasks and associated Assignees underneath it, allowing the parallelism that the pure stack-based model did not.

With this shift in the model, the Owner role is now an implied thing: the "owner" is the party associated with the parent task (or the Owner is the same party as the Assignee in the case of a root task). The arrangement seems to convey advantages in terms of role reassignment: the inter-dependencies between tasks are independent of particular Assignee identities, and there is no need to preserve continuity between the Assignee of a task, and the Owner of its sub-tasks, as there would be if these were distinct roles and work were to be reassigned.
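Here is a minimal sketch of the single-role model: each task carries only an Assignee, and the Owner is implied by the parent link. The chain of parents reads like the old stack, while sibling sub-tasks provide the parallelism the pure stack lacked. Class and attribute names are illustrative.

```python
class Task:
    def __init__(self, assignee, description, parent=None):
        self.assignee = assignee
        self.description = description
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    @property
    def owner(self):
        """Implied role: the Assignee of the parent, or self for a root task."""
        return self.parent.assignee if self.parent else self.assignee

# A raises a root task, delegates it whole to B, and B spawns a
# sub-task back against A for the request form.
root = Task("A", "I need a new server")
delegated = Task("B", "I need a new server", parent=root)
form = Task("A", "fill out the request form", parent=delegated)
```

Note how reassigning `delegated` to someone else would automatically make that party the implied Owner of `form`, with no bookkeeping required: the inter-task structure is independent of particular identities.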

This shift in the model does imply some slight differences in the way that root tasks are handled. With distinct Owner and Assignee roles, the raising of a new task went something like this.

  1. Person A creates a new task, and is granted both the Owner and Assignee roles by default.
  2. Person A then reassigns the task to B (by changing the Assignee field).

Under the Assignee-only model, things must be done differently. Again, we are faced with familiar alternative methods to handle this: reassignment, or sub-tasks -- and this after we've already decided to run with the explicit sub-task model! The reassignment approach would look like so.

  1. Person A creates a new task, and is granted the Assignee role by default.
  2. Person A then reassigns the task to B (by changing the Assignee field).

Note that a consequence of this action is that Person A is no longer explicitly associated with the task. This might be considered a drawback, but it's not necessarily a big problem: people can flag tasks as being of interest to them, regardless of their association with the task, and presumably this task is still visible to A on the basis of such a flag.

In the sub-task approach, the procedure is as follows.

  1. Person A creates a new task, and is granted the Assignee role by default.
  2. Person A then creates an explicit sub-task, which is essentially the same task description, and assigns it to Person B.

This model has the advantage that Person A maintains an explicit "ownership" role of the task, relative to Person B. There is a minor corresponding drawback: the duplication of task descriptions. We can work around this easily by allowing a task to inherit its description from the parent task. We might call this approach "delegation", since it involves passing the task whole-cloth on to another person without relinquishing responsibility for the original task.

The difference between the two is largely a matter of style: neither is clearly more correct or superior to the other, given the analysis here. This is not a problem, since both operations -- reassignment and creation of sub-tasks -- are desirable in their own right.

My conclusion, given this analysis, is that we should change the task model to a single-role model, as described here, with other roles implied by inter-task relationships. With reference to the sporting analogy, this task is a "ball", and the role designates the party who has the ball. We might refer to the role as "Owner" or "Assignee", since both are appropriate in their own way, but it's more accurately a "responsible party" or "in possession" relationship. There may be a single word that captures this relationship well, but it eludes me for now, so I'll tentatively stick with the boring "Assignee" moniker as the best approximation.

Author:  TFBW [ Wed Nov 07, 2012 1:23 pm ]
Post subject:  Further Analysis of the Task Data Model

In the previous post, I looked at the question of when and why to spawn a sub-task. Analysis suggests that every sub-step should be considered a sub-task, so a policy of "spawn early and often" is our starting point. There are still numerous issues to consider, however, given that starting point.

One issue is the question of task inter-dependency. Just because one task is raised in the context of another task does not automatically imply that the new task is a sub-task. Another possibility is that it is an incidental task. The distinction is straightforward: a sub-task is one which forms a sub-part of the current task, and thus completion of this task depends on completion of the sub-task. Incidental tasks differ in that they are work which arises in the current context, but which does not block completion of the current task.

To illustrate this point, consider the following example. A business with a substantial number of properties might schedule routine inspections of those properties, to check for maintenance issues. The inspection tasks could be managed via the workflow system, but any maintenance tasks which are raised as a consequence of the inspection are not sub-tasks: they are incidental tasks, assigned to the maintenance group. The inspection task simply closes when the inspection work is complete, whether or not any incidental tasks are raised in the process. Even so, it is useful from a measurement perspective to maintain the relationship between the inspection task and the incidental tasks that it produces, rather than raise the new task ex nihilo.

This observation leads to a few further thoughts on tasks and their relationships. First, how might we want to distinguish between sub-tasks and incidental tasks, given the desire to track both? The key distinction is that there is a dependency between a task and its sub-tasks, whereas there is no such dependency in the case of an incidental task. Dependencies are also possible outside the sub-task relationship, and this suggests that a sub-task is simply one with both a parentage relationship and a dependency relationship. That, at least, is one possible model. Further analysis is definitely warranted, however, particularly given that there may be more than one kind of inter-dependency.
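The suggested model can be sketched as follows: both kinds of task record a parent for context and measurement, but only a sub-task also carries a dependency back to that parent. The flag name is hypothetical, and as noted above, there may turn out to be more than one kind of inter-dependency.

```python
class Task:
    def __init__(self, description, parent=None, blocks_parent=False):
        self.description = description
        self.parent = parent               # parentage: context only
        self.blocks_parent = blocks_parent # dependency: True => sub-task
        self.done = False

    @property
    def is_subtask(self):
        """Sub-task = parentage relationship plus dependency relationship."""
        return self.parent is not None and self.blocks_parent

def can_close(task, all_tasks):
    """A task can close only once its blocking sub-tasks are done;
    incidental tasks have no bearing on closure."""
    return all(t.done for t in all_tasks
               if t.parent is task and t.blocks_parent)

# The property inspection example: the checklist blocks the inspection,
# but the maintenance work raised along the way is merely incidental.
inspection = Task("routine property inspection")
checklist = Task("complete the inspection checklist",
                 parent=inspection, blocks_parent=True)
fix_roof = Task("repair roof leak", parent=inspection)
```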

I didn't much consider the mode of communication between sub-tasks, so that seems an appropriate place to focus our attention now. Returning to the scenario of the previous post, person A was requesting a new server from group B, and the first step in that process was for B to present A with a request to fill out a form. Not only is this a sub-task which blocks further progress, it is also a request for information. On The Ball should facilitate the transfer and management of such information.

Without specific assistance from the task management framework, the request can be an informal one, and party B can use the response to manually update the appropriate systems. Alternatively, the task request can contain a URL which directs the Assignee to fill in a form in a separate system. Neither of these alternatives is ideal in the sense of minimising manual intervention, however. The second alternative is better, but still requires that the Assignee manually close the task when the work is done.

Ideally, the task itself contains a form, and the task reports itself as finished when the form is correctly populated. Further, the system then updates whatever back-end database is appropriate using the contents of that form. If there is any problem with the request, that problem is reported back to the Assignee as another task. Thus, if the task is closed, and the ball returns to group B for further action, it implies that the sub-task is genuinely complete.

There is, of course, a significant question as to how this behaviour can be implemented. Some of the steps involved are as follows.

  • Presentation of the form. The form must be designed and specified, then presented to the user as a fill-out form. This should include in-form validation of data to the extent possible, plus hints and so on. The job of filling out a form may be considered a task in itself, so the form may be a sub-task of some other task.
  • Acceptance of the form data. Something must receive the data from the form, validate it, and then act upon that data in some way. If the form data is acceptable, this may result in the task being closed.
  • Presentation of the form data to the party that wants it. The parent task must not only be notified that the task is complete, but also that it resulted in this particular data.
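The steps above can be sketched in code. This is a toy illustration only, assuming a dict-based form specification with per-field validator functions; the field names and checks are made up for the example. The essential behaviour is the one described: the task closes itself only when the submitted data passes validation, and any problems are reported back to the Assignee.

```python
def validate(spec, data):
    """Return a list of problems; an empty list means the data is acceptable."""
    problems = []
    for field_name, check in spec.items():
        value = data.get(field_name)
        if value is None:
            problems.append(f"missing field: {field_name}")
        elif not check(value):
            problems.append(f"invalid value for {field_name}: {value!r}")
    return problems

class FormTask:
    """A task that is 'special' in the sense that it carries a fill-out form."""
    def __init__(self, spec):
        self.spec = spec
        self.data = None
        self.closed = False

    def submit(self, data):
        problems = validate(self.spec, data)
        if problems:
            return problems            # reported back to the Assignee
        self.data = data               # held for presentation to the parent task
        self.closed = True             # the task reports itself as finished
        return []

# Hypothetical "specify your server configuration" form.
server_form = FormTask({
    "cpus": lambda v: isinstance(v, int) and v > 0,
    "ram_gb": lambda v: isinstance(v, int) and v >= 1,
})
```

In a real system the `spec` would presumably be authored by a module (the form must be "designed and specified"), and the in-form validation hints would be derived from the same checks used server-side.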

Another way of looking at this process is as a series of phases: data entry, data storage, and data presentation. These phases apply particularly to human interaction; mechanical interaction is a more general case of "data representation".

Again, it is perhaps simplest to consider these phases in the context of the example. In this case, party B, as part of the "set up a new server" task, raises a "please specify your server configuration" sub-task against party A. This sub-task is a special kind of task: it involves filling out a form. This introduces the idea of special tasks, which stand in contrast to the simple "exchange of messages" that has been our task model until now. Exactly how these "special" tasks integrate into the system is subject to analysis.

From the perspective of party A, this special task should be different in that a fill-out form appears as part of the task. The exact presentation of this form is subject to analysis, but here are some preliminary observations. First, note that the conversational aspect of the task (messages back and forth) also requires a form-like interface, although the "form" in this case is likely to consist of a single rich text editor. There is a question as to whether the special form replaces the conversation form, or whether it is offered in addition to that form. This question also impacts our task data model.

Whatever the case, party A is presented with a special form, and is required to fill it out with valid data to complete this task. On completion, the data is ideally used to select a set of suitable spare hosts which might fulfil the request, and this set is presented to party B. The system might even go ahead and select one of the suitable servers itself. Whatever the case, it's clear that there is some special programmed intervention between the submission of the form, and what happens next.

Let's consider the process in a little more detail. Party A fills out the form with valid data and submits it. The code module that validates the form then passes the validated data on to a module that selects a set of possible servers based on these criteria. This set can then be presented to party B as another fill-out form. This form is another sub-task of the main task. There are also some special cases which might be handled differently: if there is exactly one suitable server, we can skip the selection process and proceed to the next step (allocation); if there are no suitable servers, we detour via the "order new hardware" sub-task.
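The branching just described reduces to a small decision function. A sketch follows; the step names and the shape of the inventory result are assumptions for the sake of illustration.

```python
def next_step(candidate_servers):
    """Decide the next sub-task from the set of servers matching the request."""
    if not candidate_servers:
        # No suitable servers: detour via the "order new hardware" sub-task.
        return ("order new hardware", None)
    if len(candidate_servers) == 1:
        # Exactly one match: skip the selection form and proceed to allocation.
        return ("allocate", candidate_servers[0])
    # Multiple matches: present the set to party B as another fill-out form.
    return ("select a server", candidate_servers)
```

Each return value here would, in the full system, become another sub-task raised against the appropriate party or module.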

Again, let's consider this process in more detail. When party B sends the "specify a server configuration" task to party A, an automated chain of events is being triggered -- a guided process. This is distinct from the more general back-and-forth message passing process (which would still benefit from further analysis, but let us consider this more constrained process first). In allocating a special task of this sort, the code behind the scenes is taking control of some of the workflow.

For the moment, it may be easiest to think of this workflow management as a task of its own, and the various sub-steps in the workflow as sub-tasks. Thus, when B identifies this as a "new server" task, and raises the "specify the configuration" task against A, what actually happens is that B reassigns the task to the special task module for this process -- the "server allocation 'bot", if you like. That module (or 'bot) then starts its process by raising a new sub-task against A, involving the fill-out request form. When that sub-task finishes to the 'bot's satisfaction, it raises a new sub-task against B (to select a server), and so on.
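One way to picture the 'bot is as a coroutine that suspends each time it raises a sub-task and resumes when that sub-task completes. The sketch below uses a Python generator for this; the party names and step titles are invented for the example, and a real module would persist its state rather than live in memory.

```python
def server_allocation_bot():
    """Each yield raises a sub-task against a party; send() delivers its result."""
    config = yield ("A", "fill out the server request form")
    servers = yield ("inventory", f"find servers matching {config}")
    if len(servers) > 1:
        chosen = yield ("B", "select a server from the candidates")
    else:
        chosen = servers[0]         # exactly one match: skip the selection step
    yield ("B", f"confirm allocation of {chosen}")

bot = server_allocation_bot()
assignee, step = bot.send(None)     # the 'bot raises its first sub-task
```

Driving the generator with `send()` plays the role of the task system notifying the 'bot that a sub-task has finished, together with the data it produced.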

The important thing to observe in this case is that a code module is taking the part of an Assignee. Tasks aren't just raised against people, but also against modules. If we think of the process management module in this case in terms of a stand-alone computer program, then the steps are similar: first it needs to obtain information on the server configuration; then it needs to find a set of servers that meet those needs; if there is more than one such server, it needs to choose between them, and so on. These steps, which would typically form subroutines in a computer program, become sub-tasks of the process management module's task. Interaction with people happens through task assignments rather than a traditional UI.

So far, we've supposed that party A initially raises an informal request for a new server against party B. If party A understands the process, he will raise the task directly against the appropriate workflow management module instead of pestering group B. In that case, the module immediately responds with the "select a configuration" sub-task, and group B does not even need to be aware of the task until it reaches a step that the module assigns to them (because manual intervention is required). If the workflow management module can be made to automate much of the task, the process may be quite well advanced by that time.

One more thought follows from the fact that a code module can be an assignee: one code module can raise tasks and assign them to another code module. This might seem odd at first glance: why would two computers need a workflow management system? Surprisingly, perhaps, this could be very handy. The workflow management system forms a buffered queue between the two systems, meaning that each one can operate independently of the other, so long as there is outstanding work to do. Without such an intermediate buffer, each system can only operate while both are available. The buffer zone also provides an excellent point for interface contract enforcement, exception handling (reassign the task to an exception queue for manual inspection), and an audit trail.
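The module-to-module case can be sketched with the standard library's `queue`. This is a toy, single-process illustration; the routing of failed tasks to an exception queue for manual inspection is an assumption about how such failures might be handled.

```python
import queue

tasks = queue.Queue()        # the buffer: producer and consumer need not run in lockstep
exceptions = queue.Queue()   # failed tasks parked here for manual inspection

def producer(items):
    for item in items:
        tasks.put(item)      # the producer can enqueue work even if the consumer is down

def consumer(handle):
    """Drain the queue, reassigning any failing task to the exception queue."""
    done = []
    while not tasks.empty():
        task = tasks.get()
        try:
            done.append(handle(task))
        except Exception:
            exceptions.put(task)   # reassign rather than lose or block
    return done
```

In the real system the two "modules" would be separate processes or hosts, and the task records flowing through the queue would double as the audit trail.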
