
Introducing the Area-Task Label Administration System: ATLAS #6

@Arlodotexe


Area-Task Label Administration System: ATLAS

A system for structured planning and execution.

This is an extension and conceptual integration of #1 and #2. Since these two tickets are kept up to date, we'll cover the concepts in this text only briefly.

If you're not familiar with capability vs capacity or our established labels, see #1 and #2.

Additionally, see #9 for starting guidance on building out maps for your project.

Table of Contents

  1. Evaluation of existing options and alternatives
  2. 'Label Taxonomy System' naming analysis
  3. Capability and capacity
  4. The 'core five'
  5. Additional combinations
  6. Area and task mapping
  7. Capacity and capability allocation
  8. Closing thoughts

Evaluation of existing options and alternatives

The idea of a standard, extensible set of labels that others can template and extend seems like something that should already be at least partially covered by the tooling I've seen around momentum and velocity in agile or waterfall processes.

This raises the question: Does a turnkey "standard" set of labels already exist, or are we reinventing the wheel?

Short answer: At the time of writing, there's no single, turnkey, industry-wide "standard label set" that everyone can use out of the box. People often assume such a thing might exist because common project-management practices like Agile/Waterfall, GitHub defaults, or Atlassian configurations "should" converge on one labeling scheme--but they haven't.

GitHub's Default Labels

When you create a new repository, GitHub provides these default labels:

  • bug
  • documentation
  • duplicate
  • enhancement
  • good first issue
  • help wanted
  • invalid
  • question
  • wontfix

They're helpful but minimal; there's no deeper layering (e.g., sublabels for "task vs. area," or domain vs. technical area, etc.). Many teams rename or delete them right away.

Why it's not a comprehensive standard:
They don't cover deeper aspects of project organization, and they rarely match the complexity of real-world dev processes once a project grows. Hence, you see a lot of repos that started with "bug" and "enhancement," but then quickly outgrow them and invent their own.


Atlassian/Jira Workflows

Jira doesn't have "labels" in the same sense as GitHub. It has "Issue Types" (e.g., bug, task, epic, story, sub-task), "Components," "Custom Fields," and "Labels" as a free-text tag. Often, people will make a custom setup to replicate the "task vs. area" approach, but it's all bespoke.

Why it's not a standardized label set:
Atlassian deliberately leaves configuration open-ended for each team's needs (Scrum vs. Kanban vs. SAFe vs. Something else), and does not prescribe a single official label taxonomy.


"Agile" or "Waterfall" Best Practices

Agile, Scrum, or Waterfall process guides talk a lot about roles, ceremonies, sprint cycles, phases, etc., but they don't define a universal label set. They often mention "epics," "stories," "tasks," "bugs," "risks," "issues," but not as a rigid standard with sub-categorizations.

Why it's not a labeling standard:
They're more about process (how to structure your work) than about labeling objects in a tool. Each team or tool implements them in a custom way.


Conventional Commits & Semantic Versioning

Some folks conflate commit-message conventions (e.g., feat: add a new user profile page, fix: properly handle edge cases) with a label system. While they do standardize how you might describe changes or automate release versioning, they don't directly standardize how you label your tickets/issues.

Why it's not a labeling standard:
They're primarily for commits (and sometimes PR titles), not for categorizing tasks in a backlog.


Third-Party "Sample" Label Sets

Various open-source communities or frameworks (e.g., React, Angular, Kubernetes, .NET) have partial label sets that suit their own repositories:

  • "help wanted"
  • "good first issue"
  • "discussion"
  • "kind/feature," "kind/documentation," "kind/bug," etc.

You'll see patterns, but they tend to be specialized to that community and rarely roll up into a general-purpose, widely adopted system for all.


So, are we reinventing the wheel?

Sort of, but it's not wasted effort. Most teams do build out their own label taxonomy to reflect:

  1. The domains or "areas" relevant to them (e.g., "infra," "docs," "release," etc.), and
  2. The "what" or "task type" (e.g., "bug fix," "new feature," "refactor," "security review," etc.).

Nearly everyone ends up with some variant of that, but they do it in their own style.

If you need something quick-and-dirty, you can use GitHub's defaults, maybe add "refactor," "security," and "chore," and call it a day.

But as soon as you want deeper clarity for your project and your collaborators, a more structured label set--like what we're proposing--becomes extremely helpful.
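To make that concrete, here's a minimal sketch of seeding a repository with a structured, namespaced label set through GitHub's documented REST endpoint (`POST /repos/{owner}/{repo}/labels`). The owner, repo, token, colors, and the exact label names below are placeholder assumptions, not a prescribed set:

```python
# Minimal sketch: seeding a repo with a structured, namespaced label set.
# OWNER, REPO, TOKEN, and the colors below are placeholder assumptions.
import requests

OWNER, REPO = "your-org", "your-repo"
TOKEN = "ghp_your_token_here"  # personal access token with repo scope

LABELS = [
    {"name": "areas::docs", "color": "0e8a16", "description": "Where: documentation"},
    {"name": "areas::infra", "color": "1d76db", "description": "Where: infrastructure"},
    {"name": "tasks::fixes::bug", "color": "d73a4a", "description": "What: bug fix"},
    {"name": "tasks::review::security", "color": "b60205", "description": "What: security review"},
]

for label in LABELS:
    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/labels",
        headers={
            "Authorization": f"token {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json=label,
    )
    resp.raise_for_status()  # 201 on creation; 422 if the label already exists
```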


Why the structured approach can still be valuable

  1. Consistency Across Multiple Repos
    If you're juggling multiple repos (or an entire open-source org), having a standardized label set (like ATLAS) helps everyone find, filter, and triage tasks consistently (see the filtering sketch after this list).

  2. Mapping "Where" vs. "What"
    Separating the domain areas (areas::docs, areas::infra, etc.) from the task types (tasks::fixes::bug, tasks::review::security) is a best practice many teams eventually converge on themselves. We're just giving them a head start.

  3. Encouraging Best Practices
    By including labels like capability::entry or capacity::overloaded you help teams be more thoughtful about who can take an issue and how urgent it is.

  4. Scalability
    A well-thought-out labeling system scales better than ad-hoc labels. It's easier to introduce new labels without clutter, and you can prune deprecated ones systematically.
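As a sketch of that find/filter/triage workflow, here's how an (area, task) pair can be queried through GitHub's documented REST endpoint (`GET /repos/{owner}/{repo}/issues`); the owner/repo values are placeholders, and the `labels` query parameter matches issues carrying all listed labels:

```python
# Hedged sketch: filtering issues by an (area, task) label pair.
# OWNER and REPO are placeholder assumptions; works unauthenticated
# for public repos, subject to rate limits.
import requests

OWNER, REPO = "your-org", "your-repo"

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues",
    params={"labels": "areas::docs,tasks::fixes::bug", "state": "open"},
)
resp.raise_for_status()
for issue in resp.json():
    print(issue["number"], issue["title"])
```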


Bottom line

  • Does an off-the-shelf, universal label set exist?
    No. There's no recognized industry or GitHub-endorsed standard that fully covers "where vs. what" plus capacity/capability and so on.

  • Are we reinventing the wheel?
    We're covering ground lots of teams have covered, but there's no mass-produced "wheel" to install. Each org typically builds or curates its own label set, sometimes referencing partial standards (like "bug," "enhancement," etc.).

  • Should we keep going with a label taxonomy?
    Absolutely--if a project is large enough or a team/distributed contributors need the clarity, a well-structured label taxonomy is worth it. You're bringing order to something that would otherwise devolve into dozens of random tags.

In other words, we're not duplicating an existing universal solution--because there isn't one. But we are repeating a process that many large teams eventually do themselves. And that's good! It means we can help others skip that friction and adopt a ready-made, well-thought-out scheme.

'Label Taxonomy System' Naming Analysis

Frontrunners

ATLAS (Area-Task Label Administration System)

  • Area-Task Label Administration System
  • Professional and memorable acronym
  • Maps well to core concepts (Area/Task organization)
  • Natural metaphor - atlas guides/maps organization
  • Easy to reference: "ATLAS standard labels", "ATLAS-compliant"

TALOS (Task-Area Label Organization Standard)

  • Task-Area Label Organization Standard
  • Clean, professional acronym
  • Emphasizes organizational aspect
  • "Standard" reinforces template nature
  • Easy to reference: "TALOS standard labels", "TALOS-compliant"

Selected name: ATLAS

This feels like a natural choice, given the repeated thematic parallels between compasses and maps.

Capability and capacity

These concepts are integral to task planning, allocation, and execution. See #2 for more details.

Here, we'll visit some vectors that attempt to capture varying aspects of capability and capacity, including capability and capacity themselves.

Capability

The degree to which an individual or team has the skills, knowledge and competence to perform a specific task or to perform in a specific area.

  • Skills: Learned abilities
  • Knowledge: Familiarity with domains
  • Competence: Proven skill or ability to perform
  • Goes up over time; cumulative
  • Effective capability can be forced down by persistent low capacity
  • Mismatches should be identified during execution:
    • Takes too long
      • Low confidence or capability
      • Low enthusiasm
      • Low velocity

| Level | Description |
| ----- | ----------- |
| -1 | Untrained |
| 0 | Novice |
| +1 | Trained, expert |

Capacity

The degree to which an individual or team has the bandwidth, time, and velocity to perform a specific task or to perform in a specific area.

  • Bandwidth: Mental and cognitive energy
  • Time: Hours or velocity to spend
  • Fluctuates over time
    • Temporary: Vacation, bottlenecks
    • Chronic: Persistent overwhelm or non-participation

| Level | Description |
| ----- | ----------- |
| -1 | Overloaded |
| 0 | Balanced, none to spare |
| +1 | Spare, stable |
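As a minimal sketch, both scales (and the others below) can share one signed level type; the `Level` name and its members are illustrative assumptions, not part of the label set:

```python
# Illustrative sketch of the shared -1/0/+1 scale; names are assumptions.
from enum import IntEnum

class Level(IntEnum):
    LOW = -1      # capability: Untrained;      capacity: Overloaded
    NEUTRAL = 0   # capability: Novice;         capacity: Balanced, none to spare
    HIGH = 1      # capability: Trained/expert; capacity: Spare, stable

# One person's signals, keyed by aspect:
signals = {"capability": Level.HIGH, "capacity": Level.LOW}
```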

Utility in balance

In practice, balancing the individual vs collective perspectives helps planners answer:

  • Do the skills or bandwidth of required tasks (collective) match the available skills and bandwidth (individual)?
  • Where do responsibilities lie, and how are tasks assigned to those with both capability and capacity?
  • When individual capability / capacity isn't "stable", how do we collectively shift to accommodate?

The 'core five'

These are concepts that can fall under either capability, capacity or both:

  • Interest, Willingness, Motivation/Drive, Enthusiasm and Confidence

These concepts are just orthogonal enough to capture the varying, potentially conflicting aspects behind an individual's preferences.

Interest

Cognitive curiosity, personal intrigue

| Level | Description |
| ----- | ----------- |
| -1 | Disinterest |
| 0 | Indifference |
| +1 | Curious, relevant |

Willingness

  • Situational, external, perceived
  • Readiness, consent, obligation

| Level | Description |
| ----- | ----------- |
| -1 | Refusal |
| 0 | Indifferent, undecided |
| +1 | Volunteering |

Enthusiasm

  • Emotional, eagerness, zeal
  • Passion, excitement

| Level | Description |
| ----- | ----------- |
| -1 | Dread |
| 0 | Indifference |
| +1 | Eagerness |

Motivation or Drive

Inner or outer incentive or purpose

| Level | Description |
| ----- | ----------- |
| -1 | Opposition to act |
| 0 | Balanced or neutral |
| +1 | Strong push to act |

Confidence

Subjective skill, comfort

| Level | Description |
| ----- | ----------- |
| -1 | Doubt |
| 0 | Uncertain or unknown |
| +1 | Trust, belief |

Additional combinations

This is an incomplete list of additional concept vectors that can be made using combinations of the "core five", but which may be useful to measure on their own to identify additional nuance or discrepancies in presumptions.

Persistence, commitment

  • Tenacity, endurance, adaptability
  • Confidence + Motivation

| Level | Description |
| ----- | ----------- |
| -1 | Low resilience or resourcefulness |
| 0 | Baseline resourcefulness |
| +1 | Determined, resourceful |

Ownership

  • Accountability, stewardship
  • Willingness + Confidence

| Level | Description |
| ----- | ----------- |
| -1 | Not responsible |
| 0 | Indifferent |
| +1 | Responsible |
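One hedged way to read these combinations operationally, assuming a simple sum clamped back to the shared -1..+1 scale (the actual combination rule is left open here):

```python
# Sketch: deriving combination vectors from 'core five' scores.
# Summing then clamping to the shared -1..+1 scale is an assumption.
def clamp(x: int, lo: int = -1, hi: int = 1) -> int:
    return max(lo, min(hi, x))

core_five = {"interest": 1, "willingness": 0, "enthusiasm": 1,
             "motivation": 1, "confidence": -1}

persistence = clamp(core_five["confidence"] + core_five["motivation"])  # -> 0
ownership = clamp(core_five["willingness"] + core_five["confidence"])   # -> -1
```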

Area and task mapping

  • What: Tasks, or the type of work being done
  • Where: Areas, or the domain/context

Careful isolation of "area/where" and "task/what" enables approximately orthogonal plotting of the two concepts.

In a 2d space, an area and task combo is a category, or hyperedge, representing the tickets that carry those specific labels.

The full spectrum of what/where DAGs and all sub-types (e.g. areas::processes::planning or tasks::features::improvement) for area and task are encoded on the boundary as a partially ordered set.

In order for a DAG to be representable as a partially ordered set (and thus as a projection on a boundary), the edges must encode precedence or dependency, which is what we've done here with our labels (reflexivity, antisymmetry, transitivity).

This allows us to create a 2d space that represents all tickets as possible combinations of "area" and "task" labels, which can then be further filtered by individual offerings.
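A minimal sketch of that 2D space, treating each (area, task) label pair as a hyperedge that groups the tickets carrying both labels; the ticket IDs and labels are hypothetical:

```python
# Minimal sketch of the 2D what/where space: each (area, task) pair acts
# as a hyperedge grouping the tickets that carry both labels.
from collections import defaultdict

tickets = {
    101: {"areas::docs", "tasks::fixes::bug"},
    102: {"areas::infra", "tasks::features::improvement"},
    103: {"areas::docs", "tasks::features::improvement"},
}

grid: dict[tuple[str, str], set[int]] = defaultdict(set)
for ticket_id, labels in tickets.items():
    areas = {l for l in labels if l.startswith("areas::")}
    tasks = {l for l in labels if l.startswith("tasks::")}
    for a in areas:
        for t in tasks:
            grid[(a, t)].add(ticket_id)

# grid[("areas::docs", "tasks::fixes::bug")] == {101}
```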

Capacity and capability allocation

What * Where = Who

In order to allocate tasks using the 2d space (What * Where) created in our previous section, we'll consider a third Z dimension representing Who that we can use as a filter.

The Z dimension will also run from -1 to 0 to +1, but we'll need to substitute it with another manifold to conceptualize the process of task filtering.

Z manifold substitution: Disablers <--> Enablers

Rather than mapping directly to capability and capacity, we focus on several aspects (see above) and identify commonalities between their extremes to do our substitution.

Identify the common 'preference' vector

For all aspects mentioned, from simple capability and capacity, to the core five, to any additional ones created by combinations thereof:

  • The state of -1 corresponds to a Disabler
  • The state of 0 corresponds to Indifference
  • The state of +1 corresponds to an Enabler

Thus, by making sure we collect data that is well-isolated, we maintain the wisdom of the crowd (individuals offsetting each other in equally right or wrong ways) when we start utilizing the properties of ergodicity over time for project management.
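As a sketch of the substitution, assuming a plain average as the aggregation (any monotone aggregation over well-isolated -1/0/+1 signals behaves similarly):

```python
# Hedged sketch: collapsing several well-isolated aspect signals into one
# Z value on the disabler..enabler axis. Averaging is an assumption here.
def z_preference(signals: dict[str, int]) -> float:
    """Each signal is -1 (disabler), 0 (indifference), or +1 (enabler)."""
    if not signals:
        return 0.0  # no data: indifference by default
    return sum(signals.values()) / len(signals)

z_preference({"capability": 1, "capacity": -1, "interest": 1})  # ~0.33
```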

Substitute Z vector

Back in our 3d space, we can now use our substitution of "Who" for "Enablers/Disablers" to do a partial-ordering embedding along the boundary of all the aspects we choose to consider.

At minimum, this should be capability and capacity, but you should be able to utilize the core five instead. The core five can be used to more accurately determine capability and capacity.

When adding your own or using multiple category sets together, just be mindful of interdependency, or the data may end up too skewed to be useful.

  • When Z = -1, we consider a state describing an individual with the most "disabler" signals.
  • When Z = 0, all tasks and areas are in a neutral state. All individuals with no preferences set will be here.
  • When Z = +1, we consider a state describing an individual with the most "enabler" signals.

Add scope, map Z preference vector to 2D what/where

This, of course, leaves a gap. It's safe for an individual to leave everything at Z = 0 "indeterminate", but exactly which tickets are we describing as indeterminate? Everything? Nothing? Something specific?

We had previously defined "tickets" in this framework as existing on a hyperedge in the space formed by the "what" and "where" axes, each representing a specific combination of labels and corresponding to tickets with those labels.

This gap is covered by considering a point along a vector representing the individual's choice to label or not label, and the scope of that choice.

  • By default, preferences might apply to nothing or might apply to everything with no effect (Z = 0).
  • Deviating from Z = 0 requires deviating in X or Y, what or where, in this particular individual's 2d plane.

Given the above, we'll need to:

  1. Allow individuals to label tasks or areas with signals representative of disablers and enablers
  2. Allow individuals to scope those signals at multiple levels, such as:
    • Specific task types (e.g. "documentation")
    • Specific areas (e.g. "backend")
    • Combinations thereof (e.g. "documentation in backend")
  3. Track these preferences over time to build reliable capability/capacity patterns
  4. Use these patterns to inform task allocation and workload balancing

This allows us to perform a partial ordering of each individual's 2D slice along the Z boundary. It creates a "Who" vector where the individual(s) at the highest Z value represent high task suitability, while those at the lowest Z value represent minimum task suitability or potential conflicts.
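Putting the pieces together, here's a hypothetical sketch of scoped enabler/disabler signals and the resulting "Who" ordering for one what/where combination; the names, scope-resolution rule, and tie handling are all illustrative assumptions:

```python
# Hypothetical sketch: scoped enabler/disabler signals and a 'Who' ordering.
# Scope resolution (most specific wins) and all names are assumptions.
preferences = {
    "alice": {("areas::backend", "tasks::documentation"): 1,  # specific enabler
              ("areas::backend", None): -1},                  # area-wide disabler
    "bob":   {(None, "tasks::documentation"): 1},             # task-wide enabler
    "carol": {},                                              # no signals: Z = 0
}

def z_for(person: str, area: str, task: str) -> int:
    """Most specific matching scope wins; no signals defaults to Z = 0."""
    for scope in [(area, task), (area, None), (None, task)]:
        if scope in preferences.get(person, {}):
            return preferences[person][scope]
    return 0

# Highest Z first: best-suited candidates for this (where, what) slice.
ranked = sorted(preferences, reverse=True,
                key=lambda p: z_for(p, "areas::backend", "tasks::documentation"))
```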

It's also worth noting that overextension of certain aspects, such as poor planning due to over-enthusiasm, can have the opposite of the intended effect. These things should be accounted for in the model as much as possible, but we have a good start here.

Closing thoughts

The ATLAS standard is a multi-dimensional label administration system that's been carefully designed to capture and bridge the aspects and dynamics of collective and individual capability and capacity with the many (most common) possible areas and task types in a given project, while still remaining extensible.

Originally designed to serve as a refined template of labels for others to use, this framework now also aims to carefully standardize the very processes used to bridge high-level planning and low-level task completion in a way that respects the individuals contributing to the collective.

With this in hand, we'll have a much easier time scaling our processes. From simple to complex, from task execution to meta-process reviews, and whether stretching many tasks over a select few or planning a few tasks with many contributors, we're now able to proficiently allocate tasks and course-correct for common problems like burnout, resource misallocation, and the need for upskilling.
