Delegation

Ask, Don’t Tell :::: Tell, Don’t Ask :::: Don’t Ask, Don’t Tell

For those of us in software, we know the two programming paradigms used widely in the industry – Object-Oriented and Functional. The object-oriented paradigm is usually termed the “Tell, Don’t Ask” model, where data and behavior are kept together. Functions in the object-oriented paradigm change the state of the object (i.e., cause side effects). The functional paradigm is usually termed the “Ask, Don’t Tell” model, where functions don’t change state and cause no side effects. Software security implementations favor “Don’t Ask, Don’t Tell” – loosely, the principle of least privilege.
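The two models can be sketched in code. This Counter example is purely illustrative (my own, not from any particular codebase):

```python
# "Tell, Don't Ask" (object-oriented): data and behavior live together,
# and calling a method mutates the object's state (a side effect).
class Counter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        # We tell the object what to do; it changes its own state.
        self.value += 1


# "Ask, Don't Tell" (functional): no mutation, no side effects.
# We ask for a new value; the input is left unchanged.
def incremented(value: int) -> int:
    return value + 1


c = Counter()
c.increment()
c.increment()
print(c.value)      # 2 – state changed in place

v = 0
v2 = incremented(incremented(v))
print(v, v2)        # 0 2 – the original value is untouched
```

The functional version is trivially testable and safe to share across threads; the object-oriented version keeps the invariant ("value only goes up by one per call") next to the data it protects.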

These software models are borrowed from management/leadership principles. It’s all about delegation.

  1. Tell, Don’t Ask: Delegates responsibility and authority.
  2. Ask, Don’t Tell: Does not delegate responsibility or authority.
  3. Don’t Ask, Don’t Tell: Intentional Non-transparency.

The management coaches will scream: “You should delegate responsibility, but not accountability.”

The agile coaches will correct the statement: “You should delegate responsibility and authority, but not accountability.”

On this one, everybody agrees – the accountability lies squarely with the first-line leader, a.k.a. the project manager. However, the project manager is encouraged not to micro-manage but to delegate responsibility and authority to the scrum teams.

Not all scrum teams are alike, because not all people are alike; people have different natures (due to the nature/nurture mix) that drive their behaviors. Some people follow commands, whereas others question the status quo. Some people focus on problems, and others focus on solutions. When a scrum team is formed and people with different backgrounds are put together, delegating responsibility and authority can be challenging. Some scrum teams will accept responsibility but defer the authority to the leadership. Other scrum teams will not accept responsibility without authority.

Example: The architect is accountable for defining the architecture, and the scrum team is responsible for implementing it. In this model, responsibility is delegated, but the authority to make architectural choices is not. Depending on the culture (largely nurture) you come from, the scrum team will either be happy or revolt.

Not all people are made equal or nurtured equally. Some people have lived prosperously and always made their choices. Others have lived in a command control structure and always had to make the commander’s choice their own. When people of different natures are put together in a scrum team, the team takes a while to form, norm, and perform. Delegating to such scrum teams takes patience.

Hence, agile coaches ask us not to break up scrum teams. Don’t allocate people to projects; assign projects to scrum teams. This principle also means that we don’t treat people like resources (i.e., we don’t allocate people resources to projects; we assign projects to people teams). So, treat people like people (just like you treat code like code and infrastructure as code).

Summary: The project manager must delegate responsibility and gradually delegate authority (depending upon the scrum team the project manager has to deal with). Delegating authority creates an egalitarian environment that people (even those from working democracies) will need to adapt to.

Activities vs. Outcomes

The operation was successful, but the patient died – A day in the life of a doctor

It’s classic. The outcome expected by the patient’s relatives from heart surgery is that the patient’s condition improves. The surgical team performs many activities (wheeling the patient into the operating room, anesthetizing the patient, monitoring vitals, and many more) pre-operative, intra-operative, and post-operative. Eventually, if the patient does not recover, the team will still claim that all the activities were successful.

Successful activities don’t necessarily lead to a successful outcome. Successful activities are a necessary but not a sufficient condition.

In a software development context, to reach a goal, a team performs many activities. They may spend many sprints busily doing activities. The burn charts will show that work is being done and accepted, but alas, the outcome may not be in sight for many months. In Agile, we break down a business EPIC into Features, Features into Stories, and perform tasks to claim a story. The customer (or proxy) defines the outcome in the EPIC; the acceptance of features, stories, and tasks is just activity. Successful completion of the features, stories, and tasks does not guarantee a successful outcome.

The Feature burn charts look good, and teams like to showcase these charts to demonstrate progress. Progress is necessary but not sufficient for an outcome.
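The EPIC/Feature distinction above can be sketched minimally in code. The class names and fields here are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    done: bool = False          # activity: the team accepted the feature


@dataclass
class Epic:
    name: str
    features: list = field(default_factory=list)
    accepted: bool = False      # outcome: the customer (or proxy) accepted the EPIC

    def all_activities_done(self) -> bool:
        # Every feature burned down: the activity side is "successful".
        return all(f.done for f in self.features)


epic = Epic(
    "Improve patient health",
    [Feature("Health-parameter UI", done=True), Feature("Sync API", done=True)],
)

print(epic.all_activities_done())   # True  – every activity succeeded
print(epic.accepted)                # False – the outcome is still open
```

Note that `accepted` is deliberately independent of `all_activities_done()`: the burn chart can be perfect while the outcome flag stays false.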

A few things can happen when the team completes all the features in an EPIC:

  1. Successful Outcome: The team accepts all features of an EPIC, and the customer (or proxy) accepts the EPIC.
  2. Partially Successful Outcome: The team accepts all features of an EPIC, and the customer (or proxy) gives new inputs or flags issues to the team. Either the EPIC is not accepted and issues are added to it, or the existing EPIC is accepted and new EPICs are created to handle the new inputs (scope increase).
  3. Failed Outcome: The team accepts all features of an EPIC, and the customer (or proxy) does not accept the EPIC, and the team has to significantly re-plan.

If the customer (or proxy) is engaged continuously, #3 is an anomaly, but it can happen. A partially successful outcome (#2 above) would be the most likely result unless the EPIC was very well defined, tiny, and unambiguous. But we are agile to deal with ambiguities. To handle new features (scope creep), the teams create a new EPIC V2.0. To handle issues (bugs), the teams create issues in the EPIC and plan to close this debt (at least some of it) before new features are prioritized, so the EPIC can be (reasonably) accepted.

Going back to our heart surgery, the team may have planned the exact features of the surgery (e.g., stent the artery), but during the surgery, they may find anomalies that need to be taken care of beyond just stenting the artery. These anomalies take time (effort) to fix, and the surgery time may increase. The surgeon may also detect anomalies during the surgery that require a new surgery (new EPIC) to handle. After the surgery, the patient may have blood-pressure stability issues (hypotension) and be put on norepinephrine to maintain blood pressure, then stabilized in the ICU before being discharged (discharge: a successful outcome).

Summary: Activities lead to outcomes. Activity completion is necessary but not sufficient; successful activities may still lead to failed outcomes. Organizing and planning work (in EPICs/Features/Stories) is important for efficiency and predictability. However, in most practical cases, things go wrong. Sometimes more work has to be done, causing the effort in a feature to shoot up; or more work has to be done before a feature can be claimed as done; or new requests crop up after an EPIC is claimed.

There is a need to separate “organizing work” and “seeking outcomes” so that both efficiency metrics (lead metrics: say/do) and outcome metrics (lag metrics: KPIs) are tracked.
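As a rough sketch of that separation – the numbers and the KPI below are invented for illustration:

```python
def say_do_ratio(committed_points: int, delivered_points: int) -> float:
    """Lead (efficiency) metric: how much of what we said, we actually did."""
    return delivered_points / committed_points if committed_points else 0.0


# Lead metric per sprint: tells us the activities are on track...
sprints = [(30, 28), (30, 30), (32, 31)]     # (committed, delivered) points
for committed, delivered in sprints:
    print(f"say/do = {say_do_ratio(committed, delivered):.2f}")

# ...but the lag (outcome) metric is tracked independently of the activities:
kpi_target = 0.95    # e.g., the customer-acceptance rate we aim for
kpi_actual = 0.60    # great say/do, weak outcome
print(kpi_actual >= kpi_target)   # False: successful activities, failed outcome
```

Keeping the two computations separate makes it impossible to claim the outcome by pointing at the burn chart.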

Data about Data

As a Data Engineer, I want to be able to understand the data vocabulary, so that I can communicate about the data more meaningfully and find tools to deal with the data for computing – Data Engineer

Let’s start with this: Binary Data, Non-binary Data, Structured Data, Unstructured Data, Semi-structured Data, Panel Data, Image Data, Text Data, Audio Data, Categorical Data, Discrete Data, Continuous Data, Ordinal Data, Numerical Data, Nominal Data, Interval Data, Sequence Data, Time-series Data, Data Transformation, Data Extraction, Data Load, High Volume Data, High Velocity Data, Streaming Data, Batch Data, Data Variety, Data Veracity, Data Value, Data Trends, Data Seasonality, Data Correlation, Data Noise, Data Indexes, Data Schema, BIG Data, JSON Data, Document Data, Relational Data, Graph Data, Spatial Data, Multi-dimensional Data, BLOCK Data, Clean Data, Dirty Data, Data Augmentation, Data Imputation, Data Model, Object (Blob) Data, Key-value Data, Data Mapping, Data Filtering, Data Aggregation, Data Lake, Data Mart, Data Warehouse, Database, Data Lakehouse, Data Quality, Data Catalog, Data Source, Data Sink, Data Masking, Data Privacy

Now let’s go here: High volume time-series unstructured image data, High velocity semi-structured data with trends and seasonality without correlation, High volume Image data with Pexels Data source masked and stored in Data Lake as the Data Sink.

The vocabulary is daunting for a beginner. These 10 categories (ways of bucketizing) would be a good place to start:

  1. Data Representation for Computing: How is Data Represented in a Computer?
    • Binary Data, Non-binary Data
  2. Data Structure & Semantics: How well is the data structured?
    • Structured Data, Unstructured Data, Semi-structured Data
    • Sequence Data, Time-series Data
    • Panel Data
    • Image Data, Text Data, Audio Data
  3. Data Measurement Scale: How can data be reasoned with and measured?
    • Categorical Data, Nominal Data, Ordinal Data
    • Discrete Data, Interval Data, Numerical Data, Continuous Data
  4. Data Processing: How is the data processed?
    • Streaming Data, Batch Data
    • Data Filtering, Data Mapping, Data Aggregation
    • Clean Data, Dirty Data
    • Data Transformation, Data Extraction, Data Load
    • Data Augmentation, Data Imputation
  5. Data Attributes: How can data be broadly characterized?
    • Velocity, Volume, Veracity, Value, Variety
  6. Data Patterns: What are the patterns found in data?
    • Time-series Data Patterns: Trends, Seasonality, Correlation, Noise
  7. Data Relations: What are the relationships within data?
    • Relational Data, Graph Data, Document Data (Key-value Data, JSON Data)
    • Multi-dimensional Data, Spatial Data
  8. Data Storage Types:
    • Block Data, Object (Blob) Data
  9. Data Management Systems:
    • Filesystem, Database, Data Lake, Data Mart, Data Warehouse, Data Lakehouse
    • Data Indexes
  10. Data Governance, Security, Privacy:
    • Data Catalog, Data Quality, Data Schema, Data Model
    • Data Masking, Data Privacy

More blogs will deep-dive into each category and the challenges involved. Let’s peel this onion.
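As a taste of how the buckets can be put to work, here is a tiny, intentionally partial lookup built from the ten categories above (the mapping and function name are mine):

```python
# One sample term per category from the taxonomy above; a real catalog
# would cover every term (and a term can live in more than one bucket).
TAXONOMY = {
    "Binary Data": "Data Representation for Computing",
    "Time-series Data": "Data Structure & Semantics",
    "Ordinal Data": "Data Measurement Scale",
    "Streaming Data": "Data Processing",
    "Velocity": "Data Attributes",
    "Seasonality": "Data Patterns",
    "Graph Data": "Data Relations",
    "Object (Blob) Data": "Data Storage Types",
    "Data Lakehouse": "Data Management Systems",
    "Data Masking": "Data Governance, Security, Privacy",
}


def bucket(term: str) -> str:
    """Return the category a term belongs to, or 'Unclassified'."""
    return TAXONOMY.get(term, "Unclassified")


print(bucket("Streaming Data"))   # Data Processing
print(bucket("Quantum Data"))     # Unclassified
```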

Tunnel Vision

When I drive through a tunnel with my kid, there are two points of excitement – entry point and exit point. When entering a new tunnel, it’s always a “Yay!!” The exit feeling depends on how long we were inside. It’s either the expression – “Finally some light” or “Oh, no, we are out.”

You get my point. We love tunnels. We like to see the light at the end of the tunnel.

We not only love tunnels, but we love to tunnel. Tunneling helps us focus, and there is a focus threshold after which we need to see some light.

Sprinting in agile is Tunneling. After a two-week sprint, we might have the expression, “Oh, no, we are out too soon.”, and after a four-week sprint, we might have the expression, “Finally, some light.”

A two-week sprint seems to be the global average of adequate time in a sprint. While that is an indicator, the team must choose its own sprint duration.

Not all people are alike. Some people like to be first finishers – tunneling helps them get from A to Z fastest. However, we remember our journeys for unplanned explorations. Can we visit that lake nearby? Can we bathe in that waterfall? Can we take a different road?

In software projects, the expectation of engineers/architects is that they build a path from point A to point Z as fast as possible (a tunnel), but the way should be open to exploration by the user. E.g., build a mobile user interface to update my health parameters, but let me explore new services in the user interface to improve my health.

If you are agile, you can introduce innovation experiments to explore users’ unwritten needs, making your own journey not feel like a long tunnel.

Takeaway: We need black-box tunnels, exciting tunnels, and open roadways, and in agile software parlance, that loosely equates to sprints, spikes, and MVPs.

Go Slow to Go Fast

“I am adopting Agile; finally, I can tell my customers that since we are going to be agile, the delivery will be late. The agile coach told me that to go fast, you need to slow down. If the customer has questions, I will ask the agile coach to help convince the customer” – Project manager

Sometimes, projects need to be rescued. They have messed up the quality, or the schedule, or both. They had some “accurate estimations” done 12 months back for the entire project. Now they have to prove that their estimations were not wrong, irrespective of the scope creep, requirements ambiguities, technology risks that popped up, and COVID impacting day-to-day life.

Such projects can be rescued without agile principles. The team can take stock of current realities, re-plan, and continue this process until successful. There are very successful waterfall projects. There are failed waterfall projects that were rescued with waterfall. A non-agile method to rescue the project will still ask you to stop, re-plan, and continue, i.e., go slow (stop and think) to go fast.

An agile method to rescue the project will also ask you to stop, re-plan, and continue iterating, i.e., deliver a small, good-quality feature increment that does not break the product, then continue. While some team members have a sprint tunnel vision, others will look beyond a sprint to ensure that they have “ready” features to add to the product when they meet their DoD (Definition of Done) and their stories are accepted. After some sprints, the team knows more about its velocity and can predict a schedule (with a guaranteed quantum of quality). The backlog would also be groomed (prioritized, broken down, detailed) in parallel by the product owners while the team is sprinting on the stories that were picked up. Agile enables teams to continue on features that are “ready” and not “halt.” If the reason for distress is “quality,” then the stories could be “debt” that must be paid off before we can build some more.
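That velocity-based prediction can be sketched in a few lines; the velocities and backlog size below are assumed numbers, not a formula from any framework:

```python
from math import ceil
from statistics import mean

# Story points delivered in the last few completed sprints (assumed data).
observed_velocity = [21, 25, 23, 27]

# Groomed, "ready" story points still in the backlog (assumed data).
remaining_backlog = 120

avg = mean(observed_velocity)                 # 24 points per sprint
sprints_left = ceil(remaining_backlog / avg)  # round up: partial sprints count

print(f"average velocity = {avg:.1f} pts/sprint")
print(f"forecast: ~{sprints_left} more sprints")
```

A real forecast would use a velocity range (e.g., best/worst recent sprints) rather than a single average, but the shape of the calculation is the same.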

Agile is not an excuse to stop. Even when you are agile, you can accumulate debt that results in an inferior-quality product, and prioritization/judgment must ensure that the debt does not get out of control. There is always good debt (to gain speed) and bad debt (that hampers quality), and some intersection of the two. Even in agile, there is an in-built “stop” with iterations and a “STOP” requested by the team/customer.

Takeaway: Agile or non-agile, a project in distress requires you to stop, think, and continue.

A best practice that I have used to prevent projects from getting into a bad state is to build an architectural runway (intentional architecture) outside the sprinting zone. “Readiness” to sprint is critical for the success of the team.

Data-driven, Metric-driven

Some say they are the same – metrics are computed on data. So, I must be data-driven if I am already metric-driven.

I claim that they are different. Agile combines them nicely.

You would be metric-driven to achieve a goal. Goal coaches insist that goals should be SMART (Specific, Measurable, Attainable, Realistic/Relevant, and Time-bound). E.g., “I want to be a millionaire in one year” is a SMART goal that can be measured along the way (lead metrics) and once a year has elapsed (lag metrics). The lead metrics tell you whether you are on the right path to achieve the goal, while the lag metrics tell you whether you have achieved it, or how well.

You would be data-driven when you have to deal with ambiguities. Data coaches insist that good-quality data must be captured continuously to seek insights that can be converted to knowledge and actions. E.g., I want to drive academic excellence for my child this year. This statement does not tick all the boxes of SMART. It does not say that my child should score A+ in mathematics. There is inherent ambiguity in the statement and the choice of words (“excellence”). In such situations, you collect data from tests, teachers’ feedback, and your own observations. Based on the data, you get insights – the child is great at arts, excellent at mathematics, and needs improvement in language. You then focus on sustaining the strengths (arts, mathematics) and on the development needs, to improve academic excellence. Being data-driven is all about seeking actionable insights. You may decide not to take any action to improve language skills and let the child excel in her strengths, but that’s still a data-driven decision.

Agile helps with the cone of uncertainty through its levels of agility. E.g., in a software development context, the team may look at lead metrics like flow velocity and team happiness to determine whether it is on the right track to reach the outcome measured by the lag metrics. However, the team will also look at data (new features, technical debt, customer feedback) to decide whether a change of course is required. So, you get the benefit of both by being iterative. Taking small chunks of work (stories of, say, 8 story points), you can be metric-driven (SMART) and measure say/do as a lag metric. And by grooming the product backlog with insights from data, you will be data-driven.

Technical Career and Competencies

Some people plan their careers. Others don’t and let it happen. Which one is better?

A great career coach will talk to you and advise you either to plan or to flow. A good career coach will always ask you to plan. A mediocre career coach will ask you to flow.

Expertise is a necessary attribute for a great career, but not sufficient. A performance track record is another necessary attribute, but not sufficient. Presidential communication and influencing skills are yet another necessary attribute, but not sufficient.

It’s easy to list the ingredients of a great recipe (methods for a great career), but different cooks with the same recipe get different results. So, there is also some luck and practice involved.

For this blog post, let’s focus on EXPERTISE. Early technical careers are measured by the depth of technical competencies. Technical competencies define late-career choices as well. My advice has always been to develop 1-2 competencies early in your career to build depth. Some broad technical competencies are:

Web (Cloud Scale) Software
Enterprise Software
Device (Mobile) Software
Device (Embedded) Software
Security and Privacy
Artificial Intelligence (BI/ML)
Data Engineering
DevOps
Test Automation
Robotic Process Automation
Operations Research
Agile

The list is not comprehensive, but it is a classification of the types of software problems that software engineers & architects solve today for the market. Engineers have to build a different mindset and skills for each class of problems.

E.g., as a Web Software Engineer, you would need the mindset of building for scale, and the skills to debug programs that may fail at scale. As a Device (Embedded) Software Engineer, you would need the mindset of building for scarcity (memory, CPU), and the skills to debug programs with concurrency-related failures. As an Enterprise Software Engineer, you would need the mindset of integration, and the skills to debug integration/messaging problems with other systems. As a Data Engineer, you would need the mindset of processing data in batches, and the skills to find anomalies and patterns in data. As an Agile/DevOps Engineer, you would need the mindset of continuous improvement, and the skills to automate workflows.

Best bet – early in your career, if you know your natural mindset, choose a competency that fits you. Later, choose a competency that challenges you.

Don’t choose a competency just because it claims to maximize your cash flow. Work to strengthen your strengths first, and later work on your development opportunities.

Summary: Once you plan, you need to let it happen and measure your happiness. If you are not satisfied with the flow, you need to plan a change. Rinse and repeat until you are settled and happy. And even if you are content, plan the next change.

Myth: Architects only create Diagrams

Like it or not, diagrams (visuals) are a great communication tool. If one of the responsibilities of architects is communication, there is no better way than visual communication. Contextual communication requires the same information to be represented differently for effective communication.

Architects (titled or not) have a responsibility to analyze data from various sources:

a) Requirements coming from customer or product manager.
b) Complaints coming from customer or product manager.
c) Constraints coming from customer or product manager.
d) Constraints and opportunities from operational leaders.
e) Technology advances in industry.
f) Patterns and practices in architecture.
g) Feedback from development teams.
h) Feedback from independent consultants (peers, stakeholders).
i) Inputs from security teams.
j) Inputs from operational teams.

All this data needs to be analyzed to produce conceptual and detailed sketches (as required) for construction. Today, most of these sketches are conceptual. Teams are very skilled at developing the detailed sketches right in code.

The architect is like a data scientist working on all this data to determine the function that ‘fits’. This function is represented as a diagram – a view, or a perspective. The diagram could even be a simple table.

Stating that ‘architects only create diagrams’, however, is a poor critique of the effort. Creating conceptual clarity is important for the architect. But the architect’s job does not end at diagrams. They have to communicate, plan, and code. Unless the diagram is realized, it’s useless.

Just like code and configuration should be treated like ‘code’; documentation and diagrams also need to be treated like ‘code’. This means – reviewed, maintained, tested, critiqued, destroyed, re-factored, …

Only treating ‘code’ like ‘code’ is bad ‘coding’. Code, configuration and concepts need to be treated like ‘code’.

Dismissing concepts (usually the work of an architect) is immature.

The best representation of ‘architecture’ is ‘code’. The first drafts of ‘code’ are ‘concepts’. ‘Concepts’ are represented as ‘diagrams’ for communication. AND communication is a good thing.

Enough said.

Myth: Form Follows Function

This argument gets philosophical.

In classic building architecture, the form of a stadium is meant to perform the function of hosting large audiences. The form of a house is meant to perform the function of hosting families. A stadium form is not used to house a family! So, it follows: form follows function. This argument then leads architects and designers to start from function to create forms.

a) Can an architect create the form of a stadium to house a family of four? Why not?
b) What about Bill Gates’ house? Is that form follows function, or form follows money spent?

Let’s take another example. How about a car? The function of a car is to take a few passengers from point A to point B safely.

However, cars are also used for

a1. Kidnapping people
a2. Moving goods (not passengers)
a3. Racing

Seems like new functions were discovered once the form (‘car’) appeared. The form was adopted for other ‘functions’ it was not originally intended for…

Let’s take another example. How about email? The function of email is to communicate a message from a source person to several destination persons.

However, emails are also used for

a) Advertisement
b) Spreading malware
c) Sharing photos
d) Sharing legal documents

Seems like new functions were discovered once the form (’email’) appeared. The form was adopted for other ‘functions’ it was not originally intended for… There may be specific platforms for ‘sharing photos’, e.g., Instagram; these specific forms could also be used for ‘selling drugs’ and ‘pornography’ – not the original function.

Let’s take another example. How about Blockchain? The function of blockchain is to maintain a distributed ledger of transactions.

However, blockchain can also be used for

a. Creating and maintaining a patient’s historical record
b. Internet of Things
c. Voting

Seems like new functions were discovered once the form (‘blockchain’) appeared. The form was adopted for other ‘functions’ it was not originally intended for… There may be specific platforms for maintaining a patient’s record or a registry of connected things.

These examples indicate that “Unintended Functions Follow the Form that Follows the Original Function”. Kind of a recursive loop!

Having a functionalist (modernist) mindset in design & architecture is good; however, it creates blind spots in designers that make them choose from past (known) stereotype solutions and patterns. This results in re-use of known patterns – good for manufacturing & speed, bad for creativity.

OK! Just trying to say that it’s more complicated than form following function. These days, it’s really form following profit and function following form. Flexibility and an open mind are required to keep up in this changing world.

Myth: Architecture field does not have specializations.

A quick search for software architect titles would dispel this myth.

  • Application Architect
  • Java Architect
  • Cloud Architect
  • Database Architect
  • Data Architect
  • Analytics Architect
  • Security Architect
  • SOA Architect
  • Solution Architect
  • Product Architect

Ten (#10) is a good number to make my point. Now let’s look at super-specialization.

Cloud Architect
        AWS Architect
        Azure Architect
        Cloud Foundry Architect
        Google Cloud Architect
Database Architect
        MS SQL Architect
        Postgres Architect
        Azure SQL Architect
        Cassandra Architect

Generalizing this, it could be reasoned that the ask in the marketplace is for:

b1. Big “A” Architect, known for his/her knowledge of patterns and practices of general architecture.
b2. Specialized Architect, known for his/her knowledge of patterns and practices in a specialized field (e.g. cloud, security, database)
b3. Super Specialized Architect, known for his/her knowledge of patterns and practices in a super specialized field (e.g. Azure, AWS, Postgres)

Hmmm… now, if different super specialties are required to build a product, would you

c1. Build a team of super specialized architects?
c2. Hire an architect, collaborate with super specialized engineers, and consult with super specialized architects on a need basis?

My personal preference is #c2.

Let’s take a healthcare analogy. In healthcare, you can find generalists, specialists, and super specialists – general physicians, cardiologists, pediatric cardiologists, neurologists, dermatologists, endocrinologists, etc.

If you have a headache, where do you go? General Physician or neurologist?

d1. The chance that your condition is classified as a common condition is high if you visit the general physician.
d2. The chance that your condition will be over-tested is high if you visit a specialist.

It’s generalist and specialist bias.

The same is true of software architecture. Go to a software architect with an authentication problem, and the solution proposal will look simple, e.g., use LDAP. Go to a security architect with an authentication problem, and the solution proposal will be comprehensive, e.g., use LDAP, enable SAML & OAuth2 for single sign-on, test for the OWASP Top 10 Web Application Security Risks, develop a threat model, …

The specialist is also costlier than the generalist, and the super specialist is costlier than the specialist.

You can always take a second opinion from a specialist. However, it’s best that the system encourages generalists to front-end specialists. A tiered approach is better than walking directly to a specialist.

Just like a family physician (PCP – Primary Care Physician) is the first line of consult, a general architect (Big “A” Architect) is a good start to lead the architecture of a software product/solution.

But – hey – specializations exist in software architecture.