The Nature of Care

This was originally a small part of an essay that attempted to do too much.

The content herein is contextual to my career, but will be useful for non-engineers who want to better reason about how they utilise AI in their own lives, and for what purposes. The underlying philosophy is transferable.

In this text, I hope to explore why we, as designers, software engineers, builders, and tinkerers, may want to carefully consider our intentions when preparing and utilising AI-generated software.

There exists a term, found often in job descriptions, that I believe encapsulates my frustration with modern software engineering, and that is “bias for action”. Back in Edwardian times, this might be illustrated by the way in which electricity was misused; one could make specific reference to the electric tablecloth, inside which bare live wires ran, allowing pins on the bottoms of bulbs to pierce the fabric and make contact, or perhaps clothes irons wired into a precarious overhead light fixture. The same reckless rush to production occurs now, as it always has done, with AI. We exist in a time of immense growth and adoption, and we have reached this point, like the Edwardians, before we have invented the simple fuse; we continue to innovate above our station. The almost unconscious rush exists within software as non-technical founders larping as engineers, in social media as algorithm manipulation, and more fundamentally as the incredible power required to produce it all. What these things have in common is that there seems to be a vast null space where many questions should be asked, and much more care should be taken.

Whether this rush to production succeeds depends largely on the ignorance and mystique surrounding AI, which, for the moment, allow money to be made at the expense of those who would be charmed by sweet words and tales of the justness of consuming the forbidden fruit.

Just as we have a filter on our speech and actions, the strength of which varies by person, a similar mechanism should be implemented for our use of tools that offer a comfortable illusion in exchange for offloading our cognition. In the former, the filter helps us avoid, for example, using foul language in front of children, or viciously assaulting anyone who we perceive as having wronged us. With AI, the full detriment to our ability to think independently is only now being uncovered, and I believe it’s imperative that we take personal responsibility for our use of such tools: to define the lines we will not cross, and to better understand and interpret the information we are being fed.

If you take away nothing else from this short text, please consider the following:

Be wary of those who are seduced by the magician; those who have tasted the intoxicating waters will encourage you to drink even as the flesh peels from their bodies.

As I will come to realise on this journey of learning, AI is both the seducer and the seduced.


To care about something is to feel concern, value, and attachment towards it, or, at the very least, to declare as much. It is only when faced with the opportunity to make a sacrifice that our true feelings are exposed; often, we are found wanting. During a house fire, with seconds available, we grab our family and pets. It becomes immediately apparent that our other belongings were never really that important. We say we care about many things because it brings us comfort, convenience, and pleasure. But in offering our care frivolously, and taking for granted what matters, we can lose sight of real value. When observed through the lens of sacrifice, care becomes more honest. Not everything is life-or-death, obviously. Hyperbole aside, there is always a hierarchy of importance, but the analogy brings us to an important question:

What am I willing to give up for this?  

The power in this question is that it quickly reveals how much we actually care.

In the context of AI, this gets complicated. Effective problem solving, thoughtful design, and reliable implementation always demand personal sacrifice. We sacrifice our comfort, our bias, our preconceptions, our ego, and our security, all in pursuit of the work. We should care about providing the best possible outcome or solution; however, we should also consider the best outcome as existing within the frame of reference of our own ability (the needle of which is ever-moving), not by some unhelpful notion of objectivity, which is often performative. To do our best work, we must be willing to let go of preserving who we are right now. True care asks us to shed a piece of ourselves, to step into discomfort, in pursuit of something better. It is a transformation: a constant flux between states of comfort and anxiety. Think of it like a bodybuilder cycling through bulk and cut phases. They bulk to build mass and strength. They cut to strip away what is unnecessary, revealing their underlying shape. Both require presence, patience, and vision. Both are sacrifices made in service to the work.

When we rely on AI to produce our code, the sacrifice changes: we give up the friction of learning, our potential for fluency, our mental muscles, and our journey. We don’t sacrifice for the work; we sacrifice ourselves, and present an illusion that we then claim as our work. The sacrifice has no weight. We aren’t shaped by it. It doesn’t teach us anything. It means nothing. So we must ask ourselves a version of this question:

What parts of myself am I allowing to atrophy, and am I willing to pay that price to alleviate my most important struggles?

This does not mean that all fruits of AI-generated output are detrimental to us; quite the contrary. AI tools are incredibly powerful both for learning and execution. If we wish it, there are strategies that allow us to enrich ourselves and avoid the atrophy, but they must be individually identified. On one hand, you may have an engineer who is concerned with honing their craft, and utilises AI to be the absolute best engineer they can be; they use AI to fill gaps in their knowledge, to identify new methods, and to streamline research, but not to offload their cognitive processes. On the other hand, you may have an engineer who uses AI to offload as much cognition as possible, and to speed up their outcomes. In the first example, the engineer may become a major innovator in their sector, earning an eye-watering salary and having their pick of companies. In the second, by offloading their cognition, the engineer may be attempting to conserve energy for their family, or for other activities they enjoy in life. In both cases, providing their work is accepted, there is enrichment, despite opposing priorities. Some people will use AI to be the best that they can be; others will use it so they don’t have to be, and that is important to recognise.

My suggestion is only to maintain an awareness of how we’re using AI tools, and to understand the potential effect misuse will have on us, now and in the future. As with any ethical discussion, it’s up to the individual to determine what misuse looks like, and how much of their potential they’re willing to sacrifice, because there really is no universal answer, and I suggest being wary of anyone who tries to provide one. Sacrificing your personal potential in one area in pursuit of another is a perfectly valid endeavour, if it aligns with your priorities in life.

Any form of design is inherently messy: a complex relationship between what we know, what we don't know, and what we think we know. My next post will dive into design, learning itself, and why it's important for us to embrace chaos if we want to generate original work.