Will AI Replace Me?
The fear of AI as a threat to technology workers is vastly overblown. But there is some truth to concerns about AI's impact on your future as a solution-builder.
A fellow developer recently asked me:
I build Google Workspace Solutions, and I fear that my skills are about to be replaced by AI.
AI can do a lot of things. Some seem scary, and a few may ultimately reduce the workforce. But let’s examine what developers do. By “developers”, I’m referring to people who use their brain power to craft solutions such as spreadsheets, workflows, and automations. These power users often use Google Apps Script, and many are domain experts in their fields. Solution-builders often come from backgrounds and education pathways other than computer science. This makes them valuable to an organization - they know the business and are technically competent.
Externalities
About 80% of the work we do to engineer a solution is unrelated to code or formulas. I refer to these tasks as externalities of the solution. Thinking, pondering, designing, speaking with stakeholders, and considering all facets of the business process - these are the things that are very difficult to codify into an AI prompt. To give a large language model the depth and breadth of specifics a given solution requires, you need a mega-prompt. Context windows are limited in size, and LLMs require very explicit instructions, even for the simplest of tasks.
And if you try to encourage Bard to craft a solution without considering all these externalities, the results will be very poor. Your boss will not be impressed, and neither will your users.
Where AI Shines
AI might make 20% of the work 50% more efficient in solution-building. These percentages may vary depending on each situation and your role. But overall, when it comes to code generation, AI is useful to a point. Where it stops being useful is applying its vision to practical implementation strategies. Humans have the edge in this regard.
AI may also play a key role in the other 80%. For example, I used Bard below to explain an Apps Script function. It was quick and mostly accurate. This makes me more efficient and affords me more time to focus on things AI can’t do well.
If all you care to know is the risk of AI replacing you, this is my guidance.
We should not fear being displaced by AI. We should fear being replaced by someone effectively using AI to make 20% of the work 50% more efficient.
Practical Example
In my work, I have used Google Apps Script to control how an FAQ bot responds to questions framed in natural language by customers. One of the challenges is preventing jailbreaking of the AI model. It requires a deep understanding of several solution principles that are difficult for an LLM to understand without costly fine-tuning.
Jailbreaking is a process that uses a prompt injection to bypass safety and moderation features placed on LLMs by their creators. The term usually refers to chatbots that have been successfully prompt-injected and are now in a state where the user can ask any question they would like. This is a serious risk in AI solution development, and most projects get to production without jailbreak testing.
I use embedding vectors to thwart this risk, ensuring that the questions injected into my AI solution are relevant to the system's objectives. If a question doesn’t meet a relevance threshold defined by its vector score, it is politely ignored. Here’s an example of a recent jailbreak attempt on one of my systems.
Embeddings are extremely powerful because they can test query relevance to a given domain of information. But the design of that similarity testing process is complicated and requires a lot of abstract thought and planning - ergo, design.
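As a sketch of that similarity test, here is one way a dot-product relevance gate might look. The threshold value and the tiny normalized vectors are illustrative assumptions for this example, not values from my production system:

```javascript
// Dot product of two equal-length vectors. For normalized
// (unit-length) embeddings, this equals cosine similarity.
function dot(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

// Illustrative threshold; the right value depends on the embedding
// model and must be tuned against real queries.
const RELEVANCE_THRESHOLD = 0.75;

// A question is relevant only if its best similarity against the
// FAQ corpus embeddings clears the threshold; otherwise the system
// politely ignores it.
function isRelevant(questionEmbedding, corpusEmbeddings) {
  const best = Math.max(
    ...corpusEmbeddings.map(e => dot(questionEmbedding, e))
  );
  return best >= RELEVANCE_THRESHOLD;
}
```

An off-topic or jailbreak question embeds far from every FAQ entry, scores below the threshold, and never reaches the LLM.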
LLMs know a lot of patterns, and they can generate new designs. However, they tend to fail because they don’t know all the nuances of the business domain or the users. Embodying this logic in a prompt is exceedingly difficult. Here’s a narrative of my solution - a single Google Apps Script function that processes every FAQ query.
The processQuestion() function takes a question as input and returns a response. The function first retrieves the question embedding using the getEmbedding_() function. The embedding is a vector representation of the question that can be compared to other questions.

Next, the function uses the dot product to calculate the similarity between the question embedding and the embeddings of all the questions in the FAQ database. The dot product is a measure of the similarity between two vectors. The function then stores the top three most similar questions in a list.

The function then creates a completion prompt that includes the top three most similar questions and the original question. The prompt is then passed to the getCompletion_() function, which returns a natural language response. The function then returns the response.

Detailed explanation of the processQuestion() function:

getEmbedding_() - This function retrieves the embedding of a question using the Google Natural Language API.
dot() - This function calculates the dot product of two vectors.
aTopList - This is a list of the top three most similar questions.
prompt - This is the completion prompt that is passed to the getCompletion_() function.
getCompletion_() - This function returns a natural language response.

Here is an example of how the processQuestion() function would be used:

let question = "What is the capital of France?";
let response = processQuestion(question);

The processQuestion() function would then retrieve the embedding of the question, calculate the similarity between the question embedding and the embeddings of all the questions in the FAQ database, and return the response. The processQuestion() function is a powerful tool that can be used to generate natural language responses to questions. The function is easy to use and can be customized to meet the specific needs of a project.
As you can see, a fair amount of business logic goes into fine-tuning the FAQ bot's responses. Most importantly, FAQ answers often do not reflect a single top-scoring similarity. Instead, they may be a mashup (compiled by the LLM) based on the three best answers. Embedding vectors make it possible to identify multiple answers in the corpus that are very similar. You may spend days nudging an LLM with a prompt that uses this approach.
The Code
This is an excerpt of the processQuestion() code. It has been simplified for this example. You can see how to expand the logic to meet other requirements, such as mashing up top-scoring answers with an algorithm that meets specific business requirements.
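For orientation, here is a minimal, runnable sketch of the flow described in the narrative above. It is not the production excerpt: getEmbedding_() and getCompletion_() are stubbed in place of real API calls, and the shape of FAQ_DB is an assumption made for illustration.

```javascript
// FAQ corpus with precomputed embeddings. The three-dimensional
// vectors are toy stand-ins; real embeddings have hundreds of
// dimensions and come from an embeddings API.
const FAQ_DB = [
  { question: "How do I reset my password?",  answer: "Use the reset link.", embedding: [1, 0, 0] },
  { question: "What are your hours?",         answer: "9am to 5pm.",         embedding: [0, 1, 0] },
  { question: "Where are you located?",       answer: "123 Main St.",        embedding: [0, 0, 1] },
  { question: "How do I change my password?", answer: "Open settings.",      embedding: [0.9, 0.1, 0] }
];

// Stub: a real system would call an embeddings API here.
function getEmbedding_(text) {
  return text.toLowerCase().includes("password") ? [0.95, 0.05, 0] : [0, 0.7, 0.3];
}

// Stub: a real system would pass the prompt to an LLM here.
function getCompletion_(prompt) {
  return "LLM response based on prompt:\n" + prompt;
}

// Dot product as the similarity measure between two vectors.
function dot(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function processQuestion(question) {
  // 1. Embed the incoming question.
  const qEmbedding = getEmbedding_(question);

  // 2. Score every FAQ entry and keep the top three matches.
  const aTopList = FAQ_DB
    .map(e => ({ ...e, score: dot(qEmbedding, e.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);

  // 3. Build a completion prompt from the top matches and the question.
  const prompt =
    "Answer the question using only these FAQ entries:\n" +
    aTopList.map(e => "Q: " + e.question + "\nA: " + e.answer).join("\n") +
    "\n\nQuestion: " + question;

  // 4. Return the model's natural language response.
  return getCompletion_(prompt);
}
```

In the real system, the relevance threshold described earlier sits between steps 1 and 2, so off-topic and jailbreak questions never reach the prompt at all.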