Overview
This page documents ways to think about artificial intelligence and how to apply it within the Scrum framework.

Notable tools
Concepts
How LLMs work
Using AI for Scrum
Using LLMs for User Stories
Example prompts:
- Please review the acceptance criteria in this user story. Is anything missing?
- How should this user story be tested?
- What validation is necessary for this user story? For example, validating email formatting with a regex.
- Besides development and QA, are any other team members needed? For example, a DevOps engineer or a designer.
- How can I ensure that this user story meets the definition of ready and definition of done?
- How many hours or days do you think this user story would take to develop, test, and deploy for a junior/intermediate/experienced Scrum team?
- What are some risks associated with this user story? Will deployment of this user story potentially impact unexpected parts of the application or infrastructure?
- Are there any links or external dependencies missing?
- Besides QA and the Product Owner, should anyone else sign off on this user story to ensure it is done? For example, having a designer review the implementation.
- Is this user story the right size, too big, or too small? Should it be broken down into smaller pieces?
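Prompts like the ones above can also be scripted so that every user story gets the same review checklist. The sketch below is a hypothetical helper (the function name, field names, and system message are my own inventions, not any particular LLM client's API) that bundles a story and a prompt checklist into a single chat-style request payload; you would pass the result to whatever LLM client your team actually uses.

```python
# Hypothetical helper: turn a user-story review checklist into one
# chat-style request payload. Field names ("system"/"user") are an
# assumption -- adapt to your actual LLM client's API.

REVIEW_PROMPTS = [
    "Please review the acceptance criteria in this user story. Is anything missing?",
    "How should this user story be tested?",
    "What are some risks associated with this user story?",
    "Is this user story the right size, or should it be broken down?",
]

def build_review_request(story_text: str, prompts: list[str] = REVIEW_PROMPTS) -> dict:
    """Bundle a user story and a numbered prompt checklist into one payload."""
    questions = "\n".join(f"{i}. {p}" for i, p in enumerate(prompts, start=1))
    return {
        "system": "You are an experienced Scrum coach reviewing a user story.",
        "user": f"User story:\n{story_text}\n\nAnswer each question:\n{questions}",
    }

request = build_review_request("As a user, I want to reset my password via email.")
print(request["user"])
```

Asking all of the questions in one request keeps the story's full context in front of the model, instead of re-pasting the story for each individual prompt.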
Personal thoughts
AGI
We’re not even close. LLMs are #13 of the 32 required technologies to produce AGI.
Cached Responses
What about an LLM that slowly cached its responses based on upvotes? It would decide whether two queries were similar enough to be related; the answers to related queries would then accumulate upvotes, and the highest-upvoted answer would eventually be cached and served as the response to all future users to reduce cost.
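A minimal sketch of that idea follows. Everything here is an illustrative assumption: similarity is plain Jaccard overlap of lowercased tokens (a real system would use embeddings), and the 0.5 similarity threshold and two-vote serving cutoff are made-up numbers.

```python
# Sketch of an upvote-based response cache. Jaccard token similarity,
# the 0.5 threshold, and the 2-vote cutoff are all placeholder choices.

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def similar(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude query similarity: Jaccard overlap of word sets."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) >= threshold

class UpvoteCache:
    def __init__(self, serve_after_votes: int = 2):
        self.entries = []  # list of (query, answer, votes)
        self.serve_after_votes = serve_after_votes

    def lookup(self, query: str):
        """Return a cached answer if a similar query has enough upvotes."""
        for q, answer, votes in self.entries:
            if votes >= self.serve_after_votes and similar(query, q):
                return answer  # served from cache; no LLM call needed
        return None

    def record(self, query: str, answer: str, upvoted: bool):
        """Store a fresh LLM answer; upvotes accumulate on similar queries."""
        for i, (q, a, votes) in enumerate(self.entries):
            if a == answer and similar(query, q):
                self.entries[i] = (q, a, votes + (1 if upvoted else 0))
                return
        self.entries.append((query, answer, 1 if upvoted else 0))

cache = UpvoteCache()
cache.record("how do I write acceptance criteria", "Use Given/When/Then.", upvoted=True)
cache.record("how to write acceptance criteria", "Use Given/When/Then.", upvoted=True)
print(cache.lookup("how do I write acceptance criteria"))  # served from cache
```

Once an answer crosses the vote cutoff, any sufficiently similar future query is served from the cache instead of hitting the model, which is where the cost savings come from.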
Centaurs & Reverse Centaurs
Autonomous car example:
1. Human drives the car with no AI observer (status quo)
   - Occasional mistakes or inattentiveness result in fatalities
2. AI drives the car with no human observer (ideal, cheapest)
   - Occasional “hallucinations” result in fatalities
   - What if the fatality rate is lower than option #1?
3. AI drives the car with a human observer (expensive)
   - Reverse centaur argument (supervision)
   - Minimizes fatalities as long as the human observer remains vigilant
4. Human drives the car with an AI observer (expensive)
   - Centaur argument (augmentation)
   - Minimizes fatalities by assisting with option #1
   - Human is still required to do the work
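The question under option #2 (could the AI's fatality rate undercut the human baseline?) reduces to arithmetic once you assume rates. Every number in the sketch below is an invented placeholder, not real safety data; the point is only that the ranking of the four options depends entirely on the assumed inputs.

```python
# Back-of-the-envelope comparison of the four options. All rates are
# INVENTED placeholders for illustration, not real safety statistics.

MILES = 100_000_000  # miles driven in this toy scenario

# Hypothetical fatality rates per 100M miles for each option.
rates = {
    "1. human, no AI observer": 1.20,
    "2. AI, no human observer": 0.90,  # lower than #1 only if you assume so
    "3. AI + human observer": 0.50,    # assumes the observer stays vigilant
    "4. human + AI observer": 0.40,
}

for option, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    expected = rate * MILES / 100_000_000
    print(f"{option}: ~{expected:.2f} expected fatalities")
```

Under these made-up rates option #4 wins, but shrink the vigilance assumption in #3 or the hallucination rate in #2 and the ordering flips, which is exactly the policy debate.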
Prediction: #4 is the winner. It supports my assumption that companies pairing AI with humans will win. The problem is that this will make things more expensive, since the AI technology itself will be expensive. To keep costs down, employers will lower salaries to remain competitive, especially if some companies eliminate human workers and use AI exclusively (an inferior product or service, but much cheaper than the human + AI alternative). Maybe what we’ll see from AI are improvements in quality assurance, delivery time, efficiency, etc. instead of the cost savings everyone initially expected, unless AI dramatically increases productivity.
Companies could take staffing in-house because AI tools allow them to rapidly assess resumes, qualifications, etc. However, AI will also expand the number of candidates per posted position, so these two effects might negate each other.

Staffing firms may still remain relevant because curation will become even more valuable on an Internet that has become extremely noisy. Other aspects of staffing might become more relevant as well, like professional references and evidence of past work.

Currently inaccessible information may become more valuable too. For example, I have specific contacts in my professional network that I consider the best of the best based on my experiences. Employers would be lucky to have these hard-working, clever individuals, but that information only exists in my head. New AI tools and social networks like LinkedIn might be able to expose it going forward: basically, bringing executive search and VC search to the masses and using AI to find the signal in the noise.