
If you want trustworthy AI, don’t start with the tech

By Aarti Nagpal, Software Development Team Lead | September 2, 2025

“How do we make sure our AI systems behave responsibly, not just accurately?” We get this question a lot. Usually after something has already gone a bit sideways.

Here is the short answer: You build responsibility into AI from the very beginning.

Guided by our B-Corp principles, we see responsible AI as a balance of purpose and effectiveness. Clear policies help ensure systems behave as intended. Kiandra also draws from the OECD AI Principles, especially the focus on fairness. That’s why our decision-making framework includes specific checks to ensure ethical and fair outcomes.

Policy frameworks go beyond compliance checklists. They guide your AI to act ethically and reliably.

Let’s break it down.

Decision-making frameworks

This is your ethical core. You need it when the stakes are high and outcomes are not black and white. Healthcare, banking, insurance - if your system affects real people, this layer adds judgment where automation alone falls short.

  • Healthcare triage tools - AI helps prioritise patients, but ethics and fairness must guide those choices, not just speed or severity scores.
  • Loan approvals - AI models assess risk but should factor in fairness (for instance, not penalising applicants based on postcode).
  • Employee hiring tools - Beyond matching keywords, AI needs to avoid bias and respect fairness and inclusion policies.

These frameworks help teams step back and ask not just "Can we make this decision?" but "Should we?"
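To make that concrete, here's a minimal sketch of how a fairness check might sit in a loan-approval flow. The attribute names, cut-off and model are invented for illustration, not a real lending policy; the point is that the policy (postcode must not influence the outcome) lives in the decision path, not in a document nobody reads.

```python
# A minimal sketch of a pre-decision fairness check for a loan-approval flow.
# The attribute names, cut-off and model below are illustrative assumptions,
# not a real lending policy.

PROXY_ATTRIBUTES = {"postcode", "suburb"}  # attributes that can proxy for protected groups


def prepare_features(applicant: dict) -> dict:
    """Drop attributes the fairness policy says must not influence the decision."""
    return {k: v for k, v in applicant.items() if k not in PROXY_ATTRIBUTES}


def decide(applicant: dict, score_model) -> dict:
    """Score an applicant and record what was excluded, and under which policy."""
    score = score_model(prepare_features(applicant))
    return {
        "approved": score >= 0.6,  # illustrative cut-off
        "score": round(score, 3),
        "excluded_attributes": sorted(PROXY_ATTRIBUTES & applicant.keys()),
        "policy": "fairness: postcode must not affect the outcome",
    }


if __name__ == "__main__":
    applicant = {"income": 85_000, "debts": 12_000, "postcode": "3000"}
    toy_model = lambda f: min(1.0, f["income"] / (f["debts"] * 10))  # stand-in risk model
    print(decide(applicant, toy_model))
```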

Rules-based frameworks

These set the guardrails. They keep your AI within certain limits. Perfect for regulated spaces or when you need consistent behaviour.

  • Chatbots in banking or insurance - These follow pre-set rules to handle routine tasks without drifting into risky territory.
  • Manufacturing automation - Robots follow strict protocols for safety and quality, definitely no freelancing allowed!
  • Privacy compliance - AI handling personal data must follow specific data access, retention, and security rules (e.g., GDPR).

Here, predictability isn’t boring, it’s non-negotiable.
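As an illustration, a guardrail layer for a banking chatbot can be as plain as an allow-list checked before the model ever answers. The intents and phrases in this sketch are made up; a real deployment would draw them from its own policy and intent classifier.

```python
# A minimal sketch of a rules-based guardrail for a banking chatbot.
# The intents and blocked phrases are made up for illustration; a real system
# would source them from its own policy and intent classifier.

ALLOWED_INTENTS = {"check_balance", "report_lost_card", "branch_hours"}
BLOCKED_PHRASES = ("investment advice", "guaranteed return")


def guardrail(intent: str, user_message: str) -> str:
    """Apply the pre-set rules before the model is allowed to answer."""
    message = user_message.lower()
    if any(phrase in message for phrase in BLOCKED_PHRASES):
        return "escalate_to_human"  # never drift into regulated financial advice
    if intent not in ALLOWED_INTENTS:
        return "escalate_to_human"  # anything outside the rules goes to a person
    return "answer_with_model"


print(guardrail("check_balance", "What's my balance?"))                   # answer_with_model
print(guardrail("general_chat", "Can you promise a guaranteed return?"))  # escalate_to_human
```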

Utility-based frameworks

These aim for results, like more clicks, better pricing, smarter recommendations. They work well but need limits. Without guardrails, they might chase numbers without thinking about fairness or safety.

  • Streaming recommendations (e.g., Netflix, Spotify) - Recommendation algorithms boost content engagement but need limits to avoid creating echo chambers.
  • Dynamic pricing for e-commerce or airlines - Pricing algorithms adjust prices in real time to maximise profit but should avoid discrimination or price gouging.
  • Route optimisation for logistics - Optimisation models calculate the most efficient delivery routes while accounting for safety, legal, and environmental constraints.

These models chase performance but need ethical brakes, so they don’t game the system or cut corners.
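Here's a toy sketch of what an ethical brake looks like in practice: the optimiser is free to chase revenue, but only inside a band the policy sets. The demand curve and price limits are invented for illustration.

```python
# A toy utility-based pricer with explicit guardrails.
# The demand curve, floor and cap are invented for illustration only.

BASE_PRICE = 100.0
PRICE_FLOOR, PRICE_CAP = 80.0, 150.0  # guardrails: no loss-leading, no gouging


def expected_revenue(price: float, demand_at_base: float = 1000.0) -> float:
    """Toy linear demand model: higher prices sell fewer units."""
    units = max(0.0, demand_at_base * (1 - (price - BASE_PRICE) / 400.0))
    return price * units


def choose_price() -> float:
    """Maximise the utility (revenue), but only inside the allowed band."""
    candidates = [PRICE_FLOOR + i for i in range(int(PRICE_CAP - PRICE_FLOOR) + 1)]
    return max(candidates, key=expected_revenue)


# Under this toy curve the unconstrained optimum sits near $250, well above the cap,
# so the guardrail is what actually decides the price.
print(choose_price())  # 150.0
```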

The OECD AI Principles cover the basics like transparency, safety, accountability, fairness, and sustainability. If you’re setting up governance, start there but make it your own. To help you get going, we’ve shared our own decision-making template.

Understand the problem

  • Are we clear on the specific problem we are trying to solve?
  • Does it bring value to any of our stakeholders? (owners, employees, partners, suppliers, community, environment and clients)

Policy alignment

  • Does it align with our vision, mission, purpose, behaviours, strategy, goals and policies?
  • Is it ethical?
  • Is it fair?
  • Does it limit impact on the environment?
  • Is it secure and can we guarantee data privacy?

Stakeholder engagement

  • Do we know anyone in the industry we can talk to for more information? E.g. users, SMEs
  • Do we have any diversity of experience we can draw upon from within the team?
  • Have we considered the impact on all affected stakeholders?

Deliver value

  • What does success look like? e.g. expected benefits
  • Can we quantify success? What is the metric?
  • How much will it cost? Actual cost + opportunity cost + expenses
  • How long will it take? Actual + elapsed time
  • How long is the payback period?

Understand risks

  • Is there any benefit or risk to our reputation?
  • Are there any safety risks?
  • Does it pose a risk to the security or privacy of any stakeholders?
  • Are there any risks to doing this?
  • What is the risk of ‘not’ doing this?

Conduct research

  • Have we undertaken thorough, relevant and validated research?
  • What were the sources? What did we learn?
  • Who or what are our competitors?
  • How do we differentiate?
  • Is there market need/demand and how can we validate this?
  • What are the other viable options?

Ensure accountability

  • Who is the decision maker?
  • When do we need the decision?
  • Who will own it and be accountable?
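
If it helps, here's one way to turn that template into something your delivery process can actually check: a structured record with a simple go/no-go gate. The field names are our own illustrative take on the checklist above, not a standard.

```python
# One way to capture the template as a structured record, so a go/no-go gate can
# check it before an AI initiative proceeds. Field names are an illustrative
# take on the checklist above, not a standard.

from dataclasses import dataclass


@dataclass
class AiDecisionRecord:
    problem_statement: str
    policy_alignment: dict        # e.g. {"ethical": True, "fair": True, ...}
    stakeholders_consulted: list
    success_metric: str
    risks: list
    decision_maker: str = ""
    accountable_owner: str = ""

    def ready_to_proceed(self) -> bool:
        """Every policy-alignment question answered 'yes' and accountability assigned."""
        return all(self.policy_alignment.values()) and bool(self.accountable_owner)


record = AiDecisionRecord(
    problem_statement="Triage support tickets with an AI classifier",
    policy_alignment={"ethical": True, "fair": True, "secure": True, "low_environmental_impact": True},
    stakeholders_consulted=["support team", "privacy officer"],
    success_metric="median time-to-first-response",
    risks=["urgent tickets misrouted"],
    decision_maker="Head of Support",
    accountable_owner="Support platform lead",
)
print(record.ready_to_proceed())  # True only once the checklist is genuinely complete
```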

Why it matters

We’ve seen companies rush headfirst into AI and then scramble when something breaks. Legal headaches or brand damage aren’t cheap fixes!

So, start early. Use policy to guide behaviour before your AI goes live. Nail your decision-making. Put rules in place. And only then chase optimisation.

Think of your frameworks like living things... they need care and updates as your product changes. That’s how you keep AI sharp and trustworthy, no matter who’s watching.

If you'd like to learn about building AI into projects responsibly, let's talk.
