As we find ourselves on the brink of an AI revolution, one critical question emerges: how can we ensure that these increasingly powerful machines are ethically aligned?
AI is quickly becoming ubiquitous across industries, including retail, where it is already making a significant impact: grocery retailers and delivery apps use generative AI to build and refine baskets, and fashion brands are replacing models who reinforce Eurocentric beauty standards with AI-generated versions that display their clothes on diverse bodies.
From a retail perspective, the benefits and exciting applications of AI are endless. But in the absence of real regulations and policies, it’s the responsibility of AI leaders to be thoughtful and purposeful. It’s not acceptable to move fast, break things, and leave a mess for others to figure out. Doing so will risk our society’s economic, political, and social health. We must set higher standards for ourselves—and for our agents, AI or otherwise.
In particular, ethical alignment is required in three key areas:
Bias. Profiling and preferential group-based outcomes have no place in AI. Each of us has a responsibility to ensure that all people are treated fairly.
Safeguards. AI must not expose our economy, political systems, or communities to an increased attack surface for nefarious activities. This is especially true in the field of generative AI, where the ability to instantly create images, video, audio, and the written word can be used to scale distrust, discord, and many other ills if not kept in check by proper safeguards.
Trust. Participating in modern society increasingly requires the use of digital technology. As such, it’s essential that people are able to trust the systems that help manage our lives.
That’s why, as the co-founder and CEO of Standard AI, I have committed our company to four principles.
No facial recognition—ever. Facial recognition technology has the potential to exacerbate biases and disproportionately target marginalized communities. By committing to a future without facial recognition, we prioritize the protection of privacy and civil liberties.
Prioritize high-impact, low-risk applications of AI. To minimize the potential harm caused by AI, we believe there should be a focus on applications that yield significant benefits without posing grave risks. For example, AI can revolutionize retail experiences without the same level of danger as implementing it in aviation. A mistake on a receipt poses less of a threat to our well-being than a mistake on flight controls.
Keep humans in the loop. To ensure AI systems remain ethically aligned, human oversight is crucial; in other words, who watches the watchers? By maintaining a human presence at key decision-making junctures, we create a safeguard against machine-generated biases and other ethical shortcomings.
Anonymize data and diversify inputs. To mitigate bias in AI systems, we must insist on diverse, continuously updated data sets. Anonymizing data protects the privacy of individuals while promoting more equitable outcomes.
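To make the anonymization principle concrete, here is a minimal sketch of one common approach: stripping direct identifiers and replacing a customer ID with a salted one-way hash before a record enters a training set. The field names and salt are invented for illustration; a production system would manage salts as secrets and apply far more rigorous de-identification.

```python
import hashlib

# Hypothetical example only: field names and salt are invented.
SALT = "replace-with-a-secret-salt"

def anonymize_record(record: dict) -> dict:
    """Return a copy of the record safe(r) for model training."""
    safe = dict(record)
    # Drop fields that directly identify a person.
    for field in ("name", "email", "phone"):
        safe.pop(field, None)
    # Pseudonymize the customer ID so sessions can still be linked
    # across visits without revealing who the customer is.
    raw = (SALT + str(safe.get("customer_id", ""))).encode()
    safe["customer_id"] = hashlib.sha256(raw).hexdigest()[:16]
    return safe

record = {"customer_id": 42, "name": "Ada", "email": "a@example.com",
          "basket": ["milk", "bread"]}
print(anonymize_record(record))
```

Salted hashing preserves the ability to link a customer's sessions (useful for training) while removing the ability to look up who that customer is, which is one reason it is a common pseudonymization building block.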
As humans, we possess the unique ability to assess situations through both practical and ethical lenses, but our robotic counterparts have yet to develop this level of moral sophistication. It’s imperative that we, as an industry, align on core ethical standards before the train leaves the station. Otherwise we risk creating widespread harms that won’t be easy to reverse.
I’ll give you a hypothetical situation in retail AI—the field in which Standard AI operates.
For years, there have been situations where store employees have treated people like potential criminals because of their skin color, clothing styles, or other factors. Imagine for a moment that those same biases were held by members of a team tasked with building the models for an AI-based loss prevention platform—one that utilizes cameras on the ceilings of stores and flags suspicious people and behavior for store staff to monitor.
Suddenly those biases would be scaled to hundreds of stores across dozens of retailers. Worse, they might be given cover by a false veneer of data science, leading retailers to ignore the harm being done to many of their customers. And this same outcome can occur even if the team building the AI is not itself biased: bias in training data can be insidious and hard to find. Avoiding intentional harm is not enough. Without active measures to identify biases and risks, the default outcome is for AI systems to be misaligned and harmful.
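One of the "active measures" described above can be as simple as routinely comparing how often a model flags shoppers from different groups. The sketch below is a hypothetical, minimal version of such an audit; the data and group labels are invented, and a real audit would use held-out evaluation data, meaningful group definitions, and statistical significance tests.

```python
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group, was_flagged) pairs.
    Returns the fraction of flagged shoppers per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Invented audit data: group B is flagged twice as often as group A.
audit = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", False), ("B", False)]
print(flag_rates(audit))  # {'A': 0.25, 'B': 0.5}
```

A disparity like the one above does not prove the model is biased, but it is exactly the kind of signal that should trigger a closer look at the training data before the system is scaled to hundreds of stores.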
This type of risk has existed in the field of AI since the beginning, but new attack vectors are emerging quickly, many even more concerning than previous failure modes. Ethics matter. They matter for people as we choose our actions. And now they have to be imbued into our AI systems, which are increasingly acting for us. As we navigate this brave new world, ethical alignment with AI is not a mere aspiration—it is a moral imperative. In the words of author and computer scientist Jaron Lanier, "We must learn to design the process of creating technologies as much as we design the technologies themselves."
By committing to these principles, we can create a more equitable, just, and trustworthy digital landscape for all.