Our weekly 3-2-1 on AI newsletter features some of our favorite resources and discussions on the latest AI industry trends, data and thought-provoking quotes, and quick updates on what we’ve been working on.
Check out a summary of last month’s links below and sign up to receive our weekly newsletter straight to your inbox here.
Top picks from the AI Community
- According to the best measures we’ve got, ML systems can now be trained nearly twice as quickly as they could last year. It’s a figure that outstrips Moore’s Law, but also one we’ve come to expect. Most of the gain is thanks to software and systems innovations, but this year also gave the first peek at what some new processors can do. (Article)
- Sweeping rules to police AI in the European Union could come as soon as 2023, but Spain wants to get a move on: it has unveiled a plan to start testing the EU's Artificial Intelligence Act in October. The Act seeks to enforce strict rules on technologies like facial recognition and on algorithms used for hiring and for determining social benefits. (Article)
- Security by design is essential to ensuring IT systems are built to be robust from the start of development, and the same approach should be applied to AI tools so they can be deployed responsibly and without bias, says Salesforce.com's AI ethics principal architect. (Article)
- Ensuring ethics in AI system development and deployment often involves decision-making that shouldn't be done by a single person. With this in mind, a look at how decisions about the ethical use of AI can be made in two stages. (Article)
- Nearly two years after a global pandemic sent most banking customers online, the majority of financial institutions appear to be embracing digital transformation. And while many still have a long way to go, this may be the year that financial institutions finally embrace ethical AI. (Article)
- A good overview of six proven MLOps techniques that can measurably improve the efficacy of AI initiatives in terms of time to market, outcomes, and long-term sustainability. (Article)
- Banks often have difficulty explaining how their AI-based models arrive at a particular decision, especially credit decisions. It's an increasingly thorny issue as regulators insist on knowing how algorithms arrive at certain outcomes. A look at how 'explainable AI' can shed light on AI decision-making (a short illustrative code sketch follows this list). (Article)
- Rob Reich wears many hats: political philosopher, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. In this interview, Reich delves into the ethical and political issues posed by advances in AI, and whether AI developers need a code of responsible conduct. (Interview)
- Transparency is an essential element of earning the trust of consumers and clients in any domain. When it comes to AI, transparency is also about communicating with relevant stakeholders about why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired. (Article)
- Fraud prevention analysts are overwhelmed with work as bot-based and synthetic identity fraud proliferates globally. The models they’re using aren’t designed to deal with synthetic identities or fraud’s unstructured and fast-changing nature. An overview of five ways AI is helping to detect and prevent growing identity fraud. (Article)
- A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free. This is BLOOM: a radical new project to democratize AI. (Article)
- Businesses are using AI for everything from resolving customer service issues to making financial decisions. But do customers trust their use of AI? For building trust, it’s vital that both developers and customers understand the reasons why AI makes the decisions and predictions it does. (Article)
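Explainability comes up in several of the items above, particularly the pieces on credit decisions and on customer trust. For a flavor of what "explainable AI" can look like in practice, here is a minimal, hypothetical sketch: it trains a toy credit-approval model on synthetic data and uses SHAP values to attribute a single decision to its input features. The feature names, data, and thresholds are invented for illustration, the example assumes the shap and scikit-learn packages are available, and none of it is taken from the linked articles.

```python
# Illustrative only: a toy credit-approval model explained with SHAP values.
# All feature names and data below are synthetic and hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic applicant features and approval labels.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(55_000, 15_000, 500),
    "debt_to_income": rng.uniform(0.05, 0.6, 500),
    "credit_history_years": rng.integers(1, 30, 500),
})
y = (X["income"] / 10_000 - 10 * X["debt_to_income"]
     + 0.1 * X["credit_history_years"] + rng.normal(0, 1, 500)) > 3

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each prediction to the input features, which is
# the kind of per-decision rationale regulators and customers ask for.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```

The output is a signed contribution per feature for one applicant, e.g. a large negative value on debt_to_income indicating it pushed the model toward rejection; real systems would layer monitoring, documentation, and human review on top of this kind of attribution.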
AI Quotes We Love
“66% of AI decision-makers are concerned about meeting ethical business goals.” – InRule Technology
“Firms will need to staff their AI product teams with responsible business leaders who can assess the technology’s impact and avoid ethical pitfalls before, during, and after a product’s launch.” – Vishal Gupta
“We don’t have better algorithms, we just have more data. More data beats clever algorithms, but better data beats more data.” – Peter Norvig
“[AI] Transparency is a chain that travels from the designers to developers to executives who approve deployment to the people it impacts and everyone in between.” – Reid Blackman and Beena Ammanath
“Companies must prepare for AI regulation now, instead of taking a ‘wait and see’ approach or viewing compliance as just checking a box for completion, both of which can become unsustainable.” – Ray Eitel-Porter
“By being transparent [with AI] from start to finish, genuine accountability can be distributed among all as they are given the knowledge they need to make responsible decisions.” – Reid Blackman and Beena Ammanath
“We need AI builders from diverse backgrounds who understand the complex interplay of data, AI and how it can affect different communities.” – Solana Larsen
“The AI Infrastructure market is projected to grow from $28.7 billion in 2022 to $96.6 billion by 2027.” – MarketsandMarkets
... and an update on Apres
We are excited to share that we are now working on a number of pilots, 10 of which are already in production. Over the past few months, we've also grown our engineering team and started working with some of our first customers, helping them build more transparency into their AI models and better understand how their models and businesses can work together.