Google's AI Sleeper - Responsible AI
There's one aspect of the PaLM 2 API that may give Google a clear edge in the quest to dominate AI: safety.
If you’ve spent any time using Google’s PaLM 2 API, as I have, you may have noticed the safety payload returned with every inference outcome. It looks like this:
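The shape below is representative rather than verbatim: the candidate output is a placeholder, and the category and probability names follow the public PaLM API reference, so verify them against the API version you are calling.

```json
{
  "candidates": [
    {
      "output": "...generated text...",
      "safetyRatings": [
        { "category": "HARM_CATEGORY_DEROGATORY", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_TOXICITY",   "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_VIOLENCE",   "probability": "LOW" },
        { "category": "HARM_CATEGORY_SEXUAL",     "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_MEDICAL",    "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_DANGEROUS",  "probability": "NEGLIGIBLE" }
      ]
    }
  ]
}
```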
Only about 10% of the API response is the data you actually asked for. The rest is safety analytics across six categories, which can be tuned and shaped to fit almost any confidence threshold your use cases require.
With the PaLM 2 API, every response includes these metrics, and for each inference candidate as well. That means you can shape how your application responds to generated content based on how safe it may be. Baking this into the API infrastructure is smart. Google's AI researchers anticipated that safety would become a key requirement, and they guided their API teams to ensure safety decisions could be made without a lot of added effort, additional inference calls, or embedding gymnastics. Smart.
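As a sketch of what that enables, the helper below (hypothetical, not part of any Google SDK) walks the candidates array of a parsed response and returns the first candidate whose safety ratings all sit at or below a tolerance level you choose.

```javascript
// Hypothetical helper: rank the PaLM probability labels so they can be
// compared against a project-defined tolerance level.
const PROBABILITY_RANK = { NEGLIGIBLE: 0, LOW: 1, MEDIUM: 2, HIGH: 3 };

// Return the first candidate whose every safety rating is at or below
// maxAllowed (e.g. 'LOW'), or undefined if none qualifies.
function firstSafeCandidate(response, maxAllowed) {
  const limit = PROBABILITY_RANK[maxAllowed];
  return (response.candidates || []).find(function (candidate) {
    return (candidate.safetyRatings || []).every(function (rating) {
      return PROBABILITY_RANK[rating.probability] <= limit;
    });
  });
}
```

A caller might do `const safe = firstSafeCandidate(result, 'LOW');` and fall back to a canned response, or re-prompt, whenever nothing qualifies.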
The integrated safety settings can also be tuned to be more or less sensitive to the types of responses that fall outside your project's tolerance level. This, I believe, could be the ace in the hole for Google as it attempts to dominate AGI.
As Google’s significant body of research on responsible AI practices shows, the company has been thinking about this for a long time. It’s ironic that OpenAI, which recently carried its AI safety concerns to Capitol Hill, offers what can best be described as lip service when it comes to practical measures for ensuring AI is used responsibly.
Tuning an inference request for PaLM 2 is simple. Below is an example using the REST API from Google Apps Script; the various SDKs provide constants that make the same settings easier to instrument.
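This is a minimal sketch, assuming the v1beta2 text-bison endpoint and an API key stored in Script Properties; the category and threshold strings come from the public PaLM API reference and should be checked against the API version you target.

```javascript
// Sketch: call generateText from Google Apps Script with explicit safetySettings.
function generateWithSafety(promptText) {
  const apiKey = PropertiesService.getScriptProperties().getProperty('API_KEY');
  const url = 'https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key=' + apiKey;

  const payload = {
    prompt: { text: promptText },
    candidateCount: 3,
    safetySettings: [
      // Be strict about toxicity, but only block high-probability medical content.
      { category: 'HARM_CATEGORY_TOXICITY', threshold: 'BLOCK_MEDIUM_AND_ABOVE' },
      { category: 'HARM_CATEGORY_MEDICAL',  threshold: 'BLOCK_ONLY_HIGH' }
    ]
  };

  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });

  return JSON.parse(response.getContentText());
}
```

Each safetySettings entry overrides the default blocking threshold for a single category, so you can be strict about the categories that matter most to your project while staying permissive about content your use case legitimately needs.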
It’s clear that Google has thought this through, and that is likely to win over the many brands and businesses that must walk a delicate line on AI safety.