Most of us have heard horror stories about ‘biased algorithms’, from hiring systems that reject candidates with foreign-sounding names to predictive policing AI that disproportionately associates minorities with higher levels of crime. To prevent such harms from recurring, it is vital that researchers catch biases early. Stanford University recently launched the Ethics and Society Review (ESR), which requires researchers seeking funding from the Stanford Institute for Human-Centered AI to submit a statement identifying the potential negative consequences their proposals might have for society. In doing so, the ESR hopes to catch bias in the first stages of development, when it is easiest to fix. While this process has shown early promise, the need for AI risk management continues to grow on a global scale.
Current Lack of Risk Management
AI is still relatively new, and its potential harms to society fall outside the scope of what institutional processes like the Institutional Review Board (IRB) are designed to review. Currently, some of the biggest risks related to AI are discrimination, privacy violations, and flawed implementation. When an AI system goes wrong, the company is liable for all repercussions, including reputational damage and financial loss. Because the field is still in its early stages, companies often fail to consider the full scope of potential societal damage and overestimate how well they will be able to mitigate risk. As a result, it is necessary to build proper foresight into how products may impact society.
Moving Forward
To mitigate potential risks, companies need to deploy company-wide controls that include updated procedures and the retraining of every worker who interacts with AI. For example, Omneky trains its employees on how and where to use AI and keeps them informed of the initiatives the company is taking to reduce potential harms.
Company-wide controls mean implementing guidelines that cover the entire lifecycle, from initial development through final production. In the early stages, it is important that companies build feedback loops into the development cycle and report transparently and honestly on the product’s performance, for example by logging evaluation results after every iteration, as sketched below.
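To make this concrete, here is a minimal sketch of such a feedback loop, assuming a scikit-learn workflow: after each development iteration, the model is evaluated on held-out data and the results are appended to a running report. The model, data, and metric choices are placeholders for illustration, not a description of any particular company’s pipeline.

```python
# Minimal sketch of a development-cycle feedback loop: after each training
# iteration, evaluate on held-out data and append a timestamped record to a
# report so performance is tracked transparently over time.
# (Illustrative assumptions: synthetic data, a logistic regression model,
# and a local JSONL file as the "report".)
import json
from datetime import datetime, timezone

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
preds = model.predict(X_test)

# Append an honest record of this iteration's performance.
report = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "accuracy": float(accuracy_score(y_test, preds)),
    "f1": float(f1_score(y_test, preds)),
}
with open("performance_log.jsonl", "a") as f:
    f.write(json.dumps(report) + "\n")
print(report)
```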
A common issue with large language models such as GPT-3 is discriminatory or derogatory output learned from internet training data. To reduce it, companies should incorporate out-of-sample testing and back-testing, and assess the product regularly for data degradation. To prevent issues like the biased predictive policing AI mentioned earlier, model results should be independently reviewed and cleared as free of prejudice. To avert implementation problems, companies need detailed model testing and adherence to strict requirements that consider every scenario in which the product may be used. Lastly, the final step, which covers how the model is used in decision-making, requires monitoring model performance in production and creating plans to mitigate harmful consequences if they do occur. The sketches below illustrate what the drift-checking, bias-review, and monitoring steps might look like in practice.
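As a rough illustration of regular assessment for data degradation, the following sketch compares the distribution of each input feature in recent production data against the training data using a two-sample Kolmogorov-Smirnov test. The synthetic data and the significance threshold are assumptions made for illustration.

```python
# Sketch of a data-degradation (drift) check: compare each feature's
# distribution in recent production inputs against the training data and
# flag features that have shifted significantly.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_data = rng.normal(loc=0.0, scale=1.0, size=(5_000, 3))  # data the model was trained on
live_data = rng.normal(loc=0.4, scale=1.0, size=(1_000, 3))   # recent production inputs

ALPHA = 0.01  # significance level for flagging drift (an illustrative choice)

for i in range(train_data.shape[1]):
    stat, p_value = ks_2samp(train_data[:, i], live_data[:, i])
    if p_value < ALPHA:
        print(f"feature {i}: distribution shift detected (KS={stat:.3f}, p={p_value:.4f})")
    else:
        print(f"feature {i}: no significant shift (p={p_value:.4f})")
```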
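Similarly, an independent bias review might begin with a simple demographic-parity check like the hypothetical one below, which flags the model when positive-prediction rates diverge too far between groups. The data, group labels, and tolerance are all illustrative, and a real review would use several fairness metrics rather than this one alone.

```python
# Minimal sketch of a bias review step: compute each group's
# positive-prediction rate and flag the model if the gap exceeds a chosen
# demographic-parity threshold. Data and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1_000)  # protected attribute
# Simulated model outputs with a built-in disparity between the two groups.
predictions = rng.random(1_000) < np.where(groups == "group_a", 0.55, 0.40)

rates = {g: predictions[groups == g].mean() for g in ("group_a", "group_b")}
gap = abs(rates["group_a"] - rates["group_b"])

MAX_PARITY_GAP = 0.10  # tolerated difference in positive-prediction rates
print(f"positive rates: {rates}, gap: {gap:.3f}")
if gap > MAX_PARITY_GAP:
    print("FAIL: demographic parity violated; send model back for review")
```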
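Finally, for the monitoring-and-mitigation step, one possible sketch ties a rolling performance estimate to a pre-agreed mitigation plan: if accuracy on recent labeled predictions falls below a floor, the system alerts the team and falls back to human review. The window size, accuracy floor, and fallback action here are assumptions, not prescriptions.

```python
# Sketch of post-deployment performance monitoring with a mitigation plan:
# track a rolling accuracy estimate from labeled feedback and trigger the
# pre-agreed fallback when it drops below an agreed floor.
import random
from collections import deque

WINDOW = 200          # number of recent labeled predictions to track (assumed)
MIN_ACCURACY = 0.85   # agreed-upon performance floor (assumed)

recent_outcomes = deque(maxlen=WINDOW)  # 1 = correct prediction, 0 = incorrect

def trigger_mitigation(accuracy: float) -> None:
    """Execute the pre-agreed mitigation plan for degraded performance."""
    print(f"ALERT: rolling accuracy {accuracy:.2%} below {MIN_ACCURACY:.0%}")
    print("Falling back to human review for all predictions.")

def record_outcome(correct: bool) -> None:
    """Record whether a production prediction turned out to be correct."""
    recent_outcomes.append(1 if correct else 0)
    if len(recent_outcomes) == WINDOW:
        accuracy = sum(recent_outcomes) / WINDOW
        if accuracy < MIN_ACCURACY:
            trigger_mitigation(accuracy)
            recent_outcomes.clear()  # alert at most once per window

# Example: simulate a stream of predictions whose quality degrades midway.
random.seed(0)
for step in range(500):
    p_correct = 0.95 if step < 250 else 0.70  # performance drops midway
    record_outcome(random.random() < p_correct)
```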
Omneky takes these recommendations seriously and is working vigorously to implement them.