> If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can't afford to do so because they might be beaten to market by other companies.
>
> — *Why Responsible AI Development Needs Cooperation on Safety*

The authors identify four strategies to address this problem:
- Promote accurate beliefs about the opportunities for cooperation
- Collaborate on shared research and engineering challenges
- Open up more aspects of AI development to appropriate oversight and feedback
- Incentivize adherence to high standards of safety
The bottom line is that the normal forces encouraging the development of safe products (market pressure, liability law, regulation, and so on) may be absent or insufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is unnecessary.