Regulations on artificial intelligence (AI) do not necessarily inhibit smaller players in the market, where there are opportunities for use cases to be built on top of current major platforms.
Critics of regulatory policies, such as mandatory certification and licensing requirements, have argued that these rules will strengthen the foothold of large market players while raising entry barriers for startups looking to break into the market.
Such regulation would crush innovation, said Andrew Ng, Stanford University professor and co-founder of Google Brain, on suggestions that AI could be made safer through mandatory licensing schemes. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” Ng said in a recent interview with Australian Financial Review.
While he noted that the absence of policies is better than enacting bad ones, Ng pointed instead to the importance of having “thoughtful” regulation, such as requiring transparency from tech companies. Such transparency could have helped prevent the harm these companies caused with social media, and would steer the industry away from a similar outcome with AI, he said.
AI regulations, though, do not necessarily inhibit startups and market entrants, said Florian Hoppe, partner and head of Vector in Asia-Pacific at Bain & Company, in response to ZDNET’s question on the impact of legislation on AI innovation.
Large language models (LLMs), for instance, are costly to build, and smaller players typically lack the resources to develop their own. However, there are opportunities for new use cases to be built on top of existing LLMs, such as specialized or domain-specific AI applications and models, Hoppe said.
Startups will be able to develop such products without the constraints of having to build their own LLMs, he noted, adding that regulations play a necessary role in mitigating AI risks.
Conversations have also been healthy between governments and industry players on how the regulatory framework for AI should evolve moving forward, added Sapna Chadha, Google’s Southeast Asia vice president. This situation is true for the…