In Short:
Box CEO Aaron Levie shared his views on AI regulation at a press dinner. He believes the US should avoid stringent AI laws like those in Europe, arguing they stifle innovation. While some tech elites call for regulation, they lack consensus on specifics. Google executives also argue against excessive AI legislation, expressing concern about state-level bills. Meanwhile, several bills pending in Congress, such as the Generative AI Copyright Disclosure Act, aim to regulate AI models.
Press Dinner with Box, Datadog, and MongoDB
The other night, enterprise software company Box hosted a press dinner, joined by leaders from two data-focused companies, Datadog and MongoDB. During the dinner, Box CEO Aaron Levie surprised attendees by announcing he had to leave early to catch a flight to Washington, DC, for TechNet Day.
Regulation of AI
Levie shared his views on AI regulation, saying that while it makes sense to regulate clear abuses like deepfakes, it is too early to impose stringent restraints on companies. He argued that Europe's approach to regulating AI is risky and may end up hindering the innovation it intends to foster.
Industry Perspectives
Levie’s remarks contrast with the standard position of many Silicon Valley AI elites, who publicly advocate for regulation. He pointed to the tech industry’s lack of consensus on how AI should be regulated, casting doubt on the prospects for a comprehensive AI bill in the US.
Panel Discussion on AI Innovation
At TechNet Day, a panel discussion on AI innovation featured Google’s president of global affairs, Kent Walker, and former US Chief Technology Officer Michael Kratsios. The panelists emphasized the importance of protecting US leadership in AI while acknowledging the technology’s risks.
Legislation on AI
Various AI-related bills are pending in the US Congress, including the Generative AI Copyright Disclosure Act of 2024, introduced by Representative Adam Schiff. The bill would require disclosure of copyrighted works used in the data sets that train large language models.