Two of the biggest forces in two deeply intertwined tech ecosystems — large incumbents and startups — have taken a break from counting their money to jointly plead that the government cease and desist from even pondering regulations that might affect their financial interests, or as they like to call it, innovation.
“Our two companies might not agree on everything, but this is not about our differences,” writes this group of vastly disparate perspectives and interests: Founding a16z partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and President/Chief Legal Officer Brad Smith. A truly intersectional assemblage, representing both big business and big money.
But it’s the little guys they’re supposedly looking out for. That is, all the companies that would have been affected by the latest attempt at regulatory overreach: SB 1047.
Imagine being charged for improper open model disclosure! a16z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that could, unlike Midha and his impoverished colleagues, afford the lawyers necessary to comply.
Except that was all disinformation promulgated by Andreessen Horowitz and the other moneyed interests that might actually have been affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only trivially affected because the proposed law specifically protected them.
It’s odd that the very type of purposeful cutout for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying campaign they and others ran against SB 1047. (The architect of that bill, California State Senator Scott Wiener, talked about this whole thing recently at Disrupt.)
That bill had its problems, but its opposition vastly overstated the cost of compliance and failed to meaningfully support claims that it would chill or burden startups.
It’s part of the established playbook that Big Tech — which, despite their posturing, Andreessen and Horowitz are closely aligned with — runs at the state level, where it can win (as with SB 1047), meanwhile asking for federal solutions that it knows will never come, or which will have no teeth due to partisan bickering and congressional ineptitude on technical issues.
This joint statement of “policy opportunity” is the latter part of the play: After torpedoing SB 1047, they can say they only did so with an eye to supporting a federal policy. No matter that we are still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state bills.
And what policies do they support? “A variety of responsible market-based approaches,” in other words: hands off our money, Uncle Sam.
Regulations should have “a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology,” and should “focus on the risk of bad actors misusing AI.” What this means is that there should be no proactive regulation, only reactive punishment when unregulated products are used by criminals for criminal purposes. This approach worked great for that whole FTX situation, so I can see why they espouse it.
“Regulation should be implemented only if its benefits outweigh its costs.” It would take thousands of words to unpack all the ways this idea, expressed this way in this context, is hilarious. But basically, what they are suggesting is that the fox be brought in on the henhouse planning committee.
Regulators should “permit developers and startups the flexibility to choose which AI models to use wherever they are building solutions and not tilt the playing field to advantage any one platform.” The implication is that there is some sort of plan to require permission to use one model or another. Since that’s not the case, this is a straw man.
Here’s a big one that I have to just quote in its entirety:
The right to learn: copyright law is designed to promote the progress of science and useful arts by extending protections to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data — the foundation of AI — to learn in the same way as people. Knowledge and unprotected facts, regardless of whether contained in protected subject matter, should remain free and accessible.
To be clear, the explicit assertion here is that software, run by billion-dollar corporations, has the “right” to access any data because it should be able to learn from it “in the same way as people.”
First off, no. These systems are not like people; they produce output that mimics the human-made work in their training data. They are complex statistical projection software with a natural language interface. They have no more “right” to any document or fact than Excel does.
Second, this idea that “facts” — by which they mean “intellectual property” — are the only thing these systems are interested in and that some kind of fact-hoarding cabal is working to prevent them is an engineered narrative we have seen before. Perplexity has invoked the “facts belong to everyone” argument in its public response to being sued for alleged systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me onstage at Disrupt, as if they are being sued over knowing trivia like the distance from the Earth to the moon.
While this is not the place to embark on a full accounting of this particular straw man argument, let me simply point out that while facts are indeed free agents, the way they are created — say, through original reporting and scientific research — involves real costs. That is why the copyright and patent systems exist: not to prevent intellectual property from being shared and used widely, but to incentivize its creation by ensuring that it can be assigned real value.
Copyright law is far from perfect and is probably abused as much as it is used. But it is not being “co-opted to imply that machines should be prevented from using data” — it is being applied to ensure that bad actors do not circumvent the systems of value that we have built around intellectual property.
That is quite clearly the ask: let the systems we own and run and profit from freely use the valuable output of others without compensation. To be fair, that part is “in the same way as humans,” because it is humans who design, direct, and deploy these systems, and those humans don’t want to pay for anything they don’t have to, and don’t want regulations to change that.
There are plenty of other recommendations in this little policy document, which are no doubt given greater detail in the versions they’ve sent directly to lawmakers and regulators through official lobbying channels.
Some ideas are undoubtedly good, if also a little self-serving: “fund digital literacy programs that help people understand how to use AI tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support “Open Data Commons—pools of accessible data that would be managed in the public’s interest.” Great! “Examine its procurement practices to enable more startups to sell technology to the government.” Awesome!
But these more general, positive recommendations are the kind of thing you see every year from industry: invest in public resources and speed up government processes. These palatable but inconsequential suggestions are just a vehicle for the more important ones that I outlined above.
Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the government to back off regulating this lucrative new development, let industry decide which regulations are worth the trade-off, and nullify copyright in a way that more or less acts as a general pardon for illegal or unethical practices that many suspect enabled the rapid rise of AI. Those are the policies that matter to them, whether kids get digital literacy or not.