Tech Bro Nonsense

Former Google CEO Tells Congress That 99 Percent of All Electricity Will Be Used to Power Superintelligent AI

Billionaire tech tycoon and former Google CEO Eric Schmidt commented to the House Committee on Energy and Commerce: "What we need from you is we need the energy in all forms, renewable, non-renewable, whatever. It needs to be there, and it needs to be there quickly."

"Many people project demand for our industry will go from 3 percent to 99 percent of total generation... an additional 29 gigawatts by 2027 and 67 more gigawatts by 2030. If [China] comes to superintelligence first, it changes the dynamic of power globally, in ways that we have no way of understanding or predicting."

Meta Says It's Okay to Feed Copyrighted Books Into Its AI Model Because They Have No "Economic Value"

In the ongoing suit Richard Kadrey et al v. Meta Platforms, led by a group of authors including Pulitzer Prize winner Andrew Sean Greer and National Book Award winner Ta-Nehisi Coates, the Mark Zuckerberg-led company has argued that its alleged scraping of over seven million books from the pirated library LibGen constituted "fair use" of the material, and was therefore not illegal.

Meta's attorneys are also arguing that the countless books the company used to train its multibillion-dollar language models, and to springboard itself into the head-spinningly buzzy AI race, are actually worthless. Meta cited an expert witness who downplayed the books' individual importance, averring that a single book adjusted its LLM's performance "by less than 0.06 percent on industry standard benchmarks, a meaningless change no different from noise." Thus there's no market in paying authors to use their copyrighted works, Meta says, because "for there to be a market, there must be something of value to exchange," as quoted by Vanity Fair, "but none of [the authors'] works has economic value, individually, as training data."
Other communications showed that Meta employees stripped the copyright pages from the downloaded books. Tellingly, the unofficial policy seems to be not to speak about it at all: "In no case would we disclose publicly that we had trained on LibGen, however there is practical risk external parties could deduce our use of this dataset," an internal Meta slide deck read. The deck noted that "if there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues."

Lauren Sánchez in Space Was Marie Antoinette in a Penis-Shaped Rocket

Katy Perry Boasts About Ridiculous Rocket Launch While NASA Is Scrubbing History of Women in Space

"It's about a collective energy and making space for future women. It's about this wonderful world that we see right out there and appreciating it. This is all for the benefit of Earth."

Last month, the Orlando Sentinel first reported, NASA scrubbed language from a webpage about the agency's Artemis missions declaring that a goal of the mission was to put the first woman and first person of color on the Moon; just a few days later, NASA Watch reported that comic books imagining the first woman on the Moon had been deleted from NASA's website.

A webpage for "Women at NASA" is still standing, but pictures of women and people of color (astronauts, engineers, scientists) have reportedly been removed from NASA's real-world hallways amid the so-called "DEI" purge. Per Scientific American, the word "inclusion" has been removed as one of NASA's core pillars.
And as 404 Media reported in February, NASA personnel were directed to remove mentions of women in leadership positions from its website.

OpenAI Nonsense

OpenAI Is Secretly Building a Social Network

OpenAI has been secretly building its own social media platform, which The Verge reports is intended to resemble X-formerly-Twitter, the social media middleweight owned by CEO Sam Altman's arch-nemesis, Elon Musk.

OpenAI updated its safety framework, but no longer sees mass manipulation and disinformation as a critical risk

OpenAI said it will stop assessing its AI models prior to release for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns. The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people use the models once they are released for signs of violations.

OpenAI also said it would consider releasing AI models it judged to be "high risk" as long as it has taken appropriate steps to reduce those dangers, and would even consider releasing a model that presented what it called "critical risk" if a rival AI lab had already released a similar model. Previously, OpenAI had said it would not release any AI model that presented more than a "medium risk."

Saying 'please' and 'thank you' to ChatGPT...