Anthropic (maker of the AI model Claude) has drawn “ethical red lines”: the company insists that its AI not be used for mass domestic surveillance or fully autonomous weapons, positions rooted in safety and human-control concerns. The U.S. Defense Department, by contrast, wanted unrestricted use of Claude for “all lawful purposes” in military and defense operations, arguing that once a technology is lawfully acquired, the military shouldn’t be bound by a private company’s restrictions.