Disrupting malicious uses of AI | February 2026
Our latest threat report examines how malicious actors combine AI models with websites and social platforms—and what it means for detection and defense.