US Patent 12,513,102 B2, entitled "Simulation of a user of a social networking system using a language model," was awarded to Meta Platforms Technologies on December 30, 2025. The patent describes a system that trains a large language model (LLM) on a user's previous social media activity, including posts, comments, likes, chats, and voice messages, in order to generate responses, likes, comments, or even simulated audio and video interactions on the user's behalf.

The patent was first filed in November 2023 by Andrew Bosworth, the chief technology officer at Meta. By imitating the user's style, this AI "bot" aims to sustain engagement on social media sites like Facebook and Instagram when the user is unavailable, such as during a prolonged vacation or after death.

A pre-trained LLM is the foundation of the technology, which is then fine-tuned with user-specific information from interactions recorded in Meta's systems. Through an interface, users can grant permissions and select which data feeds the model, such as private direct messages versus public comments, giving them some degree of privacy control. Once deployed, the bot scans newsfeeds for pertinent content, builds prompts with context (such as the poster's affinity score or relationship to the user), and outputs actions like likes or comments that appear genuine.

For example, it may react differently to posts from friends than from family, taking into account user profile information or the timing of events like birthdays.

Legal and Ethical Issues

A representative for Meta told Business Insider that the company has "no plans to move forward with this example," adding that patents only protect concepts and do not guarantee they will be put into practice. The filing comes after Facebook's user growth stalled and after earlier AI "user" pilots irritated actual people in late 2024. The concept also conflicts with Meta's memorialization policy, which freezes deceased users' accounts in their current state.

Critics such as Joseph Davis, a sociologist at the University of Virginia, caution that mimicking a dead person's presence interferes with mourning and sows confusion: "Let the dead be dead." Precedents highlight the risks. Microsoft internally deemed its own 2020 patent for deceased-person chatbots "disturbing" and shelved it.

In 2022, Amazon demonstrated Alexa imitating a deceased grandmother's voice from just one minute of audio, but never put the feature into production. The startup 2Wai changed course after its AI avatars drew "nightmare fuel" criticism. Companies such as Replika and You, Only Virtual grew out of personal grief and now face ethical scrutiny in the emerging "grief tech" sector.

From a cybersecurity standpoint, the technology increases the risk of impersonation. Malicious actors could use similar LLMs for post-death disinformation campaigns, phishing through "family member" accounts, or deepfake social engineering. Weak permissions or data breaches could leak training data and enable account hijacking. Proposed defenses include user-controlled kill switches, AI disclosure requirements, and strong verification for legacy access.

As Malwarebytes observes, Meta's invention highlights AI's dual potential to either preserve digital legacies or undermine authenticity.

Business incentives such as increased engagement loom large, but philosophical, emotional, and regulatory unpreparedness will likely keep the idea on hold. For now, users who want to protect their digital afterlives should review their privacy settings and legacy contacts.