109989

Overview of Topic

Based on recent research regarding the detection of AI-generated content, 109,989 refers to the number of possible watermarks in a system used to identify peer reviews written by Large Language Models (LLMs).

The topic originates from a 2025 study on detecting LLM-generated peer reviews. Researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI rather than by human experts.

Watermark phrase: The system prompts an LLM to start its review with a specific phrase, such as: "Following [Surname] et al. ([Year]), this paper...".

Why 109,989: This number is the total of combinations created by pairing the 9,999 most common surnames (from U.S. Census data) with a random year between 2014 and 2024: 9,999 surnames x 11 years = 109,989 watermarks.

Effectiveness: It achieves a high success rate because LLMs are highly likely to follow instructions that appear at the very beginning of a prompt.

Review of the Framework

As a tool for academic integrity, the framework offers several notable advantages and limitations based on the study's findings:

Statistical guarantees: The framework maintains a low family-wise error rate (FWER), which prevents human-written reviews from being falsely flagged as AI-generated.

Detection mechanism: By injecting these hidden instructions into a paper's PDF, editors can detect whether a reviewer used AI: if the generated review begins with one of the 109,989 unique citations, it is statistically likely to be AI-generated.
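The combinatorics behind the watermark space can be sketched as follows. This is a minimal illustration, not the study's code: the surname list here is a placeholder standing in for the 9,999 most common U.S. Census surnames.

```python
import random

# Placeholder stand-ins: the study pairs the 9,999 most common
# U.S. Census surnames with a year drawn from 2014-2024.
surnames = [f"Surname{i}" for i in range(9_999)]  # hypothetical list
years = range(2014, 2025)                         # 11 years, inclusive

# Total number of distinct watermarks: 9,999 x 11 = 109,989.
total = len(surnames) * len(years)

def make_watermark(surname: str, year: int) -> str:
    """Compose the opening phrase the hidden prompt asks the LLM to emit."""
    return f"Following {surname} et al. ({year}), this paper"

# Each submission would be assigned one randomly chosen watermark.
watermark = make_watermark(random.choice(surnames), random.choice(list(years)))
```

Because the fabricated citation is drawn at random per paper, an editor only needs to check whether a submitted review opens with that paper's assigned phrase.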

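To see why the false-flag risk stays low, here is an illustrative back-of-envelope bound, not the study's actual analysis. Even under the conservative (hypothetical) assumption that a human review opens with some citation of the watermark's exact form, it would match the one randomly assigned watermark with probability at most 1/109,989, and a union bound caps the family-wise error rate across many screened reviews:

```python
# Illustrative union-bound sketch (assumed model, not the study's analysis):
# if a human-written review matched the paper's assigned watermark with
# probability p, then over m independently screened reviews the chance of
# any false flag (the family-wise error rate) is at most m * p.
p = 1 / 109_989          # at most 1 of the 109,989 openings can match
m = 1_000                # hypothetical number of reviews screened
fwer_bound = min(1.0, m * p)
print(f"FWER bound for {m} reviews: {fwer_bound:.4f}")
```

In practice human reviews essentially never begin with a verbatim fabricated citation at all, so the realistic error rate is far below even this bound.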