A tool has been developed that watermarks AI-generated text so it can be detected
A study published in the journal Nature describes a tool that inserts watermarks into text generated by large language models (LLMs), the artificial intelligence (AI) systems behind chatbots, improving the ability to identify and trace artificially created content. The tool uses a sampling algorithm to subtly bias the model's choice of words, embedding a signature that detection software can recognise.
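The general idea of watermarking by biasing token sampling can be illustrated with a minimal sketch. This is not the method from the Nature study itself, but a simplified scheme of the same family: a secret, reproducible rule partitions the vocabulary into "favoured" and "ordinary" tokens at each step, generation slightly boosts the favoured ones, and a detector later checks whether a text contains suspiciously many of them. All names (`green_list`, `biased_sample`, `detect`) and parameters are illustrative assumptions.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'favoured' subset of the vocabulary,
    seeded by a hash of the previous token so the detector can
    reproduce the same partition without access to the model."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))


def biased_sample(prev_token: str, vocab: list[str],
                  probs: list[float], bias: float = 2.0) -> str:
    """Sample the next token, multiplying the probability of
    favoured tokens by `bias` (the subtle nudge that carries
    the watermark)."""
    greens = green_list(prev_token, vocab)
    weights = [p * (bias if t in greens else 1.0) for t, p in zip(vocab, probs)]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(vocab, weights=weights, k=1)[0]


def detect(tokens: list[str], vocab: list[str], threshold: float = 0.6) -> bool:
    """Flag text as watermarked if an unusually high fraction of
    tokens fall in the green list seeded by their predecessor."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(len(pairs), 1) >= threshold
```

Because the partition is keyed only to the preceding token, anyone holding the key can verify a text statistically, while a reader sees ordinary-looking prose; the real system applies the same principle to a full LLM vocabulary with cryptographic keys.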