Well, this kind of similarity detection does already happen: it generates the "Related" and "Questions that may already have your answer" lists, which suggest possible duplicates. But I'm fairly confident they'll stay just that: suggestions, rather than a hard "you can't post this" restriction.
You agree that the accuracy might not be optimal, but I think you underestimate how annoying false positives would be. Imagine how frustrating it would be to write a question, only to have the system declare it a duplicate of a question that you know asks something different. (There are many pairs of questions that use similar words but ask completely different things.)
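To make the false-positive problem concrete, here's a minimal sketch. It assumes a naive word-overlap (Jaccard) measure purely for illustration; Stack Overflow's actual matching isn't public and is surely more sophisticated. Two titles built from exactly the same words, asking opposite things, score as identical:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap (Jaccard) similarity between two strings, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Two questions built from the same words that ask opposite things.
q1 = "How do I convert a string to an int in Java"
q2 = "How do I convert an int to a string in Java"

score = jaccard(q1, q2)
print(f"similarity = {score:.2f}")  # 1.00 -- the word sets are identical

# A hard gate at, say, 60% would reject q2 as a "duplicate" of q1,
# even though the two conversions go in opposite directions.
if score >= 0.60:
    print("posting denied (a false positive)")
```

Smarter models reduce this, but any measure based on surface similarity will have pairs like these sitting right at its decision boundary.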
You'd be forced to change words in the title and body just to make the question look less similar to the earlier one. That both frustrates the asker and leads to a lower-quality question.
manpreet · Best Answer · 2 years ago
Why doesn't Stack Overflow implement an algorithm that detects the percentage of similarity between a new question and existing ones, and denies posting if that percentage is above a certain threshold? Is such a feature in progress? Is there any reason preventing it from being integrated?
While I fully understand the complexity of this problem, I think it is possible to develop such a system, unless the common belief is that this feature isn't worth the effort!
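For concreteness, here's a minimal sketch of the kind of gate being proposed. The similarity measure (Python's standard-library difflib), the 80% threshold, and the candidate list are all hypothetical stand-ins, not a description of anything Stack Overflow actually runs:

```python
from difflib import SequenceMatcher

# Hypothetical corpus of existing question titles.
EXISTING_QUESTIONS = [
    "Why doesn't Stack Overflow block questions similar to existing ones?",
]

THRESHOLD = 0.80  # illustrative cutoff, not a recommendation

def similarity(a: str, b: str) -> float:
    """Percentage similarity of two titles, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def allow_post(new_title: str) -> bool:
    """Deny the post if it is too similar to any existing question."""
    return all(similarity(new_title, q) < THRESHOLD for q in EXISTING_QUESTIONS)

# A near-identical title is denied: this prints False.
print(allow_post("Why doesn't Stack Overflow block questions similar to existing ones"))
```

In practice such a gate would have to compare against millions of questions, so a real system would use an inverted index or embeddings rather than pairwise comparison; the hard part, though, is not the lookup but choosing a threshold that blocks duplicates without blocking legitimate questions.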