This is part of the issue with some of these tools.
If humans get so incompetent that AI can trick the courts, I guess it's vigilante justice or hired assassins.
Also AI companies will presumably store generated material long term so that any supposed evidence can be checked against AI company records.
I suspect law enforcement will ultimately have to use AI to scan whether something has been made by AI, and I assume that at the pixel level there would still be some inconsistencies from the material having been generated rather than recorded.
If it's impossible to determine whether image/video evidence is real, it'll simply be treated like a large amount of evidence already is - as possibly ambiguous and subject to reasonable doubt.
For example - a blood spot on someone's carpet from a murder victim.
Is it because the accused murdered them and failed to clean up 100% of the evidence?
Or is it because the murder victim was their friend and suffered a nosebleed a year earlier and it wasn't bleached out?
On a jury today you have to weigh the balance of probability across all the evidence; the future will be pretty much the same.
We all know law enforcement is corrupt.
He has already done videos on lawyers getting caught sending in bogus AI-generated paperwork.
It's already happening.