For those who joined us for the webinar with Craig Ball last week, thanks for coming. For those who didn't, you missed some great stories about eDiscovery transitioning to forensics – you can watch the webinar on-demand. Not to worry, though; we have you covered with this blog.
We gave a sneak peek beforehand and talked about a "man in the middle" attack – you can read that blog here. During the presentation, Gavin and Craig talked about three more examples of cases moving from eDiscovery to forensics. We heard from our webinar participants that their favorite story was the one about document authentication, so we wanted to expand on that here in the blog. We're pretty sure that has to do with the Beatles reference… read on to learn more.
All this raises the question of what to look for during an eDiscovery review that would signal a forensics review is in order. Common mistakes made by document forgers include impossible dates (such as November 31st) or a real date paired with the wrong day of the week; for example, today is Wednesday, December 15th, 2021, but someone faking an email or contract might get the day wrong and say it was created on Thursday, December 15th, 2021.
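Both of those checks are easy to automate. As a rough illustration – this is just a minimal sketch using Python's standard library, not the logic behind any particular platform – a script can flag impossible calendar dates as well as day-of-week mismatches:

```python
from datetime import datetime

def check_claimed_date(date_str, claimed_weekday, fmt="%B %d, %Y"):
    """Flag two common forgery tells: an impossible calendar date,
    or a real date paired with the wrong day of the week."""
    try:
        parsed = datetime.strptime(date_str, fmt)
    except ValueError:
        return f"IMPOSSIBLE DATE: '{date_str}' does not exist on the calendar"

    actual_weekday = parsed.strftime("%A")
    if actual_weekday.lower() != claimed_weekday.lower():
        return (f"DAY MISMATCH: {date_str} fell on a {actual_weekday}, "
                f"not a {claimed_weekday}")
    return "OK: date and weekday are consistent"

# "November 31, 2021" does not exist on the calendar...
print(check_claimed_date("November 31, 2021", "Wednesday"))
# ...and December 15, 2021 was a Wednesday, not a Thursday.
print(check_claimed_date("December 15, 2021", "Thursday"))
```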
But there are far more subtle questions to be answered about the authenticity of evidence. As Craig mentioned, in Rossbach v. Montefiore Medical Center, there was a claim that inappropriate messages had been texted. One of the lawyers analyzed the screenshots and found that certain emojis could not have existed at the time the texts were exchanged, and that the screen wasn't cracked in the same way.
That kind of scrutiny is a good strategy when working with email and images, which can be easily altered. Forensics brings deeper analysis by asking what could or could not have existed at the time, like those emojis. Dr. Manes gave an example of a screenshot supposedly taken in 2009 that had a "5G" symbol in the top right corner – a technology that didn't exist at the time.
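That "could this have existed yet?" question can also be partly automated by comparing artifacts in a document against the dates those artifacts were introduced. Here is an illustrative sketch for emoji; the lookup table is deliberately tiny, the release years are approximate and should be verified against official Unicode data, and the point is the pattern rather than the specific entries:

```python
# Illustrative only: approximate years a few well-known emoji entered the
# Unicode standard. Verify against unicode.org before relying on this.
EMOJI_INTRODUCED = {
    "\U0001F914": 2015,  # thinking face (Unicode 8.0)
    "\U0001F937": 2016,  # person shrugging (Unicode 9.0)
}

def anachronistic_emoji(text, claimed_year):
    """Return emoji in the text that were not yet in the standard
    at the claimed date of the message."""
    return [ch for ch in text
            if EMOJI_INTRODUCED.get(ch, 0) > claimed_year]

# A text supposedly from 2014 containing the shrug emoji would be flagged.
print(anachronistic_emoji("sounds good \U0001F937", 2014))
```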
These small differences add up to something larger, and it takes a sharp eye and inquiring mind to start the line of questioning about document authenticity. Any of these things taken on their own might not mean much, but as Craig said, "once you pull everything together, it becomes clear as an impossible piece of evidence."
In a future webinar, we'll expand on these manual authentication techniques and discuss some automated ways to integrate that technology into platforms like iCONECT. Currently, we have automated techniques to detect bad dates and times, missing emails and broken email threads, PII, altered metadata, and hidden data within native documents (such as tracked changes in Word or hidden cells in Excel).
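To give a flavor of what one of these checks looks like under the hood – this is not iCONECT's implementation, just a minimal sketch with Python's standard library – a .docx file is simply a zip archive of XML parts, and the presence of tracked-insertion or tracked-deletion markup in its main document part indicates edits a reviewer might never see in the rendered document:

```python
import zipfile

def has_tracked_changes(docx_path):
    """Rough heuristic: a .docx is a zip of XML parts, and tracked
    insertions/deletions appear as <w:ins> and <w:del> elements."""
    with zipfile.ZipFile(docx_path) as z:
        xml = z.read("word/document.xml").decode("utf-8", errors="ignore")
    return "<w:ins " in xml or "<w:del " in xml

# Example usage (hypothetical file path):
# print(has_tracked_changes("contract_v3.docx"))
```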
One of the more recent applications of automated detection in the iCONECT system is the PII detection engine, which can find social security numbers, phone numbers, and bank account numbers. It can also identify PHI (personal health information) within documents. In several cases over the last few months, we have been asked to search documents for this type of information. The most common scenario is one where someone's email or servers have been compromised, and the client needs to determine whether they have a reporting requirement for that information. Finding out whether social security numbers exist in a set of emails has become easier than ever. Redaction is also available at a granular level – we can ask the system to redact all social security numbers but not phone numbers.
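iCONECT's detection engine is far more sophisticated than a handful of regular expressions, but a simplified sketch shows the idea of granular, category-by-category redaction. The patterns below are illustrative and U.S.-centric; production PII detection needs validation logic, context, and country-specific formats beyond this:

```python
import re

# Illustrative patterns only; real PII detection requires checksums,
# context, and locale-aware formats, not just regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text, categories):
    """Redact only the requested PII categories, leaving the rest intact."""
    for name in categories:
        text = PII_PATTERNS[name].sub(f"[REDACTED {name.upper()}]", text)
    return text

sample = "Call 555-867-5309 about SSN 123-45-6789."
# Redact social security numbers but leave phone numbers untouched:
print(redact(sample, ["ssn"]))
# -> "Call 555-867-5309 about SSN [REDACTED SSN]."
```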
Adapting these forensic techniques to an eDiscovery tool can be incredibly useful, whether in a typical eDiscovery case or one that requires both forensics and eDiscovery.
Stay tuned for further discussions about document authentication, automatic data detection, automatic redaction, and additional ways to help determine whether those documents are real.