Mauriella E. DiTommaso: Threat Modeling in the Age of Trusted AI Chatbots
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2026/01/24
Mauriella E. DiTommaso is Program Director for forensic programs and adjunct faculty at Champlain College Online, overseeing Computer Forensics & Digital Investigations and Digital Forensic Science. She also serves as Chief Information Security Officer for Washington State’s Department of Social and Health Services, bringing over two decades of cybersecurity and digital forensics leadership. She holds an M.S. in Forensic Sciences from Champlain College, an MBA from Delaware Valley College, and a B.A. from Edinboro University.
Scott Douglas Jacobsen asked Mauriella E. DiTommaso how security thinking changes when people treat chatbots as trusted advisers. DiTommaso explained that classic threat-modeling methods still apply, but AI chatbots expand the scope because they touch more systems and data. On social engineering, she warned that impostor sites with embedded bots can coax personal details from users for later exploitation. For sensitive transcripts, she stressed data-classification rules, proper storage and access controls, and required investigator clearances during digital-forensic work.
Scott Douglas Jacobsen: What new threat models emerge when users treat AI chatbots as trusted advisers?
Mauriella E. DiTommaso: Threat models provide a structured representation of the security-relevant aspects of an application, software system, or service. Because AI-powered chatbots are relatively new, or are only now being regularly deployed across information technology environments, threat modeling for technologies that deploy chatbots, and for the chatbots themselves, would still follow established threat-modeling processes. The difference is that the scope of information being assessed could be much larger and more complex.
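The expanded scope DiTommaso describes can be made concrete with a minimal enumeration sketch. This assumes the common STRIDE categories and a hypothetical set of components a chatbot deployment might touch; the component names are illustrative, not from the interview.

```python
# Illustrative sketch only: pairing STRIDE threat categories with the
# components an AI chatbot deployment typically touches. The component
# list is a hypothetical example of the widened assessment scope.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# A chatbot widens the assessed surface: not just the bot, but every
# system and data store it can reach.
components = [
    "chat front end", "model API", "conversation store",
    "retrieval index", "downstream CRM integration",
]

def enumerate_threats(components, categories):
    """Pair every component with every threat category for later triage."""
    return [(c, t) for c in components for t in categories]

threats = enumerate_threats(components, STRIDE)
print(len(threats))  # 5 components x 6 categories = 30 candidate threats
```

Each (component, category) pair then gets triaged and mitigated exactly as in an established threat-modeling process; only the number of pairs grows.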
Jacobsen: How can chatbots be weaponized for social engineering?
DiTommaso: A fake website crafted to impersonate a trusted entity, with a chatbot deployed on it, could be used to collect personal information from an unsuspecting user, and that information can in turn be leveraged for social engineering. What the chatbot could collect in this example would depend on the entity being impersonated. In general, the more personal data the chatbot can be programmed to coax out of the user, the more information the actor will be able to obtain and exploit for social engineering.
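On the defensive side, the coaxing pattern DiTommaso describes can be spotted by flagging bot messages that ask for commonly harvested personal data. The patterns and category names below are a hypothetical sketch, not an exhaustive or production-grade filter.

```python
import re

# Hypothetical defensive sketch: flag chatbot turns that request the kinds
# of personal data typically harvested for social engineering.
PII_REQUEST_PATTERNS = {
    "ssn": re.compile(r"social security|ssn", re.I),
    "dob": re.compile(r"date of birth|birthday", re.I),
    "maiden_name": re.compile(r"maiden name", re.I),
    "account": re.compile(r"account number|routing number", re.I),
}

def flag_pii_requests(bot_message):
    """Return the categories of personal data a bot message asks for."""
    return [name for name, pat in PII_REQUEST_PATTERNS.items()
            if pat.search(bot_message)]

print(flag_pii_requests("To verify you, what is your date of birth and SSN?"))
# ['ssn', 'dob']
```

A legitimate support bot rarely needs several of these fields in one exchange, so multiple flags in a single conversation are a reasonable signal of an impostor site.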
Jacobsen: What privacy and digital forensics challenges arise when chatbot transcripts contain sensitive self-disclosures?
DiTommaso: From a privacy perspective, this depends on the classification or category of the data being disclosed and the privacy and security rules governing that sensitive data. The ramifications of sensitive data being exposed via a chatbot interaction also depend on the type of organization hosting the chatbot and on where and how the chatbot transcripts are handled (i.e., stored, shared, and accessed). From a digital-forensic perspective, if the transcripts are part of an investigation, one of the main challenges is ensuring the investigators possess the proper clearances to view and work with that data. This should be part of an investigator's onboarding to the organization, so they are prepared, and have proper coverage, to view specified sensitive data at any given time.
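The clearance check DiTommaso describes reduces to comparing an investigator's clearance against a transcript's data-classification label. The labels and their ordering below are an illustrative assumption; real schemes vary by organization and governing regulation.

```python
# Hypothetical sketch of the access-control check described above:
# an investigator may open a transcript only if their clearance meets
# or exceeds the transcript's classification. Labels are illustrative.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_view(investigator_clearance, transcript_label):
    """Allow access only when clearance meets or exceeds the data label."""
    return LEVELS[investigator_clearance] >= LEVELS[transcript_label]

# Clearance granted at onboarding determines which evidence is in scope.
print(can_view("confidential", "restricted"))   # False
print(can_view("restricted", "confidential"))   # True
```

Establishing these clearances during onboarding, as DiTommaso recommends, means the check passes when an investigation actually begins rather than stalling evidence handling mid-case.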
Jacobsen: Thank you very much for the opportunity and your time, Mauriella.
