The Nature of AI Use Cases in Securities Compliance

Dave Feldman

Co-founder & CEO


Preface


These days, I wear many hats. I anticipated this when I decided to start a startup, but I've taken on even more roles than I expected. I expected to be an engineer, but I never anticipated being a cookie delivery boy; I expected to be a salesman, never a janitor. You get the gist.


One of my roles is to understand the nature of new technology like Artificial Intelligence and how it will change financial compliance. I feel fortunate that I get to work on this part of my job because it’s something that I have been interested in for a while. 


My first job out of college was consulting at Google, where I got to see the inner workings of the AdWords data science machine and then worked on establishing analytics for Google's recently launched smart speaker line, Google Home. AdWords made a ton of money but was a crummy product; Google Home was a good product that lost money. I was living in San Francisco, reverse commuting to the suburbs, and I met the author of my Johns Hopkins macroeconomics textbook in a restroom after a presentation.


None of this really made sense to me. So, in a fit of reckless ambition (or boredom, you tell me), I decided to join a company that was in an interesting position and thus was recruiting people with inadequate credentials into important positions. When I joined Castle Global, Inc. (now known as Hive), they had just decided, with $10M+ in venture funding from General Catalyst, to pivot away from mobile phone applications to become an enterprise AI company.


Back in those days, there weren't many enterprise AI companies. The Hive founders immediately foresaw many of the AI innovations of today. The projects I worked on included building what was — for a brief time — a market-leading data labeling solution (now a $10B+ business), an analytics business centered on using AI to measure TV trends using device data (now a $2B+ business), and one of the first systems — which I personally coded — to extract information from complex insurance documents at a top global reinsurer.


The document tech never made it out of pilot. It was too early: AI technology wasn't where it needed to be for the product to be viable with adequate accuracy. Always a pivot away from a unicorn valuation, Hive found success by moving into a different market and built the content moderation company that it is today. I changed jobs when I left San Francisco for a change of pace during the pandemic, and spent a few years at Guideline building fintech systems used for things like Form 5500 filings and 401(k) plan recordkeeping.


In 2023, Ed and I, friends since our 18th birthdays, drank too many mojitos while visiting Miami and decided to start a business together. Oddly enough, we noticed that if we started an AI financial compliance company, we would have a clear expertise advantage in the application of AI and, possibly, enough knowledge about compliance to be dangerous. We joined Y Combinator in January 2024. During the YC batch, I panicked and enrolled in the NSCP law school course to become a Certified Securities Compliance Professional, and we were off to the races.

The nature of AI use cases in securities compliance

 

Imagine a world where a computer program can analyze a block of data representing human communication and correctly interpret the meaning 50% of the time. This program can be tuned to either over-infer or under-infer meaning from data patterns, based on its settings.


If the program reads all communications looking for impropriety, it might flag 5% of the information set, and that flagged slice might capture, say, 80% of the improper communication actually occurring. Compliance teams at major financial institutions often do not have adequate resources to read 5% of communications, and there is no explicit rule that requires them to do so. But in a world where, say, 1% of communication is legitimately problematic, drawing a 1% review sample from the programmatically refined dataset rather than from the raw pool makes any single reviewed communication roughly sixteen times as likely to be compliance-relevant (16% of the refined sample versus 1% of a random one).
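Here is a minimal sketch of that arithmetic in Python, using the illustrative figures above (1% base rate, 5% flag rate, 80% recall). The numbers and the helper name review_hit_rate are assumptions for this thought experiment, not measurements from any real surveillance deployment.

```python
def review_hit_rate(base_rate: float, flag_rate: float, recall: float) -> float:
    """Share of AI-flagged communications that are actually problematic."""
    return (base_rate * recall) / flag_rate

base_rate = 0.01   # assumed: 1% of all communications are legitimately problematic
flag_rate = 0.05   # assumed: the model flags 5% of all communications
recall = 0.80      # assumed: the flagged set captures 80% of the problematic ones

random_hit_rate = base_rate                                        # ~1%
refined_hit_rate = review_hit_rate(base_rate, flag_rate, recall)   # ~16%
lift = refined_hit_rate / random_hit_rate                          # ~16x

print(f"Random review hit rate:  {random_hit_rate:.1%}")
print(f"AI-refined hit rate:     {refined_hit_rate:.1%}")
print(f"Lift over random review: {lift:.0f}x")
```

Change any of the three assumed inputs and the lift moves accordingly; the point is only that concentrating reviewer attention on a refined subset multiplies the hit rate of each communication read.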


This is a conservative estimate of the surveillance-enhancing capabilities of artificial intelligence, yet it illustrates a key truth that the compliance industry is just beginning to grapple with: there is a strong statistical case that AI technology can improve the surveillance capabilities of compliance teams by an order of magnitude.


In other words, not using artificial intelligence for compliance tasks will become indefensible within our lifetimes — likely within the next 10 years. It doesn’t matter if you love it, hate it, or couldn’t care less — technologists like Jensen Huang have once again "dealt the cards," and one minor effect among many will be major change in the financial compliance industry.


It’s about the people


The AI genie is out of the bottle and cannot be put back. What happens next? AI adoption will become de facto required by regulators over time. But this doesn't mean that compliance people will be put out of work. Regulators, after all, demand that compliance officers take reasonable steps to ensure effectiveness. Keeping human compliance judgment in the loop, whether via employees, consultants, or lawyers, will remain measurably more effective than full automation.


Compliance officers will not be replaced by computers; they will be retrained to use new computer software (or will be replaced by compliance officers who are willing to retrain). If you're a compliance officer, this isn't a reason to fret. You will continue to be an integral part of your organization, and you'll actually be able to accomplish more (more on this in the recent Inside Information article about Greenboard).


Versions of the future


At Greenboard, we're building software that we believe the whole industry will benefit from and ought to adopt.


How can our technology be so much better than the available alternatives? Aside from our comparative expertise in working with AI versus the leadership of some of the mega-compliance corporations, we also don't have hundreds of millions (yes, you read that right) in annual profits riding on maintaining the status quo. It's a classic case of the innovator's dilemma, and that structural force is hampering innovation in this sector (it's also why incumbents are already falsely attacking our product; see below).



Every securities compliance officer will face a choice at some point in the next 10 years: keep trusting the tools built by companies with a vested interest in maintaining the status quo and risk being outdone by someone who adopts new technology, or adopt the new technology early and risk looking foolish for trusting it too soon.


Audentes fortuna iuvat — I'm no Latin scholar, and it's an old saying, but perhaps fortune will once again favor the bold.