David Vance Substack
I was amazed to read that AI-powered bots posing as students now pose a significant threat to California’s community colleges, perpetrating financial aid fraud and exploiting open-access policies to siphon millions of dollars in state and federal funds.
How does it work? These bots, run by criminal enterprises, create fake student identities using stolen personal information, such as Social Security numbers, to enrol in online courses. By submitting AI-generated coursework, they remain enrolled long enough to claim Pell Grants of up to $7,400 per student! They can be seen to “attend” online lectures and submit coursework so they look human, but they aren’t!
In 2024, about 25% of community college applicants were bots, rising to 37% by early 2025.
From September 2021 to December 2023, these schemes stole $6.5 million, with $13 million swindled in the year ending April 2024.
The impact of these fake students is profound.
At Southwestern College, professors report classes where most students are fake: in one case, only 14 of 104 were real. Faculty are therefore forced to shift from teaching to fraud detection, eroding educational quality and community-building. The California Community Colleges’ application system struggles to filter bots before colleges receive new student lists, leaving institutions vulnerable. This highlights a systemic weakness: open-access policies, inclusive as the leftists want them to be, lack robust verification to counter sophisticated AI-driven fraud.
How on earth can this problem be solved?
Enhancing authentication is critical. Southwestern College’s partnership with ID.me for identity verification shows promise, but scaling this across the state’s 116 community colleges is the next major step. Upgrading the statewide application system with AI-driven fraud detection, such as analysing application patterns or flagging suspicious IP addresses, could block the AI bots early. Machine learning models trained on bot behaviour (e.g., repetitive coursework patterns) could automate detection, reducing the burden on faculty.
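To make the “flagging suspicious IP addresses” idea concrete, here is a minimal sketch of one such pattern check: catching bursts of applications from a single IP address. The field names (`ip`, `submitted_at`) and the thresholds are illustrative assumptions on my part, not the real application system’s schema.

```python
# Hypothetical sketch: flag IP addresses that submit an implausible
# burst of applications. Field names and thresholds are assumptions.
from collections import defaultdict
from datetime import timedelta

def flag_suspicious_ips(applications, threshold=5, window=timedelta(hours=1)):
    """Return the set of IPs with more than `threshold` applications
    inside any rolling time window of length `window`."""
    by_ip = defaultdict(list)
    for app in applications:
        by_ip[app["ip"]].append(app["submitted_at"])
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        for i in range(len(times)):
            # count applications in the window starting at times[i]
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i > threshold:
                flagged.add(ip)
                break
    return flagged
```

A real system would combine several such signals (shared devices, templated essay text, reused contact details) rather than relying on any single rule, since fraudsters can rotate IP addresses cheaply.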
In essence, AI will need to be developed to counter AI!
Stricter enrolment verification is needed, such as requiring video interviews or biometric checks for online students. The California system will need to strengthen data-sharing to track bot activity across campuses, enabling a faster response. Federal and state agencies will have to tighten financial aid disbursement rules, for example by delaying funds until course engagement is verified by human instructors.
While AI bots threaten California’s community colleges, combining advanced authentication, policy reform, public-private partnerships, and faculty support can mitigate some of the risk. The fraud underscores a broader lesson: unchecked AI adoption invites exploitation! Anything online can become a target of these bots, and there is the possibility that, left unchecked, entire online classes will be full of bots with not a human in sight! The rise of AI has all sorts of unseen consequences!
****I put out at least three articles a day. If you enjoy all this, can I ask you to consider becoming a PAID subscriber? It’s only £5 a month, and you can cancel if you don’t enjoy it, but I know you will. I want to thank the kind people who already do this; without your help this becomes impossible. Thank you in anticipation of your support****