Statement on the NYS Comptroller’s audit of NYC’s Privacy and Security of Student Data

May 4, 2026

The audit from the State Comptroller’s office released today confirms what many NYC advocates have long known: the privacy policies and practices of the NYC Dept. of Education are sloppy, irresponsible, and show a lack of concern for keeping students’ personal information safe from breach and misuse. This makes DOE’s insistent push to rapidly expand the use of Artificial Intelligence tools in our schools all the more unwarranted, given that these tools pose an even greater risk to student privacy and safety.

Even more troubling is the DOE’s contemptuous response to the auditors’ findings and recommendations to improve its processes, dismissing nearly every one as unfounded. Altogether, the audit’s findings reinforce the lack of trust many feel in DOE’s competence and care when it comes to protecting student privacy.

The audit’s findings put in question the AI guidance’s assurances on DOE’s ability to keep student data safe

The recent DOE AI guidance repeats over and over that student privacy is rigorously protected through a vetting process called ERMA (Enterprise Request Management Application). Yet the findings in this audit show that DOE’s privacy processes are inherently defective. The DOE’s lack of responsiveness and unwillingness to improve its privacy policies provides yet more evidence that its rush to expand the use of AI in our schools is reckless. AI products pose a special risk to student privacy, as many data-mine personal information to improve their products, in violation of Ed Law 2D, the NY State student privacy law passed by the legislature in 2014.

The audit’s findings, along with repeated breaches of NYC student data and its illegal use for commercial purposes, reveal the inadequacy of the DOE’s privacy vetting process. As a member of the Chancellor’s AI Working Group, I, along with other members, proposed additional safeguards, including independent privacy impact assessments, data security audits, and tests for algorithmic bias, to be required for any educational product using AI. DOE rejected all of these recommendations. Additional problems with the recently released AI guidance, including DOE’s refusal to rigorously comply with the state privacy law, are described in our critique here.

The findings confirm DOE’s failure to properly control and safeguard personal student information

The auditors discovered that DOE maintains no central records of which vendors and other third parties have access to students’ personal information, and that it has no written policies covering data classification, risk assessment, or backup and recovery, as required by the NIST data security framework specified by Ed Law 2D.

In their response, DOE officials claim  that this conclusion is false, and that they are “able to determine which SIS or other applications that consume student data are in use by a given school or office.”  Yet just last week, on April 28, 2026, the DOE privacy office confirmed in an email to a parent that “at this time, there is no Central list of every educational technology tool used by each school.”

Moreover, according to Ed Law 2D, it is every parent’s right to know which vendors have access to their children’s data, and to receive a copy of the data held by those vendors within 45 days of their request. Yet this right is chronically violated by DOE officials, and when parents do receive data files from these vendors, the files can be empty of information.

More than 700 companies and other third parties have access to personal student data, according to the DOE website, though the number of ed tech programs in use is likely greater, as some vendors provide schools with more than one product. The number of products collecting and processing student data has increased steadily each year, and is now growing even more rapidly, as DOE adds new products with AI functionality for use in classrooms throughout the city.

Delays in recognizing and reporting breaches

Because DOE officials do not know which schools use which products, they cannot ensure that when data breaches occur, affected families are informed within the legally required timeline, or identify which data elements may have been exposed.

The auditors reported at least 141 breaches of NYC students’ personal data between January 5, 2023 and February 27, 2025; in 48% of cases, DOE reported them to NYSED past the legal deadline of 10 days. In at least one case, the report took over 460 days. DOE also missed the 60-day deadline to inform parents that their children’s data had been breached in at least 11% of cases. [Note: 60 days is itself too long; NY law requires breach notification by private businesses and state agencies within 30 days.]

The Illuminate breach and problems with their privacy agreement

Some vendor privacy agreements are never even posted online, in violation of the law – like that of Illuminate, which exposed the data of more than a million current and former NYC students in 2022, and whose privacy agreement was posted online only after the breach occurred. Even then, the agreement hinted that the data was not always encrypted, contrary to the requirements of the law – which turned out to be the case.

The Illuminate example also shows that  DOE does not independently investigate breaches but instead relies on the unreliable reporting of vendors concerning the number and identity of students affected. After the data of more than 800,000 current and former NYC students was breached by Illuminate between late December 2021 and early January 2022, their families were not notified by DOE until March 25, 2022.

Even worse, in May 2024, more than two years after the breach, a second round of notifications revealed that about 380,000 more current and former students also had their information exposed. This came seven months after Illuminate had informed DOE of the additional students involved – far exceeding the 60-day deadline in the law. According to the DOE website, DOE started looking into the matter only after being told by Illuminate in October 2023 that more students were affected. The delay put these students and former students at risk of identity theft and more, and prevented them from promptly acquiring the insurance and credit monitoring offered by the vendor for free.

The PowerSchool breach and problems with their privacy agreement

After the massive nationwide breach of the PowerSchool student information system in late December 2024, parents throughout the country and elsewhere in New York State were informed of the breach in early January 2025. Yet at that time, DOE told a reporter it was still looking into whether any NYC schools or students were affected.

In fact, DOE refused to confirm which schools were involved even after the Daily News reported their names on February 6, 2025, based on information relayed by the State Education Department. Only after that report did parents whose children attended these schools receive emails saying DOE was still looking into the matter. Not until April 2025 did DOE confirm to parents that their children’s data had been breached, long past the 60-day deadline in the law.

To this day, the DOE has refused to post the names of the NYC schools affected by the PowerSchool breach on the webpage that reports on data security incidents, despite guidance from the NYSED that they should do so promptly, to alert the thousands of former students whose data was also exposed and put at risk of identity theft and worse.

As the former NYSED Chief Privacy Officer Louise de Candia wrote on Feb. 3, 2025, “There is no doubt in my mind that PowerSchool violated Education Law Section 2-d and Part 121 of the regulations which require compliance with NIST CSF as well as reasonable administrative, technical and physical safeguards to protect the security, confidentiality and integrity of PII.”

And yet DOE continues to allow NYC schools to use as many as 16 other privacy-invasive PowerSchool products, including Naviance, which is employed in many if not most New York high schools for college guidance. This is despite reporting in 2022 that Naviance ran ads for colleges on its student-facing platform disguised as objective recommendations, and allowed colleges to discriminate by race by targeting ads only to white students.

More recently, it was announced that PowerSchool had agreed to settle a class action lawsuit  alleging that the Naviance  platform contained ad tracking technology that transmitted a wide range of student data to Google, Microsoft and a company called Heap, including their names, ID numbers, graduation years,  demographic information, photographs and survey responses, as well as  their private communications with teachers.  This would violate not only state privacy laws but also the federal wiretapping statute.   Even now, the DOE has refused to tell parents or students about the Naviance agreement or  inform them they can apply for a portion of the $17.25 million settlement.

The fact that the Illuminate and PowerSchool breaches exposed the data of many thousands of NYC students who had long since graduated or otherwise left the system also shows that DOE does not enforce the data minimization and deletion by vendors required by Ed Law 2D. More background here.

To make things worse, the PowerSchool privacy agreement still posted on the DOE website is clearly non-compliant with the law, as it says that the company will conform to the privacy requirements in federal and state law or in its contract with DOE only when doing so is “commercially reasonable.”

Other problems highlighted in the audit and the DOE’s official response

The Comptroller’s office also found significant weaknesses in DOE’s technical data security controls that should be corrected, including “issues with system monitoring, unsupported systems, and firewalls.” Understandably, the auditors communicated the details of these security weaknesses to DOE only in a separate confidential report. In its response, DOE makes no commitment to address these technical problems, saying instead that it will address them separately, in response to the confidential report.

In its response, DOE claims to have made “several improvements to its privacy practices and policies,” including updating Chancellor’s Regulation A-820 to “restrict the use of ‘directory information.’”

In fact, the recent amendment to the Chancellor’s Regulation weakened protections for student data by redefining a wide, essentially unlimited range of personal student information as directory data that can be shared with third parties, even those not providing services to schools. This includes but is not limited to students’ names, addresses, telephone numbers, email addresses, photographs, grade level, and participation in activities and sports. Only an unreliable parent opt-out process was provided to prevent these disclosures from occurring.

Finally, the auditors also revealed that DOE officials took an inordinate amount of time to respond to their requests: documentation requests took over five months to fulfill, while requests for meetings took two months to schedule.

Leonie Haimson is the co-chair of the Parent Coalition for Student Privacy, and a member of the NYSED Data Privacy Advisory Committee, the Chancellor’s Data Privacy Working Group, and the Chancellor’s AI Working Group.

###

Problems with the NYC Dept. of Education’s AI Guidance

April 25, 2026

This one-page summary is also posted as a pdf here. The DOE deadline for feedback is May 8, 2026, via their survey at on.nyc.gov/AiFeedbackNYCPS.

The Guidance itself is posted here; our more detailed critique is here and embedded below, along with a partial list of AI programs currently used in NYC schools.  Also posted online is an annotated pdf of the AI Guidance with our comments.

On the survey, feel free to borrow any of the below points, provide your own, or write, “I urge you to implement a 2-year moratorium, so rigorous protections can be developed to prevent harm to students, including their privacy, their cognitive development, creativity, mental health and the environment – none of which this guidance sufficiently addresses.”

1: Lack of public input

Despite claims to the contrary, the DOE has not been responsive to the concerns of parents or the community in its determination to rapidly expand the use of AI in the classroom. Nor did this AI guidance document receive significant input from those most affected: students, teachers, or parents. Neither the members of the Data Privacy Working Group nor the AI Working Group appointed by Chancellor Ramos were allowed to comment on the guidance before it was released, despite repeated assurances to the contrary from DOE officials. And from the DOE’s actions, it is apparent that it intends to continue the rapid expansion of AI regardless of what the official feedback process yields in the coming weeks.

2: There is no transparency about which AI products can be used, and no requirement for full disclosure when AI is used at all

The DOE AI guidance provides no clarity or transparency about which AI products can be used with students, or which have gone through the DOE privacy vetting process known as ERMA. While the AI Working Group asked for the names of approved products that are currently used in schools, DOE officials refused, saying they had non-disclosure agreements with their vendors. Perhaps as a result, teachers continue to assign students to use off-the-shelf AI products that data-mine personal student information to improve their products – a commercial use specifically prohibited by the state student privacy law. Regardless of which AI tool is used by teachers or students, there needs to be full disclosure as to which program is being employed and for what purpose.

3: The AI guidance fails to rigorously protect student privacy

The DOE privacy vetting process is ineffective and primarily composed of a series of boxes which vendors are merely asked to check off in order to be approved.  This process has not worked to protect student privacy, as shown by recent breaches of personal information of over one million NYC students and the continued illegal use of student data for commercial purposes, as indicated by recent court settlements and consent decrees. Although AI represents an even higher documented risk to student privacy and safety, the DOE has developed no additional privacy safeguards for its use – despite recommendations from the Chancellor’s appointed AI working group and others to strengthen this process.

4: The AI guidance is inadequate, often confusing and even contradictory as to how teachers and students should use the technology

Instead, it offers a traffic light metaphor, with most potential applications in the “yellow” category, meaning to be used with caution, leaving it up to teachers to use their best judgment in most of these cases without clear direction. Other directives are contradictory, for example as to whether AI can be used for student placement: one bullet point says no; another says placement decisions can be overridden by teachers or students – but how can that be done if there is no clarity that the decision was made by AI in the first place? Many of the thorniest questions about the proper and safe use of AI are punted, to be dealt with at some unspecified time in the future.

5: There is no attempt in the AI guidance to address many of the most serious concerns that parents and educators have about AI use

Growing evidence shows how AI usage can undermine students’ cognitive development and acquisition of fundamental skills, weaken their critical thinking and creativity, worsen their mental health challenges, and exacerbate climate change. Yet the guidance does not attempt to address any of these risks. Nor does it provide any answers when it comes to the algorithmic biases often embedded in AI, or the technology’s rampant factual errors, called hallucinations. It also has nothing to say about AI’s tendency toward sycophancy, in which chatbots are designed to agree with users’ opinions, flatter them, and encourage them in whatever course they are considering, no matter how dangerous. All of these are well-known problems with AI, and in the latter case, it has even contributed to teen suicide, according to several ongoing lawsuits. The DOE claims it will address some of these issues by the end of the year, but there needs to be a moratorium now, so that rigorous protections can be established with public input before the use of AI is further expanded in our schools.

For more information, email us at info@studentprivacymatters.org or check our website at www.studentprivacymatters.org. Also, please sign the AI Moratorium Coalition petition at https://tinyurl.com/petitionAImoratorium to be kept up to date on this issue.

PowerSchool/Naviance court settlement: your child may be eligible for a payment

Update: More on the settlement here.

April 2, 2026

It was recently announced that as part of a class action court settlement, the ed tech company PowerSchool and the Chicago Public Schools agreed to pay a total of $17.25 million to students whose privacy was violated by Naviance, a college advising company acquired  by PowerSchool in 2021. In turn, PowerSchool was bought by Bain Capital for $5.6 billion in 2024.

The lawsuit alleged that the Naviance platform contained ad tracking technology that transmitted a wide range of personal data to Google, Microsoft and a company called Heap, including student names, ID numbers, graduation years,  demographic information, photographs and survey responses, as well as  their private communications with teachers.

These practices, the attorneys argued, amounted to  “unlawful wiretapping” and “eavesdropping,” in violation of several federal and state privacy laws.

Naviance is widely used in schools throughout the country for college application and advising purposes, including in many NYC high schools. Any student who logged into this platform at least once, at school or at home, between August 18, 2021 and January 23, 2026 is eligible for payment through the court settlement. A preliminary estimate by the attorneys is that each student may receive about $50, depending on how many apply.

You (or your child, if they are over 18) should already have been sent a notice by snail mail or email on how to file a claim as part of the court settlement, along with a Class Member ID number. But if you haven’t received this notice, you can still submit a claim here.

We have long been concerned about the privacy and safety of PowerSchool programs in general and Naviance in particular, and we have communicated our concerns to DOE’s Chief Privacy Officer, to no avail.

Several years ago, we shared reports from the publication The Markup showing that Naviance had been found to allow colleges to send targeted ads to students through its platform, in some cases ads that discriminated by race. These ads were disguised as objective college recommendations. Using personal data to send targeted ads violates the provisions of the NY student privacy law.

Then, as you may recall, in December 2024,  a massive breach of the PowerSchool student information system exposed the personal data of millions of students nationwide, including  thousands of current and former NYC students.  As a result of this breach, the company has been sued by  many states and districts for failing to implement the most basic data security and privacy protections.   After this occurred, I again urged DOE to cancel its contracts with PowerSchool, which offers many different, highly invasive programs to NYC schools, but received no response.

If your child uses Naviance, beware of any recommendations or other communications that they may receive through this platform.

I’d appreciate it if any parents whose child currently uses the platform could help us investigate in more detail how Naviance works, to assess whether the company may still be violating our privacy laws and basic ethical standards, including through its new AI-powered chatbot called “PowerBuddy.” If you and your child are willing, please email us at info@studentprivacymatters.org. Please also let us know if you or your child has not received notice of this settlement, so we can inform the plaintiff’s attorneys.

Finally, whether or not you receive a settlement payout, it would be great if you would consider donating to Class Size Matters, earmarked to help fund the Parent Coalition for Student Privacy. Our amazing PCSP co-chair, Cassie Creswell, executive director of Illinois Families for Public Schools, worked with the attorneys on the class action lawsuit and helped identify the original plaintiff. We could really use your support.

AI Moratorium Coalition Rejects DOE’s Inadequate AI Guidance

See also media coverage in the Daily News here, Gothamist here, and again in the Daily News here.

FOR IMMEDIATE RELEASE: 3/24/2026

Media contact:
Edgar Alfonseca, NYC-DSA, tech.action@socialists.nyc, +12015891241
Liat Olenick, Climate Families NYC, Liat@climatefamiliesnyc.org, 917-930-2788
Kelly Clancy, PhD, PACES, parentsforaicaution@gmail.com, 512-589-6302
Martina Meijer, MORE-UFT, more@morecaucusnyc.org

New York: In response to the guidance on A.I. in schools released today, the AIM Coalition including NYC-DSA Tech Action, Climate Families NYC, Alliance for Quality Education, Parent Coalition for Student Privacy, MORE-UFT, the Coalition for Racially Just Public Schools, Class Size Matters, Parents for AI Caution and NY Kids PAC released the following statement:

The one indisputable statement in the AI guidance released by the Department of Education today is that ‘The long-term effects on how children learn, think, and develop in the era of AI are not fully understood. No school system in the world has accounted for all the implications.’ It is for this reason that more than 1,500 parents and educators have signed our petition calling for a moratorium on its use, and the reason five Community Education Councils have approved resolutions in support of this moratorium: to prevent the multiple, serious, and documented risks to children, including the growing evidence that AI use in the classroom undermines student privacy, cognitive development, creativity, mental health, and the environment.

As a coalition of parents, advocates, educators, and community leaders, we reject the DOE’s sham 45-day process and inadequate, cramped survey for what is clearly a foregone conclusion to embrace big tech at the expense of our students. We call on Mayor Mamdani to act immediately, in alignment with his own commitment to parent and community involvement as well as green and healthy schools, and declare a moratorium on any implementation of AI, while holding in-person feedback sessions to hear our concerns and those of the other members of the NYC public school community.

“The DOE is exposing kids to AI without any protection, let alone real understanding of impact on student learning, privacy, emotional health, equity or algorithmic bias. This is a reckless decision, making children guinea pigs when we should be acting carefully and judiciously,” said Zephyr Teachout, Professor at Fordham Law School. 

Said Katie Anskat, high school teacher and UFT member, “This guidance creates a structure where authority is centralized but accountability is pushed down to the school level. The DOE determines which AI tools are approved and sets systemwide rules, but the responsibility for how those tools are used is placed on educators and leaders in individual schools. By requiring human judgment, oversight, and review in all cases, the policy ensures that when something goes wrong, whether it is inaccurate information, bias, or a privacy issue, the burden falls on school-based staff rather than the system that approved and promoted the tool.”

“I and other parents do not want taxpayers paying AI companies for products that use the data from our students and teachers to enhance their own bottom line. It’s predatory,” said Shannon Ritchey, a District 14 parent.

“We are pleased to see that NYCPS acknowledges the potential harms of AI, and has prohibited its use to create IEPs. However, the guidance admits that further evaluation is needed to assess the risks of various AI products with regard to algorithmic bias, negative impacts on instruction, inequitable outputs, and more. DOE must first assess these products before recommending them to be used with any students in NYC schools. NYCPS must take more intentional steps towards engaging families and other stakeholders in these conversations, and we look forward to more in-depth community involvement in this ongoing process. In the interim, we will continue our call to pause the use of AI products in our schools,” said Smitha Varghese Milich, Senior Campaign Strategist of the Alliance for Quality Education.

“Unregulated Big Tech AI companies are on the verge of convincing the Mamdani NYC government to spend precious public tax dollars on their anti-democratic, pollution-generating, and education-undermining AI products. Just like renters had a meaningful opportunity to inform housing policy in the Rental Ripoff Hearings, NYC parents deserve the same level of care and investment to have a genuine opportunity to influence AI policy in NYC schools. Mayor Zohran Mamdani must listen to the advocates and immediately instruct Chancellor Samuels to impose a two-year moratorium on the use of all AI products that are currently unregulated and pedagogically untested and could cause untold damage to 900,000 public school students,” said Edgar Alfonseca, NYC-DSA Tech Action Working Group & NYC-DSA Comrades with Kids.

“This guidance is confusing, contradictory in many places, and doesn’t address most of the serious concerns of parents and teachers. The existing privacy practices of the NYC Department of Education have been shown to be ineffective, as evidenced by repeated student data breaches. The ERMA process outlined in the document is nothing but a checklist that companies have often abused without sufficient verification or oversight by the DOE privacy office. For example, some of the AI products currently used in schools data-mine student information to improve their products according to their Privacy Policies, a practice specifically outlawed by the NY State student privacy law. Others collect biometric data, which the State Education Department has said should not be allowed without parent input – though parents have been denied any voice in their use. To make things worse, this document has been released without any feedback from the Chancellor’s appointed AI Working Group, despite repeated promises to the contrary, and despite Mayor Mamdani’s vow to strengthen parent and community collaboration,” said Leonie Haimson, co-chair of the Parent Coalition for Student Privacy.

“AI is driving climate collapse and global water bankruptcy. If the largest school system in the country uses its purchasing power to fuel more reckless and dangerous AI expansion, what future are we preparing our kids for? Mayor Mamdani made a commitment to Green and Healthy Schools for all NYC children. Fueling climate collapse at the expense of student privacy, mental health and ability to learn is the opposite of that commitment. Parents and students deserve concrete answers and real engagement, not vague platitudes about the environment and listening to stakeholders. And our children deserve leaders who will use their moral authority to protect them, not corporate apologists who subject them to a surveillance experiment that will leave them a world on fire,” said Liat Olenick, MsEd, public school teacher and parent, Program Director, Climate Families NYC.

“The guidance document continues to allow the classroom to be the Wild West in terms of how AI is used for student learning. It contains assertions about how AI can improve learning for students and prevent cognitive offloading, but there is no research behind these claims. Importantly, they quote from the Brookings report, but miss its most important conclusion: ‘at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits—that is, these risks undermine children’s foundational development—and may prevent the benefits from being realized,’” said Kelly Clancy, PhD, Parents for AI Caution in Educational Spaces, D20 CEC.

“CECs from across the city have passed resolutions calling for a moratorium on AI to protect student learning and have asked to meet with the chancellor. Despite the Mayor’s campaign promises to listen to parents, those requests have been ignored, and it’s clear from this document that the DOE is failing to listen to the concerns parents have about AI in the classroom. The lack of any engagement with CECs on this issue before releasing this document makes the failure of mayoral control clear. Parents need real decision making power over decisions like this that affect their children’s  lives,” said Alina Lewis, member of the CEC in District 20, which passed a resolution in favor of the moratorium.

“The MORE-UFT caucus sees the encroachment of AI into our schools as a labor issue. Educators face an unsustainable workload, and AI is not the solution to this problem. The AI guidelines fail to address the resistance from parents, students and teachers related to a myriad of concerns. These concerns include the impacts of cognitive offloading, the deskilling and deprofessionalization of teaching, the horrific environmental impacts of AI, racism embedded into the programming, and the connections of the supply chain to enslaved labor and environmental racism. The data ‘protection’ is the same (ERMA) that the DOE uses now, and it’s not effective. We have seen far too many data leaks and a lack of accountability for the corporations profiting off of educator and student data,” said Martina Meijer, teacher, MORE-UFT.

“Goals like ‘responsible AI integration’ are meaningless. Is there any responsible way to use a technology that’s fundamentally unaccountable, prone to racial & gender bias, emotionally manipulative, environmentally disastrous, and frequently wrong?” asked Craig Garrett, parent and SLT member, District 14.

“Science has a growing body of work on the harms of the use of AI on the brain, physical and mental health, bias of racial groups, and hallucinations resulting in cognitive decline in children and adults. Giving our vulnerable communities a faulty narrative on the so-called benefits of AI because the Department of Education doesn’t want to admit to harms they are already exposing children to is negligent and irresponsible. More and more parents and educators are asking for a moratorium on the use of AI in classrooms as we don’t have guardrails put in place. While we should be providing opportunities for our students to learn and understand engineering, technology and science, we should not be doing it at the expense of their cognitive development and their abilities to be problem-solvers, critical thinkers and innovators,” said Kaliris Salas-Ramirez, PhD, neuroscientist, medical educator and parent leader in East Harlem.

###