Juan Espinoza keeps the office lights low, as if the glow from a monitor could be a kind of confession. It is early, the kind of morning when a campus still feels like a movie set waiting for its first actor to remember a line. In Blacksburg, Virginia, the admissions season makes its own weather. A second cup of coffee goes cold at the edge of his desk while a queue of essays loads on screen, each one a small envelope of hope, each one asking to be understood quickly and deeply at the same time.
He is the associate vice provost for enrollment management at Virginia Tech, and he has read so many personal statements over the years that he can sense a familiar arc before the first paragraph is done. The sports injury that became a lesson. The service trip that became a calling. The grandparent who became a compass. Lately there is another pattern, harder to name. The sentences are polished. The transitions click into place. The metaphors behave. And yet something is missing, like a song performed with perfect pitch and no pulse.
Espinoza scrolls, then pauses. Beside him, another window shows a set of scores generated by a new tool his office uses to help triage a flood of writing. It does not decide who gets in. He is careful about that. It ranks essays by features that look a lot like human judgment translated into math: clarity, organization, sentence variety, a rough estimate of mechanical errors. It is fast. It is consistent. It does not get hungry or impatient. It does not have a headache. And that, in a year when applications keep rising, is the point.
The irony is not lost on him. Students have been warned, sometimes sternly, not to let a chatbot write their essays. Yet here he is, watching a machine help read them.
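What a tool like that computes is less mysterious than the word AI suggests. Here is a minimal sketch of a weighted-feature triage scorer, assuming a design like the one described above; the features, weights, and scale are invented for illustration and are not Virginia Tech's actual system.

```python
import re

def essay_features(text: str) -> dict:
    """Extract crude, explainable writing features from an essay."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean_len = sum(lengths) / len(lengths) if lengths else 0.0
    # "Sentence variety" here is just the spread of sentence lengths.
    variance = sum((n - mean_len) ** 2 for n in lengths) / len(lengths) if lengths else 0.0
    return {
        "avg_sentence_length": mean_len,
        "sentence_variety": variance ** 0.5,
        "word_count": float(len(text.split())),
    }

def triage_score(features: dict, weights: dict) -> float:
    """Combine features into one number used only to order the reading queue."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical weights; a real office would calibrate them against human readers.
weights = {"avg_sentence_length": 0.5, "sentence_variety": 2.0, "word_count": 0.01}
print(triage_score(essay_features("Short. Then a much longer sentence follows here."), weights))
```

The appeal of such a design is that every score decomposes back into its features, which is what lets an office insist the tool ranks rather than decides.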
• • •
In the last decade, college admission has been forced to become a kind of forensic art. The traditional signals still exist, but many of them have been scrambled, softened, or strategically arranged. High school grades are higher than they used to be, even as national tests of learning suggest many students have struggled to regain ground after the pandemic. Standardized testing has entered a new era where optional often does not mean optional, and required can return with little notice. Essays, once treated as the closest thing to a student speaking in their own voice, can now be drafted in seconds by software that has read more prose than any teenager could in a lifetime.
Inside admissions offices, the daily reality is more mundane and more technical than the myths that swirl around it. Most decisions are not made by a single gatekeeper who falls in love with one sentence in a personal statement. They are made by systems: workflows, rubrics, committee notes, data dashboards, institutional goals, and sometimes software that helps manage the mess. The work is not simply to pick the best students. It is to build a class.
At the scale of American higher education, building a class now means dealing with volume that would have been unthinkable a generation ago. The Common Application reports nearly 1.5 million distinct first-year applicants in the 2024 to 2025 season, with application volume rising even faster than applicant counts. With that scale comes pressure, and with pressure comes automation.
Technology has always been in the room. For years, enrollment management has used customer relationship management software to track emails, campus visits, and yield predictions. Public universities have long relied on rule-based screens to sort applicants by grades and course requirements. What is new is how far the tools can now reach into the messy, human parts of an application. Natural language models can summarize letters of recommendation. Computer vision systems can extract grades from transcripts. Essay-scoring engines can evaluate writing features in seconds. The question is no longer whether admissions will use machines. The question is how, and at what cost.
• • •
The signals that have become unstable
The easiest way to understand what colleges say they want, and what they actually do, is to look at the signals that have become unstable.
Start with grades. High school GPA remains the most important academic factor at most colleges, but it has become harder to interpret across schools and districts. ACT has reported grade inflation over the past decade, with average GPAs rising even as test scores have declined. Researchers at the Calder Center have documented shifts in grading practices during the pandemic, including increased emphasis on completion and reduced emphasis on mastery in many settings. Even as classrooms returned to normal rhythms, the grades often did not fall back to earlier baselines.
Admissions officers have known this for years, which is why transcripts are not read in isolation. A transcript arrives with a school profile: course offerings, grading scale, distribution of grades, the share of students in advanced courses, sometimes even a list of which colleges its graduates attend. Selective colleges do not just ask, How high is the GPA. They ask, What did this student have access to, and what did they do with it. A 3.7 at one school can mean a different kind of work than a 4.2 at another.
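In software terms, that contextual read is a rescaling. A minimal sketch, assuming a school profile that reports a grade distribution; the profile fields and numbers below are hypothetical, and real profiles are often not machine-readable at all.

```python
from dataclasses import dataclass

@dataclass
class SchoolProfile:
    mean_gpa: float  # average GPA reported on the high school's profile
    sd_gpa: float    # spread of GPAs at that school

def gpa_in_context(gpa: float, profile: SchoolProfile) -> float:
    """How far above or below the school's typical grade this GPA sits."""
    return (gpa - profile.mean_gpa) / profile.sd_gpa

# The 3.7-versus-4.2 comparison from the text, with invented distributions:
strict = SchoolProfile(mean_gpa=3.1, sd_gpa=0.4)
lenient = SchoolProfile(mean_gpa=3.9, sd_gpa=0.3)
print(gpa_in_context(3.7, strict))   # 1.5 standard deviations above peers
print(gpa_in_context(4.2, lenient))  # 1.0 standard deviations above peers
```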
That contextual reading is also why the myth of the inflated GPA as a simple cheat misses the point. The problem is not that grades are meaningless. The problem is that they are local. In a national system that forces colleges to compare students across thousands of different high schools, local signals create uncertainty.
Standardized tests were built to reduce that uncertainty. Then came a pandemic that made test centers unreliable. Colleges responded with test-optional policies that spread rapidly, and for a while it seemed like the SAT and ACT might fade into history like cursive handwriting. Instead, the test-optional era created its own kind of confusion. Families asked whether submitting scores helped or hurt. Counselors struggled to advise students when policies differed by campus and changed year to year. Students who could afford tutoring kept testing anyway, just in case.
By the 2024 to 2025 season, only about five percent of Common App member colleges required scores, down from roughly half before the pandemic. Yet the Common App also reported that the number of applicants reporting scores began to rise again, a reversal after several years of decline. That reversal was partly driven by policy shifts at highly selective schools. Dartmouth returned to requiring the SAT or ACT. Yale introduced a test-flexible requirement that lets students submit SAT, ACT, AP, or IB scores. Brown reinstated testing for the class entering in 2025. Harvard did the same.
The messaging was careful. These schools argued that scores, interpreted in context, helped identify talent in under-resourced settings and improved predictions of academic performance. What those announcements revealed was not simply a renewed love of testing. It was an admission, sometimes quiet, that in a world of inflated grades and polished applications, colleges crave comparative anchors.
• • •
The essay problem
Then there is the essay, the most emotional and the most contested piece of the modern application.
For decades, the personal statement played two roles at once. It was a writing sample and a narrative device, a place where a student could show voice, insight, and a sense of self that no transcript could capture. It also served as a kind of equity patch, an opportunity for students from less known schools to tell a reader what numbers could not.
The problem is that essays have always been easy to manipulate. Long before chatbots, students hired private coaches, traded drafts with adults, and learned the art of writing a story that sounded vulnerable without sounding messy. Generative AI did not invent the polished essay. It industrialized it.
In January 2025, Pew Research Center reported that about a quarter of U.S. teens had used ChatGPT for schoolwork, up from roughly half that share in 2023. Common Sense Media found that a large share of teens had tried generative AI tools, often with limited guidance from adults. If that is the baseline at home and in classrooms, it is naive to think college applications are somehow protected.
Admissions offices responded in two ways that appear contradictory but are, in practice, part of the same adaptation. They warned applicants not to use AI to write essays. And they began, quietly and sometimes loudly, to use AI to manage the evaluation process.
Virginia Tech is an example of the loud approach. Espinoza has talked publicly about using an AI reader to score large volumes of essays quickly, saving thousands of hours of staff time. Other schools have taken a quieter path. At the University of North Carolina at Chapel Hill, documents reported by The Daily Tar Heel described automated scoring of writing quality in admissions essays, raising questions about transparency and consent. The blowback was immediate. The public tends to tolerate algorithms when they recommend a movie. They are less forgiving when a machine seems to be weighing a future.
Yet even in offices that do not use AI for writing evaluation, software is everywhere. It parses activities lists. It flags missing documents. It routes files to readers. It checks for plagiarism in some contexts, though detection of AI-generated text remains unreliable and can produce false positives, particularly for students who write in a second language. Many admissions leaders avoid AI-detection arms races because they do not want to punish a student for being fluent in a style that resembles the output of a model trained on billions of sentences.
So the essay remains, but its meaning has changed. In a world where anyone can produce competent prose, the job of the essay is no longer to prove that a student can write. It is to prove that a student is present.
• • •
What authenticity looks like
Ask an admissions officer what authenticity looks like, and the answer rarely comes as a slogan. It comes as texture.
Authenticity is an essay that takes a risk with specificity. It names the exact smell of a grandfather's workshop, the particular way a teacher corrected a misconception, the awkward moment that made a student rethink a claim. It is not a montage of virtues. It is evidence of attention.
Authenticity is also inconsistency, the kind that makes an application feel human. A student can be brilliant and uncertain. Ambitious and tired. Confident in math and bewildered by literature. AI tends to smooth those contradictions into a single brand. Humans leak.
The admissions reader, trained by repetition, learns to look for these leaks. They read short answers the way a mechanic listens to an engine. They compare voice across sections. They notice when a student writes with one vocabulary in a personal statement and another in a quick response about a favorite class. They pay attention to whether the activities list matches the essay's self portrait. They use recommendation letters as triangulation, not as truth, but as another angle on the same person.
This triangulation is one reason why the essay is not dying. It is being surrounded.
Colleges are adding or reshaping prompts that are harder to outsource cleanly. The Common App has refined its additional information areas, including questions that let students explain challenges and responsibilities that shaped their academic record. Some colleges ask for short responses about why a student wants to attend, not because they expect a poetic answer, but because the answer reveals whether the student has done real research, or simply pasted a few campus buzzwords. Some programs want portfolios, graded papers, or records of sustained projects. Some use interviews not as auditions but as calibration.
None of this guarantees fairness. Each additional requirement can become another place where resources matter. A portfolio can favor students with access to mentorship. An interview can reward cultural comfort. Even a short answer can be polished by an adult. But the layering reflects a truth admissions officers rarely say out loud. One signal is easy to fake. A pattern is harder.
• • •
How the process actually works
Behind every authenticity story is the deeper issue: admissions is not a search for truth. It is a decision under uncertainty.
If you could watch an admissions office at work, the experience would feel less like literary criticism and more like logistics. Files are read in batches. Most selective colleges assign an initial read to a staff member, often responsible for a region. The reader scores components: academic rigor, grades in context, testing where applicable, activities, recommendations, and writing. They write a summary that frames the student for committee. Then the file moves to additional readers, sometimes a second staff member, sometimes a faculty member, sometimes a committee that meets for hours to debate edge cases.
The rubrics vary, but the logic is similar. Academics create a floor. Activities and recommendations create shape. Writing creates voice. Context changes what each element means.
At large public universities, the workflow can look different. When tens of thousands of applications arrive, rule-based screens often sort applicants by course completion and GPA thresholds. Some schools use an index that combines grades and test scores when scores are available, then route certain cases for human review. Others use holistic review only for specific programs. The process can be more automated, not because staff value students less, but because the economics of volume require it.
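A minimal sketch of what such a screen and index might look like; the thresholds and weights below are invented and imply no actual university's cutoffs.

```python
from typing import Optional

def admissions_index(gpa: float, sat: Optional[int]) -> Optional[float]:
    """Combine GPA and test score into one index when a score exists."""
    if sat is None:
        return None  # no index possible; route straight to human review
    return gpa * 200 + sat / 8  # hypothetical weighting on a 0-1000 scale

def route(gpa: float, sat: Optional[int], core_courses_met: bool) -> str:
    """Rule-based routing: the screen sorts files, humans still decide."""
    if not core_courses_met:
        return "human review: missing course requirements"
    index = admissions_index(gpa, sat)
    if index is None:
        return "human review: holistic read, no score submitted"
    if index >= 900:
        return "fast track: likely admit, confirmed by a human reader"
    return "human review: full read"

print(route(3.8, 1520, True))  # fast track
print(route(3.8, None, True))  # holistic read
```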
Selective private colleges, despite their romantic branding, also rely on systems. They model yield, the probability that an admitted student will enroll. They shape the class by major, geography, and institutional priorities like athletics, arts, or first generation enrollment. They do not just ask, Is this student impressive. They ask, Will this student come, and will this student thrive here.
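Yield modeling is, at bottom, a probability estimate summed across admitted students. A minimal sketch with invented features and coefficients; real models are fit on years of institutional data and many more variables.

```python
import math

def enroll_probability(visited_campus: bool, applied_early: bool, distance_miles: float) -> float:
    """Logistic model of the chance an admitted student enrolls (invented coefficients)."""
    z = -0.5 + 1.2 * visited_campus + 1.5 * applied_early - 0.002 * distance_miles
    return 1.0 / (1.0 + math.exp(-z))

# Projected class size is the sum of probabilities across all admits.
admits = [(True, True, 40.0), (False, False, 900.0), (True, False, 250.0)]
expected = sum(enroll_probability(*a) for a in admits)
print(f"Expected enrollees from {len(admits)} admits: {expected:.1f}")
```

An office tunes how many students to admit by running this sum against its target class size, which is one reason two equally impressive applicants can meet different fates.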
This is where the public feels the most anxiety, because these considerations are rarely transparent. An applicant experiences the process as a personal judgment. An institution experiences it as optimization.
• • •
AI accelerates optimization
AI tools accelerate optimization.
A transcript reader that can extract course titles and grades from a PDF is not glamorous, but it reduces errors and speeds decisions. Georgia Tech has described using AI to help evaluate transfer transcripts, reducing manual data entry and shortening wait times for applicants who want to know how credits will apply. Stony Brook has described tools that summarize essays and recommendations to highlight themes for human readers. In each case, the pitch is efficiency: fewer hours spent on clerical work, more hours available for nuanced judgment.
The trade-off is that systems tend to become more opaque as they become more powerful. A human reader can explain why a line in an essay felt forced. A model that assigns a writing score can explain only in terms of its features, which may not map cleanly to what applicants think matters. And because models learn from past data, they risk reproducing the biases baked into prior admissions decisions, unless they are carefully designed and audited.
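Auditing for that risk does not require exotic machinery. One common starting point is to compare a model's recommendation rates across applicant groups, in the spirit of the four-fifths rule from employment-selection analysis; the groups and rates below are invented.

```python
def selection_rate(decisions: list) -> float:
    """Share of files the model recommended for advancement."""
    return sum(decisions) / len(decisions)

def impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher; below 0.8 is a flag."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

group_a = [True] * 30 + [False] * 70   # 30 percent recommended
group_b = [True] * 45 + [False] * 55   # 45 percent recommended
print(impact_ratio(group_a, group_b))  # about 0.67, which would trigger review
```

A flag is not proof of bias, only a reason for humans to look at what the model learned.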
That is why, even as admissions offices experiment with AI, they often insist that humans remain accountable for final decisions. The phrase comes up again and again: human in the loop. The loop, however, can be a comfort blanket. If a model screens out an application before a human reads it, the human in the loop arrives too late.
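The difference between a loop that protects and a loop that decorates can be written into the system itself. A minimal sketch in which a model may reorder the queue but can never empty it; the types and fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    model_score: float
    human_read: bool = False

def reading_queue(apps: list) -> list:
    """The model prioritizes, but every file remains in the queue."""
    return sorted(apps, key=lambda a: -a.model_score)

def can_deny(app: Application) -> bool:
    """A denial is valid only after at least one human read; a low score
    alone is never grounds to screen a file out."""
    return app.human_read
```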
• • •
What admissions officers are actually evaluating
So what, in practice, are admissions officers actually evaluating now.
They are evaluating readiness, which is different from achievement. Readiness is a prediction: can this student handle the academic demands of the institution they are applying to. This is why selective colleges still care about course rigor, why they value calculus, lab science, advanced writing, and sustained reading. It is also why some have returned to testing, arguing that scores add predictive power when grades have become harder to compare.
They are evaluating contribution, which is different from resume building. Contribution is not a list of clubs. It is a pattern of engagement that suggests a student will animate a campus. A job after school can be as meaningful as a leadership title. A long-term commitment to family care can signal resilience and time management. Admissions officers often say they want depth, but what they are really looking for is evidence that a student can sustain effort when nobody is giving them a grade.
They are evaluating context, which is different from excuse. Context is the scaffolding around a record. It includes the opportunities a student had, the constraints they lived with, the curriculum of their school, the social and economic pressures that shaped their time. This is why school profiles matter, why counselors are asked to describe resources, why the Common App added new ways for students to describe responsibilities and circumstances.
And they are evaluating authenticity, which is not a moral virtue but an information problem. Authenticity is the signal that the student is not merely performing an identity, but demonstrating a mind at work. In an era of AI writing, authenticity becomes harder to detect in prose, and more visible in choices: the way a student frames a problem, the specificity of their curiosity, the risk they take in describing an uncertainty, the small details that cannot be easily scraped from a template.
The admissions process has not become blind to essays. It has become skeptical of them. And skepticism, in a system with high stakes and low trust, can turn into a hunger for new proxies.
• • •
Back in Blacksburg, Espinoza returns to the screen. The AI reader has already scored this essay as average. The paragraphs are clean, the grammar correct, the structure obedient. He reads anyway, because the scores are not the point.
Halfway down, the student describes a moment in a chemistry lab when a reaction did not happen as expected. The beaker stayed stubbornly clear. The lab partner grew anxious. The teacher shrugged, then smiled, and asked the student to explain what they thought was missing. The student writes that they felt, for the first time, that science was not a set of answers but a conversation with reality, and that their role was to keep asking better questions even when nothing changed color.
It is not a grand story. There is no tragedy. No triumph. Just a mind paying attention to a small failure and making something of it.
Espinoza leans back and lets the sentence sit. Outside his window, the campus has brightened. Students move between buildings, wearing headphones, carrying backpacks, trying to look as if they know where they are going. He thinks about the applicants who will never be seen beyond a score, and the ones who will. He thinks about the machines that can help him read faster, and the parts of reading that no machine can do yet, the parts that depend on being human in the presence of another human, even if that human is only a file on a screen.
He clicks to the next essay. The queue keeps coming. The season keeps turning.
Sources
- Associated Press. "AI may be scoring your college essay. Welcome to the new era of admissions." December 2025.
- Common App. "End-of-season report, 2024-2025: First-year application trends." August 13, 2025.
- Common App. "2025-26 updates." May 22, 2025.
- Common App. "Responsibilities and Circumstances info sheet for first year application." August 1, 2025.
- Pew Research Center. "About one in four U.S. teens have used ChatGPT for schoolwork." January 2025.
- ACT Research. "How Has the Predictive Validity of HSGPA and ACT Composite Score on Cumulative First-Year Hours Earned Changed Between 2018 and 2022?" (Research Report 2024-08). August 2024.
- ACT Research. "The Implications of Grade Inflation" (Research Report R2432). November 2025.
- Dartmouth College Office of Admissions. "Dartmouth Reinstates Standardized Testing Requirement." February 2024.
- Brown University News. "Brown to require standardized test scores for applicants beginning with the 2024-25 admission cycle." March 2024.
- YaleNews. "Yale announces test flexible policy for undergraduate admissions." February 2024.
- Harvard Gazette. "Harvard will require SAT or ACT for admission beginning with the Class of 2029." April 2024.
- Johns Hopkins University Provost. "Issue Summary: Standardized Testing." August 2024.
- University of North Carolina at Chapel Hill Undergraduate Admissions. "Does Undergraduate Admissions use AI and why?" (FAQ / policy statement). 2025.
- Inside Higher Ed. "Admissions offices, cautiously, start using AI." May 2023.
- NCES, The Nation's Report Card (NAEP). "Grade 12 Mathematics, 2024." Assessment year 2024.
- NCES, The Nation's Report Card (NAEP). "Grade 12 Reading, 2024." Assessment year 2024.
- NACAC. "Guide to Ethical Practice in College Admission" (includes guidance on artificial intelligence). Updated 2025.


