This (almost) monthly newsletter serves as your bridge from the real world to the advancements of AI & other emerging technologies, specifically contextualized for education.
Dear Educators & Friends,
You may have noticed I'm a little behind on this newsletter! But I've decided it's better to write when I can than not at all, especially since we're still seeing daily sign-ups and are now approaching 6,000 readers. I think you'll really enjoy this final issue of 2024!
First, some exciting Ed3 updates:
We're growing! Ed3 has been awarded a $500,000 grant supported by Google.org to scale our nonprofit. Read more here.
Ed3 also received a grant from Open Campus to develop a guide for deciphering competencies and creating competency-based assessments.
Over the past year, we've delivered 22,500 continuing education units to educators across the country through our AI courses, earning glowing reviews. We're incredibly proud of our pedagogy-infused, AI-powered professional learning and thrilled that so many educators are taking advantage of it.
If you're a university or professional learning provider looking for high-quality AI coursework for educators, please reach out!
And for the rest of you, thank you for following our journey and supporting our work. From the bottom of our hearts, we really couldn't have done it without your encouragement.
Now, back to our regular programming!
Infobesity & AI
…society is poised to experience an unprecedented level of infobesity…
Since May 2023, AI-generated content has increased by 1000%. Fake information is 6x more likely to be recirculated, amplifying its reach and consequences. Take, for example, a widely shared image of a Hurricane Helene survivor: completely AI-generated, yet viewed millions of times. Even after the image was confirmed as fake, its influence had already spread, with critics using it to attack the Biden (U.S.) administration's disaster response.
As generative AI becomes more powerful, it's going to become more difficult to differentiate between real and fake content. In fact, the recent use of AI-generated content in the new U.S. administration's campaign signals what could come in the next four years - we may face even more misleading AI-generated information.
To this day, no AI detection tool has managed to keep pace with newer forms of generative AI, and it's unlikely any ever will. With such a massive influx of content, society is poised to experience an unprecedented level of infobesity: the overconsumption of nutrient-poor information.
Infobesity extends beyond fake news. It also includes augmented content on social media, the kind young people often compare themselves to, contributing to what Jonathan Haidt calls an anxiety epidemic.
In this issue, I'm introducing a new big idea to add to those I've shared over the past year on designing an AI-enabled future that avoids repeating the blind spots of our past.
To recap, the first two ideas I introduced:
⚡ Big idea #1: After we identify the right problem, we need to map the potential externalities of solutions.
⚡ Big idea #2: Let's design for the human experience, not the optimization of technology.
Big Idea #3 focuses on an underlying catalyst behind misinformation and infobesity: Logical Fallacies.
Logical fallacies are flawed arguments that rely on invalid reasoning to persuade. They can be deceptively convincing, making arguments appear sound even when they're not.
Logical Fallacies
There are more types of logical fallacies than I wish to quantify or explain, but I've identified four that I believe are particularly dangerous when it comes to AI.
Hasty Generalization fallacy:
This fallacy occurs when a conclusion is drawn based on limited or insufficient data. In the context of AI, I interpret this as biased data selection. Large language models (LLMs) often rely on internet data to generate content, but this data is heavily skewed toward the Global North. When there are "data voids" (topics with little to no available information), AI may either produce inaccurate answers or completely fabricate them.
An example of this is Google's AI Overview recommending glue as a pizza topping due to a data void. When we interact with LLMs, we assume they have access to all the data and that their conclusions must therefore be correct, except when they produce an obvious hallucination. This assumption, however, is a logical fallacy.
Post Hoc Ergo Propter Hoc fallacy:
This fallacy involves assuming causal relationships where none actually exist, often creating misleading conclusions. In the context of AI, I apply this to biased algorithms.
Developers design algorithms and neural networks to connect pieces of data and transform them into useful information. However, the rules they create for pattern matching are not bias-free. Do you remember when Gemini was producing images of only people of color regardless of the context? While that was an extreme and obvious case, every LLM operates on algorithms that inherently contain biases, many of which are far more subtle and harder to detect.
False Premise fallacy:
This fallacy occurs when multiple conclusions are drawn from a false starting point. If you've ever used an LLM, you'll notice that once it begins generating content based on an initial idea, it tends to build upon that point without reevaluating its own logic.
This is why step prompting or iterative prompting is so crucial. Rather than providing the LLM with a large, information-heavy prompt all at once, it's more effective to feed it bite-sized pieces of information and iterate as it responds. This approach helps guide the model logically and avoids compounding errors that stem from a flawed starting point.
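For readers who like to see what this looks like in practice, here is a minimal sketch of iterative prompting, assuming the OpenAI Python SDK; the model name, the lesson-planning scenario, and the step prompts are all illustrative placeholders, not a recommendation of any particular tool or workflow.

```python
# Iterative (step) prompting: feed the model bite-sized context and
# review each reply before adding the next piece, instead of sending
# one giant prompt and trusting whatever premise it latches onto.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": "You are helping a teacher draft a lesson plan."}
]

steps = [
    "Context first: this is for 7th-grade earth science.",
    "The learning objective is understanding the water cycle.",
    "Now outline a 45-minute lesson that meets that objective.",
]

for step in steps:
    messages.append({"role": "user", "content": step})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    print(f"\n--- Reply to: {step}\n{reply}")

    # The human checkpoint: read the reply, correct any flawed premise,
    # and only then let it become part of the running conversation.
    messages.append({"role": "assistant", "content": reply})
```

The point isn't these particular prompts; it's that each intermediate reply gets a human read before it becomes the premise for the next step.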
Appeal to Authority fallacy:
This final fallacy is particularly harmful. It's when we believe something because an authority figure states it, even without sufficient evidence. Whether it's an influencer, a doctor, or even a president, we tend to trust their statements blindly. With AI, we often treat the information provided by these LLMs as authoritative without questioning its validity. However, as demonstrated by the previous three fallacies, AI must be constantly critiqued and challenged.
So what's the big idea here?
⚡ Big idea #3: We must evaluate our logical fallacies before we trust AI outputs.
Logical fallacies are everywhere, influencing the way we think and make decisions every day. With AI, however, these fallacies become even more dangerous due to the sheer volume and scale of outputs. If we don't pause to critically evaluate what AI produces, we risk amplifying flawed reasoning at an unprecedented scale.
Rise of AI Agents
Let's escalate this danger further! Have you seen the recent developments in AI agents? If you're not familiar with agents, here's a quick primer: AI agents are becoming increasingly autonomous, offering enormous potential to transform our lives. They promise to make us more efficient, handle routine tasks, and multiply our abilities. (Check out this unbelievable example of Google's AI Studio with Gemini 2.0, narrated by Mike Peck.)
In the context of education, I can imagine AI agents:
adjusting schedules and activities throughout the school day by observing a student's engagement and alertness levels,
assigning personalized work to students based on outcomes without teacher intervention,
emailing potential employers for internships based on credentials or competencies achieved by students,
adjusting a fiction story to appeal to the reader,
applying for a scholarship program on behalf of a student after evaluating that student's abilities.
…The possibilities are endless.
I'm super hopeful about agents, but with an important caveat.
To successfully leverage agents to multiply our productivity, we must keep logical fallacies top of mind and safeguard our self-governance. Remember my first big idea: we need to map the potential externalities of the solution. If AI agents are going to automate so much for us, we need to ensure they do it accurately and that we avoid cognitive laziness.
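To make that caveat concrete, here is a deliberately tiny sketch of what an agent loop with a human checkpoint could look like, again assuming the OpenAI Python SDK; the single made-up tool, the prompts, and the approval step are hypothetical placeholders meant only to show where human judgment stays in the loop, not how any particular agent product works.

```python
# A toy "agent" loop: the model proposes one action at a time, and a
# human approves or stops it before anything actually runs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_parent_email(note: str) -> str:
    """Hypothetical 'tool' the agent is allowed to use."""
    return f"Drafted a progress-update email: {note}"

TOOLS = {"draft_parent_email": draft_parent_email}

messages = [{
    "role": "system",
    "content": ("Work toward this goal: send a reading-progress update for a student. "
                "Propose exactly ONE next action as 'tool_name: argument'. "
                f"Available tools: {list(TOOLS)}."),
}]

for _ in range(3):  # hard cap so the loop can never run unsupervised forever
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    proposal = response.choices[0].message.content.strip()
    print("Agent proposes:", proposal)

    # Human checkpoint: nothing executes without explicit approval.
    if input("Approve this action? (y/n) ").strip().lower() != "y":
        print("Stopped. The human keeps the final say.")
        break

    tool_name, _, argument = proposal.partition(":")
    tool = TOOLS.get(tool_name.strip())
    result = tool(argument.strip()) if tool else f"Unknown tool: {tool_name.strip()}"
    messages.append({"role": "assistant", "content": proposal})
    messages.append({"role": "user", "content": f"Result: {result}. Propose the next action or reply DONE."})
```

The interesting design choice here is the approval prompt: it's one small way to keep self-governance from being automated away along with the busywork.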
Get Ready for 2025
As we step into 2025, here's my ask of you. As you engage with AI and navigate our new political environment, let's prepare for potential dangers and manipulation by keeping our three big ideas in mind:
Evaluate the potential externalities of the tools you're using and the actions you're taking.
If you're an educator, design for students, not for the efficiency AI offers.
Critically evaluate your own logical fallacies - don't fall into the trap of infobesity.
These ideas are crucial to helping us navigate new worlds critically toward a brighter future full of incredible possibilities.
Wishing you a wonderful holiday season and a strong start to the new year. See you in 2025!
Warmly yours,
Vriti Saraf
I'm Vriti and I lead two organizations in service of education. Ed3 DAO upskills educators on emerging technologies through research-based pedagogies. k20 Educators builds metaverse worlds for learning.