
AI and education

4 replies
182 views
3 watchers
0 likes

AI programs are infiltrating the education system, and teachers are confronted with plagues of plagiarists claiming the work of bots as their own - in other words, AI-assisted cheating. While schools will very likely adapt to these new technologies, finding new ways to spot fake and fraudulent student work (perhaps by using the very same tools to detect it), the trend does raise questions about how we think about education.

What the prevalence of AI-generated work shows is that we're approaching education all wrong. We're too focused on signs of achievement and evaluation in schools and not enough on actual learning and the development of the ability to think. There is little value in going to school if you graduate without actually having learned anything, and cheating in this way dilutes the value of the degree. Using AI is a tempting shortcut, but it means you've outsourced your own personal development to robots and have little interest in building the ability to think for yourself.

Perhaps this is fine. Perhaps we've placed too much emphasis on universal education, forcing students to participate when they (and often their parents) fail to see the value in receiving an education. There will always be those who do recognize the value, and perhaps we should focus on them and leave the others - those who would cheat instead of doing the work - to their own devices. As the saying goes, the world needs ditch-diggers, too (at least until automation and AI absorb the last of those jobs). And maybe we ought to put less stock in the sheet of paper they give you when you graduate, and more in the demonstrable abilities of the individual they give it to, regardless of the degree attained.

Don't believe everything that you read.

Active Ink Slinger
0 likes

Multiple universities are facing lawsuits after punishing students for using AI with no proof beyond an AI bot claiming the paper was AI-generated.

The problem is that teachers are, ironically, relying on AI rather than their own judgment to sniff out generated papers, but the detection tools themselves are quite unreliable. Students then have little to no procedural recourse to defend themselves. The computer says you cheated, so you did.

In lab settings where researchers had students write papers in front of them, the AI anti-cheating tools still flagged several papers as AI-generated. Experts noted that a formal tone and good grammar would often result in false flags.

To your other points, the education system is poorly optimized and more concerned with standardized testing than education, vocational training, or life skills. I'm an accountant and a former engineer, and I have never used 80% of the math I was taught from middle school onward. Students are drilled on memorizing the quadratic formula but not taught how to find healthcare, change a tire, file taxes, or other life skills.

AI will thrive in environments that focus on jumping through hoops and meeting certain metrics. Students are disillusioned by mandated testing and subjects they don’t care about and don’t need. AI plagiarism will become more prominent and harder to detect in these conditions.

Active Ink Slinger
0 likes

This is such an interesting topic!

The rise of AI-assisted cheating really underscores the need for a shift in educational focus. Instead of emphasising rote memorisation and standardised testing, we should prioritise critical thinking, creativity, and problem-solving skills. These are areas where AI cannot compete.

AI highlights the flaw in the didactic method of education (a topic long discussed by men in togas in Philosophy of Education circles). Students in schools are subject to a method centred on teachers imparting information for students to remember. Written exams become a ranking of memorisation, not understanding, with students often getting their information from a single source and absorbing it as fact. The alternative is the dialectical method, which centres on dialogue and the exchange of ideas to teach reasoning. Most of the benchmarks in a dialectical method would be assessed through verbal examination. Of course, not all subjects lend themselves easily to this approach, but outside of STEM there's really no reason why people are graded through papers rather than questioning, particularly now that AI can transcribe the discussions for the record.

I really hope AI contributes to this conversation; that would make all the cheating serve the greater good 😇

"A dirty book is rarely dusty"
Sexual Connoisseur
0 likes

I would hate going into surgery and finding out that my doctor had received his/her medical doctorate by using AI.

Please read our latest story.

Lorraine's Mansion of Lust

"insensitive prick!" – Danielle Algo
0 likes

Some teachers I know use AI tools to help them write better lessons and tests, to find better examples, etc. In the process, bad teachers will quickly get exposed, because AI has a tendency to 'hallucinate' and may make up some 'truths' here and there. A good teacher will catch those errors, in which case AI can be a tool that helps improve education.

Since we're in the initial stages of general-purpose AI being widely available, current teachers will have had a non-AI education themselves. Students do not yet have the knowledge required to spot AI errors easily. And with a world wide web that will increasingly be filled with AI-generated content, it may get harder and harder to find trustworthy sources that can validate (or invalidate) AI-generated content.

I think the future of AI in education is that we need to teach students how to benefit from AI in ways that help them develop their own thinking skills. What are the benefits, beyond outsourcing, and what are the downsides and dangers? Teach them how to ask the right questions, how to instruct AI in ways that go beyond copy/pasting the assignment into the AI's input, and how to validate AI output.

Tools to distinguish AI-generated content from non-AI-generated content will have little effect, I reckon, as the whole purpose of AI is that it learns and therefore evolves. So it shouldn't be too hard to train one AI tool based on what another tool will or will not flag as AI-generated.


===  Not ALL LIVES MATTER until BLACK LIVES MATTER  ===