
I've started a remote course for web development, and one of the upcoming sessions is going to be about "Ethical AI"

NoXion

So I was wondering what other people might think of this.

It's a course run by the City of Bristol College, in association with the Code Institute. I will be learning HTML, PHP, JavaScript, and CSS. This will be my first ever foray as a student in the age of AI. The last time I studied was over a decade ago with the Open University. The email describes the session as "How to use AI responsibly to support your learning without crossing into academic dishonesty", which I thought was interesting for multiple reasons. It seems that there's at least a recognition that students will inevitably use AI, and that pedagogy will incorporate AI's use instead of simply forbidding it outright.

I will of course report back here once that session happens, but in the meantime what are everyone else's thoughts?
 
I asked AI what they thought about your question....

I've started a remote course for web development, and one of the upcoming sessions is going to be about "Ethical AI"

ChatGPT said:

That's great! Web development and AI are increasingly connected, and understanding the ethical implications of AI is crucial. In a session about Ethical AI, especially in the context of web development, here are some key areas you’re likely to cover—or should be prepared for:


🔍 What Is Ethical AI?

Ethical AI refers to designing and deploying artificial intelligence systems that:

  • Respect human rights and privacy
  • Avoid bias and discrimination
  • Are transparent and accountable
  • Serve the public good

⚖️ Key Ethical Issues

  • Bias and Fairness: AI can unintentionally reflect societal biases present in training data (e.g., racial, gender bias).
  • Privacy: Collecting user data for training AI must respect data protection laws (like GDPR, CCPA).
  • Transparency: Users should know when they’re interacting with AI and how decisions are made.
  • Accountability: Who is responsible when AI causes harm? Developers? Companies?
  • Accessibility: AI should not exclude or disadvantage any user group.

🌐 Ethical AI in Web Development

As a web developer, you might:

  • Integrate AI-based tools (e.g., chatbots, recommender systems)
  • Handle user data that might be used for AI training
  • Need to explain or design around AI decisions in user interfaces (UX/UI)
Questions to consider:

  • Is the AI model I'm using trained on diverse and inclusive data?
  • Are user interactions with AI clearly labeled?
  • Is user data being collected ethically and securely?

🧰 Tools and Frameworks

  • Fairness tools: IBM AI Fairness 360, Google’s What-If Tool
  • Privacy tools: Differential privacy frameworks, encryption
  • Transparency tools: Model interpretability libraries like SHAP, LIME

🧑‍🏫 Real-World Examples

  • Hiring Algorithms: Biased decisions based on race/gender if trained on historical data.
  • Facial Recognition: Often less accurate on people with darker skin tones.
  • Social Media Algorithms: Promoting divisive content to drive engagement.

📚 Where to Learn More

Here are a few good starting points:

 
I beta tested a course on AI and Ethics run by a Finnish Uni a couple of years ago, just before OpenAI released the public version of ChatGPT.

Thinking back to what we covered, I don't think too much of it is still relevant.
 
Sounds like a super course NoXion, I'll be interested in any updates. Wonder if the 'ethical use of AI in learning' is any more complicated than "tell us if you've used AI in this assignment or we'll mark you down if we can prove you have". Whichever, still interesting and yes please do update us.
 
I asked AI what they thought about your question....

You should have included the description of the session in your input prompt, since I'm getting the strong impression that the session will be more focused on how we should use AI ethically as students, rather than how to construct and design AI systems in an ethical manner, as in the output you've provided. Here:

My input:

I've started a remote course for web development, and one of the upcoming sessions is going to be about "Ethical AI". It's a course run by the City of Bristol College, in association with the Code Institute. I will be learning HTML, PHP, JavaScript, and CSS. This will be my first ever foray as a student in the age of AI. The last time I studied was over a decade ago with the Open University. The email describes the session as "How to use AI responsibly to support your learning without crossing into academic dishonesty", which I thought was interesting for multiple reasons. It seems that there's at least a recognition that students will inevitably use AI, and that pedagogy will incorporate AI's use instead of simply forbidding it outright.

ChatGPT output:
[The formatting is a bit of a mess, since it seems to be done in some form of Markdown and I can't be arsed to convert it properly to XenForo BBCode - NoX]

That’s a really thoughtful reflection—and you’re absolutely right: it is significant that the course is approaching AI use from a nuanced, responsible perspective rather than just banning it outright. It suggests the institution recognises a fundamental truth of our time: AI tools like ChatGPT are now part of the learning landscape, and the real challenge is learning to use them ethically, not avoiding them altogether.

Here are a few angles that might help you further think about or engage with the upcoming session:

---

### 🌐 AI as a Learning Tool, Not a Shortcut

The idea that AI can support your learning—by explaining code, debugging, brainstorming, or helping with research—is empowering. It’s like having a tutor on hand 24/7. But if it's used to replace your work, say by generating entire assignments without understanding them, that’s where academic dishonesty enters the frame.

---

### 🧠 Active Use vs Passive Reliance

You might consider asking yourself during the course:

“Am I learning from what the AI gives me, or just copying it?”

If you’re using AI to clarify things, experiment, or get past blocks, that’s active use. If you’re pasting outputs into assessments without modification or reflection, that’s passive and ethically risky.

---

### 📚 AI and the Future of Pedagogy

It’s interesting that this course is willing to discuss AI at all—it shows a forward-thinking curriculum. Educational institutions are starting to shift from a focus on producing answers to understanding process and reflective practice. AI makes that shift necessary.

---

### 🤖 Bringing Your Experience Into the Discussion

Since you've studied before—especially in the self-directed format of the Open University—you might find it fascinating to compare that experience to this new model. You’ve likely built independent learning habits that AI can now amplify, rather than replace.

You might even bring up some thoughtful discussion questions in the session, like:

  • Where is the line between “support” and “substitution” when using AI?
  • Should students disclose when they’ve used AI to help debug or format code?
  • How might AI use be transparently and ethically integrated into assessment?

---

If you'd like, I can help you prep for that session—maybe with a few talking points, questions to raise, or examples of ethical vs unethical AI use in web dev. Just let me know!

There was a five-part introductory exercise we were required to complete before signing up for the course, and I will admit that I did use ChatGPT a couple of times when I was stuck and couldn't work out where I was going wrong. Since my problems were those of incorrect syntax, it was a simple question of comparing my own code with the examples provided by ChatGPT, and adding in what was missing from mine. So I did find that helpful. But I remain wary that trying to use ChatGPT for solving more complex problems than sophomoric formatting errors might lead to a dead end or worse, actively sabotage my ability to learn.

Thankfully on this course there will be human beings who can actually understand what a problem is and how to help me fix it, instead of having to rely solely on stochastically generated outputs.
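For what it's worth, the comparison workflow described above can be that simple. The function and the slip below are hypothetical, purely to illustrate the kind of syntax error a side-by-side comparison with a working example catches:

```javascript
// Hypothetical example of a beginner syntax slip.
// Broken version (missing closing parenthesis in the parameter list):
//   function greet(name {
//     return "Hello, " + name;
//   }
//
// Corrected version, after comparing against a working example:
function greet(name) {
  return "Hello, " + name;
}

console.log(greet("world")); // "Hello, world"
```

The error message for something like this is often cryptic, which is exactly why lining your code up against a known-good example works better than staring at the error alone.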
 
Out of interest - what kind of job could you potentially get with this qualification?

I'm running very low now and everyone is telling me to reskill. Ethical AI would be great, but what actual job could you get?
 
Out of interest - what kind of job could you potentially get with this qualification?

I'm running very low now and everyone is telling me to reskill. Ethical AI would be great, but what actual job could you get?

This course should enable me to get my foot in the door of a web development career, that is, creating, building and maintaining websites and web-based applications. As I understand it, a lot of mobile apps are basically a web browser in a bespoke wrapper, so there's that sort of thing as well. I have applied for and received full funding for this course, and I don't have to pay anything towards it until I earn 27k p/a or more. My last job was 26k p/a and with my Housing Association rent and lack of dependents, I thought that was a pretty comfortable wage.
 
It's an interesting dilemma.

When you're in your job as a web developer, you'll be using AI to write code all the time. My mate does this and says that code for a particular functionality on a website that used to take him a couple of days to write now takes him about an hour. He gets ChatGPT to write it then he tests it and fixes any bugs, which is what takes the hour.

But clearly when you're a student, it's a bit different. If you just get AI to do everything, what are you learning?

Tricky.
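A minimal sketch of that generate-then-test workflow, with a hypothetical slugify helper standing in for the AI-written code; the function and the checks are illustrative, not anyone's actual project:

```javascript
// Suppose the AI produced this URL-slug helper (hypothetical example):
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into one hyphen
    .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
}

// The hour of human work is mostly writing checks like these,
// then fixing whatever they catch:
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  Already--slugged  ") === "already-slugged");
```

The division of labour is the point: the model drafts, the developer specifies what "correct" means and verifies it.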
 
A bit off piste, but I had to have a meeting with a social worker the other day about my son. My career, or lack of (currently), came up. I mentioned that AI has destroyed my industry which resulted in an interesting discussion about how it could affect hers. Surely social workers can't be replaced. I mean I get that graphic designers like me can, well, have been replaced, we're all fucked. But apparently even social workers are worried about it :eek:

Very very much over it all now.
 
It's an interesting dilemma.

When you're in your job as a web developer, you'll be using AI to write code all the time. My mate does this and says that code for a particular functionality on a website that used to take him a couple of days to write now takes him about an hour. He gets ChatGPT to write it then he tests it and fixes any bugs, which is what takes the hour.

But clearly when you're a student, it's a bit different. If you just get AI to do everything, what are you learning?

Tricky.

Well, when I did a couple of short science courses with the Open University, there were no limits on the usage of a scientific calculator at any point, I still had to understand say, the Hubble relationship and which operations to use and when in order to calculate redshift.

I'm expecting something similar with web development.
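The calculator-style working mentioned above can be sketched as code. The wavelengths and the H0 figure here are illustrative; for small z, v ≈ cz, and Hubble's law v = H0 × d then gives a rough distance:

```javascript
// Redshift from observed vs emitted (rest-frame) wavelength:
//   z = (lambda_observed - lambda_emitted) / lambda_emitted
function redshift(observedNm, emittedNm) {
  return (observedNm - emittedNm) / emittedNm;
}

const C_KM_S = 299792.458; // speed of light, km/s
const H0 = 70;             // Hubble constant, km/s per megaparsec (approximate)

// Illustrative values: H-alpha line (rest 656.3 nm) observed at 662.8 nm
const z = redshift(662.8, 656.3);
const velocity = C_KM_S * z;       // recession velocity in km/s (small-z approximation)
const distanceMpc = velocity / H0; // rough distance in megaparsecs
```

The calculator only does the arithmetic; knowing that this chain of operations is the right one is the part the course still has to teach you.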
 
Well, when I did a couple of short science courses with the Open University, there were no limits on the usage of a scientific calculator at any point, I still had to understand say, the Hubble relationship and which operations to use and when in order to calculate redshift.

I'm expecting something similar with web development.
I think the difference there is that you use a scientific calculator to plug in numbers, but in that example, you could use ChatGPT to calculate redshift for you and also ask it to show all its workings. You can check the workings and it may well make errors that a scientific calculator would never make, but if it's been trained on that particular set of equations, it will probably get its method right.

FWIW I've been training AI models recently in doing maths and science, and my top tip for anyone using it is to check all the calculations - even simple sums - as that is where it often goes wrong, even when it has a good method. If you are using AI to solve problems, you also need to have a scientific calculator to hand. LLMs are spectacularly bad at counting still. Nobody quite knows why.
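That checking habit can itself be code. A sketch with made-up figures, showing a claimed total being recomputed rather than trusted:

```javascript
// Figures here are invented for illustration: a model asserts a total,
// and we recompute it instead of taking it on trust.
const claimed = 8217;                      // total the model asserted
const lineItems = [1299, 2450, 3575, 993]; // the numbers it was summing

const actual = lineItems.reduce((sum, n) => sum + n, 0);
if (actual !== claimed) {
  console.log(`Model said ${claimed}, calculator says ${actual}`);
}
```

In this made-up case the true sum is 8317, so the check fires on exactly the sort of small arithmetic slip described above.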
 
That sounds like a really interesting session. I’m doing a course on business and law and I really hope they address AI.
 
I think the difference there is that you use a scientific calculator to plug in numbers, but in that example, you could use ChatGPT to calculate redshift for you and also ask it to show all its workings. You can check the workings and it may well make errors that a scientific calculator would never make, but if it's been trained on that particular set of equations, it will probably get its method right.

FWIW I've been training AI models recently in doing maths and science, and my top tip for anyone using it is to check all the calculations - even simple sums - as that is where it often goes wrong, even when it has a good method. If you are using AI to solve problems, you also need to have a scientific calculator to hand. LLMs are spectacularly bad at counting still. Nobody quite knows why.

I generally wouldn't trust an LLM to reliably do mathematics. They seem to be rather poor at any problem that is not either A) simple enough to check with a calculator, in which case why bother with an LLM, or B) common enough to be found all over the internet and thus frequently cropping up in their training data. Even under the best conditions, since output is determined stochastically rather than deterministically, I would still expect the occasional mistake to creep in even with the most common and obvious mathematical problems.

Which is strange really, I've heard mathematics being described as a language, but I guess that actually using it effectively requires a degree of comprehension that is beyond the capabilities of a typical LLM. Give one a novel problem, and I would generally expect it to flunk.
 
I generally wouldn't trust an LLM to reliably do mathematics. They seem to be rather poor at any problem that is not either A) simple enough to check with a calculator, in which case why bother with an LLM, or B) common enough to be found all over the internet and thus frequently cropping up in their training data. Even under the best conditions, since output is determined stochastically rather than deterministically, I would still expect the occasional mistake to creep in even with the most common and obvious mathematical problems.

Which is strange really, I've heard mathematics being described as a language, but I guess that actually using it effectively requires a degree of comprehension that is beyond the capabilities of a typical LLM. Give one a novel problem, and I would generally expect it to flunk.
Yes. I've been paid to create maths problems that fool LLMs, then I have to try to guide them to the right answer. It's not that difficult to fool them, and I think they're making a massive category error in trying to teach AI to reason, which ostensibly is what I'm doing. It doesn't reason. It pattern-matches.

When it has had the appropriate training, it can give the impression of being very clever, solving some pretty hard maths. Then it makes a crass error and you remember that it's not reasoning at all.
 
The university I am associated with came out with its AI policy last year. There are a lot of boring facets to it, including considerations related to GDPR (because most AI tools take the data into a server in the US and the processing of that data doesn’t meet GDPR standards). You can summarise it broadly as “feel free to use AI as part of your idea generation process and to help you do your literature review, but seek advice from the relevant university department before trying to use it for any kind of data analysis, and don’t pass off AI output as your own work. Also, please note that it generates bullshit, so don’t rely on it.”
 
Sounds like a super course NoXion, I'll be interested in any updates. Wonder if the 'ethical use of AI in learning' is any more complicated than "tell us if you've used AI in this assignment or we'll mark you down if we can prove you have". Whichever, still interesting and yes please do update us.
Seconded, this is being taught all over the place. Please also share what you can by way of explainers; they don't age well and are often best triangulated.
 