
Who is afraid of ChatGPT?



By Catherine Lalanne

These last few weeks, I have felt a bit like a bloated LLM being overfed bite-size analyses of ChatGPT and other Generative AI on LinkedIn and across the media. So I have decided to try to take back some control and write down my latest thoughts, drawing on my experience as a product manager in the educational software space and as a concerned citizen of the world.


First of all, these new Generative AI tools, such as ChatGPT as a chatbot or for producing text and analysis, and DALL-E for generating images from a text prompt, are AMAZING.


Prompt: I need a quick intro slide to present to my CFO why we need the budget for our new data marts. Hi ChatGPT, explain to somebody specialized in Finance when you should use a data lake rather than a data mart and vice versa?


“As a finance professional, the choice between a data lake and a data mart would depend on the specific needs and goals of the organization. Here are some general considerations:


[good, accessible explanation here…]


It's worth noting that some organizations use both a data mart and a data lake, with the data lake serving as a staging area for raw data before it's transformed and loaded into the data mart. This allows for a more flexible approach to data storage and analysis, while still maintaining the benefits of a pre-aggregated data mart.”


Great, that’s a copy-paste into the slide deck!
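
For anyone who wants to go one step beyond the slide, here is a minimal sketch of the "lake as staging area, mart as pre-aggregated reporting table" pattern ChatGPT describes above. It assumes a Python/pandas setup, and every table and column name is made up purely for illustration.

import pandas as pd

# Raw, unaggregated events as they might land in a data lake
# (in practice these would be Parquet/CSV files read straight from storage).
raw_events = pd.DataFrame({
    "order_id":   [1, 2, 3, 4],
    "order_date": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-02-02", "2023-02-15"]),
    "region":     ["EMEA", "EMEA", "AMER", "AMER"],
    "amount_usd": [120.0, 80.0, 200.0, 50.0],
})

# Transform-and-load step: pre-aggregate into a finance-friendly data mart table,
# e.g. monthly revenue per region, ready for reporting dashboards.
revenue_mart = (
    raw_events
    .assign(month=raw_events["order_date"].dt.to_period("M"))
    .groupby(["month", "region"], as_index=False)["amount_usd"]
    .sum()
    .rename(columns={"amount_usd": "monthly_revenue_usd"})
)

print(revenue_mart)

Nothing here is specific to any particular warehouse product; the point is simply that the lake keeps the raw grain of the data while the mart keeps the pre-aggregated, report-ready view.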


It is also obvious that ChatGPT will make writing essays for homework far too easy, and a lot of schools have already banned its use in class. In fact, the list of exams ChatGPT has “passed” is growing: OpenAI just announced GPT-4, an updated model that can pass everything from a bar exam to AP Biology.


Though banning usage in some circumstances is completely understandable, consider another mind-shattering tool which forced a complete review of how math was taught: the calculator!


And just imagine if students today, having mastered the principles of calculation on paper, were not also taught how to use calculators?

ChatGPT also has a great list of potential use cases for incorporation into the K-12 Education market, all of which I am looking into. And, obviously, OpenAI, the company behind ChatGPT, has anticipated the risks and is pushing responsibility squarely onto the application providers, as when I asked: what are the risks in using ChatGPT now for Tutoring and Mentoring K-12 students?

“As with any technology, there are potential risks associated with using ChatGPT for tutoring and mentoring K-12 students. Here are some of the risks to consider:


  • Privacy and Security: There is always a risk of data breaches or cyberattacks when using any online service, including ChatGPT. To mitigate this risk, it's important to ensure that proper security protocols are in place and that sensitive student data is kept confidential.

  • Accuracy and Bias: While ChatGPT is capable of providing accurate and helpful responses to student questions, there is still a risk that it may provide inaccurate or biased information. This is because ChatGPT's responses are based on the data it has been trained on, which may contain biases or inaccuracies. It's important to monitor ChatGPT's responses and provide corrections or additional information as needed.

  • Overreliance on Technology: There is a risk that students may become overly reliant on ChatGPT or other technology for their learning. This can lead to a lack of critical thinking skills and the inability to solve problems independently. It's important to use ChatGPT as a tool to enhance learning, rather than a replacement for traditional teaching methods.

  • Lack of Human Interaction: ChatGPT is an AI-powered tool, and as such, it cannot provide the same level of human interaction and emotional support as a human tutor or mentor. While it can be a helpful supplement to human-led instruction, it's important to ensure that students still have opportunities for face-to-face interaction with teachers and other students.

In summary, while ChatGPT can be a useful tool for tutoring and mentoring K-12 students, there are potential risks that need to be considered and addressed to ensure its effectiveness and safety.”


Well, the point regarding Accuracy might be a bit of an understatement!

To my question: what steps should I take to calculate 12 multiply by 6 on paper?

[yes, there is a typo, but I could well imagine a 5th grader making the same mistake]


“Write the number 12 on the top of the paper and the number 6 underneath it.


Multiply 6 by the digit in the ones place of 12 (which is 2). Write the result (which is 12) underneath the line, starting from the right-hand side.

1 2

x 6

-----

1 2”


Grand.


Oops..


(I did take out my calculator to check, as I, the bloated human version of the language model, thought I might be getting a bit wonky.)
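
For the record, here is a minimal sketch of the paper procedure, written in Python rather than on paper, including the carry step that the answer above skips. The function name and structure are mine, not anything ChatGPT produced.

# Long multiplication of a number by a single digit, the way it is done on paper:
# work through the digits from right to left, writing one digit and carrying the rest.
def long_multiply_by_digit(number: int, digit: int) -> int:
    digits = [int(d) for d in str(number)][::-1]   # ones digit first: 12 -> [2, 1]
    result, carry = [], 0
    for d in digits:
        # pass 1: 6 * 2 = 12 -> write 2, carry 1; pass 2: 6 * 1 + 1 = 7 -> write 7
        product = d * digit + carry
        result.append(product % 10)
        carry = product // 10
    if carry:                                      # any leftover carry becomes the leading digit
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(long_multiply_by_digit(12, 6))   # 72, not 12

Running it gives 72, which is what my calculator said too, and which is exactly the step the chatbot's answer stopped short of.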


This shows what ChatGPT and the other LLMs are: LANGUAGE models.


Unlike us, and even actual parrots, LLMs learn by statistically replicating the content they are shown. We and the parrots build world models from a whole range of experiences, starting as babies or chicks: language, pain feedback from foolishly jumping off a 3m-high wall, and mathematical models, all used to make sense of the world and to communicate.


And thanks, Yann LeCun, for your post on Facebook; you made my day! “Looks like calling LLMs "stochastic parrots" might constitute an insult to parrots 🦜” https://news.northeastern.edu/2023/04/21/parrots-talking-video-calls/


So, on to the bad news side:


Bias, as covered by Is ChatGPT bias? Ask OpenAI.

Intellectual Property: our friends who work in the creative arts (writing, painting, music…) have long had problems making a living from their skills. Most musicians make most of their money from playing live, for example. Imagine when you can “create” a new song just by asking the latest model to write one in the style of Nick Cave. Who gets the royalties?

Though I agree with Nick Cave that the result here was “a grotesque mockery of what it is to be human” (‘This song sucks’: Nick Cave responds to ChatGPT song written in style of Nick Cave | Nick Cave | The Guardian), I can see a lot of easy-listening radio stations and video soundtrack companies not having such qualms.


It is great to see companies like Shutterstock preemptively taking steps to support a “responsible AI-generation model that pays artists for their contributions” and investing “in mitigations against the biases that may be inherent in some of our datasets, and we are continuing to explore ways to fairly depict underrepresented groups” (Shutterstock Introduces Generative AI to its All-In-One Creative Platform - Press and Media).


However, in an unregulated world, who would make the most profit?

MASSIVE upscaling of dubious / misleading content

This is the last, but not least, concern I want to highlight here.

LLMs can be extraordinarily convincing: they can pass exams and get most of the content ready for journalists preparing an article, but they can also be plain wrong.


In a very convincing way.


Currently, small armies of contractors check content to cover the liability of Facebook, Google and, not so much anymore, Twitter. What will happen when an exponentially growing amount of convincing content is produced?

What if those LLMs are trained with content biased towards the aims of a country at war, or those of a political party with a hidden agenda?

I don’t believe a 6-month moratorium will solve those risks; however, it is urgent to put in place reasonable regulations that ensure transparency and responsibility, such as the proposed EU AI Act, and to educate our communities about the risks.


In summary, the opportunities provided by ChatGPT, as well as the rapidly expanding list of Generative AI tools, are amazing and will dramatically change the way we work and coalesce as a society. However, regulation, whether at the EU or individual country level, is still in its infancy when it comes to evaluating the possible risks.


As members of Women in AI, we are well placed to understand the risks from different points of view and to influence the decision makers: join us at Global Network | Women in AI (WAI)!


The views expressed here are my own and do not necessarily represent those of HMH.


