
ChatGPT

acaw2015

Brilliant_Rock
Premium
Joined
Jun 30, 2015
Messages
911
What are your thoughts regarding chat gpt? In my country this is being discussed as one of the most important news, people saying it will change our lives and work completely and soon, today. I work at a university and we have discussed this at length this week, trying to understand what it is in more detail and the implications regarding examination but also regarding what our students will need to learn in this new future.
I would love to hear your thoughts and how this is being discussed where you are.
 

kenny

Super_Ideal_Rock
Premium
Joined
Apr 30, 2005
Messages
33,344
I'm sure it will have benefits, but overall, in the long run, it will do great harm to society, as have TV, the internet, social media and smartphones.
Many will be laid off, while the 1% get even richer - yet again. :nono:
 

Matata

Ideal_Rock
Premium
Joined
Sep 10, 2003
Messages
9,069
I think AI is an existential threat to humanity. From the developer's website, bolding is mine:
"Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly."

We are incapable of the bolded part.
 

kenny

Super_Ideal_Rock
Premium
Joined
Apr 30, 2005
Messages
33,344
So far the only good thing I've heard about AI is its effectiveness for radiology.
When examining X-rays for the earliest signs of cancer, AI finds more cancers than humans find.
Many lives will be saved.
That's very good.

AI has the entire world of knowledge at its command, and at blinding speed.
A human doctor? Not so much.
 

acaw2015

Brilliant_Rock
Premium
Joined
Jun 30, 2015
Messages
911
I am worried. I do see benefits, such as easier access to knowledge. But who decides what that knowledge is? Will the ease of access make people take shortcuts, disregarding critical thinking?
I asked ChatGPT a few questions yesterday and got many interesting answers. But I also got one completely wrong answer. When I questioned the answer, the AI apologized and said it had looked into the question further, but then repeated the wrong answer. When asked for sources, it cited a few false sources, sources that do not exist, completely made up by the AI. Every answer was written in a convincing way and in a tone that suggested it was the only truth. I am sure the answers will get better over time, but the way it describes things as if they were the only correct answer gave me chills. I realise that this AI is not really made for searching for information, but it sure makes it seem like it.
 

qubitasaurus

Brilliant_Rock
Premium
Joined
Dec 18, 2014
Messages
1,655
OpenAI hired Scott Aaronson to help build in ways of detecting AI-generated text, so perhaps the education system will later be able to validate submissions (though perhaps not, as Google apparently has its own large language model which is even more powerful than ChatGPT, just with longer latency for model inference. And there are apparently significant differences between purely AI-generated text and AI/human co-written text, so it is hard to see how you'd catch all cases). I'm mildly curious what he will do.

I can't immediately see how you watermark AI-generated text; you'd need to embed a signature which remains identifiable even after the text has been locally abridged or rewritten (as people will rewrite parts themselves). Most ways of signing text -- like a hash function in cryptography -- are highly sensitive to small perturbations in the text, because they are designed to detect tampering (imagine you are trying to certify that none of the digits in a telegraphic transfer request have been altered -- that's what a hash function is for. Here you want the opposite of that technology). But Scott is very sharp, and you might imagine he will be able to build some kind of large-scale pattern into the text.
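A minimal sketch of the hash-sensitivity point above, using Python's standard `hashlib` (the transfer strings are just made-up examples): changing a single character produces a completely unrelated digest, which is why a hash can certify integrity but cannot survive even light rewriting the way a text watermark would need to.

```python
import hashlib

def digest(text: str) -> str:
    """Return the SHA-256 hex digest of a UTF-8 string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Transfer $1,000 to account 12345."
tampered = "Transfer $9,000 to account 12345."  # one character changed

# The two digests share no visible structure: a one-character edit
# scrambles the whole output (the "avalanche effect"). Good for
# detecting tampering, useless for surviving a paraphrase.
print(digest(original))
print(digest(tampered))
```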

Regardless, the education system will probably have to embrace living with it, because it really is the university's role to lead technological advancement -- and you can't do that while hiding under a log hissing at the new gadget in town.

The technology is large language models, so it's really for reproducing human discourse rather than correctly answering questions a la Wikipedia. Although it is now connected to the internet, so maybe that is about to change.

Interestingly, it is 30 cents per oracle call -- per user call to the API -- which means a lot of electricity is going into running that thing. It's also what's called a transformer model, which I think means it tends to re-read the entire conversation thread from the beginning each time, assign an attention score to certain parts of the conversation, then use these attention-score and sentence-fragment pairs to generate an output. Seems like a terrible model -- like we almost went backwards technologically. I was told that, already, training the networks up to more parameters brings significant slowdowns. It almost feels like an interim technology as we search for something better.
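The attention-score mechanism described above can be sketched in a few lines. This is a toy scaled dot-product attention over a tiny hand-made "context" (the 2-dimensional vectors are illustrative, not real embeddings): every output is a weighted mix of the whole context, which is why a transformer revisits all previous positions for each new token.

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over the full context.

    Scores every key against the query, normalizes the scores into
    attention weights, then returns the weights-blended value vector.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # one attention score per context position
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A three-token context with toy 2-dimensional vectors.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(attention([1.0, 0.0], keys, values))
```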
 

Petalouda

Guest
I teach sophomore-level college courses, and I failed four students last session for blatant use of ChatGPT. I typed in my own prompt, and what ChatGPT produced was copied word for word in the students’ work. However, AI is rapidly progressing, along with ‘tips on how to use’ it. I have a student this session whose writing has all of a sudden become extremely academic. However, I can’t prove they used AI this time.
 

Lookinagain

Ideal_Rock
Premium
Joined
May 15, 2014
Messages
4,613
I teach sophomore-level college courses, and I failed four students last session for blatant use of ChatGPT. I typed in my own prompt, and what ChatGPT produced was copied word for word in the students’ work. However, AI is rapidly progressing, along with ‘tips on how to use’ it. I have a student this session whose writing has all of a sudden become extremely academic. However, I can’t prove they used AI this time.

I'm glad to hear that you failed them. If passing college level courses only requires someone to type a prompt into a computer, they should save their tuition money and forget about college as it seems they wouldn't be learning anything anyway. Just my opinion of course.
 

Avondale

Brilliant_Rock
Joined
Oct 31, 2021
Messages
1,083
I find it fascinating to watch what I can only call a literal explosion of scientific advancement in real time. It's happening so fast. The future is now.

What my friends and I discuss most about ChatGPT is the possibility of AI sentience. It's a purely theoretical discussion, because if one thing is certain, it's that neither ChatGPT nor any similar language model is sentient. It doesn't think. It works on the basis of association - it associates words with other words. That's not only all it does, it's all it can do. The newest tools are so advanced and so well trained that they're capable of producing an illusion of intelligence, but that's all it is.

When we consider sentience and intelligence, we have to take into account that we discuss these topics within the scope of our existence as, and our knowledge of, carbon-based life. We, as biological organisms, have senses, ways to experience the world in different ways. There's hearing, there's smell, there's sight. When I say "an apple", you associate the apple with the way it looks, its shape, its colour. But also its smell and flavour, its weight, how hard it is compared to a soft peach, for example.

NLP tools, however, are not carbon-based life. They do not possess senses or ways of experiencing the world aside from data, words. ChatGPT can associate a word with other words only; it has no way of creating different and additional associations. It's kind of like trying to explain colour to a person who has been blind all their life. The only thing you can use to explain one word for colour is a bunch of other words, which will undoubtedly provide the blind person with an abundance of information, but it won't enrich their understanding of colour in the slightest.

Of course, we can always dive deeper into the philosophical questions of what intelligence is, what life is... We only know carbon-based life, but does that mean that silicon-based life is impossible, even if we can't quite imagine it? Maybe true AI is fundamentally different from human intelligence, and trying to equate one to the other, using human intelligence as the criterion for AI, is a flawed concept from the start?

All of this is truly fascinating and on one hand I enjoy observing it. We're right at that point where technological advancement in the field picks up and accelerates progressively every single day. Only five years from now our life might be completely different due to AI advancements.

On the other hand, I'm a little bit worried about where all of this is going to take us. The estimated effect of different AI tools on jobs is of troubling proportions. And let's not forget that these tools are commercial projects - they're products offered for a price. We're headed headlong towards withdrawing work from human professionals so that we can have that same work done by AI at a fraction of the cost. The result will be that resources are taken away from tens or even hundreds of millions of people and redirected to a few individual corporate entities. And if that doesn't scream destruction of the middle class and dystopia to you, I don't know what would.
 

Ionysis

Brilliant_Rock
Joined
Oct 1, 2015
Messages
1,936
We are in deep discussions at my company about the potential uses of this. I work in a bank, and one of my responsibilities is driving digital advancement and process efficiency. Ironic, really, as I still read paper books and I find the whole concept of AI genuinely terrifying. ChatGPT makes me want to run around screaming “Skynet is coming! Beware the Terminators!” Don’t tell my boss.
 

elizat

Ideal_Rock
Joined
Mar 23, 2013
Messages
4,000
I teach sophomore-level college courses, and I failed four students last session for blatant use of ChatGPT. I typed in my own prompt, and what ChatGPT produced was copied word for word in the students’ work. However, AI is rapidly progressing, along with ‘tips on how to use’ it. I have a student this session whose writing has all of a sudden become extremely academic. However, I can’t prove they used AI this time.

I actually see this as a huge problem for both high school and college level academics. Honestly, I could even see it being a problem in graduate level programs. I know that there is software to look for cheating and failure to attribute sources now, but I think that there are going to be a lot of students that try to use AI in order to get around doing their own work. I think failing them was the right thing. I think it sends a message that it's not going to be tolerated. If you had given them the chance to redo the assignment, it would just send the message that if you get caught, there are consequences, but not bad consequences. It's incredibly academically and intellectually dishonest.

Plus, how are these people actually going to function when they have to do their own work? Are they just going to continue to use AI?
 

telephone89

Ideal_Rock
Premium
Joined
Aug 29, 2014
Messages
4,224
I think it's going to be the way of the future. I'm not sure how I feel about that yet (I'm pretty anti AI/robot overlords lol). I do agree that it poses issues with class work, but I can also see the potential as a time saver. I don't necessarily see it as taking away human jobs, but more so humans working + AI to be more efficient. For some reason I think of paralegals when I think about the future of this. Lawyers are not going to get rid of paralegals and just use chat AI for their work. But, I do see paralegals who use AI as being able to support more lawyers, get through more work, do more research, drafting documents, etc.

I also watched a small segment on the news, and it talked about how this will also create new jobs, ones that we haven't even thought of yet. Before we had computers, I'm sure people couldn't even imagine we'd have some of the jobs we do today. So even if it takes away some jobs, it could also add more in a different way.
 

elizat

Ideal_Rock
Joined
Mar 23, 2013
Messages
4,000
I think it's going to be the way of the future. I'm not sure how I feel about that yet (I'm pretty anti AI/robot overlords lol). I do agree that it poses issues with class work, but I can also see the potential as a time saver. I don't necessarily see it as taking away human jobs, but more so humans working + AI to be more efficient. For some reason I think of paralegals when I think about the future of this. Lawyers are not going to get rid of paralegals and just use chat AI for their work. But, I do see paralegals who use AI as being able to support more lawyers, get through more work, do more research, drafting documents, etc.

I also watched a small segment on the news, and it talked about how this will also create new jobs, ones that we haven't even thought of yet. Before we had computers, I'm sure people couldn't even imagine we'd have some of the jobs we do today. So even if it takes away some jobs, it could also add more in a different way.

I'm a lawyer. There are already AI-based tools that we use now. There is software within research databases that combs through your work and makes sure that your citations and quotations are correct, etc. It can also suggest additional cases to add; as someone who uses the software, about 50% of the time it makes a suggestion worth considering, and the rest of the time not so much. Even when a suggestion is worth considering, it's not always applicable or correct. At least for me.

Before that, and even before online research, you had to use physical books. I don't think that AI is going to gut the legal industry as some people predict. I do think that in a transactional practice, where someone is doing the same form again and again, there could be an impact and less need for staffing. However, there are already programs we can use that take existing forms and adapt them to different cases. It is possible that firms with only one to three lawyers may not need a full-time assistant if AI gets very good at checking filings and things like grammar. But those tools already exist and people do use them. The legal industry has been streamlining over time anyway: years ago you would have a ratio of one attorney to one assistant, then it became common to have two attorneys per assistant, and now there are many law firms where one assistant supports five or more attorneys. I'm not saying that it works well, because it doesn't, but we're already headed in that direction from an efficiency standpoint.

I tend to view it as a more complementary product, rather than a replacement for the legal industry in particular.

There are some areas within the profession that I think are going to end up being streamlined, and staffing could be one of those. I also think that attorneys who only do discovery or document review are going to find there are fewer jobs, because firms will use more technology, versus people, to analyze materials. It will still require an assessment of the materials and documents to determine their relevancy and impact on the case. But instead of having an entire department or room of document-review attorneys, you might only have a couple who use the technology.
 

Matata

Ideal_Rock
Premium
Joined
Sep 10, 2003
Messages
9,069
When I see the assault on education from parental and government interference - book burnings and bannings, the whitewashing of history, the intent to ensure that no kid feels bad about anything done by the forefathers - my biggest concern is that AI will make us even dumber than we're already trying to make ourselves.

What happens to critical thinking, which I think is necessary for our survival and evolution, when we depend on AI to do most of the thinking for us?
 

telephone89

Ideal_Rock
Premium
Joined
Aug 29, 2014
Messages
4,224
I'm a lawyer. There are already AI-based tools that we use now. There is software within research databases that combs through your work and makes sure that your citations and quotations are correct, etc. It can also suggest additional cases to add; as someone who uses the software, about 50% of the time it makes a suggestion worth considering, and the rest of the time not so much. Even when a suggestion is worth considering, it's not always applicable or correct. At least for me.

Before that, and even before online research, you had to use physical books. I don't think that AI is going to gut the legal industry as some people predict. I do think that in a transactional practice, where someone is doing the same form again and again, there could be an impact and less need for staffing. However, there are already programs we can use that take existing forms and adapt them to different cases. It is possible that firms with only one to three lawyers may not need a full-time assistant if AI gets very good at checking filings and things like grammar. But those tools already exist and people do use them. The legal industry has been streamlining over time anyway: years ago you would have a ratio of one attorney to one assistant, then it became common to have two attorneys per assistant, and now there are many law firms where one assistant supports five or more attorneys. I'm not saying that it works well, because it doesn't, but we're already headed in that direction from an efficiency standpoint.

I tend to view it as a more complementary product, rather than a replacement for the legal industry in particular.

There are some areas within the profession that I think are going to end up being streamlined, and staffing could be one of those. I also think that attorneys who only do discovery or document review are going to find there are fewer jobs, because firms will use more technology, versus people, to analyze materials. It will still require an assessment of the materials and documents to determine their relevancy and impact on the case. But instead of having an entire department or room of document-review attorneys, you might only have a couple who use the technology.

Yes, this is basically how I see it. It won't eliminate everything, but it sure makes things easier to work with. I've not heard of the current system (not a lawyer or paralegal!) but it does sound similar, and with improved efficiency and capability it could be so powerful.

I think it sounds a bit naive to say humans + AI working together, but honestly technology has come so far already. We are communicating with people across the world in seconds using the internet. I can google any recipe and watch a YouTube video of a chef making it. It's wild to think of what could happen in the future. I also think about things like: what makes us human? How can you design, and thrive in, an environment that only a human can? Is writing a paper specifically a human thing? (I don't think so.) How can we change that perspective and take advantage? Idk. I think there is a lot of change coming in the future. I don't think I'm smart enough to be on the precipice though haha.
 

Cinders

Shiny_Rock
Joined
Jul 30, 2021
Messages
446
These are interesting times. Lots of echoes of 1984 (the book) and the Terminator movies.

It's amazing how much has changed as a result of just the iPhone/smartphone. AI is fascinating and I'm sure we'll get to a point where we barely remember what it was like before it existed. Whether that ends up being a good thing or a bad thing is yet to be seen.

As far as job displacement, I can see the application of it in the legal field but as an additional tool, as @elizat said. I don't see it as a true substitute for lawyers because the necessity of analytical, critical thought pertaining to the specific case will still be required. The role of the paralegal will probably change fairly significantly.

It could become a major disruptor in the world of writing. And, it's already an unwieldy beast in the world of education. I feel for those who have to deal with the immediate consequences of it during this strange time of transition.
 

acaw2015

Brilliant_Rock
Premium
Joined
Jun 30, 2015
Messages
911
Interesting how a few of you described law and education, these actually being my fields.

Sorry for my bad English... but I feel quite at a loss here. Teaching law at a higher level means making students able to discuss different solutions, ethics and critical thinking. That is not always easy to show in a written exam, because it is knowledge that requires thinking and reflection over time. This is why we prefer the students to produce essays, at least a few times, during their studies. I am not sure that will actually be possible as a form of examination in the future. Someone suggested letting the students do oral exams instead... But those can only show that type of knowledge superficially. I am afraid that the available forms of examination will decrease the quality of the education. We will reduce what students learn to what is possible to test in a written exam.

On top of that... Who decides what underlying values the AI will have when writing its answers and texts? The company owning the AI?
 

Cerulean

Ideal_Rock
Joined
Sep 13, 2019
Messages
5,078
I actually see this as a huge problem for both high school and college level academics. Honestly, I could even see it being a problem in graduate level programs. I know that there is software to look for cheating and failure to attribute sources now, but I think that there are going to be a lot of students that try to use AI in order to get around doing their own work. I think failing them was the right thing. I think it sends a message that it's not going to be tolerated. If you had given them the chance to redo the assignment, it would just send the message that if you get caught, there are consequences, but not bad consequences. It's incredibly academically and intellectually dishonest.

Plus, how are these people actually going to function when they have to do their own work? Are they just going to continue to use AI?

Short answer to bolded part = yes

long answer = i work in tech and the general saying going around is "you won't lose your job to AI, but you'll lose your job to someone who uses AI"

i think this is 100% true. i've been using AI plug-ins to work more efficiently for 2ish years...actually they were all recently disabled on company computers for privacy reasons (makes sense, but big blow to my efficiency)

given that schools already cost a bajillion dollars, they can easily issue laptops with restrictions. you can just gatekeep certain apps and websites. easy peasy. yes, students can get their own laptops...but students should be failed for writing with it in certain classes, like writing classes.

but banning it entirely will inevitably be a handicap IMO and curricula may have to be a little more creative to get kids to think critically although that will take time and herculean effort
 

elizat

Ideal_Rock
Joined
Mar 23, 2013
Messages
4,000
Short answer to bolded part = yes

long answer = i work in tech and the general saying going around is "you won't lose your job to AI, but you'll lose your job to someone who uses AI"

i think this is 100% true. i've been using AI plug-ins to work more efficiently for 2ish years...actually they were all recently disabled on company computers for privacy reasons (makes sense, but big blow to my efficiency)

given that schools already cost a bajillion dollars, they can easily issue laptops with restrictions. you can just gatekeep certain apps and websites. easy peasy. yes, students can get their own laptops...but students should be failed for writing with it in certain classes, like writing classes.

but banning it entirely will inevitably be a handicap IMO and curricula may have to be a little more creative to get kids to think critically although that will take time and herculean effort

What sort of AI plug-ins do you find useful? My work is likely very different, but I have used a few grammar apps. I have used specific things for legal research. But I have never used AI to draft court filings and honestly, probably would not, for a lot of reasons. I wonder if in certain fields it's more valuable than in others?

I could see use as a teacher for lesson planning help, as another example.

I am in court a lot and do a lot of depositions and mediations, etc. I am in civil litigation. So, it's more of little things to complement the work.

I would not even use AI to draft discovery responses, where you are answering questions and producing documents. The AI would not know everything we have, the answer to fact based things you have figured out in investigation, what is privileged, etc.

But, if your practice is more forms and wash rinse repeat, I can see it being of a larger help, like for landlord tenant or debt collection work.
 

Cerulean

Ideal_Rock
Joined
Sep 13, 2019
Messages
5,078
What sort of AI plug-ins do you find useful? My work is likely very different, but I have used a few grammar apps. I have used specific things for legal research. But I have never used AI to draft court filings and honestly, probably would not, for a lot of reasons. I wonder if in certain fields it's more valuable than in others?

I could see use as a teacher for lesson planning help, as another example.

I am in court a lot and do a lot of depositions and mediations, etc. I am in civil litigation. So, it's more of little things to complement the work.

I would not even use AI to draft discovery responses, where you are answering questions and producing documents. The AI would not know everything we have, the answer to fact based things you have figured out in investigation, what is privileged, etc.

But, if your practice is more forms and wash rinse repeat, I can see it being of a larger help, like for landlord tenant or debt collection work.

It's rare for the content I write to be rinse and repeat. I'm a content designer - meaning I'm a UX designer who specializes in a combo of skills: writing, information architecture and product design.

I use writing-based tools like wordtune, grammarly, compose AI, hyperwrite - there are others. None are perfect but they are all useful

For ChatGPT - it should never be trusted to complete a job for anything that matters. But I feed it all sorts of stuff before I use it - you can create templates, information hierarchy guidelines, privacy policies, voice and tone instructions, etc. I usually dump pages of guidelines I keep in a spreadsheet before I start (I did not create all of the guidelines myself).

Fictional examples:
“Follow APA style”
“Follow this format:
H1
H2
Body
CTA”
“Show me what this sentence looks like in the top 10 most used languages globally”
“Write in a celebratory tone based on X definition”
“Only use second person pronouns”
“Give me 5 definitions for this term in 150 characters or less without using the terms X and Y”

You get the idea…

Basically, you can “teach” it to write how you want it to and create a script with limitations and guidelines you want it to use!

Is it perfect? Nope. Do I usually correct the content it generates? Yup. But can I create new content many times faster with it? Yup! It’s helpful for speed and to get ideas, maybe to proofread long form content but I don’t trust it enough yet
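The "dump your guidelines in first" workflow described above can be sketched in a few lines. The guideline strings and the `build_prompt` helper are hypothetical illustrations, not any real tool's API; the idea is just that reusable rules get prepended to every writing task before it goes to the model.

```python
# Hypothetical reusable writing guidelines, kept in one place
# (in the post's workflow, a spreadsheet) and reused across tasks.
GUIDELINES = [
    "Follow this format: H1, H2, Body, CTA.",
    "Write in a celebratory tone.",
    "Only use second-person pronouns.",
    "Keep each definition under 150 characters.",
]

def build_prompt(task: str, guidelines: list) -> str:
    """Prepend every stored guideline to the actual writing task."""
    rules = "\n".join("- " + g for g in guidelines)
    return "Follow these rules:\n" + rules + "\n\nTask: " + task

# The assembled prompt is what would be sent to the model.
print(build_prompt("Give me 5 definitions for 'content design'.", GUIDELINES))
```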
 