
Using genAI to evaluate gemstones

glitterata (Ideal_Rock) · Joined: Apr 17, 2002 · Messages: 4,774
There are several interesting posts about using generative AI to evaluate gemstones in @RRfromR's thread about her beautiful new emerald ring, https://www.pricescope.com/community/threads/how-is-this-emerald.290324/

I thought it was a topic worthy of its own thread.

Clearly the programs can discern SOMETHING about gemstones. They get some things right, sometimes a lot of things. Just as clearly, they get a lot of things wrong. And it's very hard to tell which of their confidently spouted "facts" and opinions are on the mark and which are BS.

What are your experiences using genAI in this hobby?
 
I use ChatGPT quite frequently for a variety of things and I find it to be… chatty. It has a wonderful ability to be conversational about almost anything, even things it has no idea about. I’d say the same applies to gems. It’s probably just being conversational and doing a great job of sounding like it knows what it’s talking about. Would not trust it at all.

This is a program that claimed one of my friends was a healer in a past life, and that she lived up in the mountains with her current daughter (who was mute and was my friend’s teacher in that past life); it (ChatGPT) then proceeded to write several letters to my friend from her daughter in each of their past lives. So yeah… I don’t believe what it says about gems.
 
We actually had a thread about this topic somewhat recently, I'll link to it so the two are connected: https://www.pricescope.com/communit...-in-evaluating-gemstones.289654/#post-5414047

doing a great job of sounding like it knows what it’s talking about. Would not trust it at all.

This is everything that needs to be said on the subject, basically. Anything else would just be elaborating on this conclusion.

The problem with AI that's publicly accessible to the masses right now is that it doesn't really possess any comprehension of what it's talking about, and it doesn't yet have the technical ability to gain that comprehension. What it does do really well, though, is analyse statistical dependencies between words and phrases, which is why it's able to sound like it knows what it's talking about. At least, as long as you don't know what it's talking about either.

Because the moment you know what it's talking about, it becomes pretty clear very quickly that it's babbling nonsense.
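The "statistical dependencies between words" point can be illustrated with a deliberately tiny sketch: a bigram model that predicts the next word purely from co-occurrence counts. The corpus and function names below are invented for illustration; real LLMs are vastly more sophisticated, but the principle of prediction-without-comprehension is the same.

```python
from collections import Counter, defaultdict

# Toy corpus about gems -- invented purely for illustration.
corpus = (
    "fine emeralds show vivid green color "
    "fine sapphires show vivid blue color "
    "fine rubies show vivid red color"
).split()

# Count which word follows which: pure statistics, zero comprehension.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the toy corpus."""
    return following[word].most_common(1)[0][0]

# The model "sounds right" without knowing what an emerald is,
# which is the point being made about LLMs above.
print(predict_next("show"))   # vivid
```

Ask it what follows "vivid" and it will confidently answer "green" (ties are broken by first occurrence), not because it knows anything about color, but because that is what the counts say.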

A specialised AI certainly can be trained on images of gemstones and learn to recognise patterns, in order to become a real, useful aid in evaluating gemstones. But I don't see such a specialised AI being available to the masses at this point in time.
 
... which is why it's able to sound like it knows what it's talking about. At least, as long as you also don't know what it's talking about.

Because the moment you know what it's talking about, it becomes pretty clear very quickly that it's babbling nonsense.
...

I think that's the best description I've ever encountered of the effectiveness, yet shallowness, of the growing misuse of the term "optics."
 
Oh, apologies! I should have done a better search before starting this thread--or maybe I should have asked an AI to do one. ;-)

Nonsense, the other thread is old enough to warrant opening a new one instead of necroing the old one.

I just like things neat, tidy and interconnected to a disturbingly pedantic degree.

As the old saying goes, it’s not you, it’s me.
 
Out of curiosity, I asked ChatGPT to find whatever info it could about several PS members, such as gender, profession, family, where they live, to see if these LLMs have made posting here markedly more unsafe than before.

It did a fairly bad job, missing a lot of info people have provided about themselves. Unless you post details explicitly, it's likely to miss stuff about you. It correctly identified one beloved member as living in New Zealand and another as having 6 children, including what one of them studies at university, but it was unable to identify the sex (or gender) of another member who had posted about giving birth, which should have been a clue. One member often posts about her loving relationship with her husband, and I've never noticed her mentioning having children; the LLM calls her a "single mom." It did correctly identify the general area where I live, but it assigned me an incorrect initial. (Apparently I'm obsessed with Victorian and antique jewelry, which--fair enough.)

Maybe future versions of these LLMs will be better at doxxing, but the current ones aren't.
 
I use AI at work to optimize and check my code. I use it very sparingly in my personal life because it uses enormous amounts of energy and is terrible for the environment overall. Some people even develop psychosis from using it, constructing an alternate universe; it's actually quite dangerous for those with mental health issues.
It's pretty amazing at data classification, so for gems it's quite good at guessing objective criteria such as origin. Looking at microscopic data, I have no doubt that it can identify type and origin with 100% accuracy.
Qualitative criteria, not so much, especially for antique, handmade things. It cannot understand the magic and beauty of color :)
 