
Whiteflash's "A Cut Above" under the BrillianceScope


RockDoc

Ideal_Rock
Joined
Aug 15, 2000
Messages
2,509
I have data on how different brands perform on the B Scope.

I have imaged many Eightstars, A Cut Aboves, Superb Certs and lots more branded and unbranded ones.

However, releasing test results that I have done for consumers would be a breach of confidentiality.

I can say this:

The Brilliance Scope is very repeatable. Yes, there is variation, but not much.

When a seller provides a B Scope report and the consumer wants it verified, about 99% of the time the results are very close.

I have had a few occasions where there is a variance. When this happens, the information is transmitted to Gemex, and they immediately resolve the situation.

In one instance, the variance proved that either my B Scope or the seller's B Scope had a problem.

Gemex traveled to me to go over my machine, and also personally checked out the other machine. It was concluded that the other machine had a problem with its filters, and they of course corrected the situation.

I don't know of any equipment company that makes such a concerted effort to keep things straight and accurate in the manner that Gemex does.

If changes are observed, the software is updated. Given the ongoing improvements in cutting and facet design, ignoring them would be negligent. Gemex is intensely supportive of the results the B Scope provides.

I am getting a stone this week that is being sent by Gemex for research purposes. I will have permission to post the results for this stone, as it gets a very unusual result even though it doesn't have the "classic ideal proportions". This stone has been tested in various B Scope machines and appears to "break all the formerly assumed rules". The question is WHY? Gemex wants to know, and I am also intensely interested in the facts of why this is happening. Is this the exception to what we've learned so far? Or will these proportions consistently yield such incredible results? Can the WHY really be answered? I will have the stone next week and will begin an intensive study of it. The results of this can be reported to the public.


Further opinions expressed by some sellers are not current. The camera, computer board, software, and laser alignment have all been improved recently. Gemex's owner and tech support personnel have actually traveled all over the world to make these corrections. Very few, if any, other equipment suppliers will make such a deliberate and concerted effort for their customers.

As to consistent ratings for each brand or type of stone: while I can't disclose the details of my reports, I can say that there is variance in every brand of stone. As I've written a "zillion" times, DIAMONDS ARE UNIQUE AND AS SUCH THERE IS VARIANCE, EVEN WITHIN THE SAME BRANDS. IN THE BROAD CONSIDERATION OF THE RESULTS, ALL OF THE FINER CUT STONES ARE RATED WELL, BUT SOME ARE BETTER THAN OTHERS. While I could certainly ask former clients whether I could publish their results publicly, I believe this wouldn't be useful. To do so without far more interpretation of each test result, and a written or oral explanation of the results, would be unfair to the sellers of each branded diamond.

There is another "problem" that is sort of consumer generated. Everyone expects the stone they pick to get the maximum result of Very High. There are plenty of very acceptable stones that grade High as well. Recently consumers have recognized this and are a lot more flexible. Part of interpreting the light return results requires more interpretation of each individual analysis, and analysis of the results of both the Analyzer and the Viewer. In the early days of the B Scope, consumers felt really disappointed if the stone they submitted didn't get the highest ratings, and they chose other stones that weren't necessarily better. This generated some "sour grapes" with sellers whose stones didn't always get the ultimate grading.


Gemex was pressured by these sellers to have their stones rate differently, but Gemex didn't give in. Instead, research, study, and a very concerted effort were made by them (and still are) to provide the most comprehensive report available.

The world changes every day, and so does technology. Resting on outdated standards and potentially obsolete information, and not constantly trying to improve or be more accurate, isn't what Gemex is about.

There is a vast difference between the attitudes of those selling and those who are analytical. This is why a lot of consumers have me review the results of testing, so that they get confirmation that the results are unbiased and impartial.

I will be posting the information on the stone that is coming as soon as the work can be completed; maybe I will post results of the various tests as I do them.

Gemex also has a stone that is sort of their benchmark testing stone. After my machine was upgraded, this stone was sent to me, and I tested it to confirm that my ratings were the same as the results from other machines they have tested it with.
My results were the same within a very close tolerance. It was mentioned in this thread that there is a question about a 5% difference in test results and how worthwhile the testing is. Keep in mind that the 5% variance is about the greatest variance there is; when there is a variance, it is far more commonly a much smaller difference between machines.

For my own satisfaction, I do image just about every stone, whether the consumer wants the results or not. I do this for my own interest in how repeatable the results are, and to constantly observe any differences. I suppose this might be considered unnecessary "hair splitting", but I learn and improve my knowledge on an ongoing basis too.


Rockdoc
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212

This is beginning to sound like every other tug-of-war over BS.

I’ll try to return to the original question with prior discussion in mind.




We hear and understand other views. We respect others’ rights to use BrillianceScope in their businesses. We just do not endorse it ourselves.

We are a company that feels strongly about accurate science. We gave this device our expert assessment – for over a year - and it does not meet our standards of repeatability or accuracy. That's just us. Others have found a place for it and that is fine. Our problems with BS have nothing to do with results. They are the same problems some of our peers in the scientific community have: Repeatability, methodology and light source. For years GemEx has been asked to submit BS for peer review. They have declined to have BrillianceScope evaluated in this manner.

ACAs perform well on BS. While we had BS and tested it, our diamonds were displayed online with GemEx reports. They scored Hs and VHs. Rockdoc can verify that ACAs score well. That’s not the point. The point is that we feel this device does not tell the whole story about a diamond – and we’re not even certain about the part it is purporting to tell. And, despite the skill of some operators like Bill who may be able to overcome the given error, the fact is that we choose not to accept it.

Yes, BS reports might help us sell more diamonds, but at this time we choose not to accept it.

Remember that BS is not unique as an assessment machine. There are other technologies out there that we don’t use either, some proving superior. The Imagem machine Dave Atlas has incorporates very interesting technology. It accurately measures to .004 of a micron, grades color & clarity consistently and has repeatable light performance results (read those last 4 words again – hmm).

Ideal-scope is repeatable and we use naturally correlated light with it. It meets our accepted standards. When the time comes that we are comfortable with the accuracy and repeatability of a mechanical performance assessment device we will look into adopting it.

Again, we have no problem with those who elect to use it. At this time we do not.
 

valeria101

Super_Ideal_Rock
Premium
Joined
Aug 29, 2003
Messages
15,809
Date: 4/8/2005 5:58:56 PM
Author: JohnQuixote



This is beginning to sound like every other tug-of-war over BS.

Am I dreaming awake, or is "BS" actually better known for something (irreverent) else in good old English?


How could they !!!!
 

mdx

Brilliant_Rock
Joined
Mar 1, 2002
Messages
570
Date: 4/8/2005 1:43:14 PM
Author: RockDoc
I have data on how different brands perform on the B Scope.



Gemex also has a stone that is sort of their benchmark testing stone. After my machine was upgraded, this stone was sent to me, and I tested it to confirm that my ratings were the same as the results from other machines they have tested it with.
My results were the same within a very close tolerance. It was mentioned in this thread that there is a question about a 5% difference in test results and how worthwhile the testing is. Keep in mind that the 5% variance is about the greatest variance there is; when there is a variance, it is far more commonly a much smaller difference between machines.

For my own satisfaction, I do image just about every stone, whether the consumer wants the results or not. I do this for my own interest in how repeatable the results are, and to constantly observe any differences. I suppose this might be considered unnecessary 'hair splitting', but I learn and improve my knowledge on an ongoing basis too.


Rockdoc
Hi Rock
Your reference to the master stone is rather interesting.
It suggests that the technology is measuring deviations from six pixel maps using software.
In simple terms, the little camera takes a picture at each of the six positions using a master stone of known good performance, creating six pixel maps. The software that drives the performance bars does calculations based on the deviations from that standard (benchmark stone).

If this assumption is correct (of course it may be totally incorrect), then the repeatability should be pretty good.
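If that guess holds, the arithmetic behind the bars could be as simple as the sketch below. To be clear, this is purely an illustration of the speculation above, not GemEx's actual algorithm; the function name, the normalization, and the rating thresholds are all invented for the example.

```python
# Hypothetical sketch only: score a stone by how far its six captured
# pixel maps deviate from six benchmark maps. Nothing here is GemEx's
# real method; thresholds and normalization are invented.
import numpy as np

def score_against_benchmark(test_maps, benchmark_maps, thresholds=(0.05, 0.10, 0.20)):
    """test_maps, benchmark_maps: lists of six 2-D arrays (grayscale
    light-return images), one per lighting position."""
    deviations = []
    for test, bench in zip(test_maps, benchmark_maps):
        # Normalize each image so exposure differences don't dominate.
        t = test.astype(float) / (test.max() or 1)
        b = bench.astype(float) / (bench.max() or 1)
        deviations.append(np.mean(np.abs(t - b)))

    avg_dev = float(np.mean(deviations))
    low, mid, high = thresholds
    if avg_dev <= low:
        return "Very High", avg_dev
    if avg_dev <= mid:
        return "High", avg_dev
    if avg_dev <= high:
        return "Medium", avg_dev
    return "Low", avg_dev
```

On that scheme the rating is driven entirely by similarity to the benchmark stone, which is exactly why the repeatability would be good and why stones resembling the reference would tend to score well.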


It would also explain why Gary (Cut Nut) suggests that slight position changes of the lights would dramatically change the results.


It would also explain why stones that we all know with popular Ideal dimensions would always perform well.


This also means that a clever designer could possibly cut a mediocre stone that could artificially score very well.


Rock have you ever tested some of the non-traditional ideal cuts, like shallow pavilion/steep crown?


Rock, we had an experimental cut we were working on tested on a BS, and it got a very high score on a stone that certainly would not impress the Pricescope pundits. I have an idea that because the software did not recognize the facet design, it gave an incorrect result. (Gary, I am sure you remember looking at this stone in Melbourne.)


I would not like to post the report here as it may compromise the contract cutter who used his BS to run the test. I would, however, be happy to send you the stone to play around with and report back (all in the cause of science).


John Q, if my idea of how this technology works is correct (and as I mentioned, I could be totally wrong), then don't you think the type of lighting would be totally irrelevant to the result?


Here is an interesting thought for you scientific types: what if one could apply some form of neural logic to the software of one of these devices, so that it gets more and more clever with each test?
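As a toy illustration of that "what if" only (no evidence in this thread suggests any device actually does this), a scorer whose internal reference drifts toward each stone it measures would indeed get "cleverer" with every test, though its ratings would also stop being comparable over time:

```python
# Toy sketch of an adaptive (self-updating) reference -- purely
# hypothetical, not how the B Scope or any other device is known to work.
import numpy as np

class AdaptiveReference:
    def __init__(self, initial_maps, learning_rate=0.05):
        # Start from some initial benchmark pixel maps.
        self.reference = [m.astype(float) for m in initial_maps]
        self.lr = learning_rate

    def score(self, test_maps):
        # Deviation of the new stone from the current (moving) reference.
        devs = [np.mean(np.abs(t.astype(float) - r))
                for t, r in zip(test_maps, self.reference)]
        return float(np.mean(devs))

    def update(self, test_maps):
        # Nudge the reference toward the newest stone measured.
        self.reference = [(1 - self.lr) * r + self.lr * t.astype(float)
                          for r, t in zip(self.reference, test_maps)]
```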


Johan
 

strmrdr

Super_Ideal_Rock
Joined
Nov 1, 2003
Messages
23,295
Date: 4/8/2005 5:58:56 PM
Author: JohnQuixote
The Imagem machine Dave Atlas has incorporates very interesting technology. It accurately measures to .004 of a micron, grades color & clarity consistently and has repeatable light performance results (read those last 4 words again – hmm).
You should have known I was going to call you on this one.
Prove it.
They are no more willing than GemEx is to submit it for testing.
If you're going to knock GemEx for it, then it goes double for Imagem, because at least the B-scope has been around long enough for people to get an idea of what it does and doesn't do.
I for one totally reject Imagem, because we don't need another "magic" box; we need real peer-reviewed science.
 

RockDoc

Ideal_Rock
Joined
Aug 15, 2000
Messages
2,509
Hi Rock
Your reference to the master stone is rather interesting.
It suggests that the technology is measuring deviations from six pixel maps using software.
In simple terms, the little camera takes a picture at each of the six positions using a master stone of known good performance, creating six pixel maps. The software that drives the performance bars does calculations based on the deviations from that standard (benchmark stone).

If this assumption is correct (of course it may be totally incorrect), then the repeatability should be pretty good.


The stone from Gemex has been "run" on many different machines. The "master" report results are what we compare our unit's results to. The results aren't just the bars; the images are compared as well.


It would also explain why Gary (Cut Nut) suggests that slight position changes of the lights would dramatically change the results.


I disagree. As I've written previously, the Analyzer unit has pre-fixed light entry positions. The Viewer doesn't; the light can be positioned at totally variable distances and angles. I have found that small deviations in the light angle from the Analyzer are very similar in the Viewer, even when adjusted a bit. There is gradual change: as you increase or decrease the light entry angle, the light exit result changes. But if one takes the stone in the Viewer, you can see with your eye that basically all the images are very close in appearance when the two results are compared.


It would also explain why stones that we all know with popular Ideal dimensions would always perform well.


Not true... I've had some stones with the right proportions not do as well as others. The range of proportions for ideals, I believe, is too broad, hence the variance in the light performance results.


This also means that a clever designer could possibly cut a mediocre stone that could artificially score very well.


Again, not so... The B Scope has nothing to do with proportions; it is a light return measuring device. It doesn't care what is in it (provided it is small enough to fit in the hemisphere it has). If the item returns light, it shows what it returns in the images. It does, however, have some metrics based on those light images that convert the data to a "rating". Those metrics have relational values for diamonds.


Rock have you ever tested some of the non-traditional ideal cuts, like shallow pavilion/steep crown?


Yes. Not every stone that comes in here is one of the "premium" cut stones. I have seen stones with lousy proportions return light wonderfully.


Rock, we had an experimental cut we were working on tested on a BS, and it got a very high score on a stone that certainly would not impress the Pricescope pundits. I have an idea that because the software did not recognize the facet design, it gave an incorrect result. (Gary, I am sure you remember looking at this stone in Melbourne.)


When was this? The software and some of the hardware have been updated, so the "experiments" you ran may really not be comparable. The software now is a bit more "picky", and the camera, camera software, and board are far more "sensitive".


I would not like to post the report here as it may compromise the contract cutter that used his BS to run the test. I would however be happy to send you the stone to play around with and report back (all in the cause of science)
While I certainly don't need those results to run my own testing, I would like the data files of the four imagings rather than a printed report. Eventually I'd have to see them for comparison, but initially I wouldn't need them. I am certainly willing to test it and, when finished, find a way to have the results compared together. If you have a suggestion on how this could be done, I'd certainly be agreeable to you submitting the former reports to an independent place, and me submitting my reports there too, then both being released together.

However, I suspect your data is old, and there in fact may be a larger variance when the stone is run with the newer BScope and its software.

John Q, if my idea of how this technology works is correct (and as I mentioned, I could be totally wrong), then don't you think the type of lighting would be totally irrelevant to the result?


I don't think any of the light sources replicate the varying environments of "real world" light. Attempting to analyze this appears to me to be fruitless. Most of the machines are good for comparing one stone to another under the same or identical environments, and to that end they are all helpful in the information they provide.


Here is an interesting thought for you scientific types: what if one could apply some form of neural logic to the software of one of these devices, so that it gets more and more clever with each test?


People with intent to fool the testing units can probably accomplish it. Gemex now has a system where they audit the images we provide, and we can't publish them until this is done. No one else with various testing equipment does this, that I am aware of.




Johan, if you want me to test and "play" with the stone, I'd be agreeable, provided it is done in a way that nothing can be manipulated or influenced unfairly. If the imaging was done more than three months ago, it would be advisable to have the contract cutter you used re-image it on his updated machine, then submit it to me for comparison. I would also agree to image the stone enough times to verify that the repeatability is within reasonable declared limits, or in the alternative, that it is not repeatable within the specified tolerances.
Thanks for the offer and proposal,

Rockdoc

 

RockDoc

Ideal_Rock
Joined
Aug 15, 2000
Messages
2,509
Date: 4/8/2005 7:12:26 PM
Author: strmrdr

Date: 4/8/2005 5:58:56 PM
Author: JohnQuixote

The Imagem machine Dave Atlas has incorporates very interesting technology. It accurately measures to .004 of a micron, grades color & clarity consistently and has repeatable light performance results (read those last 4 words again – hmm).
You should have known I was going to call you on this one.
Prove it.
They are no more willing than GemEx is to submit it for testing.
If you're going to knock GemEx for it, then it goes double for Imagem, because at least the B-scope has been around long enough for people to get an idea of what it does and doesn't do.
I for one totally reject Imagem, because we don't need another 'magic' box; we need real peer-reviewed science.
Strm

When word was released that the IMAGEM was going to be marketed, I called IMAGEM and asked them about being a place to analyze stones with their machine.

It has now been two months, and I haven't heard a peep from them, even though I have called several times (always got an answering machine) and left messages.

I think consumers would like both reports. Endeavoring to do the best I can, and being independent and unbiased about what I do, I am of the opinion that running dual reports would benefit the consumer, myself, and the industry.

I sort of know what the B Scope does. I can open the unit and look inside. Although I am not an engineer or advanced computer person, I can see the basic principle of how the hardware works.

From other people I have heard that IMAGEM doesn't even want to open the "box". I certainly understand proprietary secrets and the need not to create a copycat, but I have no intention of competing with either company. My interest is "scientific", and to provide the best I am financially and academically capable of doing. (This fancy dancy equipment isn't cheap.)

The more pieces of equipment that I can have to provide more and more information for consumers, the better I would feel about it.

I would like two items that I don't already own. However, these two items will cost $100,000. I am also looking into getting an electron microscope, but so far I haven't found one that will be useful for examining items of the "thickness" that I would need. Most of them will only "see" into items with a maximum of about 3 mm in thickness. I would like one capable of 20 mm thickness to do the research that I want to do.

I hope that IMAGEM reads this and decides to contact me about it, but based on their previous non-responsive position, I am not getting my hopes up.

Rockdoc
 

Dancing Fire

Super_Ideal_Rock
Premium
Joined
Apr 3, 2004
Messages
33,852
Date: 4/8/2005 8:36:44 AM
Author: esqknight
Good morning.

Is a 5% difference high when you're talking about measuring light return with a BrillianceScope? I know it would be a high rate of error if I were an engineer building a bridge (oops, well, the measurement for the support beam was only 5% off!). I'll admit I don't know enough of the science to say how useful a measurement of light return is with a 5% rate of error. Again, it seems undisputed that a BrillianceScope can determine the great diamond cuts, but probably can't distinguish between degrees of greatness (or idealness). Mara, I agree with you that I wouldn't buy a diamond based upon a BrillianceScope. I wouldn't buy a diamond based only on an idealscope. I'd like to see the certificate, Sarin report, idealscope, hearts and arrows image and maybe the BrillianceScope (the usefulness of which is currently on the table).

Later,
Eric
Never hurts to try and get as much information as you can on a stone.
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212
Date: 4/8/2005 7:12:26 PM
Author: strmrdr
Date: 4/8/2005 5:58:56 PM

Author: JohnQuixote

The Imagem machine Dave Atlas has incorporates very interesting technology. It accurately measures to .004 of a micron, grades color & clarity consistently and has repeatable light performance results (read those last 4 words again. hmm).


You should have known I was going to call you on this one.
Prove it.
They are no more willing than GemEx is to submit it for testing.
If you're going to knock GemEx for it, then it goes double for Imagem, because at least the B-scope has been around long enough for people to get an idea of what it does and doesn't do.

Strm, read the post again. We are not embracing any mechanical performance assessment as scientifically sound yet - that goes for all such devices. After a cursory review, our impressions about Imagem are positive regarding measurements (better than Sarin) and color and clarity grading (consistent). Additionally - though we don't know enough about it yet - the light performance results are repeatable.

We are not out to 'prove' anything. This is simple. Brian observed a diamond run more than once on Imagem with the same results in the light performance categories it assesses each time; that's all. Alternatively, here are results observed for a single diamond (60.8, 56, 34.6, 40.8) run three times on BrillianceScopes (with software prior to their latest): (1) VH1 VH2 VH1 (2) VH2 VH1 H3 (3) VH1 VH2+ H3.

That's where my statement on repeatability came from. Whether we agree with what Imagem measures remains to be seen, but as far as repeatability goes, that is a much better start than BS had - and years later BS still has an error.

I for one totally reject Imagem, because we don't need another 'magic' box; we need real peer-reviewed science.

That's a scary attitude, Strm. Don't go flat earth society on us. Many people 'totally rejected' ideal-scope at first. Those who kept an open mind now understand light return much better. Many people 'totally rejected' the H&A viewer at first. Those who kept an open mind have come to understand much more about facet construction. Many people 'totally rejected' Macintosh computers at first. Those who kept an open mind developed operating systems, graphics and music programs that raised the bar for both platforms.

It's important to understand that we do not 'totally reject' BS. We will keep an eye on its development - and the development of other performance assessment machines.
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212
Date: 4/8/2005 7:47 PM
Author: mdx

John Q, if my idea of how this technology works is correct (and as I mentioned, I could be totally wrong), then don't you think the type of lighting would be totally irrelevant to the result?

Here is an interesting thought for you scientific types: what if one could apply some form of neural logic to the software of one of these devices, so that it gets more and more clever with each test?

Johan, great post.

For purposes of pixel counting it's interesting to speculate about what could be done that BS does not currently do: gauge light performance over a range of conditions, rather than trying to separate white light and colored light. We believe that separation is fundamentally flawed, since WLR and DCLR combine and interact to create the life observed in a diamond. They will never be separate outside of the box.
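As a rough, purely hypothetical sketch of that alternative (this is not how BS or any other device is known to work; the brightness cutoff and function are made up), a combined-return score could simply be averaged over several lighting environments instead of being split into separate white and colored scores:

```python
# Hypothetical sketch: rate combined light return (white and colored
# together) across several lighting environments. Illustrative only.
import numpy as np

def combined_return_score(images, cutoff=0.25):
    """images: dict mapping a lighting-condition name to an RGB array
    (H x W x 3, values 0-255) of the diamond's face-up appearance.
    The brightness cutoff is an arbitrary placeholder."""
    scores = {}
    for condition, img in images.items():
        brightness = img.astype(float).mean(axis=2) / 255.0
        # Fraction of the face-up view returning meaningful light of any kind.
        scores[condition] = float((brightness > cutoff).mean())
    overall = float(np.mean(list(scores.values())))
    return overall, scores
```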

As far as type of lighting goes - if you proceed from the concept offered above, it makes correlation to natural light even more important! When you see the photos BS offers, what are you really seeing? I consider those photos the most interesting part of BS, but the reactions you observe do not correlate to any real-world condition, so for the basic consumer they must be interpreted by an expert who can translate the artificial into the real.

Garry Holloway states that naturally correlated lighting is a nice idea, but not so important for ideal-scope. I can buy that to a degree, since we're just looking for leakage. Nevertheless, at Whiteflash we are going to by-gosh use NATURAL lighting, because that is the standard we are committed to, and our in-house customers truly appreciate that what they see through the IS in the tweezers, backlit with natural daylight, is the same image they see in our IS photos. (more here on our setups) Our approach is a marriage of evolving technology with Brian's 5th generation, old-school tenacity.


You are correct in what you wrote to Rock: Diamonds can be cut to max out the BS performance metric. Martin Haske has offered this view before, as well as noting that the BS scope tests diamonds "for light return" in a lighting environment the diamond will never see again, and doesn't see in nature, so what is it telling the consumer...? (refer to this thread)

Many of us consider Marty to be tops in the science community, so we're in good company with our views regarding current BS technology.
 

Rhino

Ideal_Rock
Trade
Joined
Mar 28, 2001
Messages
6,340

I've read most of this thread and since my assistant Tim is back I have a little time to respond.



I would like to address the points brought up here, posed primarily by my dear friend John, regarding the repeatability of the BrillianceScope, and I would like to put this to rest.



John's comment, to quote him, is that the differences are *staggering*.



Mara has commented that it is not repeatable even with the same diamond on the same machine (much less others).



I'm sorry but this information is painting a picture that is just plain WRONG and I would like to PROVE IT with some real examples.



Here is a diamond we had scanned in with the OLD BrillianceScope hardware and the old B'scope software and its results.



(Attached image: BR115DSI1BSOLD.jpg - BrillianceScope results from the old hardware and software)
 

Rhino

Ideal_Rock
Trade
Joined
Mar 28, 2001
Messages
6,340
Here is the same identical diamond using the new BrillianceScope hardware and the latest BrillianceScope software.

This is hardly staggering. A half tick on one scale is hardly misrepresenting the results of a diamond. We frequently send diamonds to Bill for appraisal, who also has a BrillianceScope, and there are never any disparate differences between properly calibrated BrillianceScope machines. Would you like to see more examples?

(Attached image: BR115DSI1BSNEW.jpg - BrillianceScope results from the new hardware and latest software)
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212
Date: 4/9/2005 1:58:53 PM
Author: Rhino

John's comment, to quote him, is that the differences are *staggering*.

...Here is a diamond we had scanned in with the OLD BrillianceScope hardware and the old B'scope software and its results.

Jonathan,

I believe you misinterpreted the use of that word. It was in reference to someone comparing BS's given error to Sarin's given error.

Here is the whole post with the quote in context:

Author: JohnQuixote

It is not an apt comparison. Sarin measures angles, which are exacting and can be substantiated by similar devices. What it measures (unlike BS data) is necessary to evaluate proportions. Angles are non-subjective and may be verified.

We don't even know what BS is trying to measure.

But... for purposes of the thread, let's pretend the technologies are comparable (and both are meaningful). Technically speaking, Sarin is over 20 times more accurate than BS. GemEx has a given error of ±5.0%. Sarin claims accuracy of 20 microns or ±0.2 degree.

Using Aljdewey's analogy: if BS's error meant it told you your height was 5'0", 5'3" and 4'9" on different tries, Sarin would be within 1/4 of an inch each time.

And - we're still working to find better accuracy than Sarin (Helium, Imagem, etc). By comparison, the 'accepted' BS error is staggering.
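The rough arithmetic behind the height analogy, for anyone checking it (the reference figures below are illustrative assumptions; the "over 20 times" claim above presumably uses a different baseline):

```python
# Back-of-the-envelope numbers for the height analogy above.
height_in = 60.0                    # a 5'0" person, in inches

bs_error = 0.05                     # GemEx's stated +/-5% error
bs_spread = height_in * bs_error    # +/-3.0 in -> readings from 4'9" to 5'3"

sarin_error_deg = 0.2               # Sarin's stated +/-0.2 degree
reference_angle_deg = 40.75         # an assumed typical pavilion angle
sarin_spread = height_in * (sarin_error_deg / reference_angle_deg)

print(bs_spread, round(sarin_spread, 2))   # 3.0  0.29 -- roughly 1/4 inch
```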
 

Mara

Super_Ideal_Rock
Joined
Oct 30, 2002
Messages
31,003
Rhino, I'd love to see examples of the same diamond run on the same machine with the same software, 4-5 times. Are the results the exact same each time? From what I have heard, that is not the case. That is where my comment stems from. Please prove me wrong.

Secondly, regarding the change of one small tick... as you know, that may not be a huge difference to all PS'ers or customers, but to some it IS a huge difference. There have been discussions in the past here where people post a BS with H H VH+ or something and ask why it's not performing up to snuff. My point is that PS'ers are a very anal bunch, many of them, when it comes to spending $$ on a diamond. So in terms of results and reports, it is best to have as much as possible, assuming that the results can be consistent on the same machine and that people are truly educated on what those results MEAN. I think that is also where the BS gets flak: there is no clear table as to what truly designates a great stone on the BS. Is it H? VH? VH+? For Sarins or AGS0s etc. we at least have some numbers to go by... is there something like that for BS?


I want to say again that, regardless, I like the BS as a tool and would use it if it were available, but there has always been discussion surrounding results. For me it is one tool of many to use if possible.
 

strmrdr

Super_Ideal_Rock
Joined
Nov 1, 2003
Messages
23,295
Date: 4/9/2005 12:19:41 PM
Author: JohnQuixote
Date: 4/8/2005 7:12:26 PM

I for one totally reject Imagem, because we don't need another 'magic' box; we need real peer-reviewed science.


That's a scary attitude, Strm. Don't go flat earth society on us. Many people 'totally rejected' ideal-scope at first. Those who kept an open mind now understand light return much better. Many people 'totally rejected' the H&A viewer at first. Those who kept an open mind have come to understand much more about facet construction. Many people 'totally rejected' Macintosh computers at first. Those who kept an open mind developed operating systems, graphics and music programs that raised the bar for both platforms.


It's important to understand that we do not 'totally reject' BS. We will keep an eye on its development - and the development of other performance assessment machines.

I have asked over and over again for information on the light conditions it is supposed to represent, and for a light model so it can be compared and run through DiamCalc and other testing, and I have been rebuffed each time.
It is clear that the position is "it's meaningful because we say so."
I reject that and say prove it in an open manner.

We both know the following, but I'm going to go into detail so others can follow.

1> Diamonds can be tuned to perform best under specific light conditions using the crown/pavilion angle relationship and the minor facets.

2> The biggest difference between various brands/cutters of super-ideals is the spot they want to hit on the light condition vs. performance curve.
Every one of them places the tradeoff at a different place, and some people will prefer one look over another.

Therefore the diamond that scores highest in one light condition may not be the one that is the best overall performer.
You have used the same argument against the B-scope in open forum :}

Without knowing what conditions the diamond is being tested under, it is impossible to know if it's even relevant to the conditions you most often view the diamond in.

............
I took my diamond and looked at it through the flame of a Bic lighter in a dark room and scored it 9.2 boguspoints for brightness three times in a row.
I then took my Signity Star CZ and scored it 8.8 three times in a row.
Repeatable yet totally meaningless :}
......

Garry has put a ton of work into explaining and educating about how the ideal-scope works.
I thought it was a joke when I first saw it, but now consider the info critical to buying a diamond. I don't see Imagem putting in the same effort, and that's what it's going to take to get me to accept it.

Jon has done the same for the B-scope, and at the same time I've kept an ear open to both sides, drawn my own conclusions from all available information, and continue to rework those conclusions as more info becomes available.

I am willing to study the Imagem machine to the best of my ability and learn from those who engage in peer review of the device, but I won't accept "because I say so", and neither should other consumers.
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212
Great post, Strm.

Date: 4/9/2005 3:23:34 PM
Author: strmrdr

…Therefore the diamond that scores highest in one light condition may not be the one that is the best overall performer.

You have used the same argument against the B-scope in open forum :}

Yes! And this goes to the heart of the matter. What is BS testing? Brilliance? Maybe? We’re not sure. It’s a light source the diamond will never see again. Even so, we never claimed to have the ‘most brilliant’ diamond. Brian never set out with that as a goal when designing “A Cut Above.”

Our objective is for ACAs to have the best visual balance over a broad range of real lighting conditions. There is no machine we are aware of that tests this - so we use our eyes, other experts in the field and the eyes of thousands of customers to get our real-world feedback.
 

Rhino

Ideal_Rock
Trade
Joined
Mar 28, 2001
Messages
6,340
Shall we continue this discussion in another new thread or here?
 

JohnQuixote

Ideal_Rock
Joined
Sep 9, 2004
Messages
5,212
Rhino - Thanks.

Do you want to begin a new one pertaining to overall philosophies on BS? "The Great BrillianceScope Debate" or similar?

Bill put one up today about the "Live Report," but I don't know that it's topical to what we're hashing out hither, which isn't topical to the title of this thread.

Your play.
 

WinkHPD

Ideal_Rock
Trade
Joined
May 3, 2001
Messages
7,516
Date: 4/7/2005 6:57:59 PM
Author: aljdewey

Date: 4/7/2005 6:44:46 PM
Author: DiamondExpert

I too would like to see B'Scopes of ALL the brands (e.g., ACA, 8*, Infinity) side-by-side simply because it might settle the 'mystery' once and for all as to just what it is, OR IS NOT, that the B'Scope can tell us
In theory, that would be a great idea... but there's one problem. In order for that to be any meaningful comparison, you'd have to be able to achieve the same B-scope reading on a given stone every time you run it... and that's been the problem some folks have run into.


Some have found B-scope readings aren't consistently repeatable. It may give a reading of x, x, x, x on my stone today, a reading of x, x, y, y on my same stone tomorrow, and yet another result when taken a third time.

As such, comparing B-scopes wouldn't really be comparing the stones. It would be comparing what the B-scope said those stones were ON THAT SCAN. That's hardly meaningful.

Absolutely excellent comment, aljdewey. That is precisely the reason why ALL of the major labs have declined to use this equipment. It simply does not do the job with repeatability. Although many in the public love it and seem to demand it at times, it simply is not up to laboratory standards and will not give the reliable data that our clientele are hungry for.

Wink
 

WinkHPD

Ideal_Rock
Trade
Joined
May 3, 2001
Messages
7,516
Date: 4/7/2005 11:29:14 PM
Author: strmrdr

Date: 4/7/2005 9:49:43 PM
Author: Dancing Fire
What about the Sarin machine, are they consistent?
Not very.
Different models, different software, different patches to the software, skill of the operator, cleanliness of the diamond and the machine all add in variations.

I had the $5,000 version and got AGS cutting grades from 0 to 5 on the same stone. I got the 0 when grading the stone then showed how it worked on this beautiful AGS 0 stone only to get an AGS 5 grade on the stone with my client there. I did save the sale, but the machine went back the next day after I tried a dozen more stones and found some pretty large deviations from test to test.

I hear that the 25k version is better, but I do not need one that badly!

Wink
 

strmrdr

Super_Ideal_Rock
Joined
Nov 1, 2003
Messages
23,295
Date: 4/10/2005 7:57:10 PM
Author: Wink
Date: 4/7/2005 11:29:14 PM

Author: strmrdr


Date: 4/7/2005 9:49:43 PM

Author: Dancing Fire

What about the Sarin machine, are they consistent?

Not very.

Different models, different software, different patches to the software, skill of the operator, cleanliness of the diamond and the machine all add in variations.


I had the $5,000 version and got AGS cutting grades from 0 to 5 on the same stone. I got the 0 when grading the stone then showed how it worked on this beautiful AGS 0 stone only to get an AGS 5 grade on the stone with my client there. I did save the sale, but the machine went back the next day after I tried a dozen more stones and found some pretty large deviations from test to test.


I hear that the 25k version is better, but I do not need one that badly!


Wink

I'm not sure how much I can say, but the 25k version, while better, still has problems.
The biggest is patches to the software that one user may have and another not.
Also, there are various patches that net a more precise but much slower scan, and some make for a faster, less accurate scan.
Which patch is someone going to use who runs hundreds of stones through it a day, hmmmmmm?
From what I can gather, at its best it can come close to the level of any scanner on the market, but the cost is high in terms of scanning speed.

Disclaimer: that's just my understanding of the situation and I could be wrong.
Wouldn't be the first time :{
 

Garry H (Cut Nut)

Super_Ideal_Rock
Trade
Joined
Aug 15, 2000
Messages
18,461
In my experience - with a well-maintained scanner, a clean stone, and the best scan level (more images) - the Sarin DiaMension is better than the OGI Megascope. Both use nasty algorithms to make the stone's facets meet better, which results in some averaging out.

At this time the results from OctoNus's Helium scanner are far superior. This can be seen in the accuracy of facet building in the crowns and chevron facets of princess cuts - a very difficult stone model to build.
 

mdx

Brilliant_Rock
Joined
Mar 1, 2002
Messages
570
Date: 4/8/2005 7:40:12 PM
Author: RockDoc


Again, not so... The B Scope has nothing to do with proportions; it is a light return measuring device. It doesn't care what is in it (provided it is small

However, I suspect your data is old, and there in fact may be a larger variance when the stone is run with the newer BScope and its software.

John Q, if my idea of how this technology works is correct (and as I mentioned, I could be totally wrong), then don't you think the type of lighting would be totally irrelevant to the result?



I don't think any of the light sources replicate the varying environments of "real world" light. Attempting to analyze this appears to me to be fruitless. Most of the machines are good for comparing one stone to another under the same or identical environments, and to that end they are all helpful in the information they provide.



Here is an interesting thought for you scientific types: what if one could apply some form of neural logic to the software of one of these devices, so that it gets more and more clever with each test?



People with intent to fool the testing units can probably accomplish it. Gemex now has a system where they audit the images we provide, and we can't publish them until this is done. No one else with various testing equipment does this, that I am aware of.





Johan, if you want me to test and "play" with the stone, I'd be agreeable, provided it is done in a way that nothing can be manipulated or influenced unfairly. If the imaging was done more than three months ago, it would be advisable to have the contract cutter you used re-image it on his updated machine, then submit it to me for comparison. I would also agree to image the stone enough times to verify that the repeatability is within reasonable declared limits, or in the alternative, that it is not repeatable within the specified tolerances.

Thanks for the offer and proposal,

Rockdoc

Hi Rock


I see the stone was scanned 5/3/04 so it certainly is a lot older than 3 months.
I am not really trying to establish if one machine gives the same result as another.
What I would find interesting is why this stone got such a good report when it obviously has pretty bad light leakage. I was expecting Low to Medium on all three; it got three Very Highs.

So what I will do, Rock, is ship it to you with the report; perhaps you can rescan it and see if there is any change. If it still shows three Very Highs, you can maybe make some suggestions as to the reason.


Rock, the stone is a fancy cushion shape, so we can perhaps also look at this idea.


If it does in fact measure light return then the shape should make no difference.


Johan
 

RockDoc

Ideal_Rock
Joined
Aug 15, 2000
Messages
2,509
Johan.....

Shape is a major consideration in the B Scope.

Since many more rounds and princess shapes have been tested, the database is quite sufficient to work with sampling data and test results.

Since not as many other cuts have been scanned, the previous test results for comparison to other stone shapes are not complete.

The subject of this thread is the repeatability of the testing, and whether stones will receive the same results when run multiple times in the same or different B Scopes.

Since your stone is a cushion shape, I am not sure how many such stones out there have been imaged, but when a stone comes in that is poorer or better than what has been previously imaged, Gemex does update this data, and the "rating system" expressed by the bars is changed. I can certainly contact Gemex to find out about the data on previously graded cushion cuts and ask how complete the analysis is for that shape of stone.

I'll be glad to run your stone through any tests I have and report my findings from an independent standpoint, but before you send it, I'd sure like to know what questions about this stone you'd like answered.

Also, if you are going to send it, I should review the shipping details with you to make sure you have the correct shipping information - and also what is necessary when I ship the stone back so you don't have a tax or VAT situation, and what documents have to be completed. Since you're in Australia, you need to outline the return procedure with me, so no problems develop with the return of the stone to AU.

PM me or email at [email protected].

Rockdoc
 