They are excellent blades. I especially like them, prefer them, in my iKon slant. For me, they are both sharp and smooth.
I tried these in roughly Fall/Winter of last year (2017) & remember having mixed results. In reviewing some of my notes from my journal entries, I got 8 CCS shaves out of each blade with my DE89, Maggard Slant, & Merkur 37C Slant. They weren't the closest of shaves, but were "reasonable" (IMHO).
In contrast...
When used in my Maggard V3, V3A, & V3OC, I got DFS results on shaves 1-4 with them, but had a marked increase in irritation on shaves 5-8.
I was also using the Stirling Executive Man soap & Nivea Sensitive post-shave balm (I think.)
At least, that's what I recall. If you'd like to check out the details of the entries I made in my journal when the experiences were fresh in my memory, they are on pages 6-7 of my journal here: BassPlayerBoz's Shave Journal
That's thorough documentation! Thanks for adding to the conversation...
I like the idea of quantitative testing, but I question his methodology and results. I give the guy a lot of credit for all of his measurements, though. It's not easy and no one else has done it. However, I've seen that site mentioned now and then as if it's all true without any challenge.
How does he maintain a consistent tension in the test media? Even if the tension is consistent, that doesn't mean that the results truly represent sharpness, at least how we perceive it in real-world conditions when cutting hair on skin. According to the results, a Derby Extra is almost as sharp as a Feather after one use. Really? I'm supposed to believe that? Personal experience in real-world conditions of cutting hair on the face with those blades in different razors says otherwise. Am I alone on this? Also, according to those quantitative test results, the BIC is significantly sharper than the Astra SP, so much so that after one use, the BIC becomes sharper and as sharp as a Feather right out of the wrapper. Really? I don't think so. Again, personal real-world experience says otherwise.
According to our experienced user survey data, the BIC is almost as sharp as an Astra SP:
Comprehensive Double-Edge (DE) Razor Blade Data Table
Personal opinions vary, of course, but I trust our qualitative survey data from experienced users more than that quantitative data that does not come from real-world conditions and might stem from a flawed methodology and interpretation. There are a lot of oddities in that test data.
I work in product development, and for something quantitative like sharpness, any sort of quantitative testing is going to be better than qualitative testing involving a human face. It's not like he only tested two blades. He tested dozens. What is the chance that out of dozens of blades Astras come in near the bottom when they really belong toward the top? There are enough sample points to show the ability of his methods to capture relative performance. I would say that you can't definitively claim that two blades next to each other in the ranking couldn't be inverted. But there is no way an Astra is jumping 15 places due to sub-optimal media tension. I definitely don't trust all the qualitative reports of Astra SPs being sharp when, on a quantitative test, even a flawed one, they land on the dull end new out of the box, after 1 shave, and after 2 shaves. I honestly think people are drunk in love with the $9.95 per 100 price and its PPI St. Petersburg pedigree.
I trust Chase's method more than I trust a poll of self-styled experienced users. My personal experience is that user reviews are pretty useless; Chase's findings are a lot more valuable. Yes, I do think BICs are sharper.
If BICs are sharper than Feathers, then I'm sure that survey data from experienced users would reflect that, as well.
We disagree about the reliability of user reviews.
It depends on methodology. I can come up with quantitative testing that yields garbage, nothing useful for the issue here. Qualitative data from experienced users can be better. It depends. There are odd results in that test data that I didn't see with our survey data, and I'm confident that if (or when) I continue with blade surveys, there wouldn't be odd results as long as the number of experienced users offering opinions on each blade were large enough. It is true that price affects opinion, but I think that the influence was negligible with our experienced users who offered ratings.
I agree, one CAN make a testing protocol that is garbage. However, using a human face, where the tester knows which blade is being tested and carries all the bias issues inherent in humans, to measure something quantitative like sharpness IS garbage.
In the industry I'm in, subjective testing is done blind by validated professionals. They don't know what they are testing, and all subjective tests are run against a witness sample. The sequence is typically: witness, product 1, product 2, product 3, witness re-run. People shaving aren't doing this.
Chase's findings I have found to be both relevant and reliable.
To be clear, I don't think a quantitative test will decide that blade A is better than blade B. Nor do I think that there is a best blade. But sharpness is not subjective. It can be measured. Sure, the method is unlikely to give an absolute result, but barring outright incompetence, when plenty of blades were closely ranked, there is no way that a blade scoring "49" is quantitatively sharper than a blade scoring "39" like a Nacet. It may very well feel better; it might, considering many factors, even shave closer.
Ouch! The surveys that I conducted with B&B members don't look like garbage to me. The results seem reasonable. You and others can focus on sharpness, but if you want ratings for smoothness, longevity, and consistency, then surveys are the best that you're going to get.
You're right that blind testing is best. If anyone would like to attempt such a feat, then I would wish him good luck and look forward to reading about his methodology and results.
He's got some odd results that might stem from a flawed methodology. Here are just two examples:
1. A Derby Extra is almost as sharp as a Feather after both have been used once
2. After one use, a BIC becomes just about as sharp as a new Feather