
The Defense Thread

Here's a super fun conversation I just had with chatGPT (edited down to avoid tl;dr). Enjoy!!!

NAOS:
Can you please share an outline of a collaborative workflow that involves the sharing of data from a data exploration exercise? To be more specific, I'm interested in how teams work together starting with the early/provisional results of a data exploration. What kind of information is typically acceptable to share? What are the best practices for the group that receives this information?

ChatGPT:
Certainly! Collaborative data exploration is a crucial step in any data-driven project. Here's an outline of a collaborative workflow that involves the sharing of data from a data exploration exercise, including what to share and best practices for the teams involved:

1. Data Exploration Phase:
a. Data Collection: …
b. Data Cleaning: …
c. Data Normalization and Scaling: …
d. Data Analysis: Conduct initial data analysis…

2. Sharing Initial Findings:
- Summarize the data exploration process and the context of the data.
- Highlight key findings, patterns, and trends discovered.
...

NAOS:
Other than data normalization and data scaling issues, is it generally good practice to avoid any additional data transformations until after you've looked at the data you've received?

ChatGPT:
In most cases, it's a good practice to avoid making additional data transformations until after you've had a chance to thoroughly examine the data you've received during the initial data exploration phase. Data transformations, such as feature engineering or complex scaling methods, can introduce unintended biases or distort the underlying patterns in the data. Here are some reasons to postpone extensive data transformations until after the initial data inspection:

Preserve Data Integrity: …

Assess Data Structure: …

Identify Data Issues: …

Evaluate Transformations' Necessity: …

Avoid Overfitting: …

Iterative Approach: …
...

NAOS:
Is it fair to tell your interlocutors that you aren't presenting any "conclusions" at the end of a data exploration, but instead merely showing them "what initially pops out", or something like that?

ChatGPT:
Yes, it is fair and advisable to communicate to your stakeholders or interlocutors that you are not presenting "conclusions" but rather sharing what initially stands out or what you've observed in the data exploration process. This kind of transparency and clear communication is important for several reasons: …

And I asked chatgpt for an ice cream recipe and it called for 1/4 cup of kosher salt. Yum!
 
That still doesn't change the fact that there are 49 million kangaroos in Australia and 3.5 million people in Uruguay, which means that if the kangaroos were to invade Uruguay, each person would have to fight 14 kangaroos.
That's terrifying. If they invaded America it would just be known as the NRA vindication invasion.
 
Bruh this is hilarious. For one, NAOS can't get enough of me even after the supposed "mute" haha. Second, he's asking very pointed questions to freaking ChatGPT to try to look smart haha. This opened up a world of hilarity I did not even know was possible.

Ask ChatGPT the questions I stated. Even just the first one, "Is the Pearson coefficient a data transformation?". He doesn't know what a Pearson coefficient is, and I proved that by letting him ramble on about how it's a data transformation. It is not. Seriously, it takes 10 seconds to ask ChatGPT to do this. Anyone can do this, and everybody including NAOS can see that he doesn't know what he's talking about.

This is what happens when you have someone larping as a data-literate person. It's ok if you're not; there was plenty of discussion without the Pearson coefficient. My argument stands with or without it. But it is freaking hilarious that NAOS does not believe he has to provide an explanation (while also demanding explanations from me) because of this strict and rigid process.

What we're seeing is what would happen if someone tried to apply for a data-analysis-related job without any experience and only the help of ChatGPT hahahaha. He tried so hard to make ChatGPT say it's inappropriate to use Pearson during "data exploration". And yet it still spit out this answer:

In addition to the Pearson Correlation Coefficient, data analysts often use a variety of statistical and visualization methods during data exploration.

Hahahahahaha. So what is it...Can we, or can we not use Pearson during data exploration? Is it, or is it not a data transformation?
 
I have multiple degrees related to analytics. I've worked in analytics for years. I'm literally at an analytics conference right now where I presented. I truly hate to be the guy who appeals to authority. Having that experience does not make me right or wrong, and I am not playing this card to say "I'm right because of expertise or authority". I feel like I'm right because of the rationale I provided, and no expertise is needed to get to the same points and conclusions.

I'm just mentioning that because it's freaking hilarious for someone to try to lecture me on what the correct process is for data exploration. This "data exploration" angle is being used as the justification for why NAOS does not have to have reasoning or rationale, and it is also the reason why I cannot use my own rationale after being attacked for providing no rationale.

Asking pointed questions at ChatGPT to try to defend this stance only to have it spit out the wrong thing. Funniest thing ever. Pearson is not a data transformation, and it can also be used as part of data exploration, all provided in the responses to NAOS's prompts.
 
NAOS:
I've found that it can be easy to miscommunicate about what constitutes a data transformation.

chatGPT:
Absolutely, the term "data transformation" can be interpreted in various ways, leading to potential miscommunication. To avoid confusion, it can be helpful to clarify the context and provide specific examples when discussing data transformations....

NAOS:
I was taught that the term "data transformation" applied to any practice that took the original dataset and added or changed values in order to investigate relationships.

chatGPT:
The term "data transformation" can indeed have a broad or specific interpretation depending on the context and the field of study. Your understanding aligns with a more inclusive definition where any practice altering or adding values for investigating relationships is considered a data transformation.
 
Yeah, I decided to unmute this when I had a few free minutes at the office today. I only go into the office once a month, and I knew there was a dude here who would laugh his *** off at your meltdown. I was right.
 
There isn't a correct process for data exploration. You're the person who's policing what a data exploration is and isn't. Not me. How can you be twisted about this feature of our back-n-forth? Answer: cuz you're melting down and lashing out... got all tied up in a knot.

The chatGPT stuff is just for fun. I went with that angle because you can't have a conversation, apparently.
 
There isn't a correct process for data exploration.

The chatGPT stuff is just for fun. I went with that angle because you can't have a conversation, apparently.

Yeah, I know that. You can't say this and also say that it must be done in such a specific way lmao. I was not the one saying you cannot use the Pearson coefficient in data exploration because it's a data transformation.

The ChatGPT stuff is a lot of fun. Even your massaged prompts explicitly justify my use of the Pearson coefficient.
 
You are so dense. Please show me one instance where I prescribed a single “must do” for a data exploration. There’s only a couple of pages to reference, so please do it.

Nobody has ever said that the Pearson coefficient was misapplied. The disagreement was over whether it constitutes a data transformation, and reasonable people can disagree about this. And the whole disagreement only took the shape it did because you didn’t explain yourself for four hours, and when you did, it was just barely. How dare anyone ask you to show your work!!

That’s your one main takeaway from the ChatGPT session? Well, I guess that’s what the critics have been saying: the imbeciles will engage with it and walk away with the one thing they wanted to walk away with. Good riddance, dude.
 

Let's get this straight: reasonable people cannot disagree that Pearson is not a data transformation. Just because ChatGPT said there can be a wide definition does not mean that it can include Pearson. Pearson does not transform the data, and a data transformation must actually transform the data. If you consider Pearson a data transformation, you are not reasonable and that's that. None of this changes the fact that you set a strict definition of data exploration that could not include Pearson.
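
To put that in concrete terms, here's a minimal R sketch (toy numbers and made-up column names, not anybody's actual dataset): cor() boils two existing columns down to a single summary number and leaves the data frame untouched, while something like a log transform or a new derived column actually changes the data.

```r
# Toy data frame with made-up column names, purely for illustration
shots <- data.frame(
  floaters_allowed = c(12, 8, 15, 10, 9),
  def_rating       = c(108, 112, 104, 110, 111)
)

# Exploration: Pearson's r is a single summary statistic computed from two
# existing columns; the data frame itself is left exactly as it was
r <- cor(shots$floaters_allowed, shots$def_rating, method = "pearson")

# Transformation: this actually changes the dataset (adds a derived column)
shots$log_floaters <- log(shots$floaters_allowed)
```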

By the way, you keep whining about it taking 4 hours. One, it's ridiculous to put a timer on it in the first place. You haven't explained your rationale at all and it's been days. You'd think after days and days of exploration you might have actually discovered something. Two, I did not use it at all during the first 4 hours. The argument stood without it, but you kept whining that I did not do enough. Why would I explain something I had not used yet? I explained it in the same post in which I initially used the Pearson coefficient. It was not needed to explain something that is very obvious to begin with. It happened to supplement everything I was already saying, and it's no surprise, because once again, it's obvious.

Here you are specifically saying I misapplied it.

In order to mop up this mess, let's just point out that your whole argument appears to hinge on the quote above that I've emboldened. It could very well be an excellent point, but there are at least two problems with it in this discussion so far. First, it is based on a data transformation that is done after an initial data exploration. I'm going to guess that you know that. Seeing that all I did was an initial data exploration (and I let everybody know that), your first reply could have been something along the lines of 'Those are interesting-but-potentially-misleading results; in order to see the real relationship here, you need to apply the following data transformation after you've completed your exploration....' But of course you didn't do that.

Here you are trying to bait ChatGPT into saying you must wait to do a data transformation (even though Pearson is not a data transformation). Of course you try to twist it and say, show me where you said "must do". That's not what I accused you of. I accused you of saying "can't do", as in you can't use the Pearson because you're exploring.

Other than data normalization and data scaling issues, is it generally good practice to avoid any additional data transformations until after you've looked at the data you've received?
Do you think it would be fair to say that applying the Pearson Correlation Coefficient is something that would be done after an initial data exploration, depending on what the exploration uncovers?
in general, how often is it apparent that the Pearson Correlation Coefficient should be applied in an initial data exploration?

Even in your attempts to massage these questions, your little friend ChatGPT only confirmed what I was saying. Pearson is data exploration. It is not a data transformation. There is no need to wait and hold off on it. The idea of withholding information in a back-and-forth is freaking ridiculous to begin with. It's very telling that in response to my arguments, you don't say they're wrong or have a counterpoint. You say it's not ok to use that yet hahaha.

Good riddance? Yeah, I doubt it. You will probably go back to following me around unprompted haha. You just can't help yourself.
 
You're prompting me to continue in this thread, so I'll engage for as long as I'm prompted and it entertains me. What's one example from your last post that has prompted me to reply again? The fact that, despite zero to super-thin evidence, you seized on the idea that chatGPT was my only reference source for my position. That was very wrong and very predictable of you. I also noticed that you chose to say that I'd been engaged in "days and days of exploration" and come up with nothing, another wild speculation with zero evidence (after all, how the hell would you know where I'm spending my time or if I've even looked at the data for one second beyond my "all too quick glance"?). In sum, you're continuing with a pattern of lashing out and making claims without evidence. That pattern goes straight back to the beginning of your entry in the conversation that was bubbling after my initial posts.

Let's see if this "conversation" can turn in a new direction with an admission from me....

I don't specialize in data analytics. I have a mixed, mostly self-taught background in SQL and R. Because of this, I don't speak the lingo of a specialist. I'd been taught that any practice that runs calculations, experiments, etc. with the received data in order to test hypotheses of linear relationships was a data transformation (that's not the full definition of what I was taught, but it's the part that applies here). Based on what I've managed to learn through the mountains of ******** in this conversation, I see that isn't the best top-line description of data transformations. I think my position on this issue might've been prolonged by the fact that I work in R, where everything is vectorized, and I rarely (if ever) "transform" the data (instead I just whip out a new data frame and have two objects to play around with). Anyway... none of this removes the fact that you were a terrible communicator, so for 4.5 hours of conversation (before you obliquely referenced "Pearson") it looked precisely like you were either sweeping data aside with an unsubstantiated claim or doing a data transformation (or it was fair to allege that you were, because you weren't actually referring to the data that I posted [and you were quoting]). Anyway, I'm moving forward from this moment with an improved idea about what a data transformation is. I wish I could thank you for it.
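
As a minimal sketch of that two-object habit (objects and columns invented purely for illustration), nothing here ever overwrites the received data:

```r
# The received data stays untouched; derived/rescaled columns go in a second object
raw <- data.frame(x = c(3, 7, 12, 5), y = c(0.2, 0.5, 0.9, 0.4))

derived <- data.frame(
  log_x    = log(raw$x),
  y_scaled = as.numeric(scale(raw$y))  # 'raw' itself is never modified
)
```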

Nobody set a timer on anything, dude. You came swinging through a conversation that was happening independently of you, doused it with your branded bad vibes, communicated poorly, and refused to show your work for 4.5 hours. It's not a timer, it's an easy and accurate description of ****** behavior. Despite how highly you think of yourself, your argument did not stand on its own before you referenced Pearson (again, obliquely), since all we saw were claims without evidence. You've made multiple claims to the "obviousness" of certain things, when they are not as obvious as you think they are. One could much more realistically claim that the 3-10 shot range is obviously a bad category for a shot, since it forces homogeneity on a bevy of different shots, than one could claim that this data point reveals obvious features about the game of basketball and what a defense can or cannot control. All your behavior here is super-clearly captured in the written record.

BTW, those weren't "massaged questions" to chatGPT. This allegation is one more clear instance that you refuse to see this social engagement (which you made yourself a part of) from more than one angle (i.e. yours). I'll refrain from telling you any more about these questions (their motivations, perspectives, etc.) because it is clear that you have no interest.

You keep asking me to explain my rationale. It's quite literally shocking to me that you still don't get it, because it's been explained to you at least 4 or 5 times. Maybe I can just re-paste this bit here and something will finally kick into place for you? Since you can't converse, let's opt for this again (so that you can see something that resembles a conversation, at least):

NAOS:
If I started a quick discussion with team members by saying "after a too-quick glance at last year's data, the following things popped out to me. I want to find time to investigate each of these things further...." Do you think it's reasonable to say that I was intending to check dependencies, etc?

chatGPT:
Absolutely, your statement is perfectly reasonable and conveys a thoughtful approach to data exploration. By mentioning that certain observations "popped out" during a quick glance at last year's data and expressing an intention to investigate each of these things further, you're signaling a few key points:...

NAOS:
Do you think it would be fair to say that applying the Pearson Correlation Coefficient is something that would be done after an initial data exploration, depending on what the exploration uncovers?

chatGPT:
Absolutely, that's a fair and reasonable approach. The decision to apply the Pearson Correlation Coefficient or any correlation analysis can indeed depend on what you uncover during the initial data exploration.


Anyway, you now have one admission from me. Here's a couple others:
(a) I want to wrap up this bad-vibing and ridiculous conversation in one or two more posts, if possible
(b) we probably won't be chatting in the future, so if you have some more you want to get off your chest, now is your time.
 


I knew you’d be back! Currently on a train and gone for the weekend so let’s keep it as short as possible. When I say you have provided no argument, I’m talking about the notion that allowing lots of floaters is bad. I obviously disagree, and despite your claims I have actually had several different arguments as to why. I was not using the Pearson coefficient to begin with. It’s not like I was sitting on this as an uno reverse card for hours. You just kept complaining that there was no rationale provided. So I did more and more until you were satisfied. The only argument you seem to have presented about the actual claim you replied to is that there are even worse spots. No need to repeat my response to that, look at it again if you like. You have repeated these talking points about data transformation and/or why it is a problem to use Pearson. You have not put together a sound argument as to why allowing a lot of floaters is a bad thing.

It shouldn’t matter that you’re not an expert. It doesn’t make anyone right or wrong if they are or aren’t an expert. But if you are not an expert, the least you can do is not shoot down arguments under the guise of a “correct” process that requires that data transformations come only afterward. Also try not to use that process to justify why there are no arguments to the actual floater discussion. The only reason I brought up my background is to let you know that you probably should not be faking it. My background doesn’t make me right, but it does allow me to sniff out the BS. Things like “reasonable people can disagree” about the Pearson being a data transformation. Coming from my background, that is a load of crap.

About massaging the prompts. That’s also something I can sniff out. For example, you had a prompt asking whether, if you “add” values, that can be considered a data transformation under a very inclusive definition. When you ask an AI about that, it’s obviously going to think about something like feature engineering as the “add” and not Pearson. For example, if you had box score data you might add TS% into the data set. That is a data transformation. I don’t even think you are aware of these definitions, and that’s fine, but I’m still saying that you massaged the prompts. The reason for that is that if you actually asked ChatGPT whether Pearson is a data transformation, it would tell you no. You obviously had the capability to do so and it is the most relevant question. Instead you asked very open-ended questions that returned a response you might have liked even if for incorrect reasons.
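
To make the “add values” point concrete, here is a minimal R sketch with made-up box-score numbers (and the common 0.44 free-throw weighting for TS%): deriving TS% appends a new column, which is exactly the kind of change an inclusive definition of “data transformation” covers, whereas computing a correlation leaves the table alone.

```r
# Made-up box-score rows, purely for illustration
box <- data.frame(
  PTS = c(24, 31, 18),
  FGA = c(17, 22, 14),
  FTA = c(6, 9, 2)
)

# Feature engineering: TS% is added as a new derived column, so the dataset
# itself changes. (Common approximation: TS% = PTS / (2 * (FGA + 0.44 * FTA)).)
box$TS_pct <- box$PTS / (2 * (box$FGA + 0.44 * box$FTA))
```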

I am very happy to continue the floater conversation if it is about floaters. Could not care less about this conversation about the legitimacy of using the Pearson. I’m obviously not going to budge from that, and I do believe my arguments that are not the Pearson stand on their own. It’s totally appropriate to use, but I also don’t think it’s necessary to show that allowing a lot of undesirable shots is a bad thing. If you have an argument as to why it could be a concern, I’m all ears. After the weekend, of course.
 
I knew you’d be back! Currently on a train and gone for the weekend so let’s keep it as short as possible. When I say you have provided no argument, I’m talking about the notion that allowing lots of floaters is bad. I obviously disagree, and despite your claims I have actually had several different arguments as to why.
Nobody ever disagreed with this statement. It bears mentioning, however, with some emphasis, that "floaters" is your word, and you seem to be hammering the hell out of it for some reason. Hmmm.
You just kept complaining that there was no rationale provided. So I did more and more until you were satisfied.
I was complaining that you were dismissing the initial/way-too-quick look at the data without providing any evidence for your dismissal. When you showed a modicum of work, the direction of the conversation certainly changed.
The only argument you seem to have presented about the actual claim you replied to is that there are even worse spots.
The way-too-quick exploration of that data presented that, very clearly.
But if you are not an expert, the least you can do is not shoot down arguments under the guise of a “correct” process that requires that data transformations come only afterward....
For my own entertainment, please continue to discount the role you played in the direction this conversation took. Hanging it all on me is 100% hilarity.
Also try not to use that process to justify why there are no arguments to the actual floater discussion.
"floater" alert!!
The only reason I brought up my background is to let you know that you probably should not be faking it.
I'm not faking anything. Never claimed to be anything that I'm not. Please find an example or stfu.
Things like “reasonable people can disagree” about the Pearson being a data transformation. Coming from my background, that is a load of crap.
I just looked through the index of three massive, well-reviewed, and well-cited reference volumes on data analytics. These are mainly for analyzing ecological data, which is where I like to play. There's a grand total of 2 sentences about Pearson. So, in other words, you obliquely cited something hours after using it as a cudgel, and it was up to me to find the time to do off-site research to try to understand what the **** you were talking about. Yeah... this is clearly me being disagreeable or unreasonable or faking... or whatever... the problem here is definitely not you.
About massaging the prompts. That’s also something I can sniff out. For example, you had a prompt asking whether, if you “add” values, that can be considered a data transformation under a very inclusive definition. When you ask an AI about that, it’s obviously ....
"Obvious" alert!! Doubling-down-on-your-perspective-is-the-only-perspective alert!!!
I don’t even think you are aware of these definitions, and that’s fine, but ...
How many assumptions are you up to now?
You obviously had the capability to do so and it is the most relevant question...
Is it the most relevant question? Your claim here is rooted in a perspective on the question-asking (its motivations, why the questions appear in the series they do versus another series, etc.), and what is that perspective again? Yours, perhaps? You wouldn't be lacking any information here, right? You know exactly what was going on in my head, I'm sure.
I am very happy to continue the floater conversation if it is about floaters.
Yeah, I'm sure you'd be happy to discuss something that you've been discussing alone. Par for the course.
...the legitimacy of using the Pearson. I’m obviously not going to budge from that, and I do believe my arguments that are not the Pearson stand on their own. It’s totally appropriate to use...
When you have a minute, please go back through this conversation and confirm for yourself that the only disagreement about Pearson was about what it was (transformation, or not) and when someone should expect (*with tears in his eyes) that it be done. Once you finally showed your work, I immediately admitted that you might have a good point, but I'd have to go look at it. The fact that you still don't get this is another one of the more hilarious parts of this interaction.
I knew you’d be back!
Just as I knew that I could give an admission and receive none in return. *Tennis clap.
 