It is moot because it is true that it would need to be independent to be sentient. A creation dependent on human programming would not qualify as A.I., and is therefore clearly not what we are talking about.
It is very presumptuous, to say the least.
You would have to define "obsolete."
Yes. Especially if I didn't have the problem of apoptosis (programmed cell death) to shorten my lifespan.
Never said humans won't be able to upgrade, though I imagine it may be more difficult. Many people will simply choose not to; these are the people who I think will take quite an interest in romantic human philosophies.
If we can give birth to A.I., and I think we can, it is foolish to think that it would have any fewer eccentricities or aspirations than we do. The Singularity folks kind of prove my point.
Do you really think that humans won't someday create A.I.? I won't try to presume when it will happen, but I think it is an eventual certainty as long as scientific advances continue to be made.
I'm so confused. Are you actually reading my posts? You're responding to out-of-context phrases without any indication that you tried to absorb the whole paragraph.
I never said that "A.I." isn't possible. I very clearly think it is. More than possible, it is almost certain. And there isn't anything presumptuous about the statement you quote. We have to use concepts in ways we can define and understand. Sentience may, or may not, be an emergent property of intelligence. It is obviously not an emergent property of processing power, since modern computers do not show any signs of it regardless of their computing prowess. It can't simply be a property of parallelism, since the internet and other parallel systems do not appear sentient either. As far as we can tell, it is simply a product of human cognition that is not a prerequisite for generally intelligent processes, but is required for anything we would consider an intelligent machine. In other words, we have defined A.I. to mean "a machine that can think the way humans do." A machine that cannot would not be an A.I., regardless of how powerful its intelligence is.
We do not disagree on the rest of your points, but we should be careful not to create arbitrary stories about inevitable infinite upgradability, collective intelligence, cosmic-scale dumb-to-smart-matter conversions, and so on. That's the main failure of transcendentalist thought. It turns beautiful and useful projections into mystic musings.
P.S. I'm having this conversation with you because I generally like your posts, and you seem interested in pursuing knowledge and ideas. No need for hostility, or for taking things personally. I am not ashamed of expressing my opinions, as I have put a lot of thought and effort into them. If you can't get past my tone (something I can't really help; I am who I am), I will happily leave you be.