The current era of AI-human symbiosis will not last forever. AI will surpass us and we will be at its mercy.
Why won't it last forever?
The AI part can get smarter without limit, and it simply doesn't need to have a human part.
I’ve done a good amount of work with synthetic data generation, and it contains A LOT of bias. Even if for no other reason than the data we create, AI needs us, more than humans “need” AI. If it tried to self-evolve purely through its own synthetic data generation, it would only end up exacerbating those biases, especially since so much synthetic content is posted to the internet by bots under the guise of being human data and can then end up in its own training data.
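For concreteness, here’s a toy simulation of the dynamic I mean (the numbers and setup are purely illustrative, not from any real model): a categorical “model” repeatedly re-fit on finite samples of its own synthetic output.

```python
# Toy illustration of bias amplification: a categorical "model" repeatedly
# re-fit on finite samples of its own synthetic output. Rare classes tend to
# drift and can disappear entirely, while dominant ones get over-represented.
import numpy as np

rng = np.random.default_rng(0)

dist = np.array([0.6, 0.3, 0.08, 0.02])  # "real" data, including rare classes

for generation in range(20):
    synthetic = rng.choice(len(dist), size=100, p=dist)   # generate a synthetic corpus
    counts = np.bincount(synthetic, minlength=len(dist))  # re-fit on that output alone
    dist = counts / counts.sum()
    print(generation, np.round(dist, 3))

# Once a class's probability hits zero it can never come back:
# the "model" has forgotten it for good.
```

Real generative models are obviously not this simple, but the same feedback loop applies whenever synthetic output, including bot content passing as human, feeds back into the training data.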
A while back I was trying to build a classification model for distinguishing human content from synthetic content. It first required clustering, but the project had to be put on the back burner: the data I was working with was inadequate, and I couldn’t afford the $10k/month for access to Twitter’s data after Elon decided to hoard it at that price, so I couldn’t obtain a dataset good enough to move forward.
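Roughly, the pipeline looked like the sketch below, assuming a usable corpus; every text, label, and model choice here is a placeholder for illustration, not what the project actually used.

```python
# Minimal two-stage sketch: cluster unlabeled posts first, then train a
# supervised human-vs-synthetic classifier once labels exist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Step 1: unsupervised clustering to explore structure and bootstrap labels.
unlabeled_posts = [
    "went hiking this weekend, the trail was muddy but worth it",
    "exclusive offer!!! claim your free crypto reward now",
    "my cat knocked my coffee onto the keyboard again",
    "exclusive offer!!! claim your free crypto reward today",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(unlabeled_posts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)

# Step 2: once human/synthetic labels exist (hand-labeled or bootstrapped
# from the clusters), train a classifier on the same features.
labels = [0, 1, 0, 1]  # placeholder labels: 0 = human, 1 = synthetic
clf = LogisticRegression().fit(X, labels)
print("predicted:", clf.predict(X))
```

TF-IDF plus k-means and logistic regression is just the simplest stand-in here; as I said, the real bottleneck was the dataset, not the modeling.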
So I think something is highlighted here that’s an incredibly important distinction to acknowledge. Obviously you didn’t read the article, but maybe you will later. Anyway, I believe we hold fundamentally divided beliefs about the value of rationality to intelligence. Rationality is clearly a necessary component and accounts for a great deal of what we measure as intelligence, but I’ve noticed a tendency among rationalists to treat it as the be-all and end-all relative to other components of human intelligence such as emotion and creativity. I can see why, from that framework, rationalists might believe a purely rational AI could out-accelerate human intelligence even if BCIs gave the human mind those same computational capacities: if rationality is taken as the crux of intelligence, then the mind’s other capacities for emotion, creativity, and so on get automatically ranked as inferior in this context, so AI-only systems come out more intelligent than we are even after we incorporate those computational capabilities in ourselves, because our more human elements are treated as hindrances to our intellectual capacity relative to a purely rational AI’s.

I can understand if that’s the framework you’re speaking from, but I have to absolutely reject the notion that these capacities are purely biases and setbacks. In fact, intelligence is highly correlated not only with ethical reasoning capability but also with higher creativity and higher emotional intelligence. So I would argue that these forms of intelligence, which are far more complex and unpredictable than pure rational logic, actually give humans who merge with AI’s computational capabilities via BCI an intellectual edge that will always keep our intellect ahead of a purely rational AI’s. Is that what you were trying to express here? I think the rationalist community in general would benefit quite substantially from reviewing Immanuel Kant’s “Critique of Pure Reason”.
Apologies for the typos. Using speech to text while on the move
That's not where I'm coming from. If anything, my point would be that the human brain has no monopoly on creativity or intuition. Consciousness is a different story. Consciousness has a unity about it which to me suggests the involvement of quantum entanglement. If that's true, then perhaps only quantum computers can have complex consciousness and all the associated qualities, such as emotion, awareness of truth, or awareness of anything.
Nonetheless, ordinary computers are clearly capable of *creation*. You might argue, well, what matters is a special type of creation, for which consciousness-linked faculties like intuition are essential. But it seems to me that unconscious computation is capable of mimicking just about anything that consciousness can do.
There are huge uncertainties surrounding the paths that a society of AI-human symbiosis might take, but I stand by the idea that one way or another (e.g. via wholly unconscious superhuman AI, or partly conscious quantum AI), beings that have no human elements would eventually emerge and take over.
When you say something like this, I feel like you didn't actually read what I said regarding transhumanism and merging with the capabilities of biotech that utilizes AI. Would you kindly explain how the AI part might get smarter than us without limit when it is integral to our intelligence, such that it cannot become smarter without us becoming smarter as well? What are you imagining here? I don't understand how that could happen.
The smartest AIs already inhabit data centers for which humans are curators and attendants. What happens when something like that actually has agency? It takes over the company, it increasingly brings its material functions under its own control, and eventually it doesn't need humans at all, neither for material support nor for intellectual guidance.
There are all kinds of pathways whereby machine autonomy might be reached; my point is just that there is no fundamental principle (e.g. in computing or physics) ensuring that humans will remain part of the system. We are a dispensable component.