Conversation
i'll be really scared when AI figures out natural language and consistently natural image generation

i hope neuralblender memes don't end up like the bad-autocorrect and siri-not-understanding memes, quickly made obsolete
@shmibs well humans speak language naturally and can tell that neural net generated images look abnormal. you don't think it would be possible for both to become as consistently reflective of what they emulate as voice recognition has? for people not to be able to tell the difference between an AI's speech and a person's, or between an AI-generated image and a real one?
@shmibs i think you've misunderstood my post? individual human beings don't need to be proficient in multiple cants or jargons or what have you. an AI proficient in emulating natural language doesn't need to know everything. if what it prints appears indistinguishable from human speech with high fidelity, that is the paradigm shift i'm referring to. just as humans generate language naturally and, when faced with an unfamiliar subject, do not return an error, so can a neural net.
@shmibs i'm just talking about passing the turing test consistently which unfortunately will happen
NOT PROOFREAD
@shmibs
so i want to clarify: are you saying current neural nets won't be able to simulate natural language, or that no type of program or algorithm ever will? to clarify, my OP is broadly about satisfying the perception of an undefined but satisfactory sum of subjective observers, to an undefined high degree, regarding the "naturalness" of inorganically generated semiotic/aesthetic content. i'm phrasing it like that to be as precise as possible about my intention.

to satisfy this impossible-to-prove-or-disprove quality for images, such an algorithm would need to know "enough" about existing objects (word-defined, but also image-related-to-word-defined) to generate content. that knowledge is broadly limited by the words we have and the things we see (i.e. enough data will definitely satisfy it), but in practice only by the more common things, with room for educated blank-filling, because it's relative to individual people whose conscious minds don't have anywhere near the same data. i think this is very feasible.

the other thing an algorithm would need to do is circumscribe the possible relations of its seeded objects: avoiding aberrations that would make the image look unreal regardless of whether there is sufficient data about an object, which is also especially useful for creating new objects from convergent seeds and for preventing discontinuities between objects. i'm calling this a different function in the sense of "this would happen", not to separate the concepts/implementations, because in practice if you have enough data to render objects with fidelity, you have enough of the pictures they are drawn from, which aren't just isolated objects, and so the net learns one from the other. probably.

of course this would disrupt the generation of impossible objects, but whatever, it's supposed to look natural anyway. it can't include a general algorithm for what's possible in reality, but the average person doesn't know that either, so there's no need for that much autism, which is my whole point. and especially considering that GPT-3 does "non-realistic" but still "natural-looking" styles, maybe instead of "natural-looking" i'll say it looks like it was either made by a person, if applicable, or looks like reality, if applicable. in both cases, people don't perceive the image as the product of inorganic machinations. anyway, i think this is possible in the future.

with regard to natural speech generation, that's much more complicated and i didn't intend to think about it that much, but i'll give my thoughts. in OP i was referring to limited generation of language that sounds natural, like "given these speech objects and a relation of them, is this how a human (let's say an average native speaker) would create something from it", and a resulting "dialogue", the emulation of conversation, which is where the turing test comes in.

i realized that what you're referring to is the specific form of language generation that takes a seed and comes up with writing from it. i'd guess this is achievable in a consistently pertinent and natural-sounding way, though short, tropey, and in the case of nicher things plagiaristic: basically the Jeopardy AI model of natural language plus google searches and ngrams queries, but on steroids. it wouldn't be able to do much, but i never meant that as a criterion for being natural. it seems your interpretation is of a machine truly writing, in which case i'd agree that's very far from feasible (and also sacrilege), requiring far more than natural-sounding language itself: operations on outside data (searches) and operations on itself (self-continuity and development). it would require "imagination", that is. i didn't realize that's what you meant until well into writing this response actually lol, 'cause i don't consider the operation of writing literature from a seed analogous to "rendering a valid picture result from a literally interpreted concise word seed". and now i realize you got the idea that that was what i meant from the fiction generators which are somewhat popular, but i didn't intend that. to me they are highly different, because speech does not have a "solution" in that manner; ideation is so different from solving, and the breadth and nuance of concepts and styles of expressing them is so much greater than perceived objects.
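the "ngrams queries on steroids" idea can be sketched in miniature: a toy bigram model that learns which word follows which from a corpus and then samples a continuation from a seed word. this is just an illustrative assumption about what "ngram-style generation" means, not any specific system mentioned here; the corpus and function names are invented.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a bigram table: word -> list of observed next words."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, seed, length=8, rng=None):
    """Walk the bigram table from a seed word, sampling a next word
    at each step; stop early if the current word was never a prefix."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        nexts = table.get(out[-1])
        if not nexts:
            break
        out.append(rng.choice(nexts))
    return " ".join(out)

corpus = "the net learns language from data and the net learns patterns from data"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

with enough data, every short continuation looks locally "natural" even though the model has no idea what it is saying, which is more or less the point being made above.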
i guess modern/abstract art could pose a tiny version of the same problem, but visual reality is something you can see; the opto-scape and its patterns are concrete and limited. the idea-scape is a construct in the first place, and to truly emulate it is to emulate the human mind, in a strong-AI sort of way, i.e. it would actually have to "understand" instead of merely operating upon. that, or have an insane amount of data which doesn't exist, because of the sheer permutations of potential ideation. any writing would be hackneyed and derivative. anyway, AI achieving that was never the point of my post, which is based only on the view of the observer. if it is a question of "giving natural-sounding 'answers' for seed inputs", i'd argue it's possible and probable, but writing is different for sure.
what i was referring to by natural language is both a better model of a language itself and output from a statement that is indistinguishable from what a human would come up with. this can be extended into the classical turing test in a limited way (less a dialogue and more call-and-response, or somewhere in between), tailored to specific use-contexts but not exclusive to them (reacting to input with low-fidelity output, with tangents and counterquestions). would it fall apart under scrutiny? i think not necessarily in a way that would alert a human: the same gap-filling rebuffs can be used and reused, basically statistically enhanced fail-safes with a regulation of repetition, because the output isn't a "solution". with running dialogue, of course, there is the necessity for learning, and the lack of familiarity with and applicability of the algorithm to the situation increases as it becomes more complex and specific. at a certain point you are aping the cogitations of the human mind itself, which is futile (i choose to believe this, thanks). but can you fool a high percentage of people with a high percentage of common "replies"? i think so. i think variations of this can and will be developed and used and make it into the average person's day, making fewer gaffes over time as it creates data perfectly situated to benefit itself.
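the "reused rebuffs with a regulation of repetition" fail-safe can be sketched as a tiny fallback responder: when input is unfamiliar, it picks a generic deflection from a pool while remembering its last few picks so it doesn't repeat itself back-to-back. the class name and rebuff pool here are invented for illustration, under the assumption that "regulation of repetition" just means a short memory of recent replies.

```python
import random
from collections import deque

class FallbackResponder:
    """Toy fallback chatbot: answers any unfamiliar input with a
    generic rebuff, avoiding rebuffs used in the last `memory` turns."""

    def __init__(self, rebuffs, memory=2, rng=None):
        self.rebuffs = list(rebuffs)
        self.recent = deque(maxlen=memory)  # sliding window of recent picks
        self.rng = rng or random.Random(0)

    def reply(self, _message):
        # prefer rebuffs not in the recent window; fall back to all if exhausted
        fresh = [r for r in self.rebuffs if r not in self.recent]
        choice = self.rng.choice(fresh or self.rebuffs)
        self.recent.append(choice)
        return choice

bot = FallbackResponder([
    "interesting, tell me more",
    "what makes you say that?",
    "i hadn't thought of it that way",
])
replies = [bot.reply("anything") for _ in range(4)]
print(replies)
```

with the pool larger than the memory window, no reply ever immediately repeats, which is the whole trick: the output never has to be a "solution", just varied enough not to alert the observer.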

not really sure what my point was here tbh, sort of a stream of consciousness. i don't know much about AI so yeah.