Comic 1686 - Moving Right Along

17th Feb 2021, 12:00 AM
Average Rating: 5 (14 votes)


Lurker314 17th Feb 2021, 12:12 AM
I cannot help but think, "Sensors indicate ambient temperature well within tolerances for maximal function and durability."
mjkj 17th Feb 2021, 12:44 AM

Well, I can understand Ophie - tampering with your brain will garner some trepidation...
Oldarmourer 17th Feb 2021, 5:23 AM
C'n ah keep th' 'Lusty Bunny' bit ?
megados 17th Feb 2021, 5:52 AM

I can't blame Feelie for being a little worried. Her unease with having a corrupted brain outweighs her fear of the procedure, so that's a point in her favor already.
KarToon12 17th Feb 2021, 8:48 AM

"High oxygen content you got here."
"I rarely use it myself. It promotes rust."
Gilrandir 17th Feb 2021, 9:33 AM
The implications of the existence of Thor intrigue me. We know that, under the laws and customs of New Troy, people are ‘people’, and that synth-brain androids are ‘people’ by default. However it seems likely there are a lot of positronic-brained robots that are considered ‘property’, and not ‘people’. Thor, however, (And Doc, from Luna Star, as another example) are ‘positronics’ that still seem to be recognized as ‘people’. I wonder what their legal status is? Do they need to be individually recognized by legislative act when they emerge? Is there a standardized “Turing Test” they can take to be recognized as a ‘person’? Why do some positronics ‘spark’, while others do not, and is it, by now, a completely understood and predictable phenomenon, or is it still a mystery?
megados 17th Feb 2021, 11:09 AM

Some good questions here, and I don't have answers for most, but the one thing I would touch on is that I think (and could be incorrect) the deciding factor is sentience/sapience/self-consciousness or awareness. Without going into the attributes of each, they, rather than construction schema, are the basis of 'personhood' in the DC universe. Whatever tests there might be would probably have to do with determining these attributes.

From the DC tagline:
"Humanity's a lot more than what you're made out of..."
megados 17th Feb 2021, 7:25 PM

I would guess that the distinction between individuals/people and property would depend on the question of sentience/sapience. A robot is distinguished from an android in that way, for example. In NT, a robot could be owned, but someone considered to be a person could not. Just guesswork on my part.
Gilrandir 18th Feb 2021, 12:35 AM
That was my point: It suggests that New Troy has some kind of ‘litmus test’ for sapience. Any creature wanting to earn the rights of a Citizen, be they positronics or synth, just takes the test. Pass, and you’re a person. Fail, and you’re a thing.

A test like that has profound implications, which I won’t belabor now. But, if they don’t have such a thing, we return to the original question: How do they decide what is person and what is property? And, if they do have it — just because it is a fun question to ask — what would happen if an organic human took the test and failed?
megados 18th Feb 2021, 5:57 AM

It kind of points out some of the inequities, doesn't it? If a human failed the test, then I doubt anything would come of it. Even in NT, machine intelligence is put at a disadvantage. In another example, androids have to have some sort of distinction that marks them as android, such as eye or hair colors, VOX-sounding voices, etc. They have to stand out in some way. Humans, OTOH, can do whatever they want, including adding distinctions that would make them seem to be androids. That raises another interesting question: What of Dolly, now that her new body is virtually indistinguishable from human, in ways undetectable except by exceptional means?

Back on topic, my guess is that CentComm might be the arbiter of what constitutes sapience in NT.
Centcomm 18th Feb 2021, 9:35 AM

watches the debate and eats popcorn....
jawbone 18th Feb 2021, 10:56 AM

I just received another case of Cousin Willie's from Purdue Pop in Corydon, IN, the self-proclaimed popcorn capital of the US. Willie's niece, Mary, was a co-worker at one of my jobs.
Gilrandir 18th Feb 2021, 12:32 PM
I apologize if my genial questions meant to provoke thought and promote discussion have come close to earning the often-controversial label of 'debate' (or even worse, 'argument'). Please do not misconstrue my interest and curiosity as anything other than a complimentary expression of my appreciation for the engaging, multi-leveled, and thoroughly entertaining story the entire creative team is weaving/has woven for us.

And, although I know you prefer to sit back and reserve commentary on my posts because of <SPOILER>, I hope you feel welcome and comfortable to jump in and express yourself whenever you might be so inclined, @Centcomm. ^_^
megados 18th Feb 2021, 4:10 PM

Oh, I would rather call it a discussion. As @Gilrandir mentioned, I was also just kicking some thoughts around. Some of these ideas have been mentioned before, and some new ones here brought others to mind. Some of them are kind of intertwined, it seems.

@Centcomm, you are more than welcome to join in the discussion, as well as share some of that popcorn! ;)
Centcomm 19th Feb 2021, 11:52 AM

tosses some packets of popcorn around... See, the issue is... I know the answers to some of this but I can't say... so I'll just leave you with... "Spoilers!"
knuut 19th Feb 2021, 12:48 PM
It occurs to me that whatever New Troy citizens may or may not have, CentComm itself must have had some sort of discriminator (i.e., a 'test') built in from the get-go. Making a correct tactical assessment would require the ability to distinguish between 'casualty' and 'broken machine' under battlefield conditions, where the Turing test is difficult to apply.
Gilrandir 19th Feb 2021, 2:32 PM
I don't think I agree with that at all, @knuut. Battlefield assessments can be as simple as 'friend or foe' and still be useful. Obviously the more accurately you can model the capabilities and behaviors of the players on the battlefield, the better off you are -- but that makes it a 'want', not a 'need' or 'must have'. And, at the battlefield level, it is more about 'Can this resource be salvaged, or written off?' rather than 'Does this resource possess self-awareness?' At least in my opinion.
megados 19th Feb 2021, 9:20 PM

*catches packet of popcorn*
Thanks, @Centcomm!

I completely understand that there are some things you can't speak to. You don't have to speak to every issue, but it's nice that you poke your head in at times. I wouldn't expect you to hand out spoilers! Your mentioning it, though, gives hope that some of these will get addressed; otherwise they wouldn't really be spoilers. ;)

@knuut, I'm inclined to echo @Gilrandir here. The battlefield would be an odd place to make sapience assessments. It's really not necessary for establishing behavior patterns. The opposing forces would be more inclined to be making IFF determinations, and neutralizing foes' resources.

That said, I do think CentComm has a certain amount of interest and sway in making the sapience determination, seeing as how she's responsible for the 'seeds', and inferentially the glen as well. It would stand to reason that she would be the arbiter of that, or at least one of them.

This is one of those things where only the synthetic is subject to the evaluation. Human, organic entities are probably never subject to sapience testing, even though there are some humans who might not measure up. I wonder how NT handles humans who have suffered debilitating brain conditions through illness or injury.
Sheela 21st Feb 2021, 1:38 PM

So, a lot of this brings up the whole "what is a person" thingy.
There's also the murky waters of "not person" now, but have the "potential to be person" later.

Like, people can act totally inhumane (Prince Douche) and machines can act very humane (Doctor Avi), and still they can both be "person" - Like so :


Does that make me inhumane ?
Damn straight it does, I am doggy !
(With good workflow too)

But I am person! 😁
megados 21st Feb 2021, 7:45 PM

Precisely. It's all in the difference between determining behavior versus determining sentience/sapience.
Sheela 22nd Feb 2021, 4:42 PM

Sometimes the voices in my head get into a fight with one another.
When that happens, I sneak off to do things they don't approve of! 😁
megados 22nd Feb 2021, 7:31 PM

I have the opposite problem. When the voices in my head go silent, I have to worry about what they're getting up to. Right now, though, they refuse to speak to me. They just talk about me behind my back.
Sheela 22nd Feb 2021, 11:59 PM

Boren 17th Feb 2021, 5:58 PM
The entire concept of this, I'll call it "organically grown AI", raises a lot of questions. With 'traditional' AI programming you could have a droid 'like' anything you want it to like. All you do is program it from the get-go. This gets far messier, and very interesting.
Deanatay 18th Feb 2021, 8:08 PM
I'm guessing the 'messiness' is the intent - 'messiness' is what generates sapience. I suspect you could simply program a machine with the elements you want - this would create a servant, with exactly the skills and behaviors you wanted. This 'birthing' procedure isn't meant to create servants, but instead, citizens.
Morituri 19th Feb 2021, 9:15 PM
I particularly liked the word Howard Tayler coined for it. "Growpramming."
Just_IDD 21st Feb 2021, 1:47 AM
Not to get in the way of the other discussion, but hopefully this gets addressed: because Thor is a posi, like Doc in Luna Star, is there less concern that he will become unstable over longer time periods, and is he excluded from the planned obsolescence of the higher-functioning synth-brain androids?

I don't remember if this came up when the 150-years thing was touched on before, but why don't the androids leave New Troy for other parts when their due dates come along, as a way to extend their functional time?
Gilrandir 21st Feb 2021, 9:27 PM
@Just_IDD, I don't recall any answer being offered about why the androids allow themselves to be retired, rather than trying to escape as their duration nears its end. Maybe they don't flee because they know the Rangers will just track them down. We could even imagine a movie about it. Logrin's Run. ^_^
Sheela 23rd Feb 2021, 12:10 AM

As I understand it, the body of an android could easily work for hundreds of years ... it's the mind that goes first.

Progressive Ego Collapse was the name for it.
The downside to self-learning machines is that they keep learning, and they only have finite storage.

There are actually some very old machines around; they are called AIS systems, like CentComm, Deep Blue, Agamemnon, Ariel, Aeneas, etc.
Gilrandir 23rd Feb 2021, 2:04 AM
I think, as far as android bodies go, it is more a matter of ‘Great-great-grandfather’s Axe’.

“This is my great-great-grandfather’s axe. True that every now and then we have to replace the handle. And every ten years or so it needs a new blade. But this axe has been in our family for generations and it is still the axe of my fathers.” So, as long as you can keep replacing worn out parts, the body would be operable indefinitely.

However, as far as the brain goes, that is a different question. Synth brains are essentially atomic; but, with Q-drives, continuity of personality would also seem to be indefinite, as long as you can beat the “Xerox Problem”. But I don’t think anyone has ever suggested the A.I.S. use synth brains. In fact, I think we heard that Aeneas’ brain is at least the size of a building, so it seems reasonable to expect a completely different set of rules and parameters would apply.
megados 23rd Feb 2021, 6:53 AM

@Sheela, that was how I understood it, as well. As far as I can remember, the only androids we have seen with inoperable body parts had damage from external causes. The lifespan rule was more about keeping them from suffering that form of android senility or dementia than about body deterioration. Their personas and/or entire synth brains could be moved into an entirely new body if need be, so the body's not the issue; but if a persona becomes corrupted, it might not be salvageable.

It's an interesting concept that they keep learning, and they only have finite storage. That might make it possible that new information could overwrite key critical segments of the persona.

@Just_IDD, there was a short by Dizzasterjuice that touched on the notion of an android fleeing the mandatory termination. It begins here.
