Anybody who wants to keep on bitching about how separation isn't advice that a psychiatrist would give is welcome to post their own credentials as a therapist, psychiatrist, or counselor. We will simply thank them for their input and keep on writing our story as we see fit, though.
Re: Alt text, CentComm has all but been dismissed! Athena's got this. :D
Re: Author's Note: I don't have any credentials in mental health, but I would imagine that a percentage of accredited professionals would recommend separation, another percentage would not, and yet another percentage would prescribe some kind of medication. (There are always those who think pills are the answer) Then there's the matter of Acantha's needing rest.
I doubt the therapist intends to keep Lynn away from Acantha forever.
Yeah, I agree. Athena's got this, and the separation is more of a breathing-space thing, I think. Plus it'll help in the long run for others to see that Lynn cares for Acantha as a friend, more so than as an unhealthy emotional attachment she depended on in the midst of their crisis. -v-*
I really like how this whole conversation is panning out so far; you really get a feel for the dynamic our new arrivals are bringing in with them. :D
Crisis bonds aren’t necessarily unhealthy, or most combat veterans would be in straitjackets in round rubber rooms, doped up with enough Thorazine to treat a dozen elephants :P
People need to remember this is a future world so just maybe psychology is an actual science there. ;)
As for taking time away from someone or something that has been involved in your trauma: that's not bad advice, whether or not the suspicion turns out to be true.
My guess is that Lynn used her friend to stay strong, but to actually get better she needs to crash before the issues can be properly diagnosed and dealt with.
There are pros and cons for Lynn's potential wellbeing with the separation to be sure, but also for Acantha. She just got out of heart surgery, and once she's awake enough, she will want to see Lynn too, but the downside is that the medical team would rather have her rest quietly. It's possible that she may get overly excited, which could be to her physical detriment.
I am not a medical professional either, so take this with a grain of salt.
"And Acantha's mental health may be an issue as well, since she will end up being lonely without Lynn."
Not just lonely but really stressed out. Acantha is in a foreign city, totally at the mercy of a rival city-state, physically compromised and without resources. While, so far, the actions of New Troy have been helpful, she has no reason to trust that their motivations are entirely altruistic. I am sure she has absolute and complete faith in Lynn, but Acantha has been around palace intrigue long enough, and is quite smart enough, to know that Lynn is not running the show.
Stress is a huge detriment to healing, not only indirectly through poor sleep but directly through stress hormones such as adrenaline. If the medical team is halfway competent, they should be aware of this, be attending to Acantha's mental well-being, and be doing their best to assuage her concerns.
It is not just Acantha that may be getting stressed out - Lynn certainly is highly stressed. At the very least, let Lynn send a message to Acantha. The total lack of any communication is not helping either one.
Athena starts off by asking Lynn to finally tell HER side of what happened in Nova Roma (aka debrief), so that future decisions will be based on facts, not assumptions (Stockholm Syndrome). I think we can safely assume the grown-ups have taken charge.
(edit: That was my concern - that the therapist was being unduly influenced by Calliope's instructions, which happens in real life, and people's decisions were being based on their opinions of what happened in New Rome and not on what Lynn and Acantha actually experienced.)
With everyone going off on the Stockholm Syndrome thing...
It’s nothing more than a guess. An assumption. Jumping to conclusions without having anything more than the absolute most basic/general of facts to go on. The shrink should not have even mentioned it at this point, except in the sense of 'it's common for kidnap victims, so we have to consider it a possibility'. Once the whole story comes out, that preliminary guess will likely go away in favor of seeing it more along the lines of the sort of bond typical of combat veterans who’ve survived intense crisis situations together. Because really, how long was Lynn in New Rome? A week? Is that long enough for Stockholm Syndrome to truly set in? (It's plenty long enough for a combat/crisis bond to form, yes... and more than long enough for PTSD, which can be caused in mere moments... but Stockholm Syndrome has more in common with brainwashing than with simple psychological trauma, and that takes a lot longer.)
And Lynn is justifiably worried that Cent or someone else may treat Acantha as The Enemy, as well as needing to be able to actually SEE, with her own eyes, that Acantha's alright.
Lynn's psychological immaturity at the start of this is also a factor, one that greatly complicates matters. She didn’t have the experience, training, or mindset to be properly equipped to handle it the way soldiers do. This is a big part of why she’s as distraught as she is right now.
"Justifiably", @velvetsanity? I have seen nothing to justify that particular worry and (in my opinion) I think it a harsh and unfair indictment of both CentComm and Calliope. But, I suppose, only time will tell and you may be right.
Lynn has already seen, with her own eyes, that Acantha is all right. And there are many less-flattering explanations for why Lynn is as distraught as she is right now than that she lacks a soldier's training. (Again, in my opinion.)
Yes, justifiably. With drekhead out of the picture, Acantha represents the leadership of New Rome. Granted, when this all started, it was entirely her insane brother's doing, but at the time *he* represented the leadership. They don’t yet have all the facts about what happened or what the political situation in New Rome is, and are still operating on *assumptions*. And we all know what happens when you make an assumption: you make an ass out of you and mption.
The fact that they’re still operating on assumptions is why the worry is justifiable.
Additionally, Lynn is aware that Cent-Comm sometimes takes drastic unilateral action based on Cent's understanding of the circumstances, without necessarily informing anyone else of either those actions or her conclusions. This further justifies Lynn's worry, as she doesn't really understand how Cent's logic works, and is aware that she doesn't. That's not to say Cent would act against Acantha, or that Lynn's worries will be realized, just that her worries are appropriate to what she knows, what she understands, and what she knows that she doesn't know.
Ah! @Ebonbolt, you have hit on the very crux of the matter for me. You observe, quite correctly, that Lynn does not seem to understand much (if anything) about why CentComm acts as she does or targets whom she does. Much as primitive man cowered in irrational terror of the inexplicable and mysterious power of the thunderstorm until he gained an inkling of the science of meteorology, her anxiety is understandable in her situation. For it to be justified, we should be seeing gathering clouds and falling barometer readings.
However, given that CentComm's actions are (or appear to be) sound and rational (not infallible) strategies based on min-max principles and risk-reward analysis, it seems to me that any actual fear of CentComm intentionally initiating irreversibly harmful action (including being unforgivably rude) to Acantha is about as likely as a thunderstorm coming out of a clear blue sky with the barometer rising and no measurable winds -- i.e., next to none. Even if CentComm did assume Acantha to hold malign intent towards New Troy, acting on that without solid evidence would seem to offer far too much risk for far too little reward. So, in an objective sense there is no justification for Lynn's worries (in my opinion). "Understandable" and "Justified" are not, in my opinion, synonyms.
True, @Gil. However, in her own mind, she would be justified for a healthy worry, as she doesn't have that metaphorical barometer to read (& Acantha just had her heart replaced, after all). All of the denials ("no, you can't go see Acantha") might very well be revving her up to an unhealthy or even hysterical level of worry, so Athena talking her down is certainly not a bad thing.
A therapist having any friggen clue what they're talking about? No no. I don't think they do, they just spout a lot of BS to sound like they do. Kind of like Abstract Art Critics.
Hence that fancy name, "practice"... I agree with you. Like Sigmund Fraud: you must love mom/dad, just because he did. Or the crisis therapist I saw: you must be suffering from LMS (Little Man Syndrome), because I wanted to kill this drunk guy that ran over a little girl. As an EMT, I had to watch her die...
For everyone who is still going off on "Stockholm Syndrome".
1*: The shrink admitted that the pieces didn't fit for Stockholm Syndrome, but that she was redirecting: Comic 1532 - Not Sorry.
2: On a completely practical side, Lynn is currently a HUGE security risk, and could potentially start a war because she's mad at Cent-Comm. Like it or not, that has to be figured into the equation.
3: The Creatrixes JUST asked us to drop it.
4: I'm going back to bed now.
* edit: was actually 1531 - Redirecting
@DLKmusic, for the most part, I agree with your points 1* and 2. Point 3 is a statement based on interpretation. To me, what Rose said was: if you assert one way or the other whether it's best to separate Lynn and Acantha, back it up (with credentials). At that point, they will write the story as they see fit, regardless of outside opinions.
There are logical arguments both ways, but it doesn't seem to me that Rose intends to quash all debate, just that the storyline won't be decided in the comments section.
And on point 4, I hope you sleep well. :)
While we can argue back and forth about what the correct action to take is (if there actually is one), one thing that I think has been missed is that it is the characters themselves that have made, and will make, the decisions.
It's not even a question of (whether we think it is) right or wrong - it is what the characters or plot dictates. They will do what they do - not what we think they should do.
One can yell at the TV for all one's worth but it won't change a damn thing!
@TheSkulker
"One can yell at the TV for all one's worth but it won't change a damm thing!"
Hang on, lemme get hold of the Artist; we need to plan out a storyline where Lynn, Acantha, Marcus, and Ada go on vacation to some weird old cabin in the Forest of Death and Blood. There, they will find a strange old book, meet some people who are really into chainsaws, then do some pot, have sex, and go down into the cellar one at a time to investigate a strange noise!
@Rose: LOL! But I'm surprised you didn't include Kyle in the list of people going! Half your readers would vote for him to be the first to go into the cellar, and about half of those would want to push him down the stairs!
@Tokyo Rose :
But Roooose, the Little Sisters of the Immaculate Chainsaw don't live in the Forest of Death and Blood... they live in London's underground, where they can chase demonic cars!
I think it's perfectly rational for an outsider to have concerns about Stockholm Syndrome. They don't know Acantha was a fellow victim - and equally, if not more, in need of treatment. Might have been wiser to convince Lynn the separation was for Acantha's recovery.
If this were Game of Thrones, Acantha would have totally manipulated the various factors that conveniently left her as the surviving, ruling heir. Can't expect an outsider not to have suspicions about her role in these events.
If this was Game of Thrones, they would have threatened Lynn's life, and then killed her boyfriend, while in disguise, and have 2 doctors present not lift a single finger to save him, while a shadowy slave would heroically rescue him, only to go on a killing rampage later !!
Athena's skills are definitely coming in handy here. She has courteously and completely taken the situation in hand; an example of Taylor matriarchy Lynn would do well to emulate.
And yeah, Lynn needed to vent (and still does), so her tantrum is understandable. It's a testament to her resiliency and strength of will that she's asserting herself now. That said, a prolonged cool-down will also benefit her and everyone around her. Most definitely including Acantha.
But Gramma can do no wrong. Lynn already voiced her beef with Calli, & Dolly is the "stern parent" who will "always" naysay. Athena is confirming Dolly's ruling, while also giving her an opportunity to present her own argument, as long as she does it within certain boundaries.
Also, I can understand where the doc and Centy are coming from, though neither was there, so all they can do is go off training and data files and make sure no Stockholm crap is happening. Then Lynn can serve 'em both a fat dish of told-ya-so!
"Cent-Comm doesn't lose ground--it just strategically repositions."
Yes, but the ground decided to strategically reposition itself *away* from CentComm.
Currently, the ground is clutching at Athena's feet and wondering why the world seems to be revolving around her. Everyone else is just wondering when and where they got drunk.
It IS cool! A person should remember that a lot of advances start as fictional concepts. Today's advances in AI and robotics virtually guarantee that androids aren't all that far off in the future. Like you say, stuff doesn't stay fiction for long!
Pokéballs are science fiction now, but look at the one on a screen in the background. I think Centcomm has decided she's "gotta catch 'em all" next time, since her people seem displeased with all the killing. ;)
@ megados
Humanoid androids may be a good bit farther off (space and energy limitations make for some very tricky engineering), but if you combine an AI self-modifying program like OpenAIS with a really fast and massively parallel processing complex like Watson (the one that beat Ken Jennings at Jeopardy, not the one in the TV ads) and set it to running 'War Games' simulations against itself, we could have an early version of CentCom in less than 10 years.
@ Sheela
"Game bots aren't a new thing, per se, but it's cool that it can play with itself. :)"
They had game-bot-like programs when I started running mainframes back in the mid-1960s, but whenever they ran against themselves, they stalemated. What's super neat (and scary) about this one is how it improves itself with each iteration.
You are right, @knuut, that those are the constraints right now that stand in the way of the humanoid androids, and that a CentComm-like system is closer. (I watched with rapt attention as Watson played Jeopardy as well.) But I'm maybe a little more optimistic about the technical developments, especially if I look at the speed of development to this point. Computing power and circuit densities have gained by leaps and bounds recently, and some gains have been made in energy density and storage, as well as actuator and VLSI circuit efficiencies. We're not "there" yet, but when one looks at some of the gains made there, and some of the platforms being developed by companies like Boston Dynamics, and the plethora of platforms being currently developed in Japan, one has to wonder. My imagination tells me that early iterations of that level of AI would use a form of computing/processing similar to Watson, (indeed that's what I was thinking of when I replied) and their "brains" would be external, (starting off kind of like CentComm's dolls), but I don't think it's too far off.
On the hardware side, increases in density have been very consistently following an exponential rate of doubling every two years over the past four decades or so (Moore's Law), and there is no indication AFAIK that this is about to change.
In AI hardware specifically, we are seeing a jump of several orders of magnitude right now, simply because only now dedicated deep learning hardware is becoming available, which is clearly superior to general-purpose CPUs, or even to massively parallel vector co-processors (A.K.A. modern GPUs). However, that's a one-time gain -- after that, we are back to Moore's Law.
Now I don't really know how much of a density gap we still need to bridge at this point -- but I *suspect* it's still a few orders of magnitude, i.e. a couple of decades.
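(The arithmetic there is easy to sanity-check. Here's a quick back-of-the-envelope in Python, assuming the classic two-year doubling; the three-orders-of-magnitude gap is just my guess from above:)

```python
import math

# Back-of-the-envelope: how long does Moore's Law take to bridge a
# given density gap? Assumes the classic doubling every 2 years.
def years_to_bridge(gap_orders_of_magnitude, years_per_doubling=2):
    doublings = math.log2(10 ** gap_orders_of_magnitude)
    return doublings * years_per_doubling

print(years_to_bridge(3))  # 3 orders of magnitude -> ~19.9 years
```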
A doll remote-controlled by a supercomputer is obviously much nearer than a self-contained android -- but that's quite a different scenario...
The real challenges however are on the software side. And that's where we are getting back to the OpenAIS article. The fact that it beat a top human player is not all that impressive: in any action game, the AI has a huge bandwidth advantage -- it doesn't need to be very sophisticated to outmatch a human. The interesting part however is that AIUI it learned to cope with the complex environment by deep learning alone, rather than pre-programmed rules...
This is a very nice step in the direction of Artificial General Intelligence. In view of recent advances, it looks like we might indeed see AGI within a decade or two.
For an android (in the Data Chasers sense) however, we need Strong Artificial Intelligence, which is much further out. (The article links to other articles explaining the distinctions.)
We still *might* see that in our lifetime, too... If humankind hasn't destroyed itself before that. (Fermi Paradox anyone?)
The fact that the game bot adapts and beats itself is kind of a new departure. If such algorithms can be effectively applied to a larger general AI, the ramifications would be staggering.
To simplify, liken it to rolling your d20 to get a successful number. When a stalemate or unsuccessful operation occurs, roll for a different approach. Once success has been achieved, save the operation for future use. It then becomes a learned behavior, which can serve as another approach option.
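To make that concrete, here's a toy Python sketch of the d20 idea. The approaches, the success threshold, and the "situation" are all invented for illustration, not taken from any real system:

```python
import random

# Toy sketch of the d20 analogy: try random approaches, and once one
# succeeds, save it as a learned behavior for future use.

APPROACHES = ["flank", "feint", "rush", "wait"]
learned = {}  # situation -> approach that worked before

def attempt(situation):
    # Reuse a learned behavior if we have one...
    if situation in learned:
        return learned[situation]
    # ...otherwise "roll the d20" on a new approach.
    approach = random.choice(APPROACHES)
    if random.randint(1, 20) >= 15:  # arbitrary success threshold
        learned[situation] = approach  # becomes a learned behavior
    return approach

for _ in range(50):
    attempt("stalemate")
print(learned)  # e.g. {'stalemate': 'feint'}
```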
I truly hope you are right, but as per the previous 600 years of this comic's timeline, and the last hundred or so years of our actual history, we are awfully good at creating ways to put ourselves at risk.
One thing that is often overlooked is heat dissipation. If you have that much electronic and electromechanical stuff packed that tightly, that has to be carefully engineered in. As device efficiencies improve it might not be as big a factor, but for now, it is.
That's very true. Boston Dynamics, for example, presently uses umbilicals and engine-driven powerplants. The inefficiencies are the reason heat dissipation is needed. As efficiencies increase, less power and less heat dissipation are needed. You can then use smaller power sources, because you're wasting less power as heat.
True, but then all of a sudden, people want it to be stronger, and then inefficiencies rise again.
Just look at automobile design: even though car engines have gotten better, the cars have also gotten fatter. The result is that a Ford Escort from the '70s gets about the same MPG as a modern car does... 40-50 years later.
I agree. With cars, most of their efficiency gains have been put back into horsepower. With robotics, and eventual android development, one needs to be able to find the sweet spot in efficiency <--> power source size to make it work first, and then develop specific characteristics from there.
TL;DR: get a workable android first; then, if you want it to lift a car, work toward that.
"Even cheap cars are fairly luxurious today."
The biggest advances in luxury and safety have been more recent, I think. Engine and drivetrain advances have been slower, but over a longer period of time. It's kind of subjective, but I might be inclined to think it's a horse apiece, but I could be wrong. Having said that, I have to note that a lot (not all) of the safety and luxury improvements require power, which is ultimately incumbent on the engine to provide. That eats into gains made in engine efficiency, so you make a more efficient engine, and negate some of that with non-drivetrain load.
"Who knows how much power and upkeep a fully tactile android skin would take?"
I can only guess at the power requirements for fully tactile skin, and I think the requirements for the skin itself would actually be minimal. The processing of the information, however, would not be negligible.
As far as maintenance and care is concerned, it would depend on the environment, and what it was ultimately made of. Beyond that, your guess is as good as mine.
A thousand thanks, @Centcomm! :D Credit: @guest for the vid.
It just occurred to me: It's a perk to register on ComicFury (it's free!), and then among other things, you can edit your posts. Then, while you're at it, you can subscribe to the comic too, and that will help the comic as well! ;)
Hmmm, that makes me wonder what percentage of people actually experience that perception. I've never been affected that way, so I really can't relate, beyond viewing it as an interesting phenomenon.
So far, emphasis is placed on the artificial intelligence description of self-modifying behavior. If a humanoid construct were created with this level of sophistication, my own interpretation (opinion) would be that this would be an intelligent robot. (Yes, I know this is something of a departure from the DC universe depiction.) The thing that is still missing is imagination, and by extension, emotion. Once that hurdle is crossed, true androids can be realized.
In essence, for these constructs to be people, they themselves would have to care whether they're considered people. A robot, no matter how intelligent, does not. JMO
And I would submit that a self modifying program with a sufficient level of random input and a way to test that randomness against external criteria meets a functional definition of imagination. It's almost certainly not how we do it now but it may be close to how it started for us back before we were vertebrates (an octopus will search behind opaque objects seeking food or shelter... imagination and curiosity drive each other).
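To put a little flesh on "random input tested against external criteria", here's a minimal Python sketch: random mutations are kept only when they score at least as well against an external fitness test. The target string and the fitness rule are arbitrary stand-ins, chosen purely for illustration:

```python
import random

TARGET = "octopus"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):  # the "external criteria"
    return sum(a == b for a, b in zip(candidate, TARGET))

guess = [random.choice(ALPHABET) for _ in TARGET]
while fitness(guess) < len(TARGET):
    trial = guess[:]
    trial[random.randrange(len(trial))] = random.choice(ALPHABET)  # random input
    if fitness(trial) >= fitness(guess):  # test against the criteria
        guess = trial
print("".join(guess))  # eventually "octopus"
```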
I would also submit that what we call emotion is a much more complex issue. Many of the inputs are data of one sort or another, but the outputs appear to be a mix of data and neurotransmitter molecules running through the circulatory system. Unless an AI chooses to associate itself with a biochemical subprocessor of some sort, what we call 'feeling' is unlikely (although a complex as large as CentComm could probably emulate it very accurately).
Hmmm, I have to concede your assertion that self-modifying programming embodies a certain level of imagination. It sits right at the cusp of that which could be called imagination, so you are not wrong. We know that computers have also been tasked with making art and music with varying results, and that, too, could be called imagination. It is a very goal-oriented form (currently), but it passes muster. When I posted, I was thinking of a more abstract form, where the imagining is its own goal, such as that idea you have as you lie awake at night, and have to write down before you forget. Out-of-the-blue creative thought spontaneously occurring without being tasked to do so. Seeing images in the clouds. Pretending you're Superman. Stuff like that.
I don't really think emotion is too far flung from that level of imagination. The philosophical question, to me, is whether emotion is a subset of high level imagination. Does emotional capability "switch on" when imagination capacity reaches a certain level? It doesn't seem as though they are two completely disparate things to me. Achieving it is going to have to be a departure from current methods, and a new approach to hardware. Proper introduction of randomness seems to be a recurring component, and it may be that it's a necessary one. As you say, some kind of bioware, or organic processing may be needed. Also, quantum computing is in its infancy, but might that be part of the mix? There are also other processing methods being explored, such as optical switches, etc. What about an augmented binary with 0 being logic zero and 1 being logic 1, but weighted with an analog value? Then you could assert hierarchical values to your various logical states. (it should be noted that analog values in great quantities would increase power consumption)
Intermediate states between 0 and 1? You just described "fuzzy logic", which was a major buzzword back in the 90's, but failed to have a lasting impact...
Yes and no, @antrik. The term 'fuzzy logic' refers to a logic state having any real-number value between 0 and 1. It is essentially a software implementation of what I am talking about. It is widely used in many PID controllers and other devices. It is still having an impact. What I'm trying (somewhat unsuccessfully) to convey is a hardware implementation using an analog voltage representation of any non-zero value. This value would always constitute a 1, with a weight represented by the analog level. It may be possible that networks of these could be constructed to overcome some of the drawbacks of hard binary logic. Values of various bits could be directly compared, or caused to act directly on one another.
Such networks would augment, not replace the usual logic components.
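Here's a rough software model in Python of what I mean by a "weighted bit". To be clear, this is only a sketch of the idea, not a real hardware spec; the combination rules (weaker signal wins for AND, stronger for OR) are invented purely for illustration:

```python
# A logic value that is either 0 or 1, but where any nonzero state
# also carries an analog weight.

class WeightedBit:
    def __init__(self, value, weight=0.0):
        self.value = 1 if value else 0
        self.weight = weight if self.value else 0.0  # a 0 carries no weight

    def __and__(self, other):
        # AND: active only if both are; weight = the weaker of the two signals
        if self.value and other.value:
            return WeightedBit(1, min(self.weight, other.weight))
        return WeightedBit(0)

    def __or__(self, other):
        # OR: active if either is; the stronger signal dominates
        if self.value or other.value:
            return WeightedBit(1, max(self.weight, other.weight))
        return WeightedBit(0)

a = WeightedBit(1, 0.8)
b = WeightedBit(1, 0.3)
print((a & b).weight, (a | b).weight)  # 0.3 0.8
```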
Interesting thing is that modern artificial neural networks -- as opposed to biological ones -- actually use "fuzzy" activation functions. (I.e., the strength of the output signal varies with the strength of the combined inputs, rather than being just a strict "active" vs. "inactive".)
This is necessary, since the most common learning method -- back-propagation -- requires a (mostly) differentiable activation function, to calculate how much the individual weights (synapse strengths) contribute to the total error.
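(A tiny Python illustration of that point: a single neuron with one input, all numbers arbitrary. The weight update uses the activation's slope, which a hard 0/1 step function simply doesn't have:)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, lr = 0.5, 5.0          # weight, learning rate
x, target = 1.0, 0.9      # one training example

for step in range(500):
    out = sigmoid(w * x)
    err = out - target
    slope = out * (1 - out)    # derivative of sigmoid; a hard 0/1 step
    w -= lr * err * slope * x  # would have slope 0 almost everywhere
print(round(sigmoid(w * x), 3))  # -> 0.9 after training
```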
So in a sense, artificial neural networks can be classified as a type of fuzzy logic. However -- in spite of having a fairly specific technical meaning -- in practice it was mostly used as a buzzword as far as I can tell, one that simply fell out of fashion when it failed to generate billions in added revenue...
Also, while functionally these networks implement a kind of fuzzy logic, the actual implementation at hardware level is still purely digital. I'm not sure it's even possible to implement back-propagation in a fuzzy (i.e. analogue) circuit, since it involves non-trivial maths.
Analogue signal processing circuits are indeed preferred for (much) lower power consumption in some specialised areas (e.g. hearing aids) -- but they are unsuitable for more generic systems, since they are totally hard-coded, have limited precision, and can't do certain kinds of complex maths.
Yes, right now most of these implementations are digital representations of analog values. What I'm speaking of, are analog modules physically implemented in hardware, augmenting, and not replacing other digital construct implementations.
These modules don't need to be precise; their purpose is similar to the digital implementation, but with much less propagation delay and processing overhead, as well as assigning infinitely many values, rather than a finite number of values dependent on the resolution of a register. Back-propagation is possible where it is not in the digital implementation, and current digital processing is there to handle the "non-trivial complex maths" where needed; but informing upstream sources is part of why it's desirable in the first place, and where the purely digital implementation begins to fail.
I remember that we had a pretty long discussion (warning: walls of text) on what it would take to make a robot a "person" over on Luna Star, around the time when Amy sparked.
Empathy was one of the key attributes that I argued for.
Ah yes, I remember that. (went back and reread it) Empathy is a good litmus test as well. It does fit well with the notion of abstract thought, and in a very quantifiable way. Kudos! :)
I think it's a good example of a fair mix of logic and emotion.