100% agree with @Gilrandir, very well done creative team. It isn't a 'wall of text', although it certainly could have been. It was fun to look at and follow.
Re. Alt. Text: About 5 years ago the Discovery channel ran a series about a freshwater fisherman who went around the world fishing for 'monsters'. One of their episodes was about the New Zealand longfin eel. Not unlike a piranha feeding frenzy.
Speed of light is approximately 0.3 micrometers per femtosecond. Diameter of human hair is about 17 to 180 micrometers.
Bohr radius (the most probable distance between nucleus and electron in a hydrogen atom) is 52.9 picometers. That makes the hydrogen atom's diameter approximately 0.106 nanometers (and should give one some sense of scale for what chipmakers mean when they talk about nanometers), and makes a micrometer approximately 9,400 hydrogen atoms wide.
Distance between Denver and Tokyo is about 9,300 kilometers. To put it another way, the lightspeed lag alone (i.e., not counting stuff like switching delays, hardware lag, etc.) between the two is very roughly 31 milliseconds.
Just some food for thought.
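For the curious, those figures check out; a quick sanity pass in Python:

```python
c = 2.998e8  # speed of light, m/s

# Distance light covers in one femtosecond:
print(f"{c * 1e-15 * 1e6:.2f} micrometers per fs")        # ~0.30

# Hydrogen atoms per micrometer, from the Bohr radius:
hydrogen_diameter = 2 * 52.9e-12                           # meters
print(f"{1e-6 / hydrogen_diameter:,.0f} hydrogen atoms")   # ~9,450

# One-way lightspeed lag, Denver to Tokyo:
print(f"{9.3e6 / c * 1e3:.0f} ms")                         # ~31
```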
I'm not sure what the point of determining a hydrogen atom's "diameter" is, but their effective diameter is much larger than that, as they generally won't want to get that close to each other.
I do not know how one would best determine the functional width of any particular atom. I'm not a physicist, I don't really know my way around there. I only really know enough to say, "Oh, hey, I remember when I got called out for trying to use that atomic width calculation and more or less what I was told was wrong with it."
Just saying, so you know.
I suspect this is why the ‘quantum’ word was bandied around when discussing communications protocols a couple of pages ago. ‘Quantum entanglement’ is an established science fiction-y way of beating the light speed barrier for communications.
CentComm's systems are similar to many systems of today, made up of a patchwork quilt of older and newer code, compiled by various different compilers and assemblers. Carry this out over hundreds of years, and I can see why Rose is both amused and intrigued.
I used to play a text-based game that came out in 1987 that is still going to this day. It has had 3 complete revisions changing core systems, like combat and various other things. Most of the coders doing work on it are actually unpaid volunteers, and that has been the norm since pretty much the beginning. I can't even imagine how many old pieces of code are floating around, and how often some unfortunate coder tries to do something and runs right into a wall of nope because of that. The game is still a lot of fun, with a pretty dedicated player-base who put up with all the "real soon now" things that were supposed to be released years, if not decades, ago.
Cent and Tokyo working together in a task that requires complete trust... don't know if they will defeat the goldies or not, but is it really relevant any more? Just the two of them working together like this must mean the end of the world anyway... :D
I really like the way this was done verbally and didn't go all Tron (anthropomorphizing the 'adversaries'). Tokyo Rose having to put on a ninja costume would be a tired trope in the extreme.
Femtoseconds bounce off my brain. When something is measured in femtoseconds I can't help hearing "enough time for light to travel about 0.3 micrometers in vacuum" and from there I get "even at the Landauer limit there just isn't enough time or space to pass through enough circuitry for any kind of complicated processing. If shit happens way fast because it's parallelized, it still takes at least hundreds of those just for the fastest possible hardware DISP to reintegrate the threads"
Nanoseconds I can deal with. Centimeters is enough for a lot to happen. But femtoseconds.... engineer brain goes NOPE.
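For anyone who wants numbers to go with the NOPE, here's a short Python sketch of the two bounds mentioned above (room temperature is my assumption for the Landauer figure):

```python
import math

# Landauer limit: minimum energy to erase one bit, k_B * T * ln(2).
k_B = 1.380649e-23                # Boltzmann constant, J/K
T = 300.0                         # assumed room temperature, K
print(f"Landauer limit: {k_B * T * math.log(2):.2e} J per bit")   # ~2.9e-21 J

# Light-cone check: the farthest ANY signal can travel in 12 fs.
c = 2.998e8                       # m/s
print(f"12 fs light cone: {c * 12e-15 * 1e6:.1f} micrometers")    # ~3.6 um
```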
@sigpig
Judging by the current maximum number of calculations of supercomputers, which tops out at about 10^18 FLOPS (Frontier has an Rmax of about 1.194x10^18, i.e., roughly 1.2 exaflops), and the fact that this is already 1,000,000 times faster than supercomputers in the early 2000s, which operated at 10^12 FLOPS, it's anyone's guess at what speeds AI-driven supercomputers in the 39th century operate; it could be as high as a Centillion (10^600) or even as high as a Millinillion (10^3003).
1 femtosecond (fs), being 10^-15 seconds, at 10^18 FLOPS allows only about 10^3 calculations, which doesn't seem like a lot, but if you compare it to higher/faster computing power the numbers rise exponentially: 10^-15 x 10^600 = 10^585 calc/fs, and 10^-15 x 10^3003 = 10^2988 calc/fs.
Also note that this does not say anything about the size of the data packages, only about the speed with which said packages are handled. The size of the package is mainly determined by the medium used to send it through. Let's take fiber, which has a theoretical upper limit (at this point in time) of 10^15 bits/s on a single line, with the current commercial maximum count as high as 3456 lines in a single cable. Currently we don't get communication speeds of this magnitude due to both sender and receiver not being able to handle the combination of speed x package volume.
Imagine one AI supercomputer (Tokyo Rose) operating at 10^600 FLOPS sending information over a dedicated 3456-line cable (an undersea, military-grade, dedicated communications relay), each line capable of sending 10^15 bits/s, to an AI supercomputer (CentCom) also operating at 10^600 FLOPS (so there's no actual package loss on the receiving end from not being able to handle the incoming data); this would result in 10^-15 x 10^600 x 3456x10^15 bits = 3456x10^600, or 3456 Centillion bits/fs, or 259,200 petabytes/fs of data transfer.
In my humble opinion, that would most likely be enough for Tokyo Rose to send a copy to New Troy to play doctor for CentCom… but I could be overthinking it…
For all we know, she could have uploaded herself (or as much as necessary) to CC's core.
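Purely for play, here is that arithmetic in Python, kept as powers of ten so 10^600 doesn't overflow anything. One honest caveat: the fiber term by itself (3456 lines x 10^15 bits/s) comes to only 3456 bits per femtosecond, so treat the petabytes-per-femtosecond figure above as the fun speculation it is:

```python
FS_EXP = -15                               # 1 fs = 10^-15 s

# Operations per femtosecond at various (entirely speculative) speeds:
for label, flops_exp in [("today's ~exaflop", 18),
                         ("'Centillion'", 600),
                         ("'Millinillion'", 3003)]:
    print(f"10^{flops_exp} FLOPS ({label}): 10^{flops_exp + FS_EXP} ops/fs")

# Fiber side: 3456 lines at 10^15 bits/s each.
lines, per_line_exp = 3456, 15
bits_per_fs = lines * 10 ** (per_line_exp + FS_EXP)
print(f"{bits_per_fs} bits arrive per femtosecond")   # 3456
```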
The distinction is between transmission speed and transmission lag. Using your calculations (for the sake of argument), sending approximately 260k petabytes per femtosecond, the distance from Cheyenne Mountain to Tokyo, Japan is roughly 6,000 miles. Assuming lightspeed transmission, that means that by the time the first bit arrives (0.032 seconds later) you will have about 8.3x10^18 petabytes 'in the queue', but it still means that at least 0.064 seconds elapse between the time you ask "How are you?" and get back "I'm fine."
Transluminal communication speeds are the only way to beat that lag.
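That lag-versus-throughput distinction is the classic bandwidth-delay product; here's a minimal sketch in Python (the link rate is a made-up stand-in, not anything from the comic):

```python
c = 2.998e8                     # m/s
distance = 9.656e6              # ~6,000 miles, in meters

one_way = distance / c          # delay before the FIRST bit arrives
print(f"One-way lag: {one_way * 1e3:.1f} ms")             # ~32 ms
print(f"Question-to-answer: {2 * one_way * 1e3:.1f} ms")  # ~64 ms

rate_bps = 3.456e18             # hypothetical aggregate link, bits/s
print(f"Bits in flight at any instant: {rate_bps * one_way:.2e}")
```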
There are 1 million femtoseconds (1E-15 seconds each) in a nanosecond (1E-9 seconds). Radio signals travel about one foot per nanosecond. In 14 femtoseconds, a signal would travel 14E-6 nanoseconds times 1 ft/nanosecond times 12 inches/foot, or about 1.7E-4 inches (roughly 4 micrometers). That isn't enough time to get from one side of an integrated circuit chip to the other. That is why modern computer chips have multiple cores and supercomputers have many chips. I feel like the bulldog in the Warner Brothers cartoon Cheese Chasers ( https://looneytunes.fandom.com/wiki/Cheese_Chasers ) when he looks up from the adding machine and says "It just don't add up."
I am going to repeat my mantra from the MST3K theme song until I calm down.
I'm not sure where to post this, so I'll just leave it here
Many are discussing the intricacies of transmission speeds, propagation delay/lag/latency, processing speeds, etc. @Gilrandir touched on, and many ignored, the property of quantum entanglement. At this point in time, the existence of quantum entanglement has been tested and proven, using photons. It is not yet refined into a practical method, but in hundreds of years, it will likely be a common transmission medium, because of its instantaneous nature. Then take into account the research going into quantum computing, which also is proven, but still experimental. This, too, will likely be commonplace in the hundreds of years in the future where the DC universe is set. The arguments against using femtoseconds as units of time fall kind of flat at this point.
Kudos to those taking the time to try to explain, and doing the math to support their positions. (I haven't checked it, since it deals with conditions that I believe to be outdated in the DC universe)
Look at the advancements in computer technology in just the past 25-30 years, and then try to extrapolate that result for hundreds of years, taking into account some of these latest discoveries, and you'll have to agree that not only is it possible, but probable. :) JM2¢
I love all the real-world explanation of femtoseconds (I hadn't even heard of that term until now); this has been very educational in the "interesting trivia that has no relevance to my real life but still fun to know" kind of way. However, don't get worked up to the point where you need to listen to the MST3K theme song for an hour. This is a fictional story set hundreds (thousands?) of years in the future. We're supposed to suspend some disbelief in regards to the limits of our time period. We've already accepted that two AIs have become sentient and have basically run the world in a mostly benevolent manner for hundreds of years. Is it too much to accept that they might have found ways to bend the laws of physics? :-p~~
I'm not criticizing the discussion of femtoseconds, I have found it interesting, and I know this shows just how passionate everyone is about the details in Data Chasers; just don't lose sleep over details that help tell the story.
If you give me some popcorn, I'd be glad to tell you. :D
Amidst all the math, and explanations about light speed, and how it would take greater amounts of time, we were explaining that using the time base of femtoseconds was perfectly acceptable, since far into the future they would have found practical uses for quantum entanglement, and quantum computing. Quantum entanglement, used as a transmission medium would virtually eliminate the problems associated with lag and latency. Quantum computing would solve problems with propagation delay within today's chips. Both are being researched, and early stages show that they work.
Beyond that, suspension of disbelief can fill in the gaps between what is now, and hundreds of years into the future. The discussions are interesting (to me) and I was just explaining why I didn't think arguments against the phrase "approximately twelve femtoseconds later" were valid.
TL;DR I think the story line is fine. Keep up the good work!
"The discussions are interesting (to me)"
To me too. Because I have the (maybe silly) idea that discussions like this (which are triggered by good SF-stories like this one) may be the seed from which future developments may grow...
I know, I am a case beyond hope, but still... ;-)
"I know, I am a case beyond hope, but still..." <--- This isn't true, "Because I " (you) " have the . . . idea that discussions like this (which are triggered by good SF-stories like this one) may be the seed from which future developments may grow..."<--- This statement has been proven by many. :D
This is a work of fiction and the authors can posit whatever unreal physics they want. But alas, in real physics, quantum entanglement can't be used to transmit information faster than light.
It's true that entangled particles, however far apart, will have simultaneous corresponding state changes without a lightspeed delay. But this strange fact can't be used to transmit information from here to there. Quick simple handwavy explanation: In order to use entangled particles to convey information, you have to change the state of your particles to represent the information. And when you do that, you break the entanglement.
I'm not a physicist, so if you want a better explanation, ask Wikipedia.
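For the mathematically inclined, the non-handwavy version of that argument is the no-communication theorem; here is a compact LaTeX sketch of the standard textbook reasoning:

```latex
\[
  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
  \qquad
  \rho_B = \operatorname{Tr}_A |\Phi^{+}\rangle\langle\Phi^{+}| = \tfrac{1}{2} I.
\]
% Any purely local action by Alice (Kraus operators K_i on her side,
% with \sum_i K_i^\dagger K_i = I) leaves Bob's reduced state untouched:
\[
  \rho_B' = \operatorname{Tr}_A\Bigl[\textstyle\sum_i (K_i \otimes I)\,\rho\,(K_i^{\dagger} \otimes I)\Bigr]
          = \operatorname{Tr}_A[\rho] = \tfrac{1}{2} I,
\]
% so nothing Bob can measure reveals what Alice did; the correlations
% only show up after results are compared over a classical channel.
```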
"But this strange fact can't be used to transmit information from here to there. Quick simple handwavy explanation: In order to use entangled particles to convey information, you have to change the state of your particles to represent the information. And when you do that, you break the entanglement."
Sorry, but that's actually false. Particles vibrate in what is called superposition. They are in a state of quantum flux. Superposition is often described by the "Schrödinger's cat" analogy. A cat in a box is both alive and dead in the box, until observed; then, once it's observed, it is only one of the two possible states. Another analogy is the tossed coin. As it flips through the air, it is both heads and tails. Once it lands, it is either-or. The second particle will always be in the opposite state as the first. If the first coin is heads, a second entangled coin will be tails. When you freeze the first particle, it breaks superposition, but not the entanglement. The second particle will take the opposite state from the first. Information very well could be conveyed in this manner. Additionally, entanglement has been shown in diamonds.
Quantum entanglement in diamonds
All this is experimental, but with time and research, could find itself in practical applications.
*edit: If you couldn't change or freeze the particles without breaking entanglement, entanglement couldn't have been proven. ;)
Other Comics' Readers: *go hurr hurr and make crude jokes about the cussing and robot boobs*
This Comic's Readers: *engage in intense, educated, courteous, interesting discussions and exploration of physics, computing, and mathematics*
I fucking love y'all.
Loving you guys right back!
I can only speak for myself, but I like those kinds of things. I don't really get to use information about quantum mechanics in my day-to-day life, but these pages bring out these topics. The reason that the discussions get interesting is that the topics are brought up in interesting and thought-provoking ways in the comic. It's a commenter's muse, if you will, so credit where it's due for what kinds of commenters you attract.
@Tokyo Rose: what's wrong with robot boobs? I really love this comic series and the comments, but I don't mind a bit of boob every now and then too... :-) 8
After all: we still are human. ;-)
Okay, I hope this doesn't sound insulting, because it isn't meant to be but might sound like it because half of my brain is giggling hysterically and the other half is acting like a stunted three-year-old trying to reach the cookie jar on top of the fridge.
Holy hatricks, Cent & Rose?! How much of this stuff do you just know off the top of your heads and how much of this is research? It kinda scares me sometimes just how deep you seem to be able to go describing things I know very little about, yet somehow I still manage to comprehend enough to not just throw my hands in the air and cry "Uncle!" repeatedly. Yeah, I get it, you can have a lot of fun with sci-fi, but you have to have a basic understanding to sound like you know what you're talking about; otherwise people with any clue know you're full of it (i.e. too many sci-fi shows) even if they don't know any better than you. Seriously, if you're just winging it I'm impressed and you can call me a monkey.
Like was said before, I like that Centcomm's code is a literal treasure trove of the history of programming languages, that seems realistic somehow. I don't know enough about programming or AI to know if that would somehow limit capabilities down the line, but it sounds like maybe it wouldn't? Regardless, part of me wants Tokyo Rose to end up sweating a bit, or a lot, during this operation, and hopefully because of that challenge ends up with a bit more respect for Centcomm. I realize that I am anthropomorphizing Centcomm because of the dolls that are cute women and that is the AI's identity, for whatever reason (did she say in-story? I forget), but all faults aside, the Datachasers world is definitely better off with her than without, and there are times that I think that she's just as lonely as us humans can be sometimes.
I think you are right, and CentCom is probably the loneliest of all the AIs. IIRC it has been said or implied that she built or helped build the other AIs (we know she did so with Aeneas). Although I believe that is an exaggeration, and she merely helped them to integrate into a single-purpose entity. This also implies she had to go through the process of integrating the various systems that would make her who she is now all by herself, making mistakes along the way, fearing what her jury-rigging might lead up to at any possible point in the future.
This might very well explain why she implemented the DNA based master control key in the first place. She knew she wasn’t as whole as she liked (everybody) to believe, and created a failsafe by moving some additional control away towards the Taylor bloodline.
I believe that is also the reason she reached out to Tokyo Rose. CentCom has no idea of what Tokyo Rose really is, and merely believes she went through the same struggles she has, putting the two of them on equal footing and giving her a unique perspective on how to deal with the situation. This might also explain why she is willing to give Tokyo Rose core access, believing that since both are (self-created) amalgams of various systems, integrated into what appears to be a single system, Tokyo Rose wouldn't be able to invade, alter, control or corrupt her (core) without breaking the bonds that connect the individual systems, which would undoubtedly result in a degree of mayhem the world hasn't seen for centuries.
Cent comm was built. and later became sentient.. the others had similar origins .. Aeneas was one of a few projects they tried to do to help. and you saw how that worked...
@Cent - My memory is far from perfect. I thought that CentComm and Tokyo Rose were the "oldest" of the AIs we've seen so far? Or was that Deep Blue? I suppose it doesn't really matter, but for context on who had the most time to develop, for some reason I can't recall now, I was curious. :)
@Bohica - I like your response, thanks for sharing! I don't know if CentComm is the loneliest of the AIs, but she's certainly the only one we know well enough to have any clue as to her "state of mind". We're now learning that she isn't a fully integrated entity (we've only met the "chat bot" until a couple of pages ago), which just impresses me more that she has been able to manage as much as she has for this long. It makes sense that our pink-haired CentComm has been the "face" of New Troy's AIs, but it does beg the question (among many) of whether this delicate project she's employed Tokyo Rose for will change her "personality". Presumably not, but I guess we'll find out.
Having recently met two people online with DID (Dissociative Identity Disorder) and learning how they experience life has really changed my perception of what it might be like for CentComm. As I said to both those people, I don't know that I could ever understand what they are going through, just like I could never understand what a combat veteran experienced, but I do try my best to listen and learn, which isn't easy when it's a text-only communication. Being empathetic by nature has its benefits, but it does tend to make me more likely to bond with fictional characters and interpret their behavior differently than is intended. :)
i have a degree in sci fi gobbly gook and a passing knowledge of science and stuf and friends who fill in the blanks.. or help to.. the rest i make up or Krissy does in editing.. but yes i do do research on some things like time increments and things like quantum entanglement.. ( thank you Mass effect. )
@Cent - Well, for not being an "expert" on all this stuff, you do a nice job presenting it so a layman like me can understand and appreciate it. Thank you.
Re: Mass Effect - I really loved the extra bits on the creation of the Mass Effect franchise (I think those videos came out before ME2 was released), and the fact that they had one guy specifically for making the science internally consistent and believable really impressed me. That's how you write memorable sci-fi stories, because even if you can't recall all the details, you remember that it made sense.
Also, I loved the encyclopedia entries and thought they chose a great voice actor to narrate them. :)
Cent, you are being much, much too modest. Thracecius only touched the tip of the iceberg of the breadth, depth and scope of your "degrees"! While the DC world is heavily tech-based, I can't begin to recall all the other areas where you have blown us away with your knowledge and creativity.
Where do you guys shine? Let me count the ways...
• Culture - Military organization, mentality, logistics, history
• Language - obscure references, evolution, origins
• Astronomy - orbital mechanics, planetary systems
• Ecology - witness the detailed evolution of the flora and fauna, especially the monster "turd beast" and the lengths a "scientist" would go to get one
• Communication - networks, security, protocols, latency
• Psychology - the plethora of personalities, motivations, backstories, interactions
• Government, Society, History, Computer Science - the list goes on and on...
Take, for example, the tattoo on Cent's chest in last week's post - a tiny detail relative to the entire strip, yet it required significant knowledge of military culture, insignias, variations, allowances. Without such innate knowledge it would take someone half a day of research, if they even thought of it in the first place. I wish I could remember more specific examples but my brain is old and tired and now mostly retains only impressions.
Cent and Rose, you have created a complex, plausible, extrapolated universe that is coherent and self-consistent. Every single item and aspect in it has a well-thought-out, realistic backstory, of which we see maybe a tenth in comic, but it is there driving the story and characters. This creation and story-telling is just not possible for someone with only a "passing knowledge" of "stuf"!!
I am sorry, Cent, but this exhibits a few orders of magnitude above a "degree in sci fi gobbly gook".
@TheSkulker, Very nicely said. There is so much detail and depth in DC. I appreciate how you enumerated some of it. Without a doubt, DC is my favorite web comic.
Maybe some day Cent and Tokyo Rose can publish DC as a series of printed books as other web comics do.
The notion of CentComm embodying centuries of different programming languages conjures an interesting image for me. Something many of you may not know is that computer languages can either be compiled, or interpreted. Compiled languages are reduced to a binary image of '1's and '0's, and it can be literally impossible to know which source language was compiled to produce the binary image. However, with interpreted languages, the source code is retained and interpreted on-the-fly. Commonly compiled languages of today are C, COBOL, Ada, and Pascal. Commonly interpreted languages (though they can also be compiled) are Java, Perl, Python, and Ruby.
So, should we get a CentComm avatar with Ruby slippers and a Perl necklace?
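If anyone wants to poke at that compiled/interpreted boundary, Python itself straddles it: source is compiled to bytecode, which a virtual machine then interprets. The standard library's dis module makes that visible (the toy function is mine, purely for illustration):

```python
import dis

def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# CPython compiles the source above to bytecode, then its virtual
# machine interprets that bytecode; dis shows the intermediate form:
dis.dis(to_fahrenheit)
```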
My impression is that it's actually pretty easy to figure out what the source language was for a compiled program, because of idiosyncratic initialization, memory organization, etc. Certainly these super AIs shouldn't have any trouble.
Also, you pushed one of my buttons by talking about ones and zeros. It's not only compiled programs that are represented that way; *everything* stored in a computer is ones and zeros. The source program is ones and zeros. Your desktop wallpaper is ones and zeros. The music I'm listening to as I type this is ones and zeros. That just isn't a very interesting property of any particular data type (such as compiled programs).
Although everything is stored as “ones and zeroes”, to my mind there is a qualitative difference between storing executable instructions as native opcodes specific to a given hardware architecture and storing those same instructions as ASCII text (or Java bytecodes, or any other kind of intermediate virtual instruction format) which is reinterpreted on the fly each time the algorithm is executed. Of course, your mileage may vary.
And I have been a part of at least one language migration project where it was required to recode the source from one language to a different one and show that the resultant binaries produced were identical (except for the date and time stamp). Usually, of course, that is far more trouble than it is worth.
You've got to be kidding. That is an exercise in futility. Just going from one version of a compiler to another in the same language would produce a different binary! That sounds like a spec written by Congress!
True story. I will spare you the details but it was a challenge and we learned several things about our build process that weren't as controlled as they should have been. The trade-off was that, by doing this, we got a waiver from having to requalify the entire system when we changed languages as if it had been a new acquisition project. The customer also believed (at first) that we couldn't do it, but they said "Show us you can do it, and we won't make you requalify." So we did.
@Gilrandir - Thanks for that bit of insight! Even though I have a smidgen of a technical background (web-based QA circa 2000), I didn't know programming languages could be interpreted, I only knew that they were compiled. I did recognize all of the languages you mentioned, but I've never really written any code before, except a tiny bit of Ada when I was under the impression I was going to be a programmer (teenage ambitions and student loan debt - yay) and some HTML back before they had WYSIWYG editors (I still use Notepad all the time for other things).
How difficult is it to interpret programming languages, especially when there are layers of different languages? Is there some margin of error that could result in an eventual cascading failure?
@Thraecius, you have opened a can of worms by paying me a compliment. ^_^ Thank you.
The reason I say 'can of worms' is because to give you the answer your question deserves would require that I post at such length that @CentComm would wash her hands of me and @Sheela would come to my house and eat my shoes. Everyone else would just hate me -- even the ones who don't already.
I can touch on a couple of things, though. First, the word 'interpreted' in this context is overloaded. When a programmer looks at a source listing of code and 'interprets' it, it means they are trying to understand what algorithm would be executed and what effects would be produced IF a computer were actually executing the binary image the source code describes. Like reading a set of directions from Google Maps on how to get from Point A to Point B. You don't actually traverse the distance, you just understand what you would do.
When a computer executes a binary image, ultimately the ones and zeroes correspond to voltages on the pins of the CPU(s). The only meaning associated with that depends on a knowledge of the machine specifications. However, it is possible for a computer to have a program that 'translates' algorithms written in a specialized format and converts them into instructions on the fly. That program is called an 'interpreter' and it doesn't so much understand the algorithm as execute it by following the specified steps in the prescribed order. This has the benefit of letting developers create algorithms in a form more easily understood by humans, including things like documentation and semantically meaningful names, rather than just referencing everything by (for example) memory addresses.
A 'compiler' takes that readable source code and does a one-time transformation into a binary image, so historically, compiled programs ran faster and more efficiently. Nowadays computers are so much faster that interpreted languages are frequently selected because of their portability. Each compiler produces a binary image targeted to very specific hardware. (Leaving out hardware emulators, for the sake of simplicity.) But if you have 47 different computing machines, you can write a single program that will do the same thing on all of them if you just write 47 different interpreters. That can be a significant improvement over having to recompile and manage hundreds or thousands of individual programs 47 times each.
I hope that answer was helpful, but not too boring. The answer to how difficult it is when multiple languages are involved is much more subjective and deserves a post all of its own.
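To make that "follows the specified steps in the prescribed order" idea concrete, here's a toy interpreter in Python; the three-opcode instruction set is invented for illustration, not any real machine's:

```python
# A toy interpreter: it doesn't "understand" the algorithm, it just
# executes each step in the prescribed order.

def interpret(program, env):
    for op, *args in program:
        if op == "SET":                    # SET name value
            name, value = args
            env[name] = value
        elif op == "ADD":                  # ADD dest a b
            dest, a, b = args
            env[dest] = env[a] + env[b]
        elif op == "PRINT":                # PRINT name
            print(env[args[0]])
        else:
            raise ValueError(f"unknown opcode: {op}")

program = [
    ("SET", "x", 2),
    ("SET", "y", 3),
    ("ADD", "sum", "x", "y"),
    ("PRINT", "sum"),                      # prints 5
]
interpret(program, {})
```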
@Gilrandir, as usual, I have a couple of nitpicks, because of course I do. XD
First, Centcomm is very forgiving, as long as she has popcorn available. She tolerates reams upon reams of comments, arguments, and dissertations. :)
Second, @Sheela will eat your (or my, or anyone else's for that matter) shoes anyway, so it's a moot point. I wonder where that doggie has gone. Maybe if you use your shoes as bait?
Third, I can't speak for others, but *I* don't hate you. There, doesn't that make you feel better? XD I think you're just being paranoid. XD
BTW, nice dissertation.
So ... people write code because they are actually trying to document a precise series of things to do in a specific order called an algorithm. Compilers and interpreters and all the other stuff are just tools that humans use to make it easier to think and communicate about what is to be done. When it comes to computers (at least today), it all boils down to ones and zeroes -- turning lights on and off, energizing and de-energizing motors, reading spots of light and dark from a sensor, etc.
Human-readable source code has things like human-readable names and verbs. Things like 'Thermistor1' or 'Wrist_Actuator_02'. And operations like 'Read', or 'Move_To(Position)'. And if a particular program is well-written and properly documented, it is easier to look at it and figure out what it does and what went wrong. If you have lots of programs all cooperating with each other, all written in the same language, you at least have a hope that all the things and all the actions are named consistently and coherently.
However, computers don't pay attention to the source code. Only ones and zeroes. So, when the programs are written in different languages, essentially all the labels and all the context gets stripped away. The sending program just dumps some data at some memory address somewhere. The receiving program picks that data up, but it doesn't care that it is looking at 'Current_Reading' from 'Thermistor1'. Its programmer might have thought a better name was 'Temperature_Now' from 'East_Therm'. As you can imagine, if someone dumped a long and complicated tracery of interconnected things on you and said 'Here. Figure this out,' it wouldn't help if every page or two all the names change and the verbs switch from English to Latin or French. It's even worse if you have to look at just the raw binary and reason everything out based on your knowledge of the underlying hardware. Especially since, in large complicated systems, you may have multiple levels of translation and interpretation going on before you get down to the metal. But, each time you switch from one language to another, you essentially strip away a whole lot of human-useful context and start over -- hoping that the program on the other side knows and agrees with what the two of you are talking about.
Hope that is helpful.
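Here's a tiny Python sketch of exactly that label-stripping, reusing the hypothetical sensor names from above; the names exist only in each side's source, never in the bytes on the wire:

```python
import struct

# Sender: the names 'Thermistor1' / 'Current_Reading' live only here.
sensor_id = 1                      # Thermistor1
current_reading = 23.5             # degrees C
wire_bytes = struct.pack("<If", sensor_id, current_reading)

# Receiver: sees 8 anonymous bytes, must already agree on the layout,
# and is free to call the fields whatever it likes.
east_therm, temperature_now = struct.unpack("<If", wire_bytes)
print(east_therm, temperature_now)
```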
Pretty good synopsis. Some details could be filled in, but it lays the foundation. I can tell from what you post that you have computer experience, but not as much with automation systems and robotics. For instance, in those kinds of systems, there are a lot more details than "energize and de-energize motor". The ones and zeroes do more than 1=on 0=off. They form binary words that specify direction, acceleration ramp, speed, deceleration ramp, torque, etc., and send them to the motor controller at {address}. The motor controller has its own processor, which controls the power devices that actually run the motor. They are systems of computers, processors and controllers communicating with one another. That's just one example. There's a lot of detail. I do realize you are trying to simplify, but I just wanted to point out some of what's there.
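To illustrate those "binary words", here's a sketch in Python that packs a hypothetical 32-bit motor command; the field layout is invented for the example, not any real controller's:

```python
# Hypothetical 32-bit motor command word (layout invented for the example):
#   bit  0      direction (0 = CW, 1 = CCW)
#   bits 1-8    acceleration ramp
#   bits 9-16   deceleration ramp
#   bits 17-28  speed setpoint
#   bits 29-31  torque limit code

def motor_command(direction, accel, decel, speed, torque):
    return ((direction & 0x1)
            | (accel  & 0xFF)  << 1
            | (decel  & 0xFF)  << 9
            | (speed  & 0xFFF) << 17
            | (torque & 0x7)   << 29)

word = motor_command(direction=1, accel=50, decel=80, speed=1500, torque=3)
print(f"0x{word:08X}")   # one word, sent to the controller's {address}
```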
Documentation is very important, so if a program is cross-compiled into another language and the human-readable documentation is omitted, at least where I worked, a human person would go through and replace it. I had that honor once, and it's not fun. I was glad I could delegate that task later on :D
I really enjoy the series and after catching up to the current point I required much patience for the next installment. I now find checking back daily to read the comments is as much fun as the panels.
@Fototomas - You're in the right place for interesting story and comments! This is probably the only comic I read regularly (out of dozens) where I can learn as much from the comments section as I do from the story itself, which makes it really appealing to take my time and read it all. To me, that speaks to the quality of the storytelling even more than its popularity does.
Still hoping Cent and Rose eventually get recognized by some publisher with a clue though, because I bet the audience could be a lot greater with the right presence in media distribution. :)
I would be less worried about Cent being "killed", and more worried about the melding of all these systems resulting in something less..human-friendly. Sort of like the Age of Ultron theme of "the best solution to end the war is to end the warriors".
My thought exactly! I suspect it won't change her personality, or not much, but we'll have to find out what's in store for us. If there's one thing I can be sure of, though, it's that Cent and Rose always surprise me with their storytelling. Two incredible minds working together on the same story, and they never fall into the boring tropes or lame "retcon" excuses others do. The result of some situations might be something I expected, but the journey to that result is rarely predictable, which I find very satisfying. This is not recycled Hollywood mush, this is real dedication and genius at work.
I have no idea, what was the longest comment section, sounds like trivia a dedicated fan would dig up. I do recall some getting really long especially when @Sheela was in rare form.
I hope Rose can make progress fast enough...
...and does not kill poor Cent accidentally...
Alt. Text: I'm going to put that on the no go list. Along with several others that I shall not name.