Science Fiction Round 33: A Book Too Horrible To Finish
Yes, we’ve officially found a novel that was too frustrating to keep reading.
And thus, I gave up, some time before page 100.
The book is The Cybernetic Samurai, by Victor Milán, published in 1985.
And it has committed some terrible sins. It may be submitted for ritual recycling after I finish writing this review.
Did You Actually Research Japan?
I am bothered by a lot of the references to Japanese culture that show up in this book. Much of it seems to treat Japanese people like inhuman space aliens rather than just people with a different cultural background. How often does a real Japanese person spend all their time thinking about how blunt and un-subtle Americans are? Or voice support for the idea that the Japanese should be considered a separate species from the rest of Homo sapiens?
Even more distressingly, bushido is treated as the ultimate morality and the best way to do things. Even though its precepts have many issues, such as the ritual suicide part. And the classism.
Radiation Poisoning Does Not Work That Way
The story is set following a World War III scenario, which sees major US cities getting nuked.
One major character has, as a part of her backstory, visited a city shortly after said nuking, to look for her partner. (Said partner is very dead.)
As part of the description of the attack, it’s stated that this was a ground burst, which causes less blast damage on the ground than an air burst would. And that, and I quote, “Air bursts produced no fallout to speak of.”
Which is mostly true. It depends on other details, like wind patterns and just how high up the burst is, but the gist is that an air burst’s fireball doesn’t suck up dust from the ground for the nastier radioactive material to ride around on.
On the other hand… it’s stated that the character “somehow” survived the radiation exposure from her visit. She was expected to die of the massive dose (note: acute radiation exposure isn’t always fatal, but at high enough doses it is, and either way you need treatment), and so she was given no treatment at all. Later on, she suffers bizarre symptoms that don’t match up with anything I’m familiar with, handwaved as the character having some different genetic quirk or… something.
Um, no. Just no.
Machine Learning Does Not Work That Way Either
I was really hopeful about the clever little AI’s story.
And yet… it falls apart so completely, it’s hard to know where to begin.
The initial idea sounds a lot like machine learning: set up a program, have a computer change it regularly in a particular, perhaps random, way until it does something like what you want it to do. As a setup for making an intelligent thing? Okay, sure, I can buy that.
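For the curious: that "mutate it until it behaves" scheme really is a thing, usually called random hill climbing or an evolutionary algorithm. Here's a minimal toy sketch of the idea in Python. The bit-string "program," the target behavior, and the scoring function are all my own illustrative choices, not anything from the book:

```python
import random

# Toy "mutate until it does what you want" loop: random hill climbing
# toward a target bit pattern. The target stands in for "the behavior
# we want the program to have."
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def score(candidate):
    """Count how many positions already match the desired behavior."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(seed=0, max_steps=10_000):
    rng = random.Random(seed)
    # Start from a random "program."
    program = [rng.randint(0, 1) for _ in TARGET]
    for step in range(max_steps):
        if score(program) == len(TARGET):
            return program, step
        # "Change it in a particular, perhaps random, way": flip one bit.
        mutant = program[:]
        i = rng.randrange(len(mutant))
        mutant[i] ^= 1
        # Keep the change only if it gets closer to what we want.
        if mutant is not program and score(mutant) >= score(program):
            program = mutant
    return program, max_steps

best, steps = evolve()
print(best, steps)
```

Of course, this only optimizes toward a fixed, pre-specified goal, which is exactly why "do this until intelligence falls out" is a big hand-wave.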
But there are several problems. First, there are already very high level artificial intelligences in this setting. They are intelligent and adaptable.
The only quality they are stated to lack, the one that separates them from human beings, is initiative: defined as the ability to do something without being prompted by an external stimulus.
Think about that for a moment. How many things that you do don’t have an external stimulus starting them at some point? I like baking cookies, but I took in a lot of pro-cookie culture as a child, and also like to eat them when prompted by hunger or delicious smells. I like daydreaming about stories, but I’ve been previously exposed to the idea that daydreaming is a thing and have read or heard lots and lots of stories.
Tokugawa (the AI) is determined to be sentient when it acts of its own volition to stop an annoying input data stream.
That isn’t a response to an external stimulus?
Now, I may be misunderstanding things, and perhaps the author was actually going for something more akin to free will. Okay; so, second problem: what happens if you take your lesser AIs and make some small modifications so that they can do things they haven’t been ordered to do? How is Tokugawa that different from the prior generation?
Third, they start with, essentially, a child of an AI which needs to learn things. This is fine, and makes a certain kind of sense. Any good AI should be able to learn and adapt.
Except they’re teaching it by having it experience simulations of life in feudal Japan. What?
Look, it’s an AI. You don’t have to try to turn it into Data (yes, I know there was no Data when the story was written). Why not have it interact with real people, in real time? You know, like how most humans learn about people?
And, last but not least: AI. It’s an artificial intelligence. Cybernetics is something else, usually taken in sci-fi to mean the melding of man and machine… which may involve AI, but is not AI alone.
Maybe that changes by the end of the story, but I’m not going to look to find out.