They just keep on making these things. And, well, you know, I was on an airplane.
I might not have watched this otherwise, given that The Amazing Spider-Man 2 is a sequel of a reboot…
… Wait, so is Star Trek (sort of).
Eh, whatever. Which is roughly my opinion of the movie.
Yes, we’ve officially found a novel that was too frustrating to keep reading.
And thus, I gave up, some time before page 100.
The book is The Cybernetic Samurai, by Victor Milán, published in 1985.
And it has committed some terrible sins. It may be submitted for ritual recycling after I’m done writing this review.
Did You Actually Research Japan?
I am bothered by a lot of the references to Japanese culture that show up in this book. Much of it treats Japanese people like inhuman space aliens rather than just people with a different cultural background. How often does a real Japanese person go around constantly thinking about how blunt and unsubtle Americans are? Or get overheard endorsing the idea that the Japanese should be considered a separate species from the rest of Homo sapiens?
Even more distressingly, bushido is treated as the ultimate morality and the best way to do things, even though its precepts have serious problems. Such as the ritual suicide part. And the classism.
Radiation Poisoning Does Not Work That Way
The story is set in the aftermath of a World War III in which major US cities get nuked.
One major character has, as a part of her backstory, visited a city shortly after said nuking, to look for her partner. (Said partner is very dead.)
As part of the description of the attack, it’s stated that this was a ground bombing, which causes less damage on the ground, and that, I quote, “Air bursts produced no fallout to speak of.”
Which is mostly true. It depends on other details, like wind patterns and just how high up the burst is, but the gist is that the nastier radioactive material in the fireball doesn’t get mixed into dust blasted up from the ground.
On the other hand… it’s stated that the character “somehow” survived the radiation exposure from her visit. She received a massive dose (note: radiation exposure isn’t always fatal, but at high enough doses it is, and either way you need treatment), and then she was given no treatment at all because everyone expected her to die. And, later on, she suffers bizarre symptoms that don’t match up with anything I’m familiar with, handwaved as the character having some different genetic quirk or… something.
Um, no. Just no.
Machine Learning Does Not Work That Way Either
I was really hopeful about the clever little AI’s story.
And yet… it falls apart so completely, it’s hard to know where to begin.
The initial idea sounds a lot like machine learning. Set up a program, then have the computer keep changing it in some particular, perhaps random, way until it does something like what you want it to do. Seeing that as a setup for making an intelligent thing? Okay, sure, I can buy that.
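For the curious, here’s a minimal sketch of that kind of “tweak it until it works” search. This is my own toy example in Python (the target, the scoring function, and the keep-it-if-it-helps acceptance rule are all my assumptions, not anything from the book), but it captures the flavor:

```python
import random

# Toy version of "randomly change a program until it does roughly what you want."
# Here the "program" is just a list of digits, and "what you want" is a fixed target.
TARGET = [3, 1, 4, 1, 5]

def score(candidate):
    # Higher is better: count positions that already match the target.
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate):
    # The "perhaps random" tweak: change one position to a random digit.
    new = candidate[:]
    new[random.randrange(len(new))] = random.randint(0, 9)
    return new

candidate = [random.randint(0, 9) for _ in TARGET]
while score(candidate) < len(TARGET):
    tweaked = mutate(candidate)
    # Keep the tweak only if it does at least as well as before (simple hill climbing).
    if score(tweaked) >= score(candidate):
        candidate = tweaked

print(candidate)  # eventually matches TARGET
```

Real machine learning is considerably more structured than this, but the “keep the changes that get you closer” loop is the same basic shape.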
But there are several problems. First, there are already very high-level artificial intelligences in this setting. They are intelligent and adaptable.
Second, the only quality they are stated to lack, the one that separates them from human beings, is initiative. Which is defined as the ability to do something without being prompted by an external stimulus.
Think about that for a moment. How many things that you do don’t have an external stimulus starting them at some point? I like baking cookies, but I took in a lot of pro-cookie culture as a child, and also like to eat them when prompted by hunger or delicious smells. I like daydreaming about stories, but I’ve been previously exposed to the idea that daydreaming is a thing and have read or heard lots and lots of stories.
Tokugawa (the AI) is determined to be sentient when it acts of its own volition to stop an annoying input data stream.
That isn’t a response to an external stimulus?
Now, I may be misunderstanding things, and perhaps the author was actually trying to go for something more akin to free will. Okay; so, what happens if you take your lesser AIs and make some small modifications so that they can do things they haven’t been given orders for? How is Tokugawa that different from the prior generation?
Third, they start with, essentially, a child of an AI which needs to learn things. This is fine, and makes a certain kind of sense. Any good AI should be able to learn and adapt.
Except they’re teaching it by having it experience simulations of life in feudal Japan. What?
Look, it’s an AI. You don’t have to try to turn it into Data (yes, I know there was no Data when the story was written). Why not have it interact with real people, in real time? You know, like how most humans learn about people?
And, last but not least: AI. It’s an artificial intelligence. Cybernetics is something else; in sci-fi it’s usually taken to mean the melding of man and machine, which may involve AI, but is not AI alone.
Maybe that changes by the end of the story, but I’m not going to look to find out.
The film is Paycheck. The basic premise is that the lead character, Michael Jennings, works for companies to reverse-engineer their competitors’ technology, and then has his memory wiped when he’s done.
The general theme is [GIGANTONORMOUS SPOILERIFICNESS].
Now that that’s out of the way, let’s do some analysis.
The Author Is A Spoiler
The movie is based on a short story written by some guy named Philip K. Dick.
Given that he’s the author behind the original print versions of Minority Report and Blade Runner (as the poster above notes), among other things, you can infer that mind-bending is imminent.
I am reminded of Achron in many ways. The last job Jennings takes is to [SPOILER] help build a machine that can see into the future. No time travel, mind you, just see the future.
Turns out, seeing the future makes the world go bonkers down the line, more or less, so Jennings has to set things up so that, once his memory is wiped at the end of the job, he can still destroy the machine that lets people see the future. And he uses his past knowledge of the future to make it work.
Reverse-Engineering The Future
That’s how Jennings approaches the whole situation. Before his memory is wiped, he uses his foreknowledge to send himself a package of useful goodies. Once he’s back out in the world, he’s confused about why he gave up his payment… and why he sent himself a bag of junk. A few “coincidental” convenient things push him to realize that he sent himself the bag of tools he would need to change the future.
Admittedly, this is pretty cool. Solving the puzzles along with him and watching how small things can change the course of events is great fun.
There’s just one part that has me concerned.
A huge world war in the wake of future-viewing tech (predicted by said tech) is what causes Jennings to send the package to his future self, to make sure he destroys what he created.
Now, this is all well and good. Except, he and a friend go back to the machine to destroy it… after looking into the future one more time.
That “one last look” showed Jennings getting shot on a catwalk. Jennings sees this, and plans to change the future to work around this issue.
But… in between his successful escape and his reading of the future, one of the bad guys has access to the machine. And looks at what Jennings was looking at, to see him getting shot.
But, if Jennings had changed the future to avoid getting shot, why wasn’t the evil executive able to change the future such that Jennings did get shot? The only way this makes sense would be if the bad guy was looking at a record of what Jennings saw, rather than the actual future… which isn’t made explicit.
Bad Guy Computer Security
I think the villains had the idiot ball in this one.
I mean, seriously. You didn’t make backups of the plans for the future-watching device? You aren’t most of the way done building a second one at a separate facility? You just let Jennings back into your facility to get at the machine? You assume he could only be going in there for the power of seeing the future, despite the fact that you know about all the looming disasters if the machine continues to exist?
… yeah. That, and, to some degree, I think the premise of the device itself is an issue. Seeing the future inevitably means war, plague, and devastation? I think the problem is that people seem to assume the future is immutable… or that their attempts to mitigate that future are what cause it to happen. Very confusing, of course.
But, for once, could the future we see be a good one? With our efforts to bring it about causing it to happen only once we see that it can happen? Or, alternatively, does seeing that the future can be good make everyone complacent, leading to a worse future and necessitating the destruction of the device anyway?
That might be a more interesting twist.