7 Reviews
Star Trek: Voyager: Author, Author (2001)
Season 7, Episode 19
2/10
Absurd and demeaning premise
11 May 2020
I won't remark on the acting or level of amusement offered by this episode.

I will only comment on the depths of disgrace to humanity I felt in reaction to the serious premise of this episode and the claims made at the trial.

To claim that a holographic simulation of a person (the original human doctor) is also a sentient being with rights is an insult to humankind and a logical absurdity.

If one were to cut a few hundred outtakes of Humphrey Bogart from many of his films, create a program to splice together words and movements, and combine those with a program cataloging and indexing it all, today's technology could create a simulation of Humphrey Bogart reacting to new situations and "making" creative decisions in response to the rest of the scenario of a new film.

Now admittedly there could be quality issues, since you would be "splicing" together old clips - but the principle should be clear.
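To make the point concrete, here is a toy sketch of what such a "splicing" program amounts to. Every clip name and scenario keyword below is invented purely for illustration - the point is that the whole "performance" is nothing but a lookup over an index, with no one home:

```python
# A toy sketch of the "splicing" idea: each archived clip is indexed by
# the reaction it contains, and a "performance" is just a lookup-and-
# concatenate over that index. All names here are invented illustrations.

CLIP_INDEX = {
    "greeting":  "bogart_clip_017.mp4",
    "suspicion": "bogart_clip_042.mp4",
    "farewell":  "bogart_clip_103.mp4",
}

def splice_performance(scenario_beats):
    """Return the list of archived clips that 'react' to a new scenario."""
    reel = []
    for beat in scenario_beats:
        clip = CLIP_INDEX.get(beat)
        if clip is not None:  # only beats for which footage exists
            reel.append(clip)
    return reel
```

Feed it a new scenario - say, `["greeting", "suspicion"]` - and out comes a reel of old clips. Every apparent "decision" is a table lookup written in advance by a human being.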

What the supposed holograms like the doctor are made of are simply original high-quality simulations of the original doctor's body, mannerisms and voice. This, combined with a huge multi-thousand line program can convincingly simulate the original doctor - though today's technology would only succeed to a limited degree.

But even if the database and program were enlarged to millions of lines of code and data about the original doctor, any actions the hologram took - even creative ones - would be nothing more than a SIMULATION of a person!

Such a program can have the hologram display emotion - even act upon emotional algorithms - BUT the fact will always remain that the duplicate doctor is:

* a subset of the original, in range and complexity

* a photonic display of a simulated person - having no body or other "place" in which to FEEL anything

* the entirety of the holographic doctor's "actions" will be the direct simulation and projection of the scenario computed via lines of computer code - executed in electronic circuitry - NOTHING MORE.

The offense I experience at the absurd negation of humanity by such episodes is boundless. Allowing human perception of non-living simulations (just like TV and films) to create political movements to recognize "rights" of non-living creations is nothing less than a psychotic break from reality.

If only the above was the sole negative effect. Unfortunately, this blurring of reality and fantasy has begun erasing both logic and morality, and has started humanity down a slippery slope where feelings are no longer connected to life but belong to a subjective, self-involved world where one cries about a movie (or hologram) and yet finds human life and machines to be equivalent.
11 out of 61 found this helpful.
Star Trek: Voyager: Warhead (1999)
Season 5, Episode 24
6/10
Corrupt values
10 March 2020
Warning: Spoilers
This episode blessedly deals a deathblow to the idiotic declaring of AI-endowed machines to be lifeforms.

Ridiculous discussions defending a smart-bomb's right to exist. A bad joke.

Even when the mechanically simulated "being" is discovered to be nothing other than a bomb, the crew continues to implore it to abandon its programmed function to live a "meaningful life" in a holodeck.

So absurd, it is pitiful.

The same level of intelligence in a well-produced video game can be placed in a robot that looks human - this does not make him any more human and sentient than the computer game - except to ignorant people who are fooled by their emotions.

Sentience is NOT a well-written artificial computer intelligence program, since such a program contains NO identity and NO being that feels. It is no more real and feeling than an actor playing a pre-written role, who leaves the studio to resume his actual self, unmoved in any permanent way by his performance.

The AI programmed robot may ACT OUT a more varied set of reactions and computed thoughts than the actor, but it is nonetheless ONLY computing its reactions electronically within a limited programmed methodology written by human beings, and utterly mechanically regimented.

These episodes, though entertaining, are frighteningly naive and misleading. And they breed erroneous ideas and misplaced values.
8 out of 18 found this helpful.
Star Trek: Voyager: Prototype (1996)
Season 2, Episode 13
Identifying with machines
13 January 2020
Warning: Spoilers
I never cease to be amazed and shocked at the downward spiral of human philosophy. Its approach to science has all but erased the fundamental spirit and morality every human once knew was contained in the core of his being.

This episode is but another example of the warped application of mechanical science to understanding the grand Godliness of man and his elevated unique spirit.

As if, despite being a technical person, B'Elanna couldn't see the enormous world of difference between a human being and a machine whose nature and functions were nothing more than the very limited programming created by a human being.

The story follows her risking human life and the prime directive to save human-created mechanisms, solely on the basis of the appeal of their simulated human characteristics and programmed self-awareness.

As if a program running on a machine - whether a graceful, human-looking android or a clunky mainframe computer - could ever be truly sentient and have an identity simply because of a clever program with a modicum of artificial intelligence!

Has mankind in its love of technological conveniences actually come to believe that a human-equivalent deserving of respect and human rights can be constructed out of machinery and human-written code that, at best, simulates the infinite broadness of human thought to only a minuscule degree?

Do they not realize that such a machine - even if it is made to look human - is merely electronics and motoric parts running a program that is itself human-designed, error-filled and based upon a human creator's extremely limited understanding even of himself - let alone the depth and breadth of a human soul?

Even the best actor delivering an astoundingly realistic portrayal of a character can by no extrapolation of imagination even begin to become that fictional character in his daily life. Similarly, a human-style android is a mere shadow of a human character and has no internal identity. All attempts to create such an identity are simply pre-written computer logic that simulates human style, but has no intrinsic self at its core.

What is so horrendous about this story and others like it that abound in the later Star Trek and other science fiction stories, is the degradation of true human life and its lofty moral capabilities by equating life with mechanical lifeless mechanisms.

Mechanisms have no identity and no morality - fortunately this episode at its end does, atypically, make that point to a certain degree.
3 out of 22 found this helpful.
3/10
Technical ignorance leads to distorted morality
27 September 2019
This is another in a series of episodes (of Star Trek: The Next Generation and others) displaying technical ignorance about computers and deriving thereby erroneous moral conclusions.

A computer is a machine programmed by people to perform according to a set of rules - called a program. The program by definition is a compilation of preset logical pathways that lead to the (hopefully) desired results. This is no different than a calculator or a modern washing machine with a digital control panel. It is certainly not a life form.

Now, many ignorant people will interject: what about artificial intelligence? And what about self-awareness? Don't these give a computer the type of intelligence and self-concern that a person has?

The answer is a resounding: NO.

Artificial intelligence means that the program has added levels of analysis algorithms and the ability to select and combine from experience and hypothetical possibilities. This simulates human creativity, but ONLY to the degree of, and governed by, the programmed instructions that were devised by a human being with the inherent limitations of his understanding, limited time and finite logic. The AI computer will be a mechanical and very, very limited simulation of what the human programmed - including his own errors and finite understanding. Again: not a life form, but a limited simulation, with NO mind or identity of its own.

Next, someone will ask: what if we program self-awareness into the computer (or android)? The answer is that the "self-awareness mechanisms" are just that: mechanisms, programmed routines whereby the computer or robot has pre-conceived, limited guidelines to "take care" of its own survival. Again: pre-programmed - not emotions or identity-driven. Even the ability to change its own program or add to its own physical systems would be mechanisms that are pre-programmed and bounded by the understanding and limited capabilities of the finite lines of programming coded by the programmer.
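To illustrate what such a "self-awareness mechanism" reduces to, here is a minimal sketch. The thresholds and responses are invented for illustration only - the point is that every "self-protective" reaction is a preset, human-written rule:

```python
# "Self-awareness" as described above: a pre-programmed survival routine.
# Every threshold and every response below was chosen in advance by a
# human programmer; the machine merely walks the rule table.

def self_preservation(battery_pct, hull_temp_c):
    """Pick a 'self-protective' response from a fixed rule table."""
    if battery_pct < 10:
        return "seek charging station"
    if hull_temp_c > 80:
        return "shut down non-essential systems"
    return "continue mission"
```

However many rules are added, the routine can never react outside the table its programmer conceived - which is precisely the limitation being argued.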

No matter how sophisticated the algorithms or powerful the computing power, programmed machines are no more than virtual reality - like a great actor who portrays a character in a totally believable way. No matter how well done, we all know that he couldn't even remotely live his actual life, externally and particularly emotionally, as that character - because it is a pre-written, limited role narrowly defined by the writer, NOT a full-depth, independent, unpredictable person with a will of its own. Therefore the actor can at best give only a limited performance for a given, defined set of circumstances that the writer conceived and "programmed".

Such are the attempts to "create" self-aware AI robots. They are unfathomably limited in their choices, can be given only a pre-conceived, simulated identity, and can only simulate a person. This is why the writers of such shows usually endow the important robot "characters", like Data, with simulated personalities, physical attractiveness and "cute" mannerisms, so as to enhance believability in their "humanity", as a great actor would do. They are both nothing but illusions.

The attempt to classify such virtual reality robots as life forms is an expression of ignorance and an assault on the dignity and value of true life forms, particularly sentient beings.
16 out of 59 found this helpful.
8/10
Self-aware machines are NOT life forms
7 August 2019
Warning: Spoilers
This episode is one of many that explore the essence of a man and his personality vs. an android and his "persona".

The protagonist, Dr. Graves, is an expert in cybernetics. Living alone on a planet with a young woman associate, he has been exploring methodology aimed at adding a human persona to computer and android mechanisms.

The Enterprise is summoned by calls for help from his female aide, Kareen, because Dr. Graves is dying, as subsequently diagnosed. The Enterprise crew arrives to find Graves in pain and displaying bitterness, arrogance and regret over losing his life and the realization of many personal dreams.

The doctor makes particular effort to become close to Data, treating him as a grandson and telling him of his life and dreams. It becomes apparent that he has always loved Kareen and regrets the age difference that has prevented the possibility and consummation of a relationship.

After he passes away, it becomes apparent that the doctor has transferred his persona into Data and has co-opted his body completely, suppressing Data's "persona" to an inactive state.

As his behavior in the body of Data becomes increasingly emotional, arrogant and erratic, the Enterprise staff figure out what has happened. The co-opted Data pursues Kareen and, after being rejected, accidentally injures her. This is followed by additional altercations and injuries to crew members, leading up to a plea from Captain Picard that the doctor relinquish control over Data's body rather than, in the Captain's words, "depriving another being of his life".

Once again we are confronted with the question of whether only a man can be considered a life form or whether an artificial machine with self-awareness should also be thus categorized and have his "right to life" protected.

As in previous reviews, I must once again point out the irrationality of such arguments and the degrading attitude they project on the human soul.

To my mind, all arguments about self-awareness defining a life form are absurd and reflect utter ignorance of computers and their programming.

No matter how much artificial intelligence, ranges of decision parameters, creative options and even random ideas that you program into an artificial mind, you can NEVER create anything like a human persona that has unlimited thoughts and feelings fashioned by his unique soul and identity.

The reason is simple: the android's programming - even if self-awareness is included - is never more than the programmed reaction possibilities written by the programmer. And the programmer is a human being who, along with all of mankind, has only limited understanding of himself and the human soul/mind.

More so, a program is just that - a preset rule-book of inputs and reactions. Even if broad in its possibilities, it remains totally preset and finite, thus precluding any possibility of attributing such an android's decisions, words and actions to a unique persona of its own. The possibilities - however broad-ranged - can never breach the bounds and idiosyncrasies of the program. Period.

Therefore, the android has no feelings and no independent identity, only reactions executed by the lines of code in the program.

The moral implications of comparing such an android to a human being are frightening and no less than a degradation and threat to the dignity and rights of true life forms.

Underlying this philosophy is a malevolent ulterior motive to neutralize the imperative of moral behavior, rendering all absolute human values relativistic.

"Obviously," it is claimed, any moral limitations and value judgments can be shown to be parochial and non-binding on others, by pointing out that these artificial "life-forms" can be programmed to simulate human life with no acceptance of any or all human conventions and values. You can build a "Data" to do any vile or immoral acts with no compunctions about them, by simply programming him to lack all such values.

The goal of such a philosophy, as it appears in these episodes, is precisely to tear down all societal expectations and moral absolutes, creating a "wild west" without human values - since "life forms" without such values can be created - where everyone can do exactly as he pleases as long as he has sufficient political backing to sanction his group's morality (or lack thereof) as a "valid" system.
3 out of 25 found this helpful.
The Orville: Identity, Part II (2019)
Season 2, Episode 9
9/10
Top notch Sci-Fi production - degenerate moral message
19 July 2019
This 2-part episode is truly excellent sci-fi with excellent production values. On a par with the best of Star Trek. But the message is decadent and reeks of Godless and degrading moral equivalence.

The writers made sure to "balance" the horrific inhuman genocide of the Kaylon android race with a comparison to human suffering and rebellion à la "Roots" (the novel about Black slavery). This is a shameful deprecation of true human suffering and triumph through comparison with soulless mechanical objects that happen to have a programmed modicum of self-awareness.

Yes, an android, like any computer, can be programmed to enable self-awareness and self-protection along with artificial intelligence and a measure of creativity.

But no amount of sophistication can imbue such a programmed mechanical device with a soul, with an identity or with the feelings that make a human being a human being. Such a device can only perform within the defined bounds of its programming. And all attempts at AI do nothing to make its choices and decisions a product of a higher self or identity because the program is the ultimate creator of the bounds of reason afforded the mechanical being.

Despite the warm and fuzzy feelings the episode provides about Isaac discovering some inner loyalty and attachment to the boy and the humans, it is an invented artifice to grant legitimacy to the pure left-wing atheistic attitudes of Hollywood.

Such writers are determined to eradicate God and morality, and the resultant restrictions on human behavior, by fooling us into believing that a human being is simply a chemical-based mechanism - like Isaac. And despite the disheartening emptiness and lack of a soul that this implies, they provide us with a soothing but false salve, assuring us that even so, sympathy and respect for other lives and life-forms are possible.

This is doublespeak. As the writers so thoughtfully put in the mouth of the Krill Teleya: "your scientists claim your own species is just another kind of animal". Indeed, so do the writers of Hollywood...
5 out of 13 found this helpful.
Star Trek: The Next Generation: Evolution (1989)
Season 3, Episode 1
3/10
Grossly immoral episode
12 July 2019
While the production values of this episode are quite good, the message is disturbing. And the portrayal of the main protagonist, Dr. Stubbs, is contrived to urge the audience into sympathizing with the writers' new age view of life.

The point of contention is how to relate to the replication and technical evolution of nano-electronics that accidentally infested the ship's computer and are causing havoc.

Dr. Stubbs - whether motivated by personal interests, a humanist-creationist philosophy or simple logic - insists that the nano-virus be destroyed. The writers, on the other hand, have gathered together the super-liberal forces of the cast (particularly Picard) to claim that these are possibly life forms, all because of their developing intelligence.

This philosophy is degrading to the human mind and absurd in its logic. Clearly, a machine - even if it can improve itself and develop better circuits and capabilities - remains a machine, an automaton, a mechanism that works by logical computation no different from the relays and switches of older, pre-electronic-age trains, calculators and telephone exchanges.

A human being, or even animal life, has at its core an identity unconnected with improvements or illnesses that augment or degrade it. The soul of a man - whether one can bring himself to believe it is from God or remains agnostic on this point - is a feeling identity with characteristics unique to it.

A computer, even with artificial intelligence, can only operate according to its programming and has no identity or feelings. An unwillingness to destroy it when necessary, or simply when desired, based on claims of life owing to mere intelligence, is a grossly immoral attitude and a threat to valuing and protecting true life forms.
9 out of 63 found this helpful.
