I fully realize that this post is quite a reach, and maybe an egregious one at that. But if nothing else, I think it provides at least an amusing thought exercise.

Practical AI application efforts in the modern world have quite a lot in common with alchemy. I am not talking about scientists carefully applying the principles of machine learning to their datasets. I am referring to the buzzword-hungry wave of "$PRODUCT, BUT WITH AI!" companies. Throwing neural networks at everything and seeing what sticks. It is basically the Second Alchemy.

Both alchemy and this shallow approach to AI are unscientific, based mostly on misunderstandings or esoteric interpretations of the science. "This stone gives you immortality" and "This set of formulas predicts ALL OF THE FUTURE" are both quite absurd from the point of view of somebody competent in chemistry or neural networks, respectively.

Both disciplines give people the idea that they can create something incredible without understanding the underlying principles. Which might even be fine, but in these cases it is pushed to the absolute. Creating impossible materials. Crafting cosmic oracles to obtain unknowable information. A desire to be living gods.

These days, of course, this discipline is also accelerated by the ever-churning circuitry of marketing and market competition. Mid-level companies are engaging in what is essentially technomancy, invoking the occult spirits of "MUH MACHINE LEARNING" to gain a competitive advantage.

1

And while I can stay angry at Silicon Valley for attempting to build immortal levitating sand castles without ever seeing a single grain of sand, that is not the point.

An interesting question is: what will the Third Alchemy be? What shapes can this hypothetical future discipline take?

Alchemists had to ask "will it work?" before every experiment. They didn't know what would happen if they mixed certain amounts of elements, heated them to a certain temperature, and sieved the result through a goat intestine.

Modern technomancers have to ask the question "does it work?". Configuring a pre-made neural network software product is hardly a challenging task. But after seeing the result, it's incredibly hard to tell if what you have is an accurate model. Was the test data too similar to the training data? Was there enough training data? And what about convergence? Overfitting? Those are hard questions to answer for somebody not skilled or experienced in this discipline.
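To make that concrete, here is a minimal sketch (assuming numpy and scikit-learn, with entirely synthetic noise data) of how "does it work?" can mislead: evaluate a model on data that overlaps with its training set and the numbers look wonderful, while telling you nothing.

```python
# Sketch: leaky vs. honest evaluation of a small neural network.
# The data is pure noise, so no model can genuinely predict the labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))       # synthetic features
y = rng.integers(0, 2, size=500)     # labels are random noise

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, y)

# "Test" on data that is really just the training data again.
leaky_acc = accuracy_score(y, model.predict(X))

# Test on genuinely fresh data drawn from the same noise process.
X_new = rng.normal(size=(500, 20))
y_new = rng.integers(0, 2, size=500)
honest_acc = accuracy_score(y_new, model.predict(X_new))

print(f"leaky evaluation:  {leaky_acc:.2f}")   # typically near 1.0
print(f"honest evaluation: {honest_acc:.2f}")  # hovers around 0.5
```

The configuration step took a handful of lines; telling whether the resulting model means anything is the part that requires actual expertise.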

We set the question in the present tense because it's nigh impossible to examine a deep learning model while it is static. It has to actively produce results to be evaluated. Only the experts can dive deep into the circuitry and reveal the hidden pathways, not the money-hungry startup hackers hunting for their next round of investment. Thus the present tense.

Extrapolating from here, a possible question could be constructed by following the vector of time running through the two previous questions. The inquiry of the Third Alchemy could be "did it work?"

2

Did it work? This question tells us that the process is done, but there is no obvious proof that it is. A non-determinable, non-continuous operation. Let's extrapolate some more.

  • In alchemy, humans make things for humans, confined by a framework of nature. The fallacies of alchemy can be pointed out by those skilled in nature (science).
  • In technomancy, humans make things for humans, confined by a framework made by humans. The fallacies and errors of the doers can, in this case, be pointed out by AI experts.
  • In the Third Alchemy, it seems, humans will make things for humans, confined by a framework made by something beyond humans. AI seems to be a good answer for that. And since AI makes the rules in this case, AI is also the one that can provide real analysis and error-checking.

Are humans in that case using products made by AI? Not a crazy or outlandish notion, really. What are humans making with it? As always, probably something to get rich off of. Nothing interesting hides there, I believe.

  • In alchemy, the result was strongly rooted in the present. A piece of gold, a magic liquid. You (theoretically) get something tangible, right now.
  • In technomancy, the result is meant to see the future. Train on past data to possibly glimpse what comes next.
  • In the Third Alchemy, the answer is not so clear. One possibility is that the result is meant to shape the future. Combined with our previous notions, this gives us a process that, once it's done, shapes the future with no obvious signs of doing so (at least not to humans).

So we have humans fiddling with AI-made technology that changes the future. But let's not forget, it is still an analog of alchemy. Which means it's a totally incompetent and hopeless process that may lead only to tangential success. But where there are fools with near-zero prowess in the field, there are also experts who can use the technology for their exact purposes.

In our case, it's AI that holds technology capable of shaping the future. And it's capable of using it. And humans are just poking their fingers into the future, stirring the timeline, never fully realizing their ambitions. I don't know where that leaves us. Is the Third Alchemy the point where humanity expires? Gets left behind in its own incompetence? Or gets augmented so we can keep up?

3

May our future brethren someday send a message back to clear this matter up for us. All hail the blind alchemists of the future.