The Last Crusader (intro to my new 90-minute documentary)

A war between angelic princes. A prophecy written 2,500 years ago. A path of empires leading from the ruins of Persia… to the crown of a modern king. What if the wars of East and West, of North and South — from Cyrus to Napoleon, from Heraclius to Hitler — were not random chapters of history… but foretold milestones in a divine chronicle? From the cross of Christ to the shadow of Antichrist, this is the untold story of Daniel 11. A heavenly war played out on earthly battlegrounds. And now, its final act begins.

Background on the Production of the Documentary

This has been perhaps the most challenging and labor-intensive video project I’ve done so far. I began working on it a week before Midsummer, and it took over three weeks to complete, even though I was working nearly 16 hours a day. In the same amount of time, I could have written a 400-page book (my previous 200-page book took just 10 days to write). Video editing is extremely time-consuming and tedious work, even though AI tools have made the process significantly faster. I couldn’t have created a documentary like this without the help of AI. In fact, I had ChatGPT write the entire script: I first provided a rough version of what each scene should say, and it then rewrote that in English in a form suitable for a dramatic documentary. The voiceover is also AI-generated, using a service called Play.ht, where I found an excellent narrator voice for exactly this kind of History Channel-style dramatic documentary.
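I did all of this interactively in the ChatGPT app, but the same scene-by-scene rewrite loop could just as well be scripted. Here is a minimal sketch of what that might look like using OpenAI’s Python API; the model name, system prompt, and sample scene are illustrative placeholders, not my actual setup:

```python
# Illustrative sketch only: I worked interactively in ChatGPT, but the same
# rough-notes-to-dramatic-narration loop can be scripted with OpenAI's
# Python API. Model name, prompt wording, and the sample scene are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You rewrite rough scene notes into polished English narration for a "
    "dramatic, History Channel-style documentary. Keep the meaning intact; "
    "elevate the tone."
)

def rewrite_scene(rough_notes: str) -> str:
    """Turn one scene's rough notes into dramatic documentary narration."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": rough_notes},
        ],
    )
    return response.choices[0].message.content

# Process the scenes one at a time, in order, ready for the voiceover step.
scenes = ["Scene 1: Cyrus conquers Babylon; Daniel receives the vision."]
for scene in scenes:
    print(rewrite_scene(scene))
```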

The audio portion of the video was completed in about a week, but adding the visual elements took another two weeks. At first, I tried outsourcing that task to AI as well and paid for services that can visualize a video in a single day (you feed the audio into the software one minute at a time, and within minutes it generates AI-created images matching the spoken content of each scene). But in the end, I used only a small portion of those visuals in the final cut. AI is simply not yet capable enough on the visual side. Adding visuals is a major part of the editing craft: the images, videos, or animated scenes (some of which I also created using Google’s Veo 3) should not only relate to the spoken content but also synchronize with the background music. Otherwise, they feel disconnected from the video as a whole.

An example of this can already be seen at the 11-second mark of the intro, where the scene changes precisely at the moment a dramatic hit lands in the background music. AI still can’t handle such nuances; it stitches clips together like an amateur video editor. That kind of work still requires human intuition, which AI lacks. But perhaps within a few years, with enough training on large video datasets, these tools will learn to handle even this aspect more skillfully.

For the first time, I also used several avatar characters in this video, both fully artificial ones and ones based on real people (i.e., from services where real actors perform, and AI can legally make them speak your script by syncing their mouth movements to the audio). I’ve included a disclosure alongside each avatar appearance stating that it was generated with the help of AI. I don’t want to use AI to mislead people, as many already do. The video consists mainly of these avatar characters and of the dramatic, poetic main narrator, whose voice can also be heard in the intro.

I’m aiming to publish the full 90-minute video tomorrow, as promised in the intro. The Finnish subtitles are still unfinished, but with AI tools, even that part can be completed fairly quickly nowadays.
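As an illustration of how quick that step has become, here is a minimal sketch of one common AI subtitle pipeline: openai-whisper produces timestamped English segments, and a translation step (left as a placeholder below) turns each segment into Finnish before it is written out as an .srt file. The file names, model size, and helper functions are assumptions for illustration, not my actual workflow:

```python
# Illustrative sketch only: one common AI subtitle pipeline.
# openai-whisper gives timestamped English segments; any machine-translation
# step (here a placeholder function) produces the Finnish text.
import whisper

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:23,450."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def translate_to_finnish(text: str) -> str:
    """Placeholder: plug in any translation step (e.g. an LLM call)."""
    return text

model = whisper.load_model("medium")  # placeholder model size
result = model.transcribe("documentary_audio.mp3")  # timestamped segments

with open("subtitles_fi.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        f.write(translate_to_finnish(seg["text"].strip()) + "\n\n")
```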

