Tyrant Empire

Conquer Your Life

Blog Post Week #7

By: Tyrin Barney

An Odyssey of Generative Art

In the era of the digital renaissance, technology’s swift advancement continues to reshape and redefine the boundaries of creativity. From the earliest cave paintings to Renaissance masterpieces and now to AI-generated artwork, human expression has traveled eons, pushing the limits of what’s possible at every step. Particularly in the realm of digital creation, the past two years have marked an epoch of transformation.

Let’s first set the stage by reminiscing about early 2022, a pivotal moment in this digital revolution. Generative AI art was an exciting newcomer on the tech scene. Playing with “Dream” by Wombo, I was taken aback by a computer’s ability to interpret and create abstract visualizations from mere textual prompts. In hindsight, the images seem rudimentary, almost embryonic in their quality, but the sheer idea was nothing short of revolutionary.

This wonder, however, was merely a prelude. As the months unfurled, the world witnessed the rise of Midjourney v3 and DALL-E 2. These weren’t just upgrades to existing software; they represented quantum leaps. Photorealistic images with unparalleled detail were now within grasp. Textures became richer, lighting more nuanced, and the lines between digital creation and reality started to blur. The tech community was abuzz, not just because of the capabilities of these platforms, but because of the breathtaking pace at which they were evolving.

Before the world could fully fathom the implications of these developments, another wave of innovation struck. Enter Stable Diffusion 1.5 & Midjourney v4. These powerhouses further upped the ante, making hyper-realistic image generation seem almost mundane. The repercussions were profound. The nuances and detailing in the images produced were so meticulous that distinguishing between a real photograph and an AI-generated image became an art in itself.

The true marvel was the democratization of these tools. Previously, high-quality content creation was limited to those with hefty software budgets or specialized expertise. But now, with open-source marvels like Stable Diffusion, top-tier digital creation was accessible to anyone willing to learn. Moreover, the community-driven ethos around these platforms spurred further innovation, broadening the horizons of what was conceivable.

The onset of 2023 brought forth even more groundbreaking advancements. With the release of Stable Diffusion XL (SDXL), Midjourney v5, and the highly anticipated DALL-E 3 on the horizon, the domain of generative art has entered uncharted territories. What made Stable Diffusion particularly remarkable was not just its capabilities but its philosophy. Being open-source, it empowered communities, allowing a plethora of extensions and plugins. Animations, once considered a niche and expert-driven domain, were now accessible to the masses. And with platforms like Runway Gen-2 & Kaiber leading the commercial side of things, the landscape was richer than ever.

This begs the question: what does this mean for the average individual? For starters, it signals the ushering in of an era where creativity is boundless. Whether you’re a professional artist, an enthusiastic hobbyist, or someone merely curious, tools like these level the playing field. With platforms like the Tyrant Empire offering resources and tools such as the Prompt Generator, the entry barriers to the world of generative art are lower than ever.

However, with great power comes great responsibility. The ability to create almost indistinguishable deepfakes and virtual entities presents ethical dilemmas. While the technology itself is neutral, its application can be a double-edged sword. As we march ahead, the onus is on creators, consumers, and regulators to ensure that these tools are used responsibly.

In conclusion, as we reflect upon this whirlwind journey from basic 2D images to intricate animations within a mere two years, it’s evident that we’re on the cusp of a new digital age. The rapid progression might seem overwhelming, but therein lies an unparalleled opportunity. Embracing and adapting to this change is the need of the hour. Remember, in every age of transformation, those who have thrived were not just the ones equipped with the tools but those who had the vision to harness them effectively. As the tapestry of digital creation continues to evolve, let us be the artists, weaving our unique threads into it, constantly learning, innovating, and pushing the boundaries of what’s possible.

In the immortal words of T.S. Eliot, “Only those who will risk going too far can possibly find out how far one can go.” The realm of digital creation is vast and largely uncharted. So, let us venture forth with courage, curiosity, and creativity. After all, the future is not just something we inherit but something we craft. Let’s mold it with imagination, wisdom, and the astounding tools at our disposal.

This week, I am showing thousands of people how to create animations using A.I. Everything from the installation of the extensions & models, to showing the ideal settings, to diagnosing common problems that you might face. With how fast the industry is moving, getting to know the tools & how they work will ensure that you stay ahead of the curve. Tyrants conquer their lives. To do so, they need to learn about the cutting-edge advancements of humanity and learn how to use them to their advantage. Keep conquering.

Create Your Own A.I. Animation

In the video below, I will show you everything you need to know to begin creating your own animations for free on your own computer.

I will show you how to use AnimateDiff with Controlnet.

For this tutorial I will be using the Automatic1111 UI. If you do not have it, follow the installation instructions here.

TL;DR Summary

*Please Note*

When I recorded this video, Controlnet had just updated & broken its compatibility with AnimateDiff.

At the time of writing this article (Oct. 11th, 2023), the native Controlnet & AnimateDiff extensions work together & you will not need the forked extensions mentioned in this video.

If you run into any issues, such as the “AttributeError: IPAdapter” error, you can download the forked extensions here:

  • Controlnet
  • AnimateDiff
 
Shout out to Reddit user Indrema for creating these extensions.

First, let’s download the AnimateDiff extension:

  1. Click the “Code” button & copy the link.
  2. In Automatic1111, navigate to the “Extensions” tab > “Install from URL”
  3. Paste the link & click “Install”

Next, let’s download the Controlnet extension:

  1. Click the “Code” button & copy the link.
  2. In Automatic1111, navigate to the “Extensions” tab > “Install from URL”
  3. Paste the link & click “Install”
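
If you prefer the command line, “Install from URL” is essentially doing a git clone into the webui’s extensions folder. Here is a minimal sketch of the equivalent, assuming git is installed and that the extensions still live at their usual GitHub locations (continue-revolution/sd-webui-animatediff and Mikubill/sd-webui-controlnet); adjust EXTENSIONS_DIR to match your install:

```python
import subprocess
from pathlib import Path

# Path to your Automatic1111 installation (adjust to match your setup).
EXTENSIONS_DIR = Path("stable-diffusion-webui") / "extensions"

# The same repositories the "Install from URL" button clones for you.
REPOS = [
    "https://github.com/continue-revolution/sd-webui-animatediff",
    "https://github.com/Mikubill/sd-webui-controlnet",
]

for url in REPOS:
    target = EXTENSIONS_DIR / url.rsplit("/", 1)[-1]
    if not target.exists():
        subprocess.run(["git", "clone", url, str(target)], check=True)
```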

Now that we have our extensions installed, we need our models for the extensions.

Let’s get our motion model for AnimateDiff first. Currently, the V2 model is the latest & greatest.

Download the model & then put the file in the “Models” folder within the “AnimateDiff” extension.

The file path should look something like this:

Webui>Extensions>AnimateDiff>Models
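
If you’d rather script the download, here is a sketch using the huggingface_hub package, assuming the V2 motion module is still published as mm_sd_v15_v2.ckpt in the guoyww/animatediff repository on Hugging Face; point local_dir at your extension’s model folder:

```python
from huggingface_hub import hf_hub_download

# Fetch the V2 motion module into the AnimateDiff extension's model folder.
# Adjust local_dir to wherever your extension actually keeps its models.
hf_hub_download(
    repo_id="guoyww/animatediff",
    filename="mm_sd_v15_v2.ckpt",
    local_dir="stable-diffusion-webui/extensions/sd-webui-animatediff/model",
)
```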


For Controlnet, make sure the “Tile” or “Tile/Blur” control type is enabled, with the “tile_resample” preprocessor & the “control_v11f1e_sd15_tile” model selected.

If you do not have the “control_v11f1e_sd15_tile” model, you can download it here.
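
The same hf_hub_download approach works for the tile model, assuming it is still hosted in the lllyasviel/ControlNet-v1-1 repository (the destination folder below is my assumption; use your Controlnet model directory):

```python
from huggingface_hub import hf_hub_download

# Fetch the ControlNet 1.1 tile model; adjust local_dir to your install.
hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11f1e_sd15_tile.pth",
    local_dir="stable-diffusion-webui/models/ControlNet",
)
```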


The first method of animation is text to video. 

We don’t need anything to create our animation aside from a text prompt.

To create a prompt, I will be using the Tyrant Prompt Generator because I want a quick high quality prompt.

To prevent any issues, I made sure to limit the prompt length to 25 words.  

For the 1st animation method, we only need to use AnimateDiff. However, we will be using these settings for all 3 methods I will be showing you.

We are going to create a 2-second animation at 10 FPS, for a total of 20 frames.

Ensure AnimateDiff is enabled, your motion model is selected, & your settings match mine.

Click “Generate”.
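
For the curious, here is roughly what this text-to-video step is doing under the hood. This is not the Automatic1111 workflow, just an equivalent sketch using the Hugging Face diffusers library; the model IDs, prompt, and sampler settings are my assumptions:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the V2 motion module and pair it with an SD 1.5 base model.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    clip_sample=False,
    beta_schedule="linear",
    timestep_spacing="linspace",
)

# 20 frames at 10 FPS = a 2-second clip, matching the settings above.
frames = pipe(
    prompt="a lone astronaut walking through a neon-lit city, cinematic",
    num_frames=20,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "animation.gif", fps=10)
```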


The 2nd method of animation is image to video.

Here is where we will be introducing our Controlnet extension.

Keep your AnimateDiff settings the same as in the first method.

Enable your Controlnet unit, select the “Tile/Blur” control type, & ensure the preprocessor & model auto-populate with “tile_resample” & “control_v11f1e_sd15_tile”.

Then add the image you would like to animate into the Controlnet unit.

Click “Generate”.
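
You can also drive this whole image-to-video setup programmatically through Automatic1111’s built-in HTTP API (launch the webui with the --api flag). Both extensions hook in via alwayson_scripts, but their argument schemas change between versions, so treat the field names below as assumptions and check each extension’s API docs:

```python
import base64
import requests

# Automatic1111's txt2img endpoint (webui must be started with --api).
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "your prompt here",
    "steps": 25,
    "alwayson_scripts": {
        # Hypothetical arg layouts -- verify against your installed versions.
        "AnimateDiff": {"args": [{"enable": True, "video_length": 20, "fps": 10}]},
        "ControlNet": {"args": [{
            "input_image": image_b64,
            "module": "tile_resample",
            "model": "control_v11f1e_sd15_tile",
        }]},
    },
}

response = requests.post(URL, json=payload, timeout=600)
response.raise_for_status()
print(response.json().keys())  # generated frames come back base64-encoded
```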

 

For the 3rd and final method, we are going to use 2 different images. 

The first image will be where the animation starts, while the 2nd image will be where the animation ends.

So in essence, we are going to animate the transition between the 2 images.

We are going to need to introduce a 2nd Controlnet unit.

Note that the image in the lowest-numbered Controlnet unit will be the starting image, while the image in the higher-numbered unit will be the ending image.

So “Controlnet unit 0” will host the starting image, while “Controlnet unit 1” will host our ending image.

Ensure the settings are the same on both units.

Click “Generate”.


Now that we have our animations, they aren’t very useful at such a small resolution, nor are they very appealing at such a low frame rate.

Let’s upscale & interpolate the frames so we get an HD video at a buttery-smooth 60 FPS.

I will be using Topaz Video AI for this.

Use my referral link if you want this revolutionary tool. 😁

The settings I use are as follows:

  • Frame Interpolation with the Apollo A.I. model. Remove duplicate frames at a sensitivity of 10. (Slow motion is optional if you want to extend the length of your animation.)
  • Enhancement with Progressive styling & the Proteus A.I. model. Add noise if you want that “less A.I.” look that makes skin look rougher. Recover original detail at 20.
  • You can also turn on Stabilization if your animation is very flashy. Start with a strength of 50 or less & enable “Reduce Jittery Motions” with “number of passes” at 1. (Sometimes stabilization can make some movements in your animation blurry; 1 pass is usually the highest I will go if I use it.)
  • Export in a matter of seconds!

Before Topaz

After Topaz

Click here to get Topaz Video AI

Common Issues


If the GIF you have generated switches scenes halfway through, there are 2 possible reasons why:

  1. Your prompt may be too long. Ensure that your prompt is 50 tokens or less. (You can see the token count in the top right corner of your positive prompt, or check it offline with the sketch after this list.)
  2. There is a high chance that you don’t have the “Pad prompt/negative prompt to be same length” option enabled. Go to Settings > Optimizations > Pad prompt/negative prompt to be same length.
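
If you want to sanity-check a prompt’s token count outside the UI, here is a small sketch using the transformers library with the CLIP tokenizer family that SD 1.5’s text encoder is based on (the example prompt is just a placeholder):

```python
from transformers import CLIPTokenizer

# The tokenizer family behind Stable Diffusion 1.5's text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a lone astronaut walking through a neon-lit city, cinematic lighting"
token_ids = tokenizer(prompt).input_ids
print(len(token_ids) - 2)  # subtract the begin-of-text / end-of-text tokens
```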

 

Join The Tyrant Empire

Enlist Today for free! Begin your journey of conquest and self-mastery. Let the Tyrant Empire be the wind beneath your wings, uplifting you to the heights you were born to reach. Forge ahead, become the tyrant of your existence, and manifest the life you envisage.

Join Now
