Creative Technologist
I make fun stuff with and about computers.
Current: Building Queer Map Taiwan with SpOnAcT.xyz
Previous: Senior Creative Technologist at OUTFRONT Media
Making of: Millennium Cruiser
Context
I started with the idea of using machine-generated images as inspiration for physical product designs.
The idea is to generate images from a prompt describing the user's desire, feed that prompt into the AI Art Machine (https://is.gd/artmachine) we've seen in class, and let the model output a series of gradually refined images. Once the iterations reach a certain level of clarity, I start a 3D model based on that iteration.
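To make the pipeline concrete, here's a minimal sketch of the loop I have in mind. `ArtMachine` is a hypothetical stand-in for the generator behind the AI Art Machine notebook (its real interface may differ); the 2,000-step budget and the 100-step preview interval are the values I actually used.

```python
from pathlib import Path

def generate_references(prompt: str, total_steps: int = 2000,
                        preview_every: int = 100,
                        out_dir: str = "previews") -> None:
    """Refine an image toward `prompt`, saving a preview every 100 steps."""
    machine = ArtMachine(prompt)  # hypothetical wrapper around the notebook's generator
    Path(out_dir).mkdir(exist_ok=True)
    for step in range(1, total_steps + 1):
        machine.refine()  # one optimization step pulling the image toward the prompt
        if step % preview_every == 0:
            # each saved preview is a candidate reference for the 3D model
            machine.current_image().save(f"{out_dir}/step_{step:04d}.png")

generate_references("The ship that made the Kessel Run in less than twelve parsecs.")
```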
Some additions:
I later added another layer to the pipeline: GPT-3, an NLP model created by OpenAI. I asked it to give me a name based on my input prompt: "The ship that made the Kessel Run in less than twelve parsecs," a reference to how the Millennium Falcon was first introduced in the Star Wars franchise. Given the amount of data GPT-3 was trained on, I was expecting nothing but "Millennium Falcon." Instead, I got a small number of different two-word names, all starting with "Millennium," and the one that stuck with me, which later became the name of the ship, was: Millennium Cruiser.
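For reference, the naming call looks roughly like this with OpenAI's legacy Completions API (the interface GPT-3 originally shipped with, in openai-python before v1.0). Only the quoted Kessel Run line comes from my actual experiment; the surrounding instruction, the `davinci` engine choice, and the sampling parameters are illustrative assumptions.

```python
import openai

openai.api_key = "sk-..."  # your API key

# Only the quoted line is the prompt I actually used;
# the framing around it is an illustrative assumption.
prompt = ('Give a name to the following ship: '
          '"The ship that made the Kessel Run in less than twelve parsecs."\n'
          'Name:')

response = openai.Completion.create(
    engine="davinci",   # original GPT-3 base model (assumed here)
    prompt=prompt,
    max_tokens=8,
    temperature=0.9,    # higher temperature surfaces varied two-word names
    n=5,                # ask for several candidates per call
    stop="\n",
)

for choice in response.choices:
    print(choice.text.strip())  # e.g. "Millennium Cruiser"
```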
Image outputs from the prompt: "The ship that made the Kessel Run in less than twelve parsecs."
The image above is one of the first iterations from the prompt. Unfortunately I forgot to document the steps, but from my vague memory this one is somewhere between 200 and 300 steps. I had set my image previews to every 100 steps, and the first one jumped out at me because of how close its texture was to the ship in the movie, which made me worry that my parameters were too specific.
The following iterations put my concerns to rest, as the overall look of the entity slowly drifted towards a small, toy-like object, but with surprisingly pronounced lighting.
I ended up starting my modeling from the image at around step 1,700. While I had set the model's stopping point at 2,000 steps, I was starting to feel the effect of diminishing returns. Staring at this image made me wonder if my prompt was somehow linked to child-like aesthetics: the rounded shapes and the polished-wood-like background expressed a sense of friendliness.
Moving on to modeling:
I chose Fusion 360's free-form tools for the job. Besides suiting the rounded nature of the reference, this is pretty much the fastest way to generate a "good enough" model for rendering. It still took me a while to get there, though, given how long I'd been away from modeling work.
The next step for me will be rendering the model and presenting it as a 2D speculative ad for a futuristic vehicle.
Rendering, tag lines and descriptions:
As mentioned before, the inclusion of GPT-3 also lets me generate text descriptions for my model; my idea is to present it as a machine-oriented product design.
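A quick sketch of how that description step could look, using the same legacy Completions API as the naming call; the prompt framing and parameters here are assumptions, not the exact ones I ran.

```python
import openai

openai.api_key = "sk-..."

# Hypothetical framing: ask GPT-3 for ad copy pitched at machines rather than humans.
prompt = ("Write a one-line advertising tagline for a futuristic vehicle "
          "called the Millennium Cruiser, as if the ad were aimed at machines.\n"
          "Tagline:")

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=24,
    temperature=0.8,
    stop="\n",
)

print(response.choices[0].text.strip())
```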
Rendering:
I tried to apply the same materials as the generated images, and ended up using the generated images themselves as the base for the rendering's texture.