AbstractPhil posted an update 1 day ago
Today, I'll be determining the codebook capacity and utility potential for the larger batteries: Fresnel, Johanna, Grandmaster, Freckles, and the Johanna-F variants, which should give a good indication of which models are capable of handling codebooks and which are more errant. The earlier ones all use SVD while the later ones do not; the differences are noted per model and the behavior is divergent.

I anticipate the D=16 models will be more errant, and the final-state variants of those could very well be much more difficult or costly to inference, as their axis bends are likely considerably harder to track. However, I'm confident that enough bounces will give the yield required, so I'll set up some high-yield noise barrages to determine how much we can in fact extract from Johanna, and then set up similar barrages for images to map the internals of Fresnel and Grandmaster.

Grandmaster will be tricky, as it was an experimental Johanna-256 finetuned series meant to map sigma-noised image inputs to recreate Fresnel behavioral output: noised image goes in -> Fresnel-grade replication comes out in high res.

This allowed preliminary Dall-E Mini-esque VAE generation and will be explored further for the stereoscopic translation subsystem, to allow image generation in the unique diffusion format I was working out. I anticipate this system will be more than capable of making monstrosities, so I won't be posting TOO MANY prelims on this one, but the high-capacity potential of these noise makers is meaningfully powerful. Getting uniform codebooks in place for these models will allow full transformer mapping downstream instead of just guessing at the MSE piecemeal, which the earlier versions and variants were doing.

I'm straying from the CLS specifically for this series because CLS creates adjudicated pools of bias orbiting the INCORRECT orbiter in the SVAE. The orbital target IS the soft-hand accumulated bias with the sphere-norm, so having a competitor isn't going to be a good option.

I have a few diffuser prototypes that I'll be exploring now that the full array system is in order. One that I've been very much wanting to approach is sigma-degrading interpolation manifolding.

In other words, you take an H2 Fresnel expert and snap it in. Say I train a CIFAR-100 variant and finetune it with, oh, maybe 50 epochs of reconstruction from the Fresnel-512, with various levels of noise applied to Fresnel, not using cutmix or anything odd like that.
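
A rough sketch of what that reconstruction finetune could look like, with stand-in modules in place of the real Fresnel-512 teacher and CIFAR-100 student (the noise sampling and MSE objective are my assumptions, not the actual training script):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins: the real Fresnel-512 teacher and CIFAR-100 student are assumed here.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.GELU(), nn.Conv2d(64, 3, 3, padding=1)).eval()
student = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.GELU(), nn.Conv2d(64, 3, 3, padding=1))
loader = [(torch.randn(8, 3, 32, 32), None) for _ in range(4)]  # dummy CIFAR-100-sized batches

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
for epoch in range(50):                              # "maybe 50 epochs" of reconstruction
    for x, _ in loader:
        sigma = torch.rand(x.size(0), 1, 1, 1)       # varying noise level per image
        x_noised = x + sigma * torch.randn_like(x)
        with torch.no_grad():
            target = teacher(x_noised)               # teacher reconstruction under noise
        loss = F.mse_loss(student(x_noised), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```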

Next we finetune our array. Say we want 1000 steps; we'll divide the number of adjudicated states by how many noise states we want to see. Our finetuned batteries are then run with, oh... maybe 500 batches of images each, applying scheduled noise instead of the random noise the H2 batteries were primarily trained with, which should fit in a 10-minute training session or so. The batteries are pooled into the battery array and uploaded as a standard battery array for reuse in safetensors format, with the optimizer states uploaded alongside in an adjacent repo.
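
The bookkeeping for that could look something like the sketch below, assuming 1000 target steps, 10 noise states, a linear sigma spacing, and hypothetical stand-in battery modules (the file layout is also an assumption):

```python
import torch
import torch.nn as nn
from safetensors.torch import save_file

total_steps = 1000                                  # target step budget
noise_states = 10                                   # noise states we want to see
states_per_battery = total_steps // noise_states    # adjudicated states per battery
batches_per_battery = 500                           # rough per-battery image budget

# One fixed, scheduled sigma per battery instead of the random noise
# the H2 batteries were trained with (linear spacing is an assumption).
sigmas = torch.linspace(1.0, 0.0, noise_states + 1)

# Hypothetical battery array: tiny stand-in modules, one per noise state.
batteries = [nn.Conv2d(3, 3, 3, padding=1) for _ in range(noise_states)]

# Pool the finetuned batteries into a single safetensors file for reuse.
pooled = {f"battery_{i}.{k}": v for i, b in enumerate(batteries)
          for k, v in b.state_dict().items()}
save_file(pooled, "battery_array.safetensors")
```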

So the process is simple: noised image in, replicate the next stage of noise down the chain. Each battery is meant to denoise by one step and collapse the results into patchworked behavioral training data for a downstream model.
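
Concretely, under the assumed linear sigma schedule from the sketch above, the per-battery training pair would be roughly this (sharing one epsilon across the two noise levels is my assumption):

```python
import torch

sigmas = torch.linspace(1.0, 0.0, 11)  # same assumed 10-state schedule as above

def noised_pair(x: torch.Tensor, i: int):
    """Training pair for battery i: image at sigmas[i] goes in,
    the same image at the next (lower) sigma is the target."""
    eps = torch.randn_like(x)
    x_in = x + sigmas[i] * eps
    x_target = x + sigmas[i + 1] * eps  # one stage of noise down the chain
    return x_in, x_target
```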

We then take each of these variants and blow them up, creating scanner manifolds of each and collapsing the weights into a single linear batched pass, which will be roughly 500 MB of VRAM or so for each sigma attempted.
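
One way to read the "single linear batched pass" part: purely linear sub-blocks compose exactly, so stacked Linear layers can be folded into one weight matrix. A minimal sketch under that assumption (this is not the actual scanner-manifold code):

```python
import torch
import torch.nn as nn

def collapse_linear(a: nn.Linear, b: nn.Linear) -> nn.Linear:
    """Collapse y = b(a(x)) into one Linear: W = Wb @ Wa, bias = Wb @ ba + bb."""
    merged = nn.Linear(a.in_features, b.out_features)
    with torch.no_grad():
        merged.weight.copy_(b.weight @ a.weight)
        merged.bias.copy_(b.weight @ a.bias + b.bias)
    return merged

# Two 512-dim layers fold into one, halving the cost of each per-sigma pass.
a, b = nn.Linear(512, 512), nn.Linear(512, 512)
merged = collapse_linear(a, b)
x = torch.randn(4, 512)
assert torch.allclose(merged(x), b(a(x)), atol=1e-4)
```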

Finally we stack our entire sequence up and hook them together with MLP collapse, and at each level inject the original image with the correct noise value. So say you have 10 batteries that are meant to target 10 noise steps; you now have a 10-step reconstruction generator that runs once and, boom, your image pops out nearly instantly.
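
A toy version of that stacking, assuming flattened feature vectors, stand-in Linear batteries, and a simple 50/50 blend for the per-level injection of the correctly noised original (the blend rule is my assumption, not the actual mechanism):

```python
import torch
import torch.nn as nn

class StackedHopper(nn.Module):
    """Toy sketch: batteries hooked together with an MLP collapse per level,
    with optional training-time injection of the original image noised at
    that level's sigma."""
    def __init__(self, batteries, dim, sigmas):
        super().__init__()
        self.batteries = nn.ModuleList(batteries)
        self.collapse = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in batteries])
        self.sigmas = sigmas

    def forward(self, x, clean=None):
        for i, (battery, mlp) in enumerate(zip(self.batteries, self.collapse)):
            x = mlp(battery(x))
            if clean is not None:  # training-time injection (blend rule assumed)
                x = 0.5 * x + 0.5 * (clean + self.sigmas[i + 1] * torch.randn_like(clean))
        return x

# 10 batteries targeting 10 noise steps -> one forward pass reconstructs the image.
dim, n = 256, 10
sigmas = torch.linspace(1.0, 0.0, n + 1)
hopper = StackedHopper([nn.Linear(dim, dim) for _ in range(n)], dim, sigmas)
out = hopper(torch.randn(4, dim))
```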

Alright, now if we space that out using Adam's standard internal step of 1000, we'll have our roughly 1/100 sigma hoppers. This will be our blueprint.

With this we can distill a diffusion model's VAE expectation into what we want, and guarantee the output is fully prepared for step-hop skipping.

So each of these is fed into a singular transformer structure that sees the standard diffusion step the original diffuser produced, and boom, you have yourself a pixel-synthesis skip process. You've effectively skipped the entirety of the diffusion process with the correct layout.

This will also require the Alexandria stage, so it will take time to process and pool the necessary informational accumulations and relational capacity to make that portion perfect. However, with some more work Alexandria's text distribution system will be ready to go, and the distiller will be ready to consume high-yield diffuser technologies like Flux and the like.

This will allow not only for compacting massive amounts of information into embedded solvers, but also, with enough processing and data, for cellphone-sized image generation that creates Flux-grade images or better.

The technology is there, the experiments yielded, the answers present, the results show this is more than possible, and now it's time to build.

As predicted, the codebooks for all noise models conform to an architectural scaling within a very minimal delta shift. There is no real deviance: the architectures learn a codebook that manifests and can be directly utilized at runtime.

The delta is real within the shift, and each model conforms to its own modified codebook delta during training. This is an architectural constant now and can be prepared in very little time before processing or utilizing the models.

The helper functions and methods are all present in the AbstractEyes/geolip-svae repo on GitHub now, and everything is documented.
