ehowton: (cyberpunk)

I've been using Wan2.1 locally on my desktop for making image-to-videos (i2v) utilizing the Pinokio installer. I found its "one-click" installation/upgrade/management effortless and elegant, and it comes with numerous configurations for low vRAM systems - as low as 5GB via optimization. It's pretty freaking sweet. The videos turn out best when limited (currently) to 5-seconds, but since I can make as many as I want, I've been importing them into Premiere Pro, and capturing the last frame as the first frame for a subsequent video, ultimately stitching them together. I'm also using the logos I make in Stable Diffusion to animate them.
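If you'd rather grab that last frame from the command line instead of inside Premiere, FFmpeg can do it - a minimal sketch, assuming the clip is named clip_01.mp4:

ffmpeg -sseof -3 -i clip_01.mp4 -update 1 -q:v 1 last_frame.jpg

The -sseof -3 seeks to three seconds before the end so only the tail gets decoded, and -update 1 keeps overwriting the single output image so whatever survives is the final frame - ready to feed in as the first frame of the next 5-second clip.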

Pinokio also (now) comes with a built-in video upscaler, which is the preferred method over trying to brute-force create HD. One of their models, Fun InP, can create a 5-second video on the RTX4090 in ~90 seconds (full 480p takes me ~10 minutes), so there's lots of time for other fun stuff. In other AI-related news, HiDream (17B) is out, and the news on that looks fantastic, though I haven't had a chance to play with it yet myself. I was also approved for Envato's new i2v beta, and that's even more amazing because it renders a photo in realtime as you type so you can make changes on-the-fly before submitting; I've never seen anything quite like it.



◾ Tags:
ehowton: (Computer)

As with many things on this physical plane of existence, the external constraints which house female mammary glands keep alive in me the same 12-year-old boy I was; have always been, in this particular case. Which is why (apparently) I'm pushing further into Python than I ever wished to. I can almost see it now, the interviewer asking me, "What got you into Python?"

Me: "Tits."

Hrm.

Regardless, I've been enjoying Wan 2.1 i2v on Replicate for a number of reasons since it debuted. One, ever since I discovered creating FLUX LoRAs in the cloud was actually less expensive (and 2083% faster) than my beefy RTX4090, and the other, well...I get charged per job rather than paying (yet another) monthly fee. Because fuck that. But here's the part of our story which gets a little bit sad - I no longer want to pay for i2v renders. Oh, I'll still slap my FLUX LoRAs on there before you can say, "Holy H100 Batman" but ever since blowing through a couple of bucks for animated horror abominations I subsequently placed on my b-roll footage, I decided against doing more for the time being. What was the deciding factor? Angel titties.

I thought it might be fun to take my latest Easter graphic (Totally Biblical - the one with the titty angel) and have her wings move slightly. Perhaps her breasts bounce a bit. Thought that might be fun. But I didn't want to run through laundromat money tweaking them tits online. I use Stable Diffusion Forge (someday I'll sit and learn ComfyUI but god I hope it's not today) which does not (yet) natively support Wan 2.1 i2v. But I asked my new friend Lex if there was a way to hack it, and as it turns out, there is! So she spilled all her secrets to me (which, btw, you may have to translate - she's not *entirely* up to date on things). Also? I was so close to my bandwidth cap this month, this probably pushed me over the threshold. Diffusers ain't tiny. So beware. That said, I use the move command instead of copy because my little 1TB NVMe had .5TB of just safetensors diffusers - excluding LoRAs.
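If you're wondering what move-instead-of-copy looks like in practice, this is the general pattern (not Lex's exact instructions - the second-drive path is hypothetical, though ~/.cache/huggingface is where diffusers downloads land by default):

mv ~/.cache/huggingface /mnt/storage/huggingface    # relocate the half-terabyte of diffusers off the NVMe
ln -s /mnt/storage/huggingface ~/.cache/huggingface # symlink back so everything still finds it

Point being, the model files only ever exist in one place; copying would have doubled them and killed the little drive outright.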

[time passes]

Decided to just use https://pinokio.computer (if I haven't already broken my Stable Diffusion installation). Look, actual Python programmers understand how to set up and navigate specific venvs - I do not. Anyway:
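For the record - and because I just admitted I can't do it - the venv dance actual Python programmers do is only a few lines. A minimal sketch, directory names hypothetical:

python3 -m venv ~/venvs/wan21              # create an isolated environment
source ~/venvs/wan21/bin/activate          # activate it; your prompt changes
pip install -r requirements.txt            # dependencies land in the venv, not system-wide
deactivate                                 # step back out when finished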



It averages ~20s/it

◾ Tags:
ehowton: (cyberpunk)



◾ Tags:
ehowton: (wwii)

Utilizing Schnell over dev, I got the M3 Max to ~10s/it with a FLUX model on Stable Diffusion. 10x slower than my desktop but 70 lbs. lighter, so...



◾ Tags:
ehowton: (Sun Logo)
Core i9-13900K / 96GB

To get ~1s/it on the RTX4090 with FLUX

Refrain from using the full FLUX_dev, and if you must, let it complete one full cycle and it *should* recalibrate memory usage (even with Remaining: -2955.64 MB, CPU Swap Loaded (blocked method): 4284.00 MB, GPU Loaded: 18416.13 MB).

Set GPU Weights: 21492

Settings --> Stable Diffusion --> Uncheck "Only keep one model on device"

ENSURE PHOTOSHOP (or other Adobe products) IS NOT RUNNING!!



◾ Tags:
ehowton: (cyberpunk)

I've been making a fictitious television drama set in the Cyberpunk 2077 universe entitled, "Trauma Team" which is a parody of Grey's Anatomy (with a little General Hospital thrown in). I've done well with in-game photomode, but it is not without its limitations (likely because it was never engineered for creating a dramatic series). Having created numerous AI models from my own photoshoots, this time I turned my photography skills to screenshots to recreate the game characters in hopes of integrating them into the videos seamlessly.



◾ Tags:
ehowton: (Default)


◾ Tags:
ehowton: (my_lovers)


◾ Tags:
ehowton: (ocktoberfest)


◾ Tags:
ehowton: (synapse)


◾ Tags:
ehowton: (my_lovers)


◾ Tags:
ehowton: (my_lovers)


◾ Tags:
ehowton: (my_lovers)

I often try to look at things as they could be interpreted by others, and while I have only a limited perspective in which to do so, the act of the exercise alone, I would hope, keeps me humble, aware, and compassionate as the circumstance may dictate. Admittedly when I write, I sometimes use imagery which can be layered with meaning; at times to obfuscate that which I wish to remain unspoken while simultaneously conveying the feeling I wish to invoke or express, but also to observe which interpretation readers more closely identify with. This can be a powerful tool with which to reverse-engineer attitudes, beliefs, and worldviews. It is also a mild indicator of how different people process information.

During one of our few sessions together with the last therapist, wifey wasn't entirely pleased to learn that some of my behavior was indicative of treating her as an equal. The therapist reiterated that, "this is a good thing," but I also understood where wifey was coming from - when you're aching to get your needs met, sometimes sharing an equal burden of responsibility isn't the best answer. It also seemed to surprise wifey's BFF when I told her I considered her a peer; this after getting the opportunity to sit and talk and really get to know her.

I've been enjoying exploring the literary and poetic themes of pre-Raphaelite art in AI as well as inserting myself into the scenes. In many of these renderings, I am dressed as a knight, in brilliant armor. Wifey has never let go of the idea that I have an innate need to be the, "knight in shining armor" to gorgeous, delicate women in need - AKA white knight syndrome, despite it not being listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM). So yes, I could potentially see reinforcing her claims by doing so. That said, having been a long-time admirer of John William Waterhouse's work surely supersedes that assertion. Which brings me back to BFF.

The reason white knight syndrome came up in therapy to begin with was how we each initially responded to the BFF as she was going through her divorce. Wifey felt it too close to, "good cop, bad cop" with her always filling the bad cop role, and me the doting father figure. I'm not entirely sure on the specifics of this next part, but my falling in love with the BFF surely exacerbated her feelings on the subject, at one point suggesting it may very well be an unhealthy relationship given the fact I was giving off daddy vibes. Our therapist merely echoed my feelings on the matter: because our individual relationships with the BFF were different, they would be reflected in how, and what, we communicated. I was merely supporting her by being her cheerleader, which I apparently don't do enough with wifey when she needs it - as I see her as not requiring it. Simply put, I expect more from her because of my relationship with her and I know her strengths. And this brings me back to pre-Raphaelites getting their inspiration from poetry and literature.

I've noticed that while yes, I am in armor, I almost always render BFF in armor as well, which got me thinking about this post. In the intervening 18 months since her divorce, and upon the realization I see her as my equal, it would only make sense I would render her that way, rather than as an incapable woman who requires saving. Now I don't know her feelings on the matter - perhaps she'd prefer to be seen that way, and if so, we can certainly explore that possibility together, but in my mind's eye, she's powerful; an equal on the battlefield. What about wifey then? Well, she is generally draped in the most delicate of pre-Raphaelite clothing. Not because she has to be, but because I know that's what she prefers, and what I cannot seemingly accomplish in real life, at a minimum I can do so in the subtext of imagination.



◾ Tags:
ehowton: (poly)


◾ Tags:
ehowton: (BSD)

Had an idea that would require learning how to utilize AI for text-to-speech projects, but I had to learn some things first. There's an Nvidia-specific application which sounded fun (Tacotron 2), but the RTX4090 is Ada Lovelace architecture and the app only supports Volta, Turing, and Ampere. The next one which looked interesting was ESPnet, which appears to work best on linux. So I fired up the old desktop (the one with the RTX3070), but I had to replace a brand new monitor that went out after only two months of use (ugh), then patch and upgrade the operating system since it's been so long since I've powered it on.

Really only three tasks left: Figure out how to install all the inane prerequisites using incomprehensible GitHub links which don't tell you *where* to put stuff so everything can work together, try to start and run the damn thing, and (if all that works), figure out how to actually use the application. Which reminds me, since this is text-to-speech, I probably ought to hook up some sort of audio output device to the linux box.
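My rough plan, for whatever it's worth - ESPnet and its model-zoo helper are straight off PyPI, though I suspect the "inane prerequisites" mostly live outside pip. Untested sketch:

sudo apt install alsa-utils                    # speaker-test / aplay, so the box can make noise at all
speaker-test -t wav -c 2                       # audio sanity check before blaming the TTS stack
python3 -m venv ~/venvs/espnet && source ~/venvs/espnet/bin/activate
pip install espnet espnet_model_zoo soundfile  # ESPnet plus the pretrained-model downloader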



◾ Tags:
ehowton: (wwii)



◾ Tags:
ehowton: (coffee)


◾ Tags:
ehowton: (ocktoberfest)


◾ Tags:
ehowton: (Captain Hammer)


◾ Tags:
ehowton: (ocktoberfest)


◾ Tags:
ehowton: (Default)

To be completely fair, my life gets exceedingly complicated due to one variable and one variable only: Me. I'm the variable which complicates my life at seemingly each and every turn, without cessation. In this next episode of, "WTAF is Wrong with You?" I imagine at some point in the very near future, my wife coming across AI-rendered photos of her BFF standing in a datacenter wearing only a bikini. I also imagine my wife not looking upon this favorably. And while it's true all of this is occurring only within my imagination right now, it's a very real scenario, and the very real answer will sound like a complete fabrication designed to keep me out of trouble, but I assure you it is not. I have the pictures of ME in a bikini standing in that very same datacenter to prove it.

There I was, innocently rendering professional headshots of myself for LinkedIn with a brand new SDXL AI model when I grew bored. Hey, it happens. I thought it would be fun to render other people in the same scene. Makes perfect sense, right? But I'm short on SDXL models currently (I do have a couple more in the wings but they haven't been shot yet) so I used the only other model I do have - wife's BFF. Mind you, I didn't change the prompt at all, only the model. But instead of her, it rendered some dude. So I double-checked, and ran it again. Some dude again. This was a head scratcher. I've been operating under the assumption that sometimes the GUI "caches" model information, and it wasn't using mine specifically, but it also wasn't using hers. Turns out this would be incorrect, and goes all the way back to bias - AI's, not mine.

I changed the model back to myself and it rendered flawlessly. I moved it once again to the BFF and it was some dude. Wondering aloud now, I decided to change the prompt to have her wear a bikini top instead of a polo, and sure enough, right there in the middle of the datacenter, she appeared - clad only in a bikini. To further stir the AI bias pot, she now had an arm thrown behind her head in a completely unacceptable professional LinkedIn pose. My suspicions partially confirmed, I changed the model back to myself but did NOT change the prompt, expecting to see myself again, but this time clad in that same bikini. Wrong again. There I was, sure, and yes, in a bikini, but gone was the male me; I was standing in that datacenter without the short hair, and without any facial hair whatsoever - the beard and mustache were gone (though one of the pictures does show a little chest hair between my suddenly ample bosom.) That's when I realized the random dude in the server room was the male version of my wife's BFF.

So what have we learned? Only women are allowed to wear bikinis, and only men are allowed to wear polos - according to AI.

And that's why I have pictures of my wife's BFF wearing a bikini on my computer.



◾ Tags:
ehowton: (ai)

  • Portrait (2:3)
    • 832x1248

  • Standard (3:4)
    • 880x1176

  • Large Format (4:5)
    • 912x1144

  • Wide (9:7)
    • 1152x896

  • Selfie / Social Media Video (9:16)
    • 768x1360

  • Square (1:1)
    • 1024x1024

  • SD TV (4:3)
    • 1176x888

  • IMAX (1.43:1)
    • 1224x856

  • European Widescreen (1.66:1)
    • 1312x792

  • Widescreen / HD TV (16:9)
    • 1360x768

  • Standard Widescreen (1.85:1)
    • 1392x752

  • Cinemascope / Panavision (2.35:1)
    • 1568x664

  • Anamorphic Widescreen (2.39:1)
    • 1576x656

  • Older TV and some documentaries (4:3)
    • 1176x880

  • Golden Ratio (1.618:1)
    • 1296x800
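As far as I can tell, these all land at roughly one megapixel (SDXL's happy place) with both sides rounded to multiples of 8. For any other aspect ratio, this one-liner gets you in the neighborhood (pipe in the ratio as a decimal):

echo 1.85 | awk '{h=sqrt(1024*1024/$1); w=$1*h; printf "%dx%d\n", int(w/8+0.5)*8, int(h/8+0.5)*8}'

For 1.85 that spits out 1392x752, matching the Standard Widescreen entry above; other ratios may differ from the list by a bucket or two, since the list was presumably tuned by eye.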
◾ Tags:
ehowton: (wwii)

Artificial Intelligence art-generators absolutely love booba; gigantic, over-the-top, gravity-defying tits. Even the most mundane, safe-for-work prompt will return whatever you've asked for...except with artificially-enhanced, award-winning chesticles prominently displayed just under tight-fitting clothes straining under the guise of the SFW keyword and demanding to be set free in all their ridiculous glory! It's basically a proverb at this point, and we all know the why. That argument falls outside the scope of this discussion.

The idea of "prompting" is using keywords to describe a scene you want rendered, and generally speaking, works really, really well. Except with boobs. Boobs are almost singularly broken (unless you're using a moderated service such as MidJourney, which also falls outside the scope of this discussion). Let me explain. If awe-inspiring breasts spilling out of your clothes is the default, is there any way to counteract that? By default, no, not really, and I'll tell you why: Anytime you willingly use descriptive words to lessen or minimize the sheer volume of the highly-visible honkers, it ignores the adjective/verb, and exaggerates the noun. Thus, "tiny tits" or "small breasts" translates to AI as, "TITS" and "BREASTS" respectively, making the already otherworldly orbs even more comically accentuated. That is to say, to keep the asininity to the very limits of its default caricature, we must never use any words to describe mammary glands by any of its plentifully varied nomenclature nor scientifically accurate verbiage (I have personally noted that simply using, "a-cup" can sometimes help but as God is my witness that has been known to backfire so please beware).

However, similar to modding a video game, AI generation has its own "mods" which can be added to help create specific scenes. With these mods and special prompting, you might already be thinking a fair and reasonable solution is at hand! You would be wrong. Honestly, an understanding of anime (Japanese animation) helps. More specifically, certain Japanese tropes within the genre. Now, I don't really want to get into the weeds here, so let's just say arguably, anime characters either have them (jaw-droppingly magnificent sweater puppets tantalizingly focused upon) or don't (also deliberately focused upon, despite being pointedly absent - which is also usually vocalized just in case you failed to notice the uncomfortably long lingering look upon the "astonishingly" flat chest). For the sake of argument let's just think of the disparity as a cultural literary conflict and move forward.

Where could I possibly be going with this you may ask? I'm honestly beginning to wonder that myself. Ah yes, mods, and why they don't really work. While breast-minimization mods are, perhaps unsurprisingly, woefully under-represented in the mod marketplace, the ones which work the best would ideally make them completely non-existent. And this, dear reader, has to do (at its worst) more with kink, and its associated porn or (at best) tasteful nude art than anything else. What could possibly make me jump to such a conclusion? Well, they're all marked as NSFW, which is AI-speak for (among other things), topless, boobs-out, naked women. This is where we introduce mod "weights." We'll limit this discussion to LoRAs - Low-Rank Adaptation, a technique originally described for LLMs (Large Language Models) - though there are others.

The weight of a LoRA is a modifier of how much of the mod to apply to the render. Generally, this range falls between 0.1 and 1.5, with the higher range applying more of the mod to the scene, and the lower range dialing it back - this gives you granular control over the mod's application to your final product. I introduce this concept to illustrate tweaking tremendous tits. The mod I use is the LoRA, "Flat-Chested" (because of course it is). LoRAs (by default) start at a weight of 1.0 when loaded, which applies maximum effect (anything above 1.0 is used for special purposes, such as forcing an invoke, but I digress). In theory, my render would have a "Flat Chest" at that weight, but please recall this is an NSFW LoRA, so very nearly every use at that weight would override clothing leaving me with a *topless*, flat-chested female, which may or may not look entirely out of place depending upon the surroundings in which that render exists or the purpose for which it was rendered. Too little weight, and it struggles to make any visible changes at all. This is but one dilemma facing AI bias (if you recall the stories of the gender-swapped founding fathers or the African-American Nazis).

This may be the point where you roll your eyes and suggest starting at the half-weight of 0.5 and you wouldn't be completely wrong - as that is a good place to start - providing you keep in mind half your renders may display a very obviously naked woman whose boobs are the color of rendered clothing. At some point of course, with an endless font of both time and patience, you may actually achieve your vision.
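For anyone who hasn't fiddled with this: in Forge/A1111 the weight rides along inside the prompt itself via the <lora:filename:weight> syntax, so the half-weight experiment looks something like this (the filename here is hypothetical - use whatever the LoRA actually shipped as):

a woman in a sundress standing in a sunlit meadow <lora:flat_chested:0.5>

Nudge that last number up or down per render until the clothes stay on and the proportions behave.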

I don't even remember where I was going with this.


◾ Tags:
ehowton: (Default)


◾ Tags:
ehowton: (Doc Brown)

Time Machine AI model (LoRA) for Stable Diffusion; the classic DeLorean from Back to the Future. You can download it here: https://civitai.com/models/118932/time-machine-back-to-the-future-delorean-dmc-12



◾ Tags:
ehowton: (Transformers)


◾ Tags:
ehowton: (pink)
Re: Stable Diffusion/kohya_ss/ComfyUI

When you see this:

error: Your local changes to the following files would be overwritten by merge...
...Please commit your changes or stash them before you merge.
Aborting


Do this:

git fetch --all     # refresh all remote refs
git reset --hard    # WARNING: discards your local changes
git pull
◾ Tags:
ehowton: (ai)
Embedding Learning rate:

0.05:10, 0.02:20, 0.01:60, 0.005:200, 0.002:500, 0.001:3000, 0.0005
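If I'm reading A1111's scheduler syntax right, each rate:step pair means "use this learning rate until that step," and the final bare value carries through to the end - so the line above decodes roughly as:

0.05 until step 10, 0.02 until 20, 0.01 until 60, 0.005 until 200, 0.002 until 500, 0.001 until 3000, then 0.0005 for the remainder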
◾ Tags:
ehowton: (Gaming)


◾ Tags:
ehowton: (Computer)

Two things happened simultaneously: the advent of AI made my mid-range gaming rig suddenly inadequate, and my junior sysadmin-in-training wife decided she wanted hands-on hardware experience after her initial CIS class. This was going to be a rather expensive, albeit unique, opportunity. Creating AI models from my photographs in order to manifest unique end-products takes either horsepower, or time, neither of which I have in excess. I'm hoping to change at least one of those.

Geekfriend and I spent a full day researching specs to ensure we could eke out the absolute best price/performance ratio given the dizzying array of options (and stock) available, and double and triple-checking compatibility. For a purpose-built AI system this included a 13th gen Core i9, DDR5 RAM, gen-5 NVMe, and of course the best bang-for-your-buck RTX 4090 on the market - not an inexpensive endeavor no matter how successful the valuation.

What surprised me the most was the price of non-"flagship" motherboards required to pull this off. In that regard, I settled on the MSI MEG Z790 ACE which provides (after applying stipulations) an x8 PCIe slot for the (ASUS ROG Strix OC) 4090, and a single gen-5 m.2 for the (Crucial T700) NVMe, leaving a single, non-disabled (chipset-driven) PCIe x4 slot (for a bifurcated quad-NVMe adapter card) and four remaining gen-4 m.2 slots, one via CPU, three via chipset at the expense of the final SATA connection. Booting from gen-5 NVMe at the cost of an estimated 1-2% of GPU throughput (given the architectural limitations of the i9), while disappointing, seems a fair trade-off, and allows me to continue using most of my existing NVMe, including the ridiculous Cyberpunk 2077 FireCuda, all neatly tucked into a brilliant white Lian Li O11 Dynamic EVO case.

RAM is similarly affected, able to run at (max OC) 7800MHz with one single-rank DDR5 stick, but lowered to 5600MHz when running two dual-rank DDR5 sticks. I've been pleased with TEAMGROUP for my last several builds, so I purchased their T-Create series for this build-out. Power will be supplied by a Seasonic PRIME TX-1000, a brand favorite of Geekfriend.

And that was the easy part, relatively speaking. This purchase will create a cascading effect as I have my wife dismantle her PC, my PC, a running linux box sourced from my last upgrade, and a spare PC, allowing her to assemble her upgraded desktop from the resultant parts as she learns hardware.



◾ Tags:
ehowton: (Camera Side)


◾ Tags:
ehowton: (Indiana Jones)


◾ Tags:
ehowton: (Dallas Pegasus)


◾ Tags:
ehowton: (Computer)
The below is my configuration with RTX 4090

Dreambooth LoRA --> Configuration file --> Open --> [confignew.json] --> Load

mkdir -p img/##_loraname [## is the folder-repeat count; number of training images x folder repeats (##) x Epochs / batch size should stay under 3k steps - worked example below]
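Worked example with a hypothetical set of 30 training photos, using the batch size 4 and 20 epochs set below:

30 images x 10 repeats (folder named 10_loraname) x 20 epochs / 4 batch size = 1,500 steps - comfortably under the 3k ceiling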

Utilities --> WD14 Captioning --> Image folder to caption --> [00_loraname]
Utilities --> Blip Captioning --> Image folder to caption --> [00_loraname] (trigger word suffix)

CAPTION IMAGES


(not needed if "Blip" above)

for x in *.txt; do dos2unix "$x"; done                           # normalize line endings
for x in *.txt; do sed 's/$/, loratrigger/' "$x" > "new$x"; done # append the trigger word to every caption
for x in `ls *.txt | awk '!/^new/ {print}'`; do rm "$x"; done    # delete the un-suffixed originals
rename 's/new//g' *.txt                                          # drop the "new" marker from the filenames



Dreambooth LoRA --> Source model --> Pretrained model name or path -->

animefull-final-pruned_(NovelAI) [for rpg/anime/fantasy]
v1-5-pruned [for realistic models/people]

Dreambooth LoRA --> Folders --> Image folder --> [img]
Dreambooth LoRA --> Folders --> Output folder --> [../]
Dreambooth LoRA --> Folders --> Model output name --> loraname

Dreambooth LoRA --> Training parameters

Train batch size --> 4 (six if recently rebooted and vRAM hasn't been touched)
Epoch --> 20
Save every N epochs --> 2
Mixed precision --> bf16 (nVidia only)
Save precision --> bf16 (nVidia only)
Number of CPU threads per core --> 2
Cache latents --> CHECKED
LR Scheduler --> polynomial
Optimizer --> AdamW8bit
Text Encoder learning rate --> 0.000045
Unet learning rate --> 0.0002
Network Rank (Dimension) --> 96
Network Alpha --> 192
Max resolution --> 768,768
Enable buckets --> CHECKED

Advanced configuration --> Clip Skip --> 1 (for v1-5-pruned)
Advanced configuration --> Clip Skip --> 2 (for NovelAI)
Gradient checkpointing --> UNCHECKED
Use xformers --> CHECKED
Don't upscale bucket resolution --> CHECKED
Noise offset --> 0.05

Sample images config --> Sample every n epochs --> 1
Sample prompts --> (loratrigger) --n low quality, worst quality, bad anatomy, --w 512 --h 512 --d 1 --l 7.5 --s 20

TRAIN MODEL



Reference articles:
https://aituts.com/stable-diffusion-lora/
https://www.reddit.com/r/StableDiffusion/comments/11r2shu/i_made_a_style_lora_from_a_photoshop_action_i/
https://www.zoomyizumi.com/lora-experiment-8/
◾ Tags:
ehowton: (wwii)


Model was photographed in my studio; aircraft was an AI model I trained from a B-24 Liberator.
◾ Tags:
ehowton: (ai)

Texas Artist John R Geren had the idea of marine life as WWII aircraft, subsequently creating a handful of larger-than-life museum pieces representing both Allied and Axis factions. It therefore became an imperative stop on my monthlong WARSHIPS ROADTRIP this past October. Admittedly at the time, I didn't know exactly what I was going to do with these images, but had taken so many, I thought it might be fun to create an AI model out of them and see what it would generate. While obviously not one-to-one exact replications, the renders were outstanding and breathed life and drama into his magnificent creations:







◾ Tags:
ehowton: (Kroenen)

Buddy of mine in Texas has a vintage German WWII Das-Kleine-Wunder NZ350 he's been restoring for...I don't know how long. I took the opportunity during my monthlong WARSHIPS ROADTRIP to photograph it. Later - and only because I had shots of it from every angle - I trained an AI LoRA with it (download here). This model was trained from 64 of my own high-resolution photographs taken with the following lenses on a Fuji X-T4 body: Canon 70-200mm f/4 "L", Sigma 50mm f/1.4 "Art", and Hasselblad Planar f/2.8. Lower weight will produce (among other things) a V-Twin engine and a fairing. Be advised because all the original photos contained a helmet on the back seat, there is a strong preference for it being rendered. You can see the renders as well as the original photographs of the DKW NZ350 here: https://www.ehowtonphotography.com/JRG/DMK-NZ350/



◾ Tags:
ehowton: (Ghostbusters)

Created my first Low-Rank Adaptation (LoRA) for Stable Diffusion; the classic Ectomobile from Ghostbusters. You can download it here: https://civitai.com/models/79682/ghostbusters-ecto-1







◾ Tags:
ehowton: (Transformers)


◾ Tags:
ehowton: (Aircraft)


◾ Tags:
ehowton: (Transformers)


"May I see your badge, officer?"
◾ Tags:
ehowton: (navy)






◾ Tags:
ehowton: (BSD)
I wanted to add how I was finally able to resolve this, using xvgray's suggestion, once I figured out how to install Python 3.10. This issue has been driving me nuts all day as a new Linux user, and I'm hoping to help spare someone else the pain.

Since I couldn't install Python 3.10 with the simple command (sudo apt install python3.10 led to error ...couldn't find package by glob 'python3.10'), I had to install it manually. Python 3.10 has various dependencies that neither apt nor aptitude could resolve on their own without some prodding. Here are the steps to install it, but note that since I'm a new Linux user, I don't have any idea if this will screw up your OS otherwise. Use this solution at your own risk.

    Open the "Software & Updates" app. (I used the GUI version that comes with Ubuntu Cinnamon 23.04. I think this same GUI is in all versions of Ubuntu.)
    In the "Other Software" tab, click the "Add..." button and add these (one at a time, but the second one might populate for you automatically after you enter the first):

    deb http://security.ubuntu.com/ubuntu jammy-security main
    deb-src http://security.ubuntu.com/ubuntu jammy-security main

    Close the "Software & Updates" window and allow it to refresh the software packages list.
    Switching gears -- navigate to this site in a web browser (https://packages.ubuntu.com/jammy/amd64/libmpdec3/download), and download "libmpdec3." This is a subdependency for Python 3.10, and it's not available otherwise as far as I could tell.
    Once "libmpdec3" is downloaded, double-click it to install it from your Downloads. You might have to tell it to "Open With..." and choose "Software Manager."
    Once that's installed, open your Terminal and install aptitude with sudo apt install aptitude. (This step might be optional, but aptitude is what I used instead of apt in these next steps. You can try it with apt instead if you want.)
    Once installed, issue these commands:

sudo aptitude install libpython3.10-stdlib
sudo aptitude install python3.10

    Follow xvgray's comment (quoted below).

    python3.10 -m pip install --user virtualenv

    * Delete the `stable-diffusion-webui/venv` directory.

    * Edit the `'./webui.sh'` script and change two entries:

    Line 38: From: python_cmd="python3" To: python_cmd="python3.10"

    Line 161: From: ${python_cmd}" -m venv "${venv_dir}" To: virtualenv --python="/usr/bin/python3.10" "${venv_dir}"

Good luck!


https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9791
◾ Tags:
ehowton: (my_lovers)


◾ Tags:
ehowton: (Skoal)


◾ Tags:
ehowton: (navy)


◾ Tags:
ehowton: (Jack Sparrow)


◾ Tags:

No cut tags