Victor's Cloud Blog
https://ghosty.6ccorp.com/

Snake in the sandbox

https://ghosty.6ccorp.com/snake-in-the-sandbox/ · Mon, 28 Mar 2022 15:34:04 GMT

Continuing where we left off on my Drop-off Form Project, you'd be wise to recall that I had a mostly functioning serverless async form where clients could use the webapp to fill out a drop-off slip prior to bringing devices in for repair.

The state of it was that a submission would send from the web interface through AWS SES and out to my email address, indicating we had basic functionality.  Great start!

Python: the friendly snek

After coming to the conclusion in the last episode that Javascript is still not my friend, I went back to the Snek.  When I look at Python code, I can reasonably determine what is going on, and when I write it, samsies.

With that in mind, I rewrote the Lambda function in Python and got back to where I was with the JS: I could accept the raw event from the webapp and pass it through SES.  From there it would be a matter of cleaning up the raw data into proper formats, e.g. the Form Data and the Signature image.

After some digging around in the docs and Stack Overflow, it was revealed that SES lets you send attachments if you create a Raw email message structure with your Python.  But in order to do so, I had to take the event passed in and "clean it up", i.e. format the data properly so I could separate the wheat from the chaff - the Form Data from the Signature Data.

Doing this required more learnings: how to ingest JSON data from the event and store it in a dict structure, pop items from a dict, iterate values, str.replace characters in a string, and debug input values from CloudWatch endlessly, only to discover that the test data strings from Lambda's built-in test were different from the form data submitted from Postman and required an extra pop.  Add MIME structures, 500-code Server Errors and other fun things, and it came to about a week of studying and failing in agonizing application testing.

[Image: When you finally get your code to act right is a very satisfying feeling]

After a few days of smashing my head against the wall and using the ol' print(everyVarIsee) method of debugging, I finally got my brain to wrap around what I was looking at exactly and smoothed it out like a wrinkly blanket.  I was finally able to jump into a cozy space and enjoy that feeling when you have a freshly made bed.


The pillows on the bed were made from the Base64 encoded signature file, popped from the event, stringified and re-encoded, then attached as a MIME multipart object.  The bed sheets were made from pulling the To: address from the Form Data object and inserting it as part of the Raw email header.  Finally, the blankets were snugged up: I took all of the form data inputs, stringified them into HTML format, cleaned up a couple of unnecessary characters from the json.dumps() output, and tucked them into the MIME object, so the whole thing passes the initial form data through SES out to our email, with a CC to the client's email from the form so they have a receipt of the transaction.
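Condensed a lot, the Lambda ended up shaped roughly like the sketch below.  This is a minimal, illustrative version rather than the exact production code: the field names (signature, email), the addresses, and the assumption that API Gateway hands the form in as a JSON string under event["body"] are all stand-ins.

import base64
import json
import boto3
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

ses = boto3.client("ses")

def lambda_handler(event, context):
    # the webapp posts JSON; pull it into a dict
    form_data = json.loads(event["body"])

    # pop the signature out so it doesn't end up in the HTML body,
    # strip the data-URL prefix, and decode the base64 payload
    signature_b64 = form_data.pop("signature")
    signature_bytes = base64.b64decode(signature_b64.split(",")[-1])

    msg = MIMEMultipart("mixed")
    msg["Subject"] = "Device drop-off form"
    msg["From"] = "shop@example.com"
    msg["To"] = "shop@example.com"
    msg["Cc"] = form_data.get("email", "")

    # the remaining form fields become a simple HTML body
    rows = "".join(f"<p><b>{k}</b>: {v}</p>" for k, v in form_data.items())
    msg.attach(MIMEText(rows, "html"))

    # the signature rides along as an image attachment
    sig = MIMEImage(signature_bytes, _subtype="png")
    sig.add_header("Content-Disposition", "attachment", filename="signature.png")
    msg.attach(sig)

    ses.send_raw_email(
        Source=msg["From"],
        Destinations=[d for d in (msg["To"], msg["Cc"]) if d],
        RawMessage={"Data": msg.as_string()},
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}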

[Image: Confirms receipt of the information for the device, client and their signature.]

I couldn't have done it without you, Python (though I probably could have with the original JS, but that can be for later - I needed results now!)

Mmmm Toast

Once that part was working right, only 2 details remained: clean up the front end app so it didn't look like it came from the late 1990's, and then escape from the AWS SES Sandbox.

So back to the front end.  I am not the greatest at UX design, but I do know there are some ways to get something presentable quickly.  For this I brought in Bootstrap for its clean visuals, responsive mobile abilities, interface elements that are familiar to many people, and quick implementation of things like Toasts, so I didn't have to spend too much time making it act right.

After brushing up on the concepts and skimming templates, I removed the previous unnecessary CSS and applied the Bootstrap stuffs, which brought it together nicely.  I then added the company logo in both a mobile-portrait style and a larger banner style for bigger displays, using a @media CSS query.

[Image: Looks clean, real clean]

After getting that part shaped correctly and responsive, I learned about how to implement a Toast function so that when a user clicks the Submit button, they get a little Pop-up to indicate they have successfully performed an action and they can move onto their next task without wondering "did it send?  Am I worthy as a human?  How many pits in a pat?" etc.

Bootstrap made it super easy: I just needed to add the jQuery script to the page, attach the class and ID to the Submit button, and then, using an example script from a tutorial, I added a little notification to confirm that "Yes, you are worthy as a human and can move on to your next task in life".

Lastly, I cleaned up the Lambda code so it's dry and safe to upload to my GitHub account, which is where I am hosting the page.

The Great Escape

The last part of this journey is to ask AWS very nicely to remove my account from the sandbox.  But you have to be very clear and very explicit about how you are going to use this account, for what, why, how, where, when, how many times you've been divorced, if you have any secret konami codes, and lastly if you only had one song to play for the rest of your life what would it be?  Or something like that.

[Image: I may be over-embellishing about their information requirements]

But I fired off some info, let's see how it goes.  Tune in next week for more adventures in Serverless .... stuff!

Edit: they approved the project and everyone is mirthful!

Just sign at the JS Pad

https://ghosty.6ccorp.com/sign-at-the-js-pad/ · Fri, 18 Mar 2022 06:11:29 GMT

As part of the company my wifey and I made, we have clients drop off their devices for us to repair.  Part of the intake process is to fill out a customer & device info sheet along with a disclaimer statement which they sign and date, and then receive a copy of.

After the pandemic started we needed a way to transition that process from paper forms to digital systems in the most cost-effective manner possible.  This was pretty easy to achieve using a fillable PDF form and an old galaxy 10" tablet which clients could fill out when they arrive.

There was one problem this did not address: clients who use a proxy person to drop off the devices, and who may not have all of the information needed to fill out the form.  We felt it would be best to have some sort of web-based form that clients could fill out and send ahead of time, so the proxy could deliver the device without having to worry about that.  In addition, it seems some of the older clients with vision and/or physical issues would have an easier time filling out a web form than typing on a tablet.

The Project

So the project became defined as a web form which sends to our business email address, can be converted to PDF and sent to the client after the device has been received as proof of receipt, and costs as little as possible since it's likely to only be used a handful of times per month.

The easiest way, of course, would be using a managed service to perform these functions, of which there are a few.  It also needed to have signature capture capability.  That narrowed the field, but still left a handful of options like DocuSign and Formplus, amongst others.

DocuSign, despite being the most well known, was also priced out of our range.  We settled on the Formplus free tier, set up a form, and let it rip.  It worked absolutely great!  Easy to set up, fast, looked great, and had signature capability.  One slight problem: I had overlooked the fact that signature was a premium option, which became disabled after the trial period was up, and getting it back meant moving up to their basic plan at over $20/month.

Ow.

I wasn't able to find any other options that supported signature capture at a low enough cost; I think $8/mo was the lowest I could find.  But for a maximum of 3-5 people a month, I just couldn't justify it.

So I did what any reasonable business owner would do in this situation:

I decided to build my own by spending hours of valuable time and learning a whole bunch in the process

After searching around for the best ways to build an edge-hosted submission form with signature capabilities, it looked like one of the fastest ways was to build a submission form page using an HTML/JS form plus Signature Pad JS, hosted on Vercel, with SendGrid to fire it out.

Easy peasy lemon sq - ow I got lemon juice in my eye!

I followed a tutorial that got me close enough that I was able to build up a form and fire it from my localhost server in VSCode.  Neat!  Next step, deploy to Vercel and test it through that to ensure smooth deliveries, and then add signature pad, dress up the front end and everybody's happy.  Pretty easy!

Until it wasn't, of course.

After deploying to Vercel, the emails would no longer fire out.  I'm not sure why.  No amount of console.log()s seemed to give me information about why they weren't reaching my destination email address.

I could see the form built and sent to the API, and it looked like the API received it, then .. nothing.  And SendGrid's logging is less than helpful as far as I can tell.  Why would it work in the dev environment but not on Vercel?  Initiate Googling.

[Image: Oic]

From what I gleaned, it looked like something to do with cross-origin issues, as I came across numerous other unresolved reports from people using Vercel/SendGrid.  There might be a successful way to do it, but as usual, I did what any reasonable business owner would do in this situation!

I tore it down and started over

When life gives you lemons, make OW I got it in my eye again!

After revisiting other options to accomplish the task at hand, and seeing what I had so far, I came across a system which uses AWS Simple Email Service (SES) to send the form.  I already had the form built, and once I got that part working right from dev, I could add the signature and voila - everyone's happy!

I got familiar with AWS offerings a few months back after taking a course on Cantrill.io for AWS SysOps Administrator Associate training, just to learn more about it, since by this point I had become almost obsessed with cloud technology.  I loved learning about all the ways to skin an apple with all these different tech stacks, and the idea that we can build these Highly Available, resilient computing systems so fast and so cheap (sometimes) just blows my mind.


So from here I built up the IAM role for the Lambda access, created a Lambda function in Node.js to handle the JSON processing, and an API Gateway to handle the form submission.  A little debugging later and it was receiving the form correctly and sending it out to my email.  Yay!


Next step: Signature Pad!

Here's a fun fact about me

I am not great with JavaScript. I like how flexible it is, but I think because of that, I have trouble following along in the code when stuff starts getting invoked all over the place, and I lose track of where I'm at. This becomes important later!

This part went surprisingly well, as I was able to follow the README instructions and it integrated almost perfectly, except for a glitch where it would not draw a continuous curve on mobile displays.

After some intense Googlin' I found a solution in the form of touch event handling, which accounts for differences in input technologies (mouse, trackpad, touch screen).  The original use case for the signature pad was built around mouse control, and to use it with a touch screen I needed to add specific event listeners for that to happen.


Doing so also created a slight bug where I couldn't scroll the page, but that was fixed by correcting the scope of the controls to be only inside the pad element.  We're back in business!

Bringing it all together

So one of the cool things about the Signature Pad JS is that by default it converts the image to a base64 object for passing around and manipulating, which is handy as I can embed that into inline HTML, which fits nicely with my intended project format.

Easy, right?

Stop saying easy!  It's never easy!  And start wearing goggles for the lemon juice!

Ok well it looks like after adding the signature element to my form and passing it to Lambda, we have 2 specific problems:

1) I can't figure out how to separate the signature JSON from the rest of the JSON
2) I can't figure out an easy way to build an HTML page with that element to send via SES from the JS

Oh dear.  Ok, halfway there at least!  But I wonder, can I use Python to handle this portion of it?  I'm a bit faster with the Python than I am with the JS, and I just learned some cool tricks on Python Quickstart for Linux Administrators on Pluralsight.

I'll tell you more later in a follow up post!  For now, my lap is too sweaty from typing this up on the macbook.  Thx - mgmt

Edit: Story continued here with a happy ending!

Raspberry Visions

https://ghosty.6ccorp.com/raspberry-visions-2/ · Wed, 09 Mar 2022 00:14:44 GMT

Dateline: 2013 - Santa Fe Springs

This was one of my favorite projects and I had to learn a lot to get it working reliably and for a reasonable cost.

A new bid RFQ became available from a decent-sized Regional Public Transport Organization looking for a company to install dispensers for the chemical cleaning solutions used in their vehicle wash operations, keep the supply tanks full with minimal work on their end, and meet an SLA requirement of 24-hour response time to any issues.  The project had a budget of $36,000 over a 3-year contract, with options to extend.

So at a high level, they wanted a hands-off system, with no re-filling or maintenance of equipment by their staff.  Their Service Worker contractors were only to dispense and use products during their operations cleaning the vehicles.

We were very interested as we were able to produce the cleaning chemicals, and we had some experience with maintaining equipment like that.  But we hadn't done tank fills yet, and the distance was a bit out of our local range at about 60 miles one-way which might make maintenance and refill operations difficult.

After studying tank fill options, we decided we might be able to fill the tanks with an electric high-speed pump running at about 45GPM, which could be obtained for less than $1k.  Our main challenge at that point would be how to keep the tanks filled without having to drive an hour out to the facility just to see how much supply was left.

The previous supplier had a flat-bed truck with tanks mounted on it, and just drove by monthly, refilling as needed and charging for whatever was dispensed.  Not a bad system, but a bit rigid, and it may result in less than optimal business performance if there was any variation in usage rates.

The Plan

My thinking was that if I was able to install some sort of remote monitoring equipment, we could perform that operation with direct knowledge of the current supply situation and schedule production and deliveries with Just-in-Time precision.

I had built a previous project to give us remote monitoring capability using liquid level switches attached to a Raspberry Pi, to give early warning on low levels at one of our hotel clients.  The Pi was running a Python app I built to send me an email on any trip of the Normally Off liquid level switches, which worked perfectly in my test environment (of course!), but over time the switches would get gummed up or damaged by the products, leading to situations where product would run out, which was less than great.

Between that project and the new bid RFQ, the Raspberry Pi team released a camera attachment as well as an infra-red night-vision-capable camera, which was really cool, but also gave me an idea: what if we could just look at the levels of the supply to keep tabs on them?  Just, every day, shoot me an update on the supply level?  This sounded like it would work with fewer moving parts, so more reliably!

The initial build

[Image: Thanks to the client who was gracious enough to let me test these units. Luckily the project manager liked the idea of them and wanted to see how they worked.]

The first version I built up utilized a Raspberry Pi 2 Model B+ and a first gen PiNoir 4MP camera.  This was all mounted into an ABS plastic housing with an adjustable wall mount kit that is no longer being produced.  I dub thee PiMon 1!

Connectivity was provided by a random Chinese-brand WiFi adapter with a 50ft cable, as I had to mount the antenna outside in order to get any sort of reception inside the cinderblock room and across a parking lot.  That was as long as the RTL driver was stable, which it wasn't.  After some GoOgLiNg, it appeared that those drivers were really only stable in a Static IP configuration, and so it was.

On its best day it got spotty reception, but enough that I was able to connect to their network.  The plan was to connect to some sort of server or SMTP service to enable a daily send of a captured image.

Since this was a fairly large enterprise network, they had most ports locked down, and the only ports I could access were web 80/443 and TLS SMTP port 587.  Good enough!  External email provider it is!

So using some rando tutorial I set the PiMon to connect to Gmail via SMTP using locally stored credentials.  After some trial and error (like image quality and size, login credential syntax, etc) the sendmail method worked great.

Based on that, I wrote a script to capture an image and send it with mpack to my personal inbox every day, triggered by crontab.

#capture a still at 10% quality
raspistill -q 10 -o /home/user/pimon1.jpg

#email the capture with mpack
mpack -s pimon1 /home/user/pimon1.jpg my@emailaddress.com
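Those two lines lived in a little shell script that crontab fired once a day; the entry looked something along these lines (the time and script path are illustrative, not the actual values):

0 7 * * * /home/user/pimon1.sh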
[Image: The IR was much better than I anticipated even when tested at the shop]

This was pretty straightforward and worked pretty well, until it didn't.  There were a few improvements that became apparent over time:

1) A date added onto the subject line would help for tracking purposes, as I could watch the liquid levels over time and use those values to track usage rates as well as use backdated images for auditing purposes.
2) The wifi adapters I was using were struggling to stay connected, particularly at the longer range locations at the end of the parking lots, roughly 1/8 mile.
3) As I added more devices into the system, it would help to have them identify themselves, using a more modular script so I could just roll out changes to the whole fleet.
4) Sometimes the devices would freeze and would need to be restarted, particularly those that were exposed to more of the elements, so maybe a timer or reset function to help them stabilize.
5) Any changes I needed to make to the script had to be done while I was at the location, and it would be nice if I could connect to them from the home base

Problem 1+3

So, addressing problems 1 and 3, I updated the script to add identifying information for each device and the date sent, for reference purposes.

DATE=$(date +"%b-%d-%y")
PINAME=$(hostname)
raspistill -q 10 -o /home/user/"$PINAME".jpg
mpack -s "$PINAME $DATE" /home/user/"$PINAME".jpg victor@emailthingy.com

Problem 2

For the 2nd problem, I had to really dig around for options that a Raspberry Pi could support and that could be used outdoors.  It took a while, and the answer didn't come from the usual consumer hardware, but from the commercial networking space.  While there were many options with a fat price tag, I came across some that were reasonably priced: Ubiquiti products.

After some preliminary research on Ubiquiti products, I settled on using a Ubiquiti PicoStation M2

[Image: Rugged construction, a unidirectional antenna, Power over Ethernet and an Ethernet connection == Great Success]

Since the Raspberry had an ethernet port, connecting it to the Picostation was a cinch.  Connecting the Picostation to the client's AP went quickly and within an hour (I had to learn their UI and networking jargon I wasn't familiar with) it was up and running, with a solid signal that would yield 300ms or less ping times with substantially decreased packet loss across the parking lot.

Since the Picostation was designed for wall mount and outdoor usage, it mounted easy and fast and would withstand whatever inclement weather or horrid sunshine mother nature could hurl at it.  Nice 😎

Problem 4

To address this one, I added a crontab event to restart the unstable Pimons every night at midnight.  This helped with some, but not with others.  
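The restart itself was just a root crontab entry along these lines (a minimal sketch):

0 0 * * * /sbin/shutdown -r now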

For those troublesome units that it did not help with, I installed a physical timer and adjusted the scripts to allow for a start-up, perform-actions, then shutdown cycle.

DATE=$(date +"%b-%d-%y")
PINAME=$(hostname)

#pause for net connect
sleep 240

#smile for the camera
raspistill -q 10 -o /home/user/piMon1.jpg

#package and ship image
mpack -s "piMon1 $DATE" /home/user/piMon1.jpg me@email.com

#removing capture for image freshness
rm /home/user/piMon1.jpg

#pause for transmission
sleep 540
#sleep 1540 

#go to sleep little pibox
sudo shutdown -P -h 0
#sudo reboot

This approach kept them mostly stable.  The only issues encountered after that regarding stability had to do with the SD cards becoming corrupted from time to time, particularly with the ones in the outdoor facilities.

This sub-problem of exposure to the environment led to a solution I will describe a little later.

Problem 5

Waiting to access the devices in order to add upgrades to the scripts was kind of annoying as I wasn't able to install updates in a timely manner, so everything took forever.

The first method I tried to circumvent this issue was by installing a commercial grade 4G Router, namely the Pepwave BR1.  It was still able to connect to CDMA networks, namely Verizon, and I was able to purchase a low cost (read $20/mo) plan to offer cellular connectivity to this location.  If it worked, I could apply the same solution to the 2nd location some 30 miles east.

[Image: LTE FTW!]

After learning about how these beauties worked, I had it up and running in about 2 hours with 2 PiMons connected to it.  Unfortunately, the router was pretty expensive and it still didn't offer good support for SSH'ing back to it for some reason.

After a couple months of running like this, I came across an interesting piece of software: ZeroTier.


Zerotier offered a system of Peer-to-Peer networking using UDP to create a virtual LAN connection, which was able to bypass the client's LAN firewall blocks on SSH.  With that I was able to connect back to my homebase server, and vice versa.   Nice 😎
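For reference, getting a box onto ZeroTier is only a couple of commands; roughly this, with the network ID being whatever your ZeroTier portal shows:

curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join <network-id>

After that you authorize the new member in the portal and it gets its virtual LAN address.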

With this piece of the puzzle in place, I was able to remove the Pepwave solution and connect directly to the on-site WiFi, cutting that cost back down to $0 for connectivity.

And, as a matter of identification, I added a variable to the script to attach the device's IP address to the email as well, so I could quickly reference that device for connections, in addition to the ZeroTier client list in the portal.

Revisions and current state

[Image: Getting close to a nice looking final product lol]

With further time and consideration, I found a nice-sized weatherproof box from McMaster-Carr that I was able to stuff all the parts into to help provide protection against the elements, addressing the sub-problem from 4.

I also added some more scripting to timestamp the images, to help identify when a picture was taken, with the following:

#VARIABLE DEFINITIONS
IP=$(hostname --all-ip-addresses)
DATE=$(date +"%b-%d-%y")
PINAME=$(hostname)

#Pause for net connect (in case of reboot snapshots)
#sleep 60
sleep 240

#removing previous image capture for freshness
rm /home/user/"$PINAME".jpg

##Testing area
#Camera capture (comment or uncomment for USB FS webcam or Raspberry Pi Cam raspistill)
#fswebcam /home/user/piMon1.jpg -r 1280x768
#retake or FSwebcam first image capture bug
#sleep 10
#fswebcam /home/user/piMon1.jpg -r 1280x768

#say cheese!
raspistill -q 10 -o /home/user/"$PINAME".jpg

#timestamp it and append a 1 to filename for identification
convert "$PINAME".jpg -pointsize 36 -fill white -annotate +40+40 'Date: '"$DATE" "$PINAME"1.jpg

#package and ship image
IP=$(hostname --all-ip-addresses)
mpack -s "$PINAME SNAPSHOT $DATE IP $IP" /home/user/"$PINAME"1.jpg ${sendtoaddress}
#upload to web, auto auth
scp -q /home/user/"$PINAME"1.jpg user@xxx.xxx.xxx.xxx:~/captures/

#pause for transmission
#sleep 120
#sleep 540
sleep 14400

#Uncomment to shutdown for periodic cycled PiMons
#sudo shutdown -P -h 0
#sudo reboot

Next Steps

Future modifications would be to use RPi 4 units, and add logic so that the scripts would automatically adjust between always-connected, reboots, or startup-shutdown script versions based on the Hostname.

Next would be to control them more effectively using Ansible (using the always-on and repeat-until-true capabilities of Ansible) for updates/control, and uploading images to AWS S3 instead of my local server.

Physically, they were fairly self-contained.  The last iterations looked a bit cleaner than the holey piMon6 unit above, so my plans for next stages were to minimize the amount of crazy cables inside the box.  

That would likely entail a custom-built power distribution system inside with 110V in, and 110V, 12V, or 5V outputs, plus space for the optional Ubiquiti parts and custom-cut cabling so that it would fit more effectively in that box's space.

Maybe it could be condensed further, since much of that space is just adapters and wires?  I'm not sure, but at this point we've ended the contract and I'm out of that business, so I don't think I'll get a chance to find out any time soon.  C'est la vie.

Here are some random pictures of the project:

[Image: This box was able to contain the enormous RPi! Also pictured: power strip, PoE injector for a Ubiquiti PicoStation, power supply for IR illuminator, extra long cables.]
[Image: IR Illuminator - ENGAGE]
[Image: 10% image quality provided good enough images]
[Image: IR image captured and delivered. The IR at that range worked surprisingly well at penetrating the HDPE container material enough that liquid levels were fairly easy to discern.]
[Image: One of the first captures]
[Image: Later capture]
Migrating a VirtualBoxer
Or how I migrated some Virtualbox machines to KVM/QEMU

https://ghosty.6ccorp.com/migrating-a-virtualboxer/ · Tue, 08 Mar 2022 17:26:20 GMT

In the olden days, I would often set up a server on Virtualbox using a desktop install of my favorite Ubuntu flavor to get it up and running with a GUI interface, since they typically worked well out of the box.

After it was stood up, I would disable the desktop portion and leave the rest running, then migrate it to a beefier server and set it up to autostart and run headless if everything was hunky-dory.

While it has worked great overall, using Virtualbox as a virtual host is kind of janky and hasn't offered the level of virtual machine support I've come to find useful from other hypervisors like KVM or containers like Docker.  

I'm not ragging on it, I just have found these other tools to be a bit more effective when managing larger groups of virtual machines from a server perspective, but using Virtualbox has been handy for its easy-to-use GUI and fairly robust hardware integration.  It just works when I need it to, and there's tons of community support for it.

What I mean by this is that when I am doing tests with larger fleets of VMs, QEMU and Docker make it much easier to deploy and manage these groups using CLI commands to initialize and/or manipulate them.

In any event, I've come across times where I wanted to migrate from Virtualbox to KVM, or even from bare metal to Virtualbox to my Proxmox server, so here are a couple of stories on how I did that:

Scenario 1: moving an Ubuntu bare-metal install to Virtualbox to KVM

This project was to move my old small ERP server from the Dell it was on into a VM.  This was surprisingly easy and worked like a champ!  It was running Ubuntu 12.04 with software RAID 1, using minimal resources (read: Athlon X2 and 2GB DDR2 RAM), and is just for internal company stuffs for my old company.

[Image: OpenERP has served me well]

1) I cloned an image of both RAID disks (though I really only needed the first one) and output them to .dd files, then changed permissions on those images to my user so Virtualbox could attach to them.

2) Using vboxmanage internalcommands, I created a rawfile link to the first disk image and attached it to a Linux-type VirtualBox VM with 4096MB of RAM and 3 CPU cores.  It booted up right away.
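The rawfile link itself is a one-liner; something like this, with the file names made up for illustration:

VBoxManage internalcommands createrawvmdk -filename erp-disk.vmdk -rawdisk /path/to/erp-disk.dd

The resulting .vmdk is just a tiny wrapper that points VirtualBox at the raw .dd image.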

3) I adjusted the network interface, as those values change when you change hardware: I modified the device name in /etc/network/interfaces from eth0 to enp9s0 and switched the static IPs to something on the local network, then restarted networking, which brought it back online.
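For reference, the relevant stanza in /etc/network/interfaces ended up looking roughly like this (the addresses here are placeholders):

auto enp9s0
iface enp9s0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1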

At this point everything was working well enough that I was ready to migrate it to KVM on the Proxmoxboxrox.  The process of importing a *.dd image to KVM is trivial as it does not need conversion of any type, and although *.dd doesn't support snapshots, I can either convert it later to another image format or just rsync whole .dd copies to my backup box.  I'll figure that part out later!

4) Next I created a VM instance on Proxmox with the necessary resources.  In this instance I gave it 1024MB RAM (ballooning to 4096MB), 3 cores, and attached it to the vmbr0 interface.  It doesn't let me create an instance without a disk, but I can just remove that later.

At this point it was ready to finish, but I did not boot the instance yet, as I wanted to connect the transferred disk image first, then let it fly.

The process to attach an existing disk to a VM in Proxmox is pretty easy.  After doing some Google-fu, it looks like the disk must first be declared in that instance's conf file in /etc/pve/qemu-server/ as an unused device, then attached to the instance via the GUI and configured.  I followed this guide, which helped with the initial setup, and then the comment by Sergio which described the "qm rescan" bit to help the system identify the changes.
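Condensed, the attach dance looked something like this (the VM ID, storage name and file names are made up for illustration):

# drop the raw image into the VM's directory on the "local" storage
cp erp-disk.dd /var/lib/vz/images/101/vm-101-disk-1.raw

# declare it in the instance's conf file as an unused device
echo "unused0: local:101/vm-101-disk-1.raw" >> /etc/pve/qemu-server/101.conf

# have Proxmox rescan so it picks up the change
qm rescan --vmid 101

From there the disk shows up as unused in the GUI and can be attached and configured.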

5) Fire it up and work out the bugs.  So here we go!  First bug: no boot disk found!  Aw crap what happened?  Let's check settings:

[Image: Y u no boot?]

Ah.  Newly attached disk does not get set as a boot option.  Enable that, disable the others. Try again.

Boom!  Everything fired up without a hitch.  So next step was just resetting the network interface again for the new host.  Super easy.  

Lastly, on this particular instance, I went in and removed the 2nd RAID drive from mdadm so it wouldn't operate in degraded mode; I don't really need the redundancy as the system is no longer mission critical and is only for reference now, and I can just perform regular snapshots for backup protection.  After that, you do a --grow operation (huh huh), which, interestingly enough, is what's used to shrink the number of RAID devices.  Neat.
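The shrink itself is just a couple of mdadm calls; a rough sketch, assuming the array is /dev/md0:

# drop the member whose device node no longer exists
mdadm /dev/md0 --remove detached

# then shrink the mirror down to a single device (needs --force)
mdadm --grow /dev/md0 --raid-devices=1 --force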

Scenario 2: Migrate from Virtualbox to KVM

The second project I did recently was to move a small WordPress website I was hosting on Virtualbox from that server to the Proxmoxboxroxsox for better underlying hardware.  The process was very similar, however there were a couple of differences in steps, as this VM had a long timeline of snapshots which needed to be merged to produce a current image, which I learned the hard way!

[Image: Yeah that guy just saw a snake I think]

On my first try I took a similar approach to the bare-metal one, yanking the VMDK image from Virtualbox and connecting it to KVM as before.  It worked well enough, however I noticed the WordPress site was ... old.  It was using the old site design, and after further inspection it was clear that I was using an old snapshot image.  Oh, right, snapshots. I didn't merge the snapshots.  Fascinating!

So restarting the process,

1) Exported the VM with vboxmanage, which merges the diffs from the snapshots and produces an OVA in a tidy package.  From there I sent it to the Proxmoxbox.  Now back to the Google-fu to reveal how to import an OVA into KVM.

2) ...unzip the OVA!  It turns out the OVA is just an archive containing the virtual disk and a couple of other files like the manifest and whatnot.  The only part I needed to grab was the VMDK.  Easy enough.  But it seems KVM cannot read VMDK natively.

3) Use qemu-img convert to produce a readable image; in this case I used the qcow format as it has native KVM support, snapshot capabilities and file-space ballooning, though the abstraction layers versus raw can leave some performance on the table.  It's not a high-volume server and it is running on SSD, so I'm not too worried about it.  (Commands for steps 1 through 3 are sketched after this list.)

4) Create the VM instance again and set its parameters, but don't start it yet.

5) As before, attach the unused disk and perform the "qm rescan" so it's detected, then set its parameters, including enabling the disk for booting in the options.  Looks ready to fire up.

6) send it!  Started up without a hitch this time, and after reconfiguring the network interface in /etc/network/interfaces to enp-whatever-it-was, it was connected to vmbrsomething and reached the outside world.  Presto.
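For reference, steps 1 through 3 boil down to a few commands; roughly this, with the VM and file names made up for illustration:

# 1) export from VirtualBox, merging the snapshot diffs into one image
VBoxManage export "wordpress-vm" -o wordpress-vm.ova

# 2) the OVA is just a tar archive; pull the VMDK out of it
tar -xvf wordpress-vm.ova

# 3) convert the VMDK into a qcow2 image that KVM reads natively
qemu-img convert -f vmdk -O qcow2 wordpress-vm-disk001.vmdk vm-102-disk-0.qcow2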

[Image: Runs like a champ. Sweet, buttery champ.]

Finishing touches

For both servers I created a reverse proxy for SSL support using NPM (Nginx Proxy Manager) and Cloudflare Full (Strict) as my cert authority.  This setup works fantastically for these small servers, and Cloudflare offers a modest performance boost with its CDN and caching capabilities when used with Full (Strict) SSL and edge certs.

[Image: I love CloudFlare - very low cost and highly effective SSL, CDN and DNS services]
Home Lab Confab

https://ghosty.6ccorp.com/home-lab-confab/ · Fri, 04 Mar 2022 06:54:28 GMT

I keep getting questions from literally everyone I talk to asking about my home lab set up.

My milk man, my Pokemon trainer, random people at Aldis, even my cryogenic planning advisor!

Alright, to honor my adoring public let me spill the beans about my High Performance (read: extra low budget) Compute Center 😁

Systems

T3500 Precision

  • Xeon 8 Core W3530 2.80GHz
  • 24GB RAM
  • 14TB HDD Storage / 1.15 TB SSD Boot+VM Storage
  • Ubuntu Desktop
  • Usage: General Purpose Compute - Docker/Swarm, Kubernetes, VirtualBox, ERP Server, ZoneMinder Server, NFS Server, Backups + Snapshots + Syncthing

T3600 Precision

  • Xeon 8 Core E5-2665 2.40GHz
  • 48GB RAM
  • 4TB HDD Storage + ZFS / 120GB SSD Boot disk
  • Proxmox VE
  • Usage: General Purpose Compute - Docker/Swarm, KVM, Kubernetes + Argo CD, AWS LocalStack, NFS Server, Backups + Snapshots

Dell Inspiron N5040

  • Core i7 M640 2.80GHz
  • 8GB RAM - 256GB SSD - Ubuntu
  • Usage: Ingress & DNS - PiHole, BIND, Docker/Swarm, Virtualbox, Nginx Proxy Manager, Portainer Manager, Grafana Host

Acer EasyStore H340

  • Atom 230 1.60GHz
  • 2GB RAM
  • 8TB Storage + ZFS
  • Proxmox Server VE
  • Usage: building up to be Deep Freeze & Redundancy Storage
[Image: Bad power supply! Bad!]

APC Backup UPS Pro 1500

  • 2 Working batteries out of 6 😎
[Image: I took it apart only to find that it was the removable batteries that were bad lol]

Network

  • TL-R605 Main Router - 3 Subnets for General, Servers, Guest + VLANs for Virtual subnets
  • 2x 1Gigabit Routers w/ 5GHz/2.4Ghz WiFi (TP-Link & LinkSys) running in AP Mode
  • 2x 1 Gigabit 8 port switches (TP-Link)
  • 2x 2.4GHz WiFi Repeaters in extender mode
  • 1x LinkSys VOIP
  • 2x DCS-8000LH Wireless Cameras (connected to ZoneMinder)
  • 400/20 Mbps Spectrum Cable

Rando test results

[Image: iperf intra-network test between the precisions, not too bad]
[Image: Speedtest results, what's up with that upload speed??]

Anywho

Want to see anything else?  Let me know!

Ghost scrubbing with a Python

https://ghosty.6ccorp.com/ghost-scrubbing-with-a-python/ · Fri, 25 Feb 2022 15:43:01 GMT

So remember that fancy Static Generated version of my Ghost CMS blog I was bragging about?  I may have gotten too excited and overlooked something.

Overlooked something kind of important, namely mobile experience.

[Image: Images go here ^]

Ok so maybe it wasn't working so smooth.  Let's see what's going on this time.


Chrome's native Inspect tool is rad.  Looking through the image elements in that header, it shows me which images exist and which ones don't.  The filenames and directories look fine, so not the same problem as wget, but for some reason some of the elements don't appear in the quick inspect tool, particularly the 600w and the 1000w sizes.


Let's dig a bit into the file system and take a looksy.


Interesting, so I'm not seeing the w1000 or w300 subfolders for the mobile-optimized images that Ghost produces.  And the size folders that do exist don't seem to contain all of the size-optimized imagery they should for each page.

Ok, so maybe httrack isn't flawless, but it should be pretty easy to set an option to produce a mobile-view request, re-run httrack to give me an overlay of a mobile site version, and merge the two, right?

Right?

Httrack is kind of old

Like myself, httrack pre-dates mobile-centric technology, and though its creator still appears to be on the web and active, the git page shows its last update was some years ago, and in a year-old forum post the author says he's not actively maintaining it, so mobile-centric views are unlikely to be integrated.

So when it comes to server-side javascript hosts, there is something about srcsets and javascript or some gobbledegook that prevents them from being called in.  Yuck.

Ok, maybe we can throw something together with the original source clone, a little C won't hurt


Have I mentioned I can be incredibly lazy?  I don't want to do all of that, I just want something fast and easy and cheap!

Ok, so httrack doesn't have any options for changing viewport sizes; what about adding the non-apparent subfolders and images to the sitemap.xml so httrack is told to crawl them?  That should be able to direct it well enough, right?

Aaaand changing sitemap output from Ghost requires digging in the JS source.  Cmon!  I'm trying to be lazy here!

Ok stepping back a second, wget was able to produce a mostly working site, which just needed a little scrubbing to fix some weird filename errors.  But at least it generated the full list of available image sources.

If I'm going to code, I want to do as little as possible.  Let's revisit and see what is going on here.

Ever clean something with a Python, Jimmy?

It looks like this wget issue seems to vex Ghost sites, and while I see a script here that can scrub the output files using some find/sed commands, I've been learning more about python and would like to use that since it's kind of housed all in one bundle instead of a series of different apps (granted they are native apps, and typically found on every install).  I just want an excuse to use Python ok?

So back to the original problem, if some code is involved, let's keep it minimal!

Since I don't want to reinvent the wheel I looked for a quick find-and-replace script that would fit the bill, of which there were many.  The one that I thought was simple and elegant enough was using the os.walk method and fnmatch to recognize file types, making it easier for me to wrap my puny mortal brain around.

import os, fnmatch
def findReplace(directory, find, replace, filePattern):
    for path, dirs, files in os.walk(os.path.abspath(directory)):
        for filename in fnmatch.filter(files, filePattern):
            filepath = os.path.join(path, filename)
            with open(filepath) as f:
                s = f.read()
            s = s.replace(find, replace)
            with open(filepath, "w") as f:
                f.write(s)


findReplace(".", "jpg", "jpg", "*.html")
findReplace(".", "jpg", "jpg", "*.html")
findReplace(".", "jpg", "jpg", "*.html")
Thank you StackOverflow!

This script should jog through the primary folder and subfolders looking for HTML files, inspect them for mangled jpg extensions, and then scrub-a-dub-dub.

So back to our ol' pal wget, run mirror, then run python script and BEHOLD, IT'S ALIVE!

[Image: This represents my typical sunset view of Mount Fuji, from Signal Hill in Long Beach.]

Ok time to be excited again!

Checking the internal htmls I can see that the jpg extensions have been scrubbed correctly across the board, and should I encounter any future issues with mangling I can just add+adjust the python easily and quickly.

So at this point I edited my GitHub Actions dev workflow file to remove the previous httrack code, and to integrate the Python I just needed to tag in a new action, jannekem/run-python-script-action@v1, which lets me run the code inside the Action YML, making sure I set up a Python env prior to calling it:

steps:
      - name: Checkout
        uses: actions/checkout@v1
      
      - name: Setup python support
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Mirror ghosty site
        run: wget -E -r -k -p -q https://mysite
      
      - name: run Python stuff
        uses: jannekem/run-python-script-action@v1
        with:
          script: |
            import os, fnmatch
            def findReplace(directory, find, replace, filePattern):
              for path, dirs, files in os.walk(os.path.abspath(directory)):
                for filename in fnmatch.filter(files, filePattern):
                  filepath = os.path.join(path, filename)
                  with open(filepath) as f:
                    s = f.read()
                  s = s.replace(find, replace)
                  with open(filepath, "w") as f:
                    f.write(s)
            findReplace(".", "jpg", "jpg", "*.html")
            findReplace(".", "jpg", "jpg", "*.html")
            findReplace(".", "jpg", "jpg", "*.html")
            print("Ghost Scrubber Executed")
         

And after massaging the YML for spacing and ... more spacing.. and syntax.. and more spacing... that's what came out with no errors, and runs to completion.

Anddddd VOILA! (again, but for reals this time)

[Image: Mobile views, what's that about anyway?]

The site is fully functional on the CF+S3 bucket with mobile views enabled, and nary a problem to be seen!

(ok now what is the deal with the cloudflare and email link?  I'll explore that later)

Yup!  Everything is happy in happy land!  

On to the next project: invoking an Action every time I make a change to the site's content, so that I just focus on publishing posts and the machines handle the rest.

Ghost in the Mirror

https://ghosty.6ccorp.com/scrubbing-a-ghost/ · Thu, 24 Feb 2022 17:39:08 GMT

So in an earlier episode I talked a bit about my project for hosting a project/resume site for my stuff; here I'll go into a bit more detail.

My objectives were to have a quick, CDN-capable, responsive site that I could just pop my thoughts onto, and then have it served up easily and in a modern, good-looking style with minimal upkeep after getting it running.  Kind of like a diary, where I can just sip on my coffee and throw stuff down whenever I get some free time to document thoughts, projects, ideas, opinions on pineapple on pizza, etc.

"Which way did he go George?"

There are tons of options available for that, so narrowing it down further, I like sticking to open-source projects for the street cred.  I also like self-hosted because it lets me dig in the guts of the system if need be, and learn more about the how's and the why's.  

After perusing various options (and there are a LOT), including my old go-to WordPress, I came across Ghost CMS, which is pretty lightweight (i.e. fast), responsive, supports headless mode (might come in handy?), and has markdown editing support (I started using this a lot with Obsidian - more on this in a later post).

It also had a working front-end that didn't require me building one out in JS, which is a bit more work.  I'm admittedly weak in my JS though I am learning as I go.  It's got tons of other features but these ones are critical for me.

Ghost is also something I could host locally and push to my iceyandcloudy domain, as it can be flattened into a static site which I can host on S3.  This lets me get more familiar with AWS's offerings (which is another one of my broad objectives).

Running it as an AWS-hosted site removes most of the issues with self-hosting and provides a reasonably priced CDN for fast delivery.

Ok let's do this!

So interestingly enough, my Portainer server had a ready-to-use Docker image template available to get a GhostCMS instance up and running with a click.  

[Image: Huh, neat.]

I was able to jump on the local instance and get crackin' on the design preferences, familiarize myself with its interface, and actually have a working site in less than 10 minutes.

Now that's what I'm looking for!

Ok, next to-do!  Let's get this puppy flattened out and hosted on S3!

How do dat?

Uhh let's see here.  Can't host a JS server in S3, can't pull data actively from the local site without using another server but I don't want to run an EC2 or ECS instance and all of that infrastructure.  

So how do I make a working mirror of my local site and mimic it on S3?  I need it to be repeatable and scriptable so it can run either on a cron timer, pulling updates daily and pushing them up to S3, or on a trigger after every post.  So something CLI based.  Ok!

For now let's do cron as I know how to get that up and running quick.  Ok!  Back to flattening.  Uhh.  Google-fu to the rescue!

There are a few options here, wget, httrack, Ghost Static Site Generator, Next.JS+Ghost, HUGO+Ghost, Sunny D, the purple stuff.  Let's try the Sunny D!  I mean, let's try GSSG as it is geared towards exactly what I want to do!

So after pulling from the Fried Chicken Repo and following the directions, I was not able to get the damn thing to run correctly.  Not in my main dev server, Node docker, or Macbook.  It would only run with default params and not accept my flags and options, spitting out either 404 errors or partially cloned sites.  Looking at the source files, it appears to just use wget as the mirror app and then run it through some filters to get everything lined up, then output locally for viewing.

Ok, next option!  Ghost + a general site mirroring app!

Let's start with wget and see how it compares, since I use wget a lot for grabbing stuff off the net.

wget -E -r -k -p https://mysite

Runs well!  Let's see how it looks with python3 -m http.server.

[Image: Uh oh. That's an oopsie.]

Let's dig a bit and see what's going on here.

[Image: Jpgg? Jpgpg? What the hell is that?]

For some reason wget turns these jpg filenames into mutants.  Horrible scourges of nature, never to be shown to the unwary public.

Ok, next option.  

Httrack, though old, still seems capable of mirroring sites effectively and can run from CLI.  It's also available via distro so it's as easy as running apt install, and after a quick read I was able to get a mirror version of the site available for browsing with python3 -m http.server.  Neat!

httrack -Q -q https://mysite

[Image: Looks good! All the pages are connecting and it's running fast. Let's put it on S3!]

So next steps are to stand up an S3 static site, set up permissions and IAMs, upload files, then expose it to CloudFront, connect to Route53 and add cert, then expose that end point.  Silky smooth!
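At the CLI level, the bucket-and-upload portion looks roughly like this (bucket and folder names are placeholders; the CloudFront, Route53 and cert pieces were done through the console):

aws s3 mb s3://my-bucket
aws s3 website s3://my-bucket --index-document index.html --error-document 404.html
aws s3 sync ./mysite s3://my-bucket --delete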

[Image: We're live on S3+CF! It's fast and makes me feel like a big man!]

Ok, now let's set up a pipeline to mirror and publish page updates, so eventually we can have it check daily (or even better, trigger an operation on post updates).

For this part I am using GitHub actions as I already have my GH connected to my dev laptop + VSCode.  I'm lazy, ok?  Is that what you want to hear?  Oh, also I know it's very functional with lots of extensible support, including AWS Actions which can sync changes from my GH Actions environment to me buckets, complete with secrets in envvars.  

Clean, real clean, like my pants is.

So next up I built a dev S3 bucket and a GHA pipeline to work as follows:

On push to dev branch trigger

  • Run httrack on GH Actions to mirror local server Ghost site
  • Connect GH Action mirror to dev S3 bucket and sync changes
  • Run invalidation on CloudFront to ensure fresh cached version of changes

From there I can connect to my dev bucket and check everything out.  

name: Upload to Dev S3 Website Bucket and Invalidate CF Cache
on:
  push:
    branches:
      - dev

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    
    - name: Install httrack
      run: sudo apt install -y httrack

    - name: Checkout
      uses: actions/checkout@v1

    - name: Mirror ghost site
      run: httrack -Q -q https://mysite

    - name: Mirror complete
      run: echo "Mirrored and ready to sync to dev"

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_DEV }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_DEV }}
        aws-region: us-west-1

    - name: Deploy static site to S3 bucket
      run: | 
        cd mysite
        aws s3 sync . s3://my-bucket --delete

    - name: Invalidate dev CF Distro
      run: aws cloudfront create-invalidation --distribution-id me-distro-id --paths "/*"

Everything's working like it's supposed to, so I add the changes to the main.yml workflow, adjust it to reflect the AWS resource changes for the main bucket stuffs, then merge into the main branch and have it run the actions, making it available on my main site (you are here).

And voila!  A Static Generated version of a Ghost CMS site hosted on S3, with CloudFront CDN speed and reliability!  That's mad fresh.

Next steps, add cron timer to check for updates daily, or figure out how to run a trigger if I'm feeling extra-motivated!

Total Project Time: about a week on and off.

Total Project Satisfaction: like, 1000 or more!

Ghosting+Hosting+S3
Let me tell you about JAM, Ghosts, Buckets, Static and Action

https://ghosty.6ccorp.com/ghosting-hosting-s3/ · Thu, 17 Feb 2022 06:54:09 GMT

You might be wondering to yourself, "is this site using a flattened, static version of a Ghost CMS site on an S3 bucket via CloudFront edge distribution?"

The answer might surprise you.  

🤩
Yes, yes it is

Why would I do all of this? Am I mad? Maybe?

Well, it all started December of 2021 when I had come to the conclusion that I needed to move into the IT sector in order to hopefully prevent a future version of myself which was in constant pain and stiffened due to years of moving 55 gallon drums around, lifting heavy objects and driving for hours, then heading home and getting stuck in a chair.

While trying to decide which area of IT I should focus my attention on, I kept seeing suggestions to make a little blog site to introduce myself, talk about previous or current projects, and bare myself to the world to help people get to know me.

So after considering the old standby options of Joomla, Wordpress, Google sites or even Weebly, I looked around to see what was new and fun for hosting blog pages.

I came across all sorts of cool stuff, but what caught my attention was the Static Generated stuff coming out of the JAM stacks like Hugo, Gatsby and the like.

After goofing around with those and realizing I know far less JS than I anticipated, I came across a fairly easy to implement solution:

Ghost CMS fed into a Static Site Generator and then hosted on my S3 bucket under the iceyandcloudy.net domain.

After some initial testing it looks like Ghost is fast enough, can feed output via headless APIs for SSGs, and is easy and fast to set up.  At this point I just wanted to get something up and running without too much headache, and without using really old outdated CMS software hosted on anything more than a bucket-type structure.  Oh, and it should be easy to feed into a CDN so it's fast and nimble, like my cat is.

More to follow, this is just my first post!
