<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>The Comfy Seat</title><link>https://beanbag.technicalissues.us/</link><description>Recent content on The Comfy Seat</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 05 May 2026 20:52:46 -0400</lastBuildDate><atom:link href="https://beanbag.technicalissues.us/index.xml" rel="self" type="application/rss+xml"/><item><title>An Unsupportable Path</title><link>https://beanbag.technicalissues.us/an-unsupportable-path/</link><pubDate>Sat, 14 Jun 2025 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/an-unsupportable-path/</guid><description>&lt;p&gt;Back in December &lt;a href="https://beanbag.technicalissues.us/the-community-is-forking-puppet/"&gt;I wrote about&lt;/a&gt; how we, the community behind the open source project called Puppet, were being forced into forking the project. In the time since then, &lt;a href="https://voxpupuli.org/blog/2025/01/21/openvox-release/"&gt;OpenVox was born&lt;/a&gt; and has been diligently chugging along creating, among other things, builds based off of the last truly open versions of Puppet 7 &amp;amp; 8. We have also been trying to work with Perforce to ensure OpenVox remains compatible with Puppet Core and Puppet Enterprise. We&amp;rsquo;ve given them extensive feedback both in writing and via Zoom meetings on the EULA that is attached to Puppet Core to try to make it workable for the community, but they will not make the necessary changes &lt;a href="https://voxpupuli.org/blog/2025/05/19/perforce-eula/"&gt;so that it is tenable for Vox Pupuli to test our modules against Puppet Core&lt;/a&gt;. Additionally, they are steadfast in their commitment to keep Facter as a private repository going forward. Facter is a critical, load-bearing part of the Puppet technology stack. 
If they make private changes that we don&amp;rsquo;t anticipate or know to test for, it risks breaking the entire ecosystem. Similar to their promises about OSP, they said they&amp;rsquo;d push changes back into &lt;a href="https://github.com/puppetlabs/facter"&gt;the public repo&lt;/a&gt; and take PRs, but given that they have done this zero times in the last 7 months on the puppet repo, this does not seem likely.&lt;/p&gt;</description></item><item><title>Proxying Bitcoin Core and LND with Tailscale and Nginx</title><link>https://beanbag.technicalissues.us/proxying-bitcoin-core-and-lnd-with-tailscale-and-nginx/</link><pubDate>Sat, 08 Feb 2025 10:30:00 +0100</pubDate><guid>https://beanbag.technicalissues.us/proxying-bitcoin-core-and-lnd-with-tailscale-and-nginx/</guid><description>&lt;p&gt;Recently I decided I wanted to run my own Bitcoin and Lightning node and I wanted it to be reachable on the public internet. I didn&amp;rsquo;t, however, want it to actually reside on the server that has the static public IPv4 and IPv6 addresses available. Thus, a reverse proxy was needed. This turned out to be a pretty simple thing to solve thanks to the &lt;a href="https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html"&gt;Nginx Stream Proxy module&lt;/a&gt; and &lt;a href="https://tailscale.com/linuxunplugged"&gt;Tailscale&lt;/a&gt;. Here&amp;rsquo;s the basic architecture:&lt;/p&gt;</description></item><item><title>The Community Is Forking Puppet</title><link>https://beanbag.technicalissues.us/the-community-is-forking-puppet/</link><pubDate>Mon, 16 Dec 2024 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/the-community-is-forking-puppet/</guid><description>&lt;p&gt;So, here&amp;rsquo;s an updated tl;dr on Puppet as an open source project: a fork is absolutely coming now. 
There was a &amp;ldquo;town hall&amp;rdquo; today in which &lt;a href="https://www.linkedin.com/feed/?trk=guest_homepage-basic_nav-header-signin#&amp;amp;lipi=urn%3Ali%3Apage%3Ad_flagship3_pulse_read%3B7s%2B%2BUcSIQQ%2BRbsznDHsEaA%3D%3D"&gt;Perforce Software&lt;/a&gt; made it quite clear they are going to claim they want to work with the community while not actually doing so. As a result, those of us who have been following this closely reassembled, determined there was no longer any real hope of working together, and decided it was time to move forward accordingly. Perforce also made it clear that no community project could use the “Puppet” brand mark / trademark, &lt;a href="https://github.com/OpenPuppetProject/planning/discussions/9"&gt;thus the naming discussion has started&lt;/a&gt; (the linked GitHub org will be renamed as soon as a name is decided upon). Additionally, governance discussions are underway. More will be shared about this as soon as we have come to a decision. In the meantime, some thoughts on the subject are at &lt;a href="https://github.com/OpenPuppetProject/planning/issues/7"&gt;https://github.com/OpenPuppetProject/planning/issues/7&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Routing Across AWS Subnets</title><link>https://beanbag.technicalissues.us/routing-across-aws-subnets/</link><pubDate>Fri, 31 May 2024 17:00:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/routing-across-aws-subnets/</guid><description>&lt;p&gt;This morning at work I was presented with an interesting question: why can&amp;rsquo;t two instances in AWS seem to talk to each other on their internal / private network interfaces? To answer this, let&amp;rsquo;s back up a second and let me show you what the architecture of the environment is. 
First, take a moment and look at the diagram below and observe not only how many layers there are, but also that this is a pretty simple setup with one VPC containing two instances that are spread across two Availability Zones:&lt;/p&gt;</description></item><item><title>Automated Plant Watering</title><link>https://beanbag.technicalissues.us/automated-plant-watering/</link><pubDate>Wed, 01 May 2024 15:00:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/automated-plant-watering/</guid><description>&lt;p&gt;&lt;img src="https://beanbag.technicalissues.us/automated-plant-watering/2024-05-01-raised-bed-wide.webp" alt="Our raised flower bed" loading="lazy"&gt;
&lt;/p&gt;
&lt;p&gt;Every spring, my wife and I get really excited about all the pretty plants and flowers that we can decorate our yard with. We also generally grow some vegetables and/or herbs. The problem with this is that we live in Georgia in the US and it gets freaking hot and humid here during the summer. The oppressive heat makes us not want to go outside to water the plants. Combine this with a little bit of traveling and you have a recipe for mostly dead plants during the latter part of the growing season. Well, this year we decided to not only acknowledge this reality, but to do something about it. You see, I&amp;rsquo;m a bit of a home automation nut and my wife knows it. She was shopping on Amazon and came across an inexpensive drip irrigation kit for gardens and decided to buy it for a raised flower bed we were already planning to set up this year. When it came in, she showed it to me and said &amp;ldquo;now I just need you to make it come on automatically.&amp;rdquo; As you might be able to guess, I was more than happy to take up that challenge. I spent a couple of days doing research to find a solution that fit within these self-imposed parameters:&lt;/p&gt;</description></item><item><title>Custom Weather Entity</title><link>https://beanbag.technicalissues.us/custom-weather-entity/</link><pubDate>Tue, 30 Apr 2024 15:00:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/custom-weather-entity/</guid><description>&lt;p&gt;In Home Assistant 2024.4, this note was in the &amp;ldquo;Backward-incompatible changes&amp;rdquo; section of &lt;a href="https://www.home-assistant.io/blog/2024/04/03/release-20244/"&gt;the release announcement&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The previously deprecated &lt;code&gt;forecast&lt;/code&gt; attribute of weather entities, has now been removed. Use the &lt;a href="https://www.home-assistant.io/integrations/weather#service-weatherget_forecasts"&gt;&lt;code&gt;weather.get_forecasts&lt;/code&gt;&lt;/a&gt; service to get the forecast data instead.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;(&lt;a href="https://github.com/gjohansson-ST"&gt;@gjohansson-ST&lt;/a&gt; - &lt;a href="https://github.com/home-assistant/core/pull/110761"&gt;#110761&lt;/a&gt;) (&lt;a href="https://www.home-assistant.io/integrations/metoffice"&gt;documentation&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;I had a heck of a time finding docs on this, so I have compiled what I did here.&lt;/p&gt;
&lt;h2 id="base-weather-integration"&gt;Base weather integration&lt;/h2&gt;
&lt;p&gt;I am using the &lt;a href="https://www.home-assistant.io/integrations/tomorrowio/"&gt;Tomorrow.io integration&lt;/a&gt; to get forecasts, but what I have done should work with any weather provider.&lt;/p&gt;</description></item><item><title>PowerPress Authorization Flow</title><link>https://beanbag.technicalissues.us/powerpress-authorization-flow/</link><pubDate>Wed, 12 Jul 2023 21:45:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/powerpress-authorization-flow/</guid><description>&lt;p&gt;There’s a proposal in the podcast namespace for an authorization tag and I think &lt;a href="https://wordpress.org/plugins/powerpress/"&gt;PowerPress&lt;/a&gt;, an existing &lt;a href="https://wordpress.org/"&gt;WordPress&lt;/a&gt; plugin that facilitates podcast hosting, can implement the same authorization flow as a podcast hosting provider. The proposed authorization flow is described as working something like this when a service wants to confirm a user owns a podcast:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Service reads an authorization url from the podcast’s rss feed&lt;/li&gt;
&lt;li&gt;Service generates a one-time token&lt;/li&gt;
&lt;li&gt;Service calls authorization url with token &amp;amp; rss feed as parameters&lt;/li&gt;
&lt;li&gt;Website hosting authorization url verifies it’s the home / host of the feed&lt;/li&gt;
&lt;li&gt;Website has user log in&lt;/li&gt;
&lt;li&gt;Website presents user a confirmation page&lt;/li&gt;
&lt;li&gt;If user confirms, website inserts the token into the &lt;code&gt;&amp;lt;podcast:txt&amp;gt;&lt;/code&gt; tag&lt;/li&gt;
&lt;li&gt;Website publishes updated rss feed&lt;/li&gt;
&lt;li&gt;If website supports it, it sends a podping to notify watchers of the updated feed&lt;/li&gt;
&lt;li&gt;Website sends a success response if the feed is updated and a failure response otherwise&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;User interaction is done at this point. The service still needs to see the updated feed, but that is beyond the bits I want to talk about here.&lt;/p&gt;</description></item><item><title>Crash Boom Bang: PoE &amp; Lightning Strikes</title><link>https://beanbag.technicalissues.us/crash-boom-bang-poe-lightning-strikes/</link><pubDate>Sat, 09 Apr 2022 09:00:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/crash-boom-bang-poe-lightning-strikes/</guid><description>&lt;p&gt;A couple of days ago there was a severe storm that rolled through my area. Lots of thunder that literally rattled the walls of my house, lightning strikes nearby, and high winds. At one point the power blinked out too. No big deal&amp;hellip; or at least it wasn&amp;rsquo;t after I realized what was going on. The mystery I am referring to is that when the storm finished I realized my &lt;a href="https://www.tubeszb.com/product/cc2652_poe_coordinator/21?cp=true&amp;amp;sa=false&amp;amp;sbp=false&amp;amp;q=false&amp;amp;category_id=2"&gt;PoE Zigbee Coordinator&lt;/a&gt; wasn&amp;rsquo;t back up and running.&lt;/p&gt;</description></item><item><title>Starting Over with Home Assistant - Prep Time</title><link>https://beanbag.technicalissues.us/starting-over-with-home-assistant-prep-time/</link><pubDate>Sun, 06 Mar 2022 18:00:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/starting-over-with-home-assistant-prep-time/</guid><description>&lt;p&gt;The other day I &lt;a href="https://www.reddit.com/r/homeassistant/comments/t5rsg4/starting_over_maybe/"&gt;posted this question&lt;/a&gt; to Reddit:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I’m seriously considering redoing my Home Assistant setup from scratch now that I know what we actually use and what’s just cruft… anyone else done this?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As expected, there were a variety of opinions. Surprisingly though, there was an overwhelming consensus that redos after having used &lt;a href="https://www.home-assistant.io"&gt;Home Assistant&lt;/a&gt; for a while were a good thing.&lt;/p&gt;
&lt;p&gt;After reading all the comments and thinking about things more, I’ve decided I want to download all my backups, copy out several bits of yaml, export some other settings, and then take the plunge. In my “&lt;a href="https://beanbag.technicalissues.us/introducing-my-home-assistant-setup/"&gt;Introducing My Home Assistant Setup&lt;/a&gt;” post I said I’d be following up with one that breaks down all my automations. This new decision is going to delay that a bit. Instead, I’m going to start by chronicling my journey through rebuilding my setup.&lt;/p&gt;</description></item><item><title>That time I forgot to create alerts for a leak sensor</title><link>https://beanbag.technicalissues.us/that-time-i-forgot-to-create-alerts-for-a-leak-sensor/</link><pubDate>Sun, 20 Feb 2022 16:00:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/that-time-i-forgot-to-create-alerts-for-a-leak-sensor/</guid><description>&lt;p&gt;I bought and setup a leak sensor&amp;hellip; but forgot to have it alert me if it detected water 🤦‍♂️ Here&amp;rsquo;s what happened and my new alerting system.&lt;/p&gt;</description></item><item><title>Temperature sensing for Jupiter Garage</title><link>https://beanbag.technicalissues.us/temperature-sensing-for-jupiter-garage/</link><pubDate>Sun, 30 Jan 2022 22:34:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/temperature-sensing-for-jupiter-garage/</guid><description>&lt;p&gt;The other day I was listening to &lt;a href="https://linuxunplugged.com/441"&gt;Linux Unplugged 441&lt;/a&gt; and heard Chris mention how he wished he had a way to track the temperature in the garage where the server is.&lt;/p&gt;
&lt;p&gt;I decided that this was something I could help with, so I hit him up on Twitter:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hey &lt;a href="https://twitter.com/ChrisLAS"&gt;@ChrisLAS&lt;/a&gt; - I was listening to LUP today and heard you might need to monitor temperatures in your garage… DM me if you want a cloudless WiFi monitor based on ESPHome.&lt;/p&gt;
&lt;p&gt;— Technical Issues (@technicalissues) &lt;a href="https://twitter.com/technicalissues/status/1483594125832294404"&gt;January 19, 2022&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We chatted a tad via direct messages and then I built this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://beanbag.technicalissues.us/temperature-sensing-for-jupiter-garage/jupiter-garage-data-photo-cropped.webp" alt="Photo of the device I made" loading="lazy"&gt;
&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s modeled after one I have in my own garage with a couple of small modifications to suit his use case better. The setup is made up of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a 1/2 size prototyping board&lt;/li&gt;
&lt;li&gt;a D1 Mini (aka a small ESP8266) microcontroller&lt;/li&gt;
&lt;li&gt;a BME280 temperature, pressure, and humidity sensor&lt;/li&gt;
&lt;li&gt;a 3 port spring terminal block&lt;/li&gt;
&lt;li&gt;a Dallas 1-wire temperature sensor in a waterproof housing with a cable attached&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The idea is that &lt;a href="https://twitter.com/ChrisLAS"&gt;Chris&lt;/a&gt; will be able to mount this on, or near, the new server cabinet with the microcontroller at the top so that the heat it generates rises above the onboard sensor. The onboard sensor (the purple part) will allow him to monitor the temperature, barometric pressure, and humidity in the garage while the corded sensor will allow for monitoring the temperature inside the server rack.&lt;/p&gt;</description></item><item><title>Introducing My Home Assistant Setup</title><link>https://beanbag.technicalissues.us/introducing-my-home-assistant-setup/</link><pubDate>Sun, 09 Jan 2022 23:00:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/introducing-my-home-assistant-setup/</guid><description>&lt;p&gt;A year ago today (January 9th, 2021) I deployed what I consider my first production-grade instance of &lt;a href="https://www.home-assistant.io"&gt;Home Assistant&lt;/a&gt; and couldn&amp;rsquo;t be happier. It is an amazingly powerful tool that is 100% free and open source. One of Home Assistant&amp;rsquo;s key features is the fact that it takes a local-first approach to everything. By that I mean that every aspect of the project makes a concerted effort to not rely on the internet or cloud services unless they are absolutely required, such as when integrating with a vendor who does not have a local API (or won&amp;rsquo;t provide access to it to the community). 
This means that if the internet is out I can still control the vast majority of the devices connected to Home Assistant using either the web interface or the app on my phone&amp;hellip; and push notifications from Home Assistant to my phone will continue to work too.&lt;/p&gt;</description></item><item><title>Epomaker GK68XS: A great keyboard, horrific order fulfillment</title><link>https://beanbag.technicalissues.us/epomaker-gk68xs-a-great-keyboard-horrific-order-fulfillment/</link><pubDate>Fri, 13 Nov 2020 20:10:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/epomaker-gk68xs-a-great-keyboard-horrific-order-fulfillment/</guid><description>&lt;p&gt;3 months, 35 emails, and many Facebook Messenger messages later, I finally have all the things I paid for during the GK68XS Kickstarter. I originally ordered 3 keyboards: one of each color combination of the plastic framed wireless models. Sadly, delivery was nothing like what was advertised. Here&amp;rsquo;s the journey I had to take simply to get what I paid for.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On Aug. 14th I got the first one.&lt;/li&gt;
&lt;li&gt;On Aug. 17th I was told by support that the shipping company missed my other ones and that both would be sent out right away.&lt;/li&gt;
&lt;li&gt;On Sept. 2nd I got a delivery but it only contained one keyboard. On Sept. 7th support claimed ignorance and asked for more information.&lt;/li&gt;
&lt;li&gt;On Sept. 9th they apologized and offered to refund me or fast ship the missing one. I opted for the latter as I still wanted the keyboard.&lt;/li&gt;
&lt;li&gt;On Sept. 14th they said I&amp;rsquo;d have it in hand in a week.&lt;/li&gt;
&lt;li&gt;Two weeks later I followed up because I had not gotten anything.&lt;/li&gt;
&lt;li&gt;On Oct. 3rd they again claimed ignorance. I again provided all the requested info and on both Oct. 9 and Oct. 17 I asked for an update.&lt;/li&gt;
&lt;li&gt;On Oct. 17th I reached out via Facebook Messenger and asked if they could help. They said they would escalate my issue.&lt;/li&gt;
&lt;li&gt;On Oct. 21st I posted in Messenger that I still had not heard anything.&lt;/li&gt;
&lt;li&gt;On Oct. 22nd I got a response in Messenger that they&amp;rsquo;d pushed the team and I&amp;rsquo;d get a response the same day.&lt;/li&gt;
&lt;li&gt;On Oct. 23rd I finally got a response back that the missing keyboard would be back in stock &amp;ldquo;in a few weeks.&amp;rdquo; I replied to support and reiterated a previous message that I needed to have the keyboard before December so that I could get it sent out in time for Christmas. They offered to send a white one immediately, which I accepted.&lt;/li&gt;
&lt;li&gt;On Oct. 28th I followed up asking if there was a tracking number yet but did not get a response so I tried again on Nov. 1st.&lt;/li&gt;
&lt;li&gt;On Nov. 1st I posted in Messenger again, this time referring to the conversation from Oct. 23rd. I let them know I had not heard anything at all yet (similar to what I sent support via email). The person working Messenger was quite nice and seemed to try and help.&lt;/li&gt;
&lt;li&gt;On Nov. 2nd I got a message in Messenger saying a response had been sent to my email. I told them I did indeed get a message and that it said they&amp;rsquo;d send me tracking info in 1-2 working days. The person in Messenger asked I keep them updated on the status of things, which I was happy to do.&lt;/li&gt;
&lt;li&gt;On Nov. 6th I followed up again via Messenger and email as I had not heard anything back.&lt;/li&gt;
&lt;li&gt;On Nov. 8th I got a message in Messenger saying my order should be on the way in 1-2 days. I let them know I appreciated the help but that I&amp;rsquo;d already heard that same timeline six days prior. They said they&amp;rsquo;d keep an eye out for it and later that day I finally got the tracking information.&lt;/li&gt;
&lt;li&gt;Today, Nov. 13th I got the keyboard, just as I had ordered it, which was both good and bad. The good is I got exactly what I was after. The bad is that support apparently flat out lied to me when they said they&amp;rsquo;d ship one immediately if I accepted a different color.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I am sharing all this in the open in hopes that it will encourage Epomaker to make process changes so that this doesn&amp;rsquo;t happen again. My wife and I both love our GK68XS keyboards and have even backed the GK96S, though we did so separately in hopes of avoiding a repeat of the fiasco documented here.&lt;/p&gt;</description></item><item><title>OpenTelemetry Part 3: v0.6.0 gems and VMPooler</title><link>https://beanbag.technicalissues.us/opentelemetry-part-3-v0.6.0-gems-and-vmpooler/</link><pubDate>Mon, 05 Oct 2020 18:20:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/opentelemetry-part-3-v0.6.0-gems-and-vmpooler/</guid><description>&lt;p&gt;For part three of my journey in using OpenTelemetry (OTel) with Sinatra I am upgrading to the 0.6.0 release of the OTel gems to get many new features, adding instrumentation to VMPooler, and learning what not to do. Part 3 also includes opening several issues and making my first code contribution to opentelemetry-ruby. Lastly, I will be sharing some more complete code examples showing how all the bits are configured.&lt;/p&gt;</description></item><item><title>OpenTelemetry Part 2: Redoing Instrumentation</title><link>https://beanbag.technicalissues.us/opentelemetry-part-2-redoing-instrumentation/</link><pubDate>Sun, 06 Sep 2020 21:30:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/opentelemetry-part-2-redoing-instrumentation/</guid><description>&lt;p&gt;For part two of my journey in using OpenTelemetry (OTel) with Sinatra I am replacing my Lightstep instrumentation with the OTel version. Besides updating the instrumentation, I am also deploying production instances of an OTel Collector and Jaeger. 
The goal of part 2 is to have my first three applications shipping traces to both a local Jaeger instance and to Lightstep in both test and production and to have Jaeger included in the Docker Compose workflows used during development.&lt;/p&gt;</description></item><item><title>Burnout Sucks</title><link>https://beanbag.technicalissues.us/burnout-sucks/</link><pubDate>Tue, 18 Aug 2020 23:05:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/burnout-sucks/</guid><description>&lt;p&gt;My favorite personal project is an application called &lt;a href="https://piweatherrock.technicalissues.us"&gt;PiWeatherRock&lt;/a&gt;&amp;hellip; or it was before I dove in head-first working to update it and create a community for its users. Tons of enthusiasm morphed into something else and, before I realized what was happening, I didn’t even want to touch my computer after work. Months have passed since I even opened anything related to the project and, as best as I can tell, this is that thing I’ve seen others talk about called “burnout.”&lt;/p&gt;</description></item><item><title>OpenTelemetry Part 1: Sinatra</title><link>https://beanbag.technicalissues.us/opentelemetry-part-1-sinatra/</link><pubDate>Fri, 07 Aug 2020 18:27:00 -0400</pubDate><guid>https://beanbag.technicalissues.us/opentelemetry-part-1-sinatra/</guid><description>&lt;p&gt;&lt;a href="https://opentelemetry.io/"&gt;OpenTelemetry&lt;/a&gt; (aka OTel) is becoming the standard for distributed tracing. This is the first in a multi-part series where I will document my trials, tribulations, and successes along the road of using OTel to instrument multiple applications. The first few are all Ruby applications and some that I hope to do later are written in Java. My goal is to instrument the applications using one or more standards-compliant libraries and then send the spans to an OTel collector. 
The OTel collector will then send them on to one or more backends such as &lt;a href="https://www.jaegertracing.io/"&gt;Jaeger&lt;/a&gt;, &lt;a href="https://lightstep.com/"&gt;Lightstep&lt;/a&gt;, and/or &lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Fixing powerline in homebrew's vim</title><link>https://beanbag.technicalissues.us/fixing-powerline-in-homebrews-vim/</link><pubDate>Tue, 12 May 2020 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/fixing-powerline-in-homebrews-vim/</guid><description>&lt;p&gt;For the last few days I have been trying to figure out why, all of a sudden, &lt;a href="https://powerline.readthedocs.io/en/latest/usage.html"&gt;powerline&lt;/a&gt; has stopped working in vim. Whenever I start vim I get this error:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;╔ ☕️ gene:~
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;╚ᐅ vim
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error detected &lt;span class="k"&gt;while&lt;/span&gt; processing /Users/gene.liverman/.vimrc_os_specific:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;line 2:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Traceback &lt;span class="o"&gt;(&lt;/span&gt;most recent call last&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;&amp;#34;&amp;lt;string&amp;gt;&amp;#34;&lt;/span&gt;, line 1, in &amp;lt;module&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ModuleNotFoundError: No module named &lt;span class="s1"&gt;&amp;#39;powerline&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;line 3:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Traceback &lt;span class="o"&gt;(&lt;/span&gt;most recent call last&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;&amp;#34;&amp;lt;string&amp;gt;&amp;#34;&lt;/span&gt;, line 1, in &amp;lt;module&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NameError: name &lt;span class="s1"&gt;&amp;#39;powerline_setup&amp;#39;&lt;/span&gt; is not defined
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;line 4:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Traceback &lt;span class="o"&gt;(&lt;/span&gt;most recent call last&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;&amp;#34;&amp;lt;string&amp;gt;&amp;#34;&lt;/span&gt;, line 1, in &amp;lt;module&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NameError: name &lt;span class="s1"&gt;&amp;#39;powerline_setup&amp;#39;&lt;/span&gt; is not defined
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Press ENTER or &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; to &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The part that seriously confused me was that this worked fine:&lt;/p&gt;</description></item><item><title>Puppet Camping in place: East meets West</title><link>https://beanbag.technicalissues.us/puppet-camping-in-place-east-meets-west/</link><pubDate>Tue, 28 Apr 2020 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/puppet-camping-in-place-east-meets-west/</guid><description>&lt;p&gt;I pitched a tent at Puppet Camp a couple of times before joining the company and have to say that last week’s event was superb, and it more than lived up to the standards set in ye olden times. It was great to hang out (virtually) with so many community members! There were some faces, or should I say Slack handles, that I knew, but many more I got to meet and chat with for the first time. The work these gurus are doing in their day jobs is just amazing! The best part is that a lot of what was demoed and talked about is directly applicable to the work that I and the other attendees do. Below are some of my takeaways from the event along with a boatload of reference material from the presenters and people in Slack.&lt;/p&gt;</description></item><item><title>Church and COVID-19</title><link>https://beanbag.technicalissues.us/church-and-covid-19/</link><pubDate>Mon, 13 Apr 2020 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/church-and-covid-19/</guid><description>&lt;p&gt;My church has been streaming our service for several years and has been live on a local FM station since before I became a member. I set up streaming initially and have been involved in all kinds of services. 
During that time I’ve run into just about every type of issue or challenge a sound or audio/video tech in a midsize church can&amp;hellip; or so I thought.&lt;/p&gt;</description></item><item><title>De-forking a Puppet Module</title><link>https://beanbag.technicalissues.us/de-forking-a-puppet-module/</link><pubDate>Wed, 26 Feb 2020 13:39:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/de-forking-a-puppet-module/</guid><description>&lt;p&gt;A couple of years ago, the team I’m on forked a Puppet module called &amp;ldquo;mrepo&amp;rdquo; that is used for creating and managing RPM-based repository mirrors. We recently had an issue arise with using the module, and I happened to notice that the upstream of our fork is now Vox Pupuli and that they had made several improvements that we could benefit from. Those changes, combined with knowing the quality work Vox Pupuli does on all of their modules, made me wonder what it would take to get off our fork and back on to the upstream version.&lt;/p&gt;</description></item><item><title>Fixing Vagrant's box index</title><link>https://beanbag.technicalissues.us/fixing-vagrants-box-index/</link><pubDate>Thu, 14 Nov 2019 10:36:00 -0500</pubDate><guid>https://beanbag.technicalissues.us/fixing-vagrants-box-index/</guid><description>&lt;p&gt;I use Vagrant a lot and sometimes things on my laptop get moved around or deleted by means other than &lt;code&gt;vagrant destroy&lt;/code&gt;. The problem with this is that when I later run &lt;code&gt;vagrant global-status&lt;/code&gt; it will show me things that don&amp;rsquo;t actually exist anymore. 
Today I finally got tired of this and figured out how to fix it with minimal pain.&lt;/p&gt;</description></item><item><title>Reclaiming Your Accounts</title><link>https://beanbag.technicalissues.us/reclaiming-your-accounts/</link><pubDate>Fri, 02 Aug 2019 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/reclaiming-your-accounts/</guid><description>&lt;p&gt;This post is a quick reference guide for what I recommend people do when someone has gotten into their accounts on various platforms.&lt;/p&gt;</description></item><item><title>Using the Multi-Resource Declaration and Defined Types to Simplify Manifests</title><link>https://beanbag.technicalissues.us/using-the-multi-resource-declaration-and-defined-types-to-simplify-manifests/</link><pubDate>Thu, 09 May 2019 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/using-the-multi-resource-declaration-and-defined-types-to-simplify-manifests/</guid><description>&lt;p&gt;Sometimes it seems you just keep repeating the same block of code with only one or two lines changed. Sometimes a single thing you need to do more than once is made up of the same two or three resources. These two scenarios are ones that I experience fairly often.&lt;/p&gt;
&lt;p&gt;They are also ones I regularly observe when doing code reviews for others. I am often met with interest and a response that is along the lines of “I didn’t know you could do that” when I mention the idea of simplifying the code I am reviewing by using a multi-resource declaration or a defined type. This post will introduce you to multi-resource declarations and defined types and then walk you through a real-world example of putting them to use to configure load balancing of Puppet Enterprise&amp;rsquo;s services.&lt;/p&gt;</description></item><item><title>Breaking Up a Large Pull Request</title><link>https://beanbag.technicalissues.us/breaking-up-a-large-pull-request/</link><pubDate>Wed, 17 Oct 2018 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/breaking-up-a-large-pull-request/</guid><description>&lt;p&gt;Ever finished up all the changes for a pull request on GitHub and realized it was just too big to review easily or to reason about what&amp;rsquo;s going on? I had just this issue recently. The solution: create multiple patches that each contain a subset of the changes and use them to generate more manageable pull requests.&lt;/p&gt;
&lt;p&gt;For this guide let&amp;rsquo;s make a few assumptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the default branch in your git repo is &lt;code&gt;master&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;the branch containing the big PR is called &lt;code&gt;my_massive_change&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;the changes encompass many different files, many of which are in subfolders&lt;/li&gt;
&lt;/ul&gt;
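With those assumptions in mind, the core idea can be sketched as a self-contained shell session. This is only an illustration of the patch-splitting approach, not the exact commands from the post; the paths (docs/, app/) and the branch name split_docs_only are made up for the example.

```shell
# Sketch: carve the docs-only subset of my_massive_change into a patch,
# then apply it on a fresh branch cut from master for a smaller PR.
# Builds a throwaway repo so the whole thing is runnable as-is.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the default branch is master
git config user.email demo@example.com
git config user.name demo
echo base > README.md
git add README.md
git commit -qm "initial commit"
# the oversized branch touches two unrelated areas
git checkout -qb my_massive_change
mkdir docs app
echo guide > docs/guide.md
echo code > app/main.rb
git add docs app
git commit -qm "one huge change"
# generate a patch containing only the docs subset of the changes
git diff master my_massive_change -- docs/ > docs.patch
# apply it on its own branch and commit; this branch becomes PR number one
git checkout -q master
git checkout -qb split_docs_only
git apply --index docs.patch
git commit -qm "docs changes split out of my_massive_change"
git ls-files
```

Repeat the diff-and-apply step once per logical subset until the big branch is fully accounted for.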
&lt;p&gt;To expedite things I am also using &lt;a href="https://hub.github.com/"&gt;hub&lt;/a&gt; to interact with GitHub.&lt;/p&gt;</description></item><item><title>My journey to securing sensitive data in Puppet code</title><link>https://beanbag.technicalissues.us/my-journey-to-securing-sensitive-data-in-puppet-code/</link><pubDate>Wed, 15 Aug 2018 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/my-journey-to-securing-sensitive-data-in-puppet-code/</guid><description>&lt;p&gt;Dealing with secrets and sensitive data in Puppet is daunting, right? Nope, not at all. Let me show you how to do it. I&amp;rsquo;ve wrapped my head around the options available and want to share my journey in hopes of saving you from a few trials and tribulations. Just interested in the end result? Feel free to scroll down to the last section fittingly entitled &lt;a href="#finalproduct"&gt;&lt;em&gt;The final product&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Dual Booting macOS High Sierra and Linux Mint</title><link>https://beanbag.technicalissues.us/dual-booting-macos-high-sierra-and-linux-mint/</link><pubDate>Tue, 06 Feb 2018 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/dual-booting-macos-high-sierra-and-linux-mint/</guid><description>&lt;p&gt;This is a step-by-step walkthrough for dual booting a MacBook Pro (Mid-2015 aka MacBookPro11,5) that already has macOS High Sierra on it with Linux Mint. The hard drive is formatted APFS and has File Vault turned on.&lt;/p&gt;
&lt;p&gt;Before beginning I suggest reading this entire post to see how involved it is or, at a minimum, read the known issues at the bottom.&lt;/p&gt;</description></item><item><title>GitLab CI and Chocolatey Server</title><link>https://beanbag.technicalissues.us/gitlab-ci-and-chocolatey-server/</link><pubDate>Wed, 09 Aug 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/gitlab-ci-and-chocolatey-server/</guid><description>&lt;p&gt;If you are not familiar with &lt;a href="https://chocolatey.org/"&gt;chocolatey&lt;/a&gt;, it&amp;rsquo;s an awesome package manager, like &lt;code&gt;apt&lt;/code&gt; or &lt;code&gt;yum&lt;/code&gt;, for Windows. You can also host your own &lt;a href="https://github.com/chocolatey/choco/wiki/How-To-Host-Feed"&gt;internal chocolatey feed&lt;/a&gt; and there is even a &lt;a href="https://forge.puppet.com/chocolatey/chocolatey_server"&gt;Puppet module&lt;/a&gt; to build it for you. This can be especially useful for machines that cannot reach out to the internet to perform the installations. Chocolatey even provides a &lt;a href="https://chocolatey.org/docs/how-to-recompile-packages"&gt;step-by-step guide&lt;/a&gt; on how to internalize packages, but this can involve a lot of manual steps: building packages, getting them up to the Chocolatey server, keeping history, and maintaining packages when there are updates.&lt;/p&gt;
&lt;p&gt;This is why I created a quick solution for maintaining your package history in Git and using GitLab CI to automate building and deploying packages to your internal Chocolatey server. This guide assumes you have an internal GitLab instance, an internal Chocolatey server, and a Windows-based GitLab Runner with PowerShell execution. Documentation on GitLab Runners is available &lt;a href="https://docs.gitlab.com/runner/"&gt;here&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>The Road to Puppet 5</title><link>https://beanbag.technicalissues.us/the-road-to-puppet-5/</link><pubDate>Wed, 09 Aug 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/the-road-to-puppet-5/</guid><description>&lt;p&gt;Not long ago Puppet released version 5 to the open source world so, naturally, it was time to start updating all my projects to be compatible with it. The first stop along the way was at the house of Vagrant&amp;hellip; only, there&amp;rsquo;s a catch: it&amp;rsquo;s been relocated. That&amp;rsquo;s right, my Vagrant boxes got a shiny new home at &lt;a href="https://app.vagrantup.com/genebean"&gt;app.vagrantup.com/genebean&lt;/a&gt; as part of some restructuring done by HashiCorp. After getting my new door key (aka account) I went next door to visit my friend &lt;a href="https://packer.io"&gt;Packer&lt;/a&gt;. I hung out in his workshop massaging &lt;a href="https://github.com/genebean/packer-templates"&gt;my templates&lt;/a&gt; with the goal of updating and simplifying the boxes I build. The end result included combining all the versions of RVM into a single build and creating a new box for Puppet 5. Now, if you&amp;rsquo;ve ever hung around Mr. Packer for any length of time then you know he loves to create multiple versions of anything he helps assemble. 
Seeing as I want him to be happy I figured I should oblige and let him create some &lt;a href="https://hub.docker.com/u/genebean/"&gt;Docker images&lt;/a&gt; too.&lt;/p&gt;</description></item><item><title>Add Puppetfile Validation to Testing</title><link>https://beanbag.technicalissues.us/add-puppetfile-validation-to-testing/</link><pubDate>Mon, 31 Jul 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/add-puppetfile-validation-to-testing/</guid><description>&lt;p&gt;This is a quick post about how to add validation of your &lt;code&gt;Puppetfile&lt;/code&gt;, primarily if you are using the &lt;a href="https://github.com/puppetlabs/control-repo"&gt;control-repo&lt;/a&gt; and r10k for deploying Puppet environments. This came about because I found myself entering incorrect syntax into this file on more than one occasion. Additionally, there are no indications of any problem, even when importing environments in Foreman, so the only way to find out is by manually running r10k from the command line on the Puppet Server.&lt;/p&gt;</description></item><item><title>Python PIP Issues after Homebrew upgrade</title><link>https://beanbag.technicalissues.us/python-pip-issues-after-homebrew-upgrade/</link><pubDate>Mon, 24 Jul 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/python-pip-issues-after-homebrew-upgrade/</guid><description>&lt;p&gt;This is just a quick note for anyone else out there who recently ran &lt;code&gt;brew update &amp;amp;&amp;amp; brew upgrade&lt;/code&gt; and then found that Python no longer worked as expected. Here are the important points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The issue is that Homebrew introduced a breaking change and did a crappy job of documenting it.&lt;/li&gt;
&lt;li&gt;The fix is to prefix your path with &lt;code&gt;/usr/local/opt/python/libexec/bin&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;More details can be found at &lt;a href="https://github.com/Homebrew/homebrew-core/issues/15746"&gt;https://github.com/Homebrew/homebrew-core/issues/15746&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
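Concretely, the path change from the list above is a one-line addition to your shell profile. A minimal sketch (which file you edit depends on your shell):

```shell
# Prepend Homebrew's unversioned python/pip shims to PATH
# (the fix described above; goes in e.g. ~/.zshrc or ~/.bashrc)
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
# sanity check: that directory should now lead the search path
echo "$PATH" | cut -d: -f1
```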
&lt;p&gt;For me, the fix was to add this to my &lt;code&gt;.zshrc&lt;/code&gt; file:&lt;/p&gt;</description></item><item><title>Upgrade to Puppet 5</title><link>https://beanbag.technicalissues.us/upgrade-to-puppet-5/</link><pubDate>Sat, 15 Jul 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/upgrade-to-puppet-5/</guid><description>&lt;p&gt;Today I successfully upgraded our Puppet Master from Puppet 4.x (puppetserver 2.7.2) to Puppet 5 (puppetserver 5.0.0). It was wildly helpful to go through the entire upgrade process and perform LOTS of testing and troubleshooting with the &lt;a href="https://github.com/genebean/vagrant-puppet-environment"&gt;Vagrant Puppet Environment&lt;/a&gt;, which is basically an exact replica of my production environment. This is an all-in-one Open Source Puppet setup and, once the next release is out, I would highly recommend it for testing!&lt;/p&gt;</description></item><item><title>Automatically Generate GoAccess stats</title><link>https://beanbag.technicalissues.us/automatically-generate-goaccess-stats/</link><pubDate>Fri, 16 Jun 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/automatically-generate-goaccess-stats/</guid><description>&lt;p&gt;I&amp;rsquo;ve been using &lt;a href="https://goaccess.io"&gt;GoAccess&lt;/a&gt; to look at my logs for a while now. The other day I decided I wanted to be able to look at these stats for the different sites on my web server in a variety of ways, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;all data from all sites combined&lt;/li&gt;
&lt;li&gt;all data on a per-site basis&lt;/li&gt;
&lt;li&gt;daily stats from each site kept for a week&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The catch with daily stats is that they need to be generated in a way that covers only that day. That sounds simple, but logrotate generally runs around 3am. So what&amp;rsquo;s the solution? Cron. To be more exact, run logrotate from cron and generate stats while you&amp;rsquo;re at it.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Puppet Name: rotate nginx logs&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; * * * /root/updatestats.sh
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Now, if you are going to run logrotate from cron, you&amp;rsquo;d better turn off the original one. Here&amp;rsquo;s how I did that:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat /etc/logrotate.d/nginx
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# this is managed by a cron job.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# the script that would normally be here is at /root/nginx-logrotate.conf&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You have to do something like this instead of just deleting the file because otherwise the next time there is an update to Nginx (or whatever web server you are running) it will just recreate the file. The reason this works is that installers generally don&amp;rsquo;t clobber existing files. Below is the referenced replacement:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat /root/nginx-logrotate.conf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# use date as a suffix of the rotated file&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;dateext
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/var/log/nginx/*log &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; create &lt;span class="m"&gt;0644&lt;/span&gt; nginx nginx
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; daily
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; rotate &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; missingok
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; notifempty
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; compress
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sharedscripts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; postrotate
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; /bin/kill -USR1 &lt;span class="sb"&gt;`&lt;/span&gt;cat /run/nginx.pid 2&amp;gt;/dev/null&lt;span class="sb"&gt;`&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; endscript
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description></item><item><title>Node.js, CentOS7, and libhttp_parser.so.2</title><link>https://beanbag.technicalissues.us/node.js-centos7-and-libhttp_parser.so.2/</link><pubDate>Wed, 07 Jun 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/node.js-centos7-and-libhttp_parser.so.2/</guid><description>&lt;p&gt;Don&amp;rsquo;t you just love it when package maintainers break your blog? Yeah, me too. Tonight I went to post an article (no, not this one) and found my site to be down. When I went to start it back up I got this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;ghost ~&lt;span class="o"&gt;]&lt;/span&gt;$ /usr/bin/npm start --production
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;node: error &lt;span class="k"&gt;while&lt;/span&gt; loading shared libraries: libhttp_parser.so.2: cannot open shared object file: No such file or directory
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;As it turns out, the maintainer of the &lt;code&gt;nodejs-6.10.3-1.el7.x86_64&lt;/code&gt; package added this to their changelog:&lt;/p&gt;</description></item><item><title>Saying Goodbye to dd-wrt</title><link>https://beanbag.technicalissues.us/saying-goodbye-to-dd-wrt/</link><pubDate>Tue, 23 May 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/saying-goodbye-to-dd-wrt/</guid><description>&lt;p&gt;Tonight I had to wave a sad goodbye to &lt;a href="https://www.dd-wrt.com"&gt;dd-wrt&lt;/a&gt; and revert back to a stock firmware. This travesty is because the &lt;a href="https://www.dd-wrt.com/site/support/other-downloads"&gt;dd-wrt firmware&lt;/a&gt; doesn&amp;rsquo;t support the hardware NAT function on the &lt;a href="http://www.tp-link.com/us/download/Archer-C7_V2.html"&gt;TP-Link Archer C7 v2&lt;/a&gt;, which resulted in losing over two thirds of my bandwidth. Being that &lt;a href="https://waveg.wavebroadband.com/"&gt;my ISP&lt;/a&gt; provides me with a full gigabit both up and down, that equated to getting only 200-300 megs each way instead of over 900 on a wired connection. On wireless things were even worse: I was getting 100-200 megs vs over 500.&lt;/p&gt;</description></item><item><title>Zabbix 3.2 is WAY more efficient!</title><link>https://beanbag.technicalissues.us/zabbix-3.2-is-way-more-efficient/</link><pubDate>Wed, 10 May 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/zabbix-3.2-is-way-more-efficient/</guid><description>&lt;p&gt;Recently our Oracle DBA hit me up and said that all of a sudden some of his servers were showing a load average of &lt;code&gt;0.00, 0.00, 0.00&lt;/code&gt;. To diagnose this I started looking at our Zabbix dashboard to see when the load dropped off. 
I noticed it was on March the 3rd so I checked a second host and found that it also dropped off on the same day&amp;hellip; &lt;em&gt;interesting&lt;/em&gt;.&lt;/p&gt;</description></item><item><title>Solving a WordPress 'http error'</title><link>https://beanbag.technicalissues.us/solving-a-wordpress-http-error/</link><pubDate>Sat, 25 Mar 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/solving-a-wordpress-http-error/</guid><description>&lt;p&gt;Tonight we were trying to make the first post on my wife&amp;rsquo;s blog and ran smack into a &amp;ldquo;Http error&amp;rdquo; message. When I looked in the console of my web browser I found an error 413 (Request Entity Too Large) message. After a bit of Googling it turns out that Nginx was the culprit. Apparently the default value of &lt;a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size"&gt;&lt;code&gt;client_max_body_size&lt;/code&gt;&lt;/a&gt; is 1 meg. As I am sure you can imagine, most images grabbed with a camera phone are larger than that now.&lt;/p&gt;</description></item><item><title>Exploring Grafana</title><link>https://beanbag.technicalissues.us/exploring-grafana/</link><pubDate>Sun, 05 Mar 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/exploring-grafana/</guid><description>&lt;p&gt;This weekend I decided to check out &lt;a href="http://grafana.org"&gt;Grafana&lt;/a&gt;. My first test for it was setting up the &lt;a href="https://grafana.net/plugins/alexanderzobnin-zabbix-app"&gt;Zabbix backend&lt;/a&gt;. This went much better than I had expected so I started looking at what other data I could pull in. It turns out that Grafana may well be a great tool for centralizing data and metrics from disparate sources. The consensus on the interwebs, as best as I can tell, is that InfluxDB is the backend I should store my metrics in so I&amp;rsquo;m going to try that next. 
Once InfluxDB is set up, my plan is to try out some one-off inputs to it such as:&lt;/p&gt;</description></item><item><title>SSL, Name-based Virtual Hosts, and Let's Encrypt</title><link>https://beanbag.technicalissues.us/ssl-name-based-virtual-hosts-and-lets-encrypt/</link><pubDate>Mon, 27 Feb 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/ssl-name-based-virtual-hosts-and-lets-encrypt/</guid><description>&lt;p&gt;When I started switching everything I could over to https-only I was under the impression that the only option was to tie each host to a single certificate unless I wanted to shell out the big bucks for a wildcard cert. This also meant one host per IP address if I wanted to use the standard port 443. That was two or three years ago. Just a few months ago I learned that SAN certificates were recognized by all the major browsers and started taking advantage of them to reduce the burden of needing two certs to cover things like example.com and &lt;a href="https://www.example.com"&gt;www.example.com&lt;/a&gt;. In my mind this still required two IP addresses though (one per domain). All this changed tonight when I decided on a whim to see if you could set up Nginx to recognize name-based virtual hosts that were all tied to a single SAN certificate on a single IP. As it turns out, this works just fine (who knew?!?). And, as the icing on the cake, Let&amp;rsquo;s Encrypt supports up to 100 SAN entries per certificate!&lt;/p&gt;</description></item><item><title>Updating to Puppet 4 Part 3</title><link>https://beanbag.technicalissues.us/updating-to-puppet-4-part-3/</link><pubDate>Tue, 07 Feb 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/updating-to-puppet-4-part-3/</guid><description>&lt;h5 id="hooked-and-proxied"&gt;&lt;em&gt;Hooked and Proxied&lt;/em&gt;&lt;/h5&gt;
&lt;p&gt;When I left off last time a webhook receiver was needed&amp;hellip; well, it&amp;rsquo;s finished and published to &lt;a href="https://forge.puppet.com/"&gt;Puppet Forge&lt;/a&gt; as &lt;a href="https://forge.puppet.com/genebean/puppetmaster_webhook"&gt;genebean/puppetmaster_webhook&lt;/a&gt;. The module creates a custom &lt;a href="http://www.sinatrarb.com/"&gt;Sinatra&lt;/a&gt; application and installs it along with &lt;a href="https://rvm.io/"&gt;RVM&lt;/a&gt;. The end result is that you can post messages from GitHub or GitLab and have it deploy the corresponding repository&amp;rsquo;s branch or environment.&lt;/p&gt;
&lt;p&gt;While I was setting all this up I also decided to front everything with HAProxy so that I could simulate being behind a load balancer immediately and to prepare for the eventual high availability setup that is my end goal. As of today I have it so that all nodes talk to the Puppet master by way of the proxy. &lt;a href="https://theforeman.org/"&gt;Foreman&lt;/a&gt; and my webhook receiver are also being fronted by the proxy.&lt;/p&gt;</description></item><item><title>Updating to Puppet 4 Part 2</title><link>https://beanbag.technicalissues.us/updating-to-puppet-4-part-2/</link><pubDate>Sat, 28 Jan 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/updating-to-puppet-4-part-2/</guid><description>&lt;h4 id="four-repos-become-one"&gt;&lt;strong&gt;&lt;em&gt;Four repos become one&amp;hellip;&lt;/em&gt;&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;When I last created a full Puppet environment &amp;ldquo;Roles &amp;amp; Profiles&amp;rdquo; were the &lt;em&gt;new&lt;/em&gt; way to do things. &lt;a href="http://garylarizza.com/"&gt;Gary Larizza&lt;/a&gt; was posting articles that talked all about how each of these should be in its own repository and how we should use r10k and hiera, each with a repo of their own as well. What that meant was that concerns were well separated, but it also made for a rather complex environment.&lt;/p&gt;</description></item><item><title>Updating to Puppet 4 Part 1</title><link>https://beanbag.technicalissues.us/updating-to-puppet-4-part-1/</link><pubDate>Wed, 25 Jan 2017 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/updating-to-puppet-4-part-1/</guid><description>&lt;p&gt;More than two years ago I created a multi-node Vagrant setup based around a three-node Puppet environment with boxes for:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Foreman acting as a CA, report viewer, and ENC&lt;/li&gt;
&lt;li&gt;PuppetDB&lt;/li&gt;
&lt;li&gt;A Puppet master with r10k&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The environment also has a client node to test against.&lt;/p&gt;
&lt;p&gt;At the time I built all this, Puppet 3.x was the latest version. Fast forward to January 2017 and Puppet 3 has been end-of-life&amp;rsquo;d, Puppet is on version 4.8, Puppet Server is on version 2.7, and control repos are a thing, so I figured it was time to update my stuff.&lt;/p&gt;
&lt;p&gt;The new site is built on v0.11.4 of Ghost from &lt;a href="https://github.com/TryGhost/Ghost" title="TryGhost/Ghost on GitHub"&gt;https://github.com/TryGhost/Ghost&lt;/a&gt; and fronted by Nginx. Some additional improvements will be coming soon as will more posts (I hope).&lt;/p&gt;</description></item><item><title>ELK Stack v2 (and a correction)</title><link>https://beanbag.technicalissues.us/elk-stack-v2-and-a-correction/</link><pubDate>Mon, 20 Jul 2015 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/elk-stack-v2-and-a-correction/</guid><description>&lt;p&gt;I&amp;rsquo;ve learned a lot since my last post. One of those things is that I was wrong… setting up Logstash on your Redis nodes isn&amp;rsquo;t such a bad idea. Another thing that I have learned is that fluentd / td-agent is not as great as I thought it was. My revised plan as depicted in the updated design below is to use Logstash Forwarder on my non-Windows nodes and send that to a Logstash instance that does nothing but stick things into a local Redis instance. Doing this also eliminates the need for my custom receiver named Sawyer. The last change noted below is that I have upped my number of Elasticsearch data nodes and Logstash indexers to 3 each. This was a direct result of load. I also like the improved distribution of shards by having more than 2 nodes in a 5×2 shard setup.&lt;/p&gt;</description></item><item><title>ELK Stack Design</title><link>https://beanbag.technicalissues.us/elk-stack-design/</link><pubDate>Fri, 19 Jun 2015 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/elk-stack-design/</guid><description>&lt;p&gt;I&amp;rsquo;ve been working on a new logging system based around &lt;a href="https://www.elastic.co/products/elasticsearch"&gt;Elasticsearch&lt;/a&gt;, &lt;a href="https://www.elastic.co/products/logstash"&gt;Logstash&lt;/a&gt;, and &lt;a href="https://www.elastic.co/products/kibana"&gt;Kibana&lt;/a&gt;. 
One of my biggest challenges was that all the recommended designs I found said that logs should go from a shipper to &lt;a href="http://redis.io"&gt;Redis&lt;/a&gt;. The problems with this are twofold:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Logstash doesn&amp;rsquo;t seem like a good fit for Windows. The biggest issue is that it relies on Java, which isn&amp;rsquo;t an easy sell to any Windows admin I know. The other is that it simply didn&amp;rsquo;t work reliably in my testing. The &lt;a href="https://www.elastic.co/blog/logstash-1-5-0-ga-released" title="Logstash 1.5.0 GA release notes"&gt;1.4.x series had performance issues&lt;/a&gt; and the copy of 1.5.1 I just tried on Windows 7 is throwing&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;Windows Event Log error: Invoke of: NextEvent
Source: SWbemEventSource
Description: Timed out
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;errors under even the simplest of tests. Unlike other tools, it also requires specifying each Event Log you want to monitor individually instead of being able to just grab them all.
2. Not everything can have an agent on it, which means I needed a way to pipe syslog into Redis.&lt;/p&gt;</description></item><item><title>Windows 7 x64 and Underscore-CLI</title><link>https://beanbag.technicalissues.us/windows-7-x64-and-underscore-cli/</link><pubDate>Sun, 23 Nov 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/windows-7-x64-and-underscore-cli/</guid><description>&lt;p&gt;&lt;a href="https://github.com/ddopson/underscore-cli" title="Underscore-CLI Website"&gt;Underscore-CLI&lt;/a&gt; is a great utility for working with JSON data. Below are the steps it took to get it running on my Windows 7 laptop:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Install &lt;a href="http://nodejs.org/" title="node.js"&gt;node.js&lt;/a&gt;. Note that Node adds a trailing \ to its path entry; to actually use it you must remove this, as Windows does not want it to be there&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://www.python.org/" title="python"&gt;Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Add Python to your path (something like C:\Python27)&lt;/li&gt;
&lt;li&gt;Underscore-CLI uses node-gyp… to get that to work on Windows 7 x64 you have to follow their guide at &lt;a href="https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup" title="vs2010setup"&gt;https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup&lt;/a&gt;. Be sure to pay attention to the part about utilizing the Windows 7 SDK command prompt.&lt;/li&gt;
&lt;/ol&gt;</description></item><item><title>Hyper-V, CentOS 6.5 kernel panic, and 7 long hours</title><link>https://beanbag.technicalissues.us/hyper-v-centos-6.5-kernel-panic-and-7-long-hours/</link><pubDate>Wed, 28 May 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/hyper-v-centos-6.5-kernel-panic-and-7-long-hours/</guid><description>&lt;p&gt;In hopes of helping someone else avoid the hours of work I just spent, here is my lesson learned from my first day of using Windows Server 2012 r2 Hyper-V.&lt;/p&gt;</description></item><item><title>Vagrant, Fusion, &amp; DHCP Oddities</title><link>https://beanbag.technicalissues.us/vagrant-fusion-dhcp-oddities/</link><pubDate>Sun, 11 May 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/vagrant-fusion-dhcp-oddities/</guid><description>&lt;p&gt;I’ve had some random weirdness that I thought was related to Vagrant’s VMware Fusion provider until I turned on debugging tonight. As it turns out, Fusion had decided at some point in the past to start storing its DHCP leases in vmnet-dhcpd-vmnet8.leases~ instead of vmnet-dhcpd-vmnet8.leases. The same was true for vmnet1 too. After quitting Fusion and running ‘sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop’ I removed vmnet-dhcpd-vmnet* so that all the leases would be reset. After that I reran ‘vagrant up’ and (finally) things worked as expected.&lt;/p&gt;</description></item><item><title>What's Missing from GitLab?</title><link>https://beanbag.technicalissues.us/whats-missing-from-gitlab/</link><pubDate>Fri, 02 May 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/whats-missing-from-gitlab/</guid><description>&lt;p&gt;The other day I was asked what &lt;a href="http://bit.ly/1nRNoIH"&gt;GitLab&lt;/a&gt; was missing and I realized that, really, it&amp;rsquo;s not much. 
The single biggest thing to me is the inability to create new projects and interact with existing ones from a remote shell session a la &lt;a href="https://github.com/jingweno/gh/blob/master/README.md"&gt;gh / GitHub CLI&lt;/a&gt;. Other than that it really comes down to polish and aesthetics. Below is my $0.02 based on interacting with GitLab as a person who runs a server and as an end user.&lt;/p&gt;</description></item><item><title>Configuration Management Part 3: Vagrant &amp; Packer</title><link>https://beanbag.technicalissues.us/configuration-management-part-3-vagrant-packer/</link><pubDate>Sun, 27 Apr 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/configuration-management-part-3-vagrant-packer/</guid><description>&lt;p&gt;To facilitate developing my Puppet code, the &lt;a href="http://amzn.to/QPzitQ"&gt;Pro Puppet&lt;/a&gt; book suggests using &lt;a href="http://bit.ly/QPzpFI"&gt;Vagrant&lt;/a&gt;. Seeing as I’ve been meaning to get around to learning it for a while I decided now was the time to finally do so. The only problem is that, being a responsibly paranoid SysAdmin, I was never a fan of using a base for my work that I didn’t know the contents of. I also never liked the idea of basing my work off of something I didn’t understand (a Vagrant box) or that could go away at anytime.&lt;/p&gt;</description></item><item><title>Configuration Management Part 2: puppetlabs-apache &amp; puppet-lint</title><link>https://beanbag.technicalissues.us/configuration-management-part-2-puppetlabs-apache-puppet-lint/</link><pubDate>Wed, 23 Apr 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/configuration-management-part-2-puppetlabs-apache-puppet-lint/</guid><description>&lt;p&gt;Today was a good day. I installed puppet-lint and ran it against a custom module I’m writing for my first node and found lots of issues that it was kind enough to tell me exactly how to resolve. 
I then got down to using my first module from Puppet Forge: &lt;a href="http://bit.ly/1nq94vg"&gt;puppetlabs-apache&lt;/a&gt;. Installing it was a piece of cake but understanding how to use it took a bit of trial and error.&lt;/p&gt;</description></item><item><title>Configuration Management Part 1: The Restart</title><link>https://beanbag.technicalissues.us/configuration-management-part-1-the-restart/</link><pubDate>Tue, 22 Apr 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/configuration-management-part-1-the-restart/</guid><description>&lt;p&gt;As mentioned in my &lt;a href="http://bit.ly/1eXR5cQ"&gt;last post&lt;/a&gt;, I’ve decided to start over on my journey to doing configuration management in an environment where we treat our infrastructure as code. Today I kicked things off by setting up a new Puppet Master on CentOS 6.5. Once my usual setup was applied to the system via a PXE boot &amp;amp; Kickstart, I installed Git and the puppetmaster package and was off.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;&lt;br&gt;
One of my main goals is to track everything in Git, so my first task was to change the group ownership of /etc/puppet to my puppetadmins group and give them write access. Then I needed to initialize a repo in that directory, tell Git that it’s a shared repository so other admins can work in it too, and tell Git to ignore the modules folder. I then applied the group permissions to everything inside the folder, did setgid on modules &amp;amp; manifests, and did a setfacl on modules &amp;amp; manifests so that we admins would retain rwx on all files and folders. Lastly, I cloned my first module from our GitLab instance into a folder under modules.&lt;/p&gt;</description></item><item><title>Foreman: Too Much Voodoo</title><link>https://beanbag.technicalissues.us/foreman-too-much-voodoo/</link><pubDate>Mon, 21 Apr 2014 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/foreman-too-much-voodoo/</guid><description>&lt;p&gt;I finally got around to setting up &lt;a href="http://bit.ly/1eXLC65"&gt;Foreman&lt;/a&gt; at work and managing my first node with it.  After digging around I found that I felt really boxed in using this setup because so much of the work is done behind the scenes in some magical way.  One of my main goals is to facilitate the concept of infrastructure as code and, like my code, track changes via git and store them in our &lt;a href="http://bit.ly/1eXPDY8"&gt;GitLab&lt;/a&gt; instance.  The Foreman, as best I can tell, takes and hides &lt;span style="text-decoration: underline"&gt;everything&lt;/span&gt; it does inside a database which prevents me from being able to apply any version control to its settings.  This is an unforeseen and unfortunate reality because the developers have made a really good looking product that can do a lot of really cool things.  
For me though, this is too much voodoo at this early a stage of us doing configuration management, and I think I’m going to back out my install and start over with a different approach that defines nodes in plain text .pp files. I’m sure I’ll take advantage of pulling in data from some external source like &lt;a href="http://bit.ly/1eXM6cm"&gt;Hiera&lt;/a&gt; and/or other systems we have to help make decisions dynamically but I don’t think I want the configs themselves in a db… who knows; guess I’ll try it out and see.&lt;/p&gt;</description></item><item><title>Puppet, Factor, Dashboard, MCollective, Hiera, Foreman... Where does it end?</title><link>https://beanbag.technicalissues.us/puppet-factor-dashboard-mcollective-hiera-foreman...-where-does-it-end/</link><pubDate>Sun, 27 Oct 2013 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/puppet-factor-dashboard-mcollective-hiera-foreman...-where-does-it-end/</guid><description>&lt;p&gt;I want to set up a system based around the open source &lt;a href="http://projects.puppetlabs.com/projects/puppet"&gt;Puppet&lt;/a&gt; software stack… I am just not sure what the stack contains. My goal is a system that is holistic where Puppet, VMware vSphere, monitoring, revision control, and continuous integration all play well together. I also want to bring &lt;a href="http://www.vagrantup.com"&gt;Vagrant&lt;/a&gt; and &lt;a href="http://www.packer.io"&gt;Packer&lt;/a&gt; into the mix for development environments. I’ve gathered so far that all the stuff listed in the title will be useful but what else is needed? 
Please comment below and I’ll also post again as I find answers.&lt;/p&gt;</description></item><item><title>Automatically Starting and Stopping Oracle Fusion Middleware on Red Hat 5</title><link>https://beanbag.technicalissues.us/automatically-starting-and-stopping-oracle-fusion-middleware-on-red-hat-5/</link><pubDate>Sat, 28 Sep 2013 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/automatically-starting-and-stopping-oracle-fusion-middleware-on-red-hat-5/</guid><description>&lt;p&gt;At work we utilize Oracle Fusion Middleware on Red Hat 5.8.  As the primary systems administrator for the servers running FMW, I always found it to be a real pain that something as simple as a reboot required me to involve the app admin.  Instead of just being annoyed I got with that app admin to learn how the services were started and stopped and then wrote a set of SysV init scripts to automate that process.  These scripts seem to be reliable now so I have released the code on BitBucket at &lt;a href="https://bitbucket.org/genebean/oracle-fmw-sysv-init"&gt;https://bitbucket.org/genebean/oracle-fmw-sysv-init&lt;/a&gt;.  These scripts cover all the components used when running Ellucian’s Internet Native Banner and Self Service Banner.&lt;/p&gt;</description></item><item><title>Vagrant, Veewee, &amp; Me (part 1)</title><link>https://beanbag.technicalissues.us/vagrant-veewee-me-part-1/</link><pubDate>Tue, 21 May 2013 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/vagrant-veewee-me-part-1/</guid><description>&lt;p&gt;I have toyed with the idea of diving into &lt;a href="http://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt; for a while now and, tonight, decided it was time.  I decided to be different and RTFM… this left me with two big questions: where can I get “boxes” from and how can I easily make my own?  
After a little Googling I discovered that &lt;a href="http://puppetlabs.com"&gt;Puppet Labs&lt;/a&gt; provides a small library of &lt;a href="http://puppet-vagrant-boxes.puppetlabs.com/"&gt;the boxes they use&lt;/a&gt; internally.  On their page I also found the answer to my second question of how to make my own: &lt;a href="http://github.com/jedi4ever/veewee"&gt;Veewee&lt;/a&gt;.  It seems I have a bit of setup to do before I can start using Veewee but I think it will be worth it.  My plan is to bring up a base &lt;a href="http://www.centos.org"&gt;CentOS&lt;/a&gt; 6.4 x86_64 box and then make &lt;a href="http://docs.vagrantup.com/v2/provisioning/puppet_apply.html"&gt;a Vagrantfile that uses Puppet&lt;/a&gt; to configure it for building RPMs in.  Ideally, I will start including this Vagrantfile with the source of any RPM I publish so that building a new one is easy-peasy.&lt;/p&gt;</description></item><item><title>Gentoo &amp; MySQL Binary Logging</title><link>https://beanbag.technicalissues.us/gentoo-mysql-binary-logging/</link><pubDate>Sun, 07 Apr 2013 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/gentoo-mysql-binary-logging/</guid><description>&lt;p&gt;So, I learned today that the root cause of my site issues was that Gentoo apparently decided to enable binary logging by default yet did not have a max size or max days set in my.cnf like other distros do; as a result, I had MANY gigabytes of logs which filled up /. Lesson learned. Thanks to Zabbix I knew about the issue straight away and was able to minimize my downtime.&lt;/p&gt;
As a result of these issues, I am now actively monitoring this site &amp;amp; its server via Zabbix. If I can compile all the pieces of the different guides I used to set it up in between other things, I’ll post it here. Until then, here’s the short version:&lt;/p&gt;
&lt;p&gt;ClearOS @ home + v2.0 packages from EPEL = Zabbix server&lt;/p&gt;</description></item><item><title>Using Oracle Wallet with Wildcard Certificates</title><link>https://beanbag.technicalissues.us/using-oracle-wallet-with-wildcard-certificates/</link><pubDate>Tue, 05 Feb 2013 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/using-oracle-wallet-with-wildcard-certificates/</guid><description>&lt;p&gt;Do you have to use Oracle Wallet as part of Fusion Middleware? Do you also have a wildcard SSL certificate?  If so, then this tutorial is for you.  This tutorial is the result of trying to make a new install of Internet Native Banner (INB) play nicely with our wildcard certs so that if we change the hostname of the system or clone the virtual machine, SSL does not break or require adjustment.&lt;/p&gt;</description></item><item><title>Building Android</title><link>https://beanbag.technicalissues.us/building-android/</link><pubDate>Wed, 18 Jul 2012 20:45:45 +0000</pubDate><guid>https://beanbag.technicalissues.us/building-android/</guid><description>&lt;p&gt;I wonder how hard it is going to be to pick pieces I like out of one or more Android ROMs and add them to my own AOSP-based ROM. To start finding out, I am currently downloading both the AOSP repo &amp;amp; the CyanogenMod repo. Then I guess it will be time to dig into both via Eclipse. This should be interesting.&lt;/p&gt;</description></item><item><title>Putting this site to use...</title><link>https://beanbag.technicalissues.us/putting-this-site-to-use.../</link><pubDate>Tue, 17 Jul 2012 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/putting-this-site-to-use.../</guid><description>&lt;p&gt;So… I was perusing &lt;a href="http://rootzwiki.com"&gt;RootzWiki&lt;/a&gt; looking for the next great Jellybean ROM for my phone and noticed that frequently devs need a place to mirror things.  
With that in mind, I think I will look into a good way to mirror some files and also to provide torrents for any mirrored file.  This should be interesting…&lt;/p&gt;</description></item><item><title>Uptimed Site Now Live</title><link>https://beanbag.technicalissues.us/uptimed-site-now-live/</link><pubDate>Tue, 17 Jul 2012 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/uptimed-site-now-live/</guid><description>&lt;p&gt;Just a quick update to let you know that &lt;a href="http://uptimed.technicalissues.us"&gt;uptimed.technicalissues.us&lt;/a&gt; is its own site now.&lt;/p&gt;</description></item><item><title>Sitting down for the first time…</title><link>https://beanbag.technicalissues.us/sitting-down-for-the-first-time/</link><pubDate>Sat, 12 May 2012 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/sitting-down-for-the-first-time/</guid><description>&lt;p&gt;Welcome to The Comfy Seat! This site will serve as the base for all my domain names and hosting needs. Other domains that will share this site from Day 1 are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.geneliverman.com"&gt;www.geneliverman.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://technicalissues.us"&gt;technicalissues.us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://uptimed.technicalissues.us"&gt;uptimed.technicalissues.us&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Gene Liverman</title><link>https://beanbag.technicalissues.us/authors/gene/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/authors/gene/</guid><description>&lt;p&gt;I&amp;rsquo;m a bicycle riding Eagle Scout tech geek. I&amp;rsquo;m also a genealogy nut who drinks too much coffee.&lt;/p&gt;</description></item><item><title>Jake Spain</title><link>https://beanbag.technicalissues.us/authors/jake/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://beanbag.technicalissues.us/authors/jake/</guid><description/></item></channel></rss>