I’ve thought about this numerous times over the course of my career, and it’s something I want to share. This is for the junior and intermediate developers out there who spend time attending conferences, reading blogs, listening to podcasts, and doing their best to hone their craft. The developers who, week after week, take in content from industry influencers talking about the big projects they’ve worked on at BigCompany. Maybe you’ve wondered what it would feel like to work on similar big projects and grow in the industry the way they have.
I get this terminal error about every three to six months:
```
error: gpg failed to sign the data
fatal: failed to write commit object
```

When I see it I think, “I really should have written down the problem last time.” So here we are, ready and willing to document.
The issue for me is that I set short expiration times for my gpg keys.
To verify my key is expired I run:

```
$ gpg -K --keyid-format SHORT
```

and get results like below:
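An expired key shows up in that listing with an [expired] marker. The key ID and uid below are placeholders, but the shape of the output is roughly:

```
sec   rsa4096/1A2B3C4D 2022-01-15 [SC] [expired: 2023-01-15]
uid              [ expired] Brian P <brian@example.com>
```

Since the problem is only the expiration date, the key can be renewed rather than replaced. A minimal sketch, assuming the placeholder key ID above:

```
# Open the expired key in gpg's interactive editor
$ gpg --edit-key 1A2B3C4D

# At the gpg> prompt, set a new expiration on the primary key
gpg> expire
# ...pick a new expiry, e.g. 1y...

# Select and renew the signing subkey too, if there is one
gpg> key 1
gpg> expire
gpg> save
```

Because the key ID doesn’t change, git signing starts working again without touching user.signingkey.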
I finally decided to run some updates on the ol' blog and noticed some posts had missing content after the update. At a glance, all the missing content seemingly used raw HTML tags.
A quick look at the rendered source turned up comment tags in all the places the HTML used to be:
```
<!-- raw HTML omitted -->
```

It turns out that somewhere along the line (Hugo v0.6.0) the Markdown rendering engine was changed to Goldmark.
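Goldmark strips raw HTML by default and leaves that comment behind. The documented way to opt back in is the unsafe renderer setting in the site config (shown here for config.toml; the same keys work in YAML):

```
[markup.goldmark.renderer]
  unsafe = true
```

With that in place, the raw HTML in old posts renders as it did before the update.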
I was running some commands that, I later found out, had left me with many orphaned Ruby processes. Far too many to kill one by one. I needed to get rid of them all, and quickly, so killing all Ruby processes was the best way to go.
Here is a short list of convenient and inconvenient ways to do that:
```
for each in `ps -eo pid,command | grep ruby | grep -v grep | awk '{print $1}'`; do kill -9 $each; done

killall -9 ruby

pkill -9 ruby

pidof ruby | xargs kill -9

ps aux | grep sidekiq | awk '{print $2}' | xargs kill
```

The easiest commands and best suggestions were brought to my attention by Alan Bailward and Gavin Mogan.
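One habit worth layering on top of any of these, sketched below: check what actually matches before sending signals, and try a plain TERM before escalating to -9 so processes get a chance to clean up after themselves.

```
# List PIDs and full command lines that would match
$ pgrep -fl ruby

# Polite first: the default SIGTERM lets processes shut down cleanly
$ pkill ruby

# Escalate only for the stragglers
$ pkill -9 ruby
```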
As soon as the MacBook Air (Mid 2013) started shipping, reports began flooding in about intermittent wifi connectivity problems.[1] The day after it was released I ordered it. I can’t say I experienced the same issues everyone else was claiming to run into. Apple denied the issue at first (as they always do) but later released a software update for it.[2]
Not long after the software update, I began experiencing the wifi connectivity issues myself.
Excited to start running Garnish against Ruby 2.0.0, I attempted the installation tonight only to get a failed build:
```
downloading ruby-2.0.0-p0.tar.gz...
-> http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p0.tar.gz
Installing ruby-2.0.0-p0...

BUILD FAILED

Inspect or clean up the working tree at /var/folders/td/0z79ghbs125193ngl8y0j8180000gn/T/ruby-build.20130312001656.54615
Results logged to /var/folders/td/0z79ghbs125193ngl8y0j8180000gn/T/ruby-build.20130312001656.54615.log

Last 10 log lines:
installing default gems: /Users/brianp/.rbenv/versions/2.0.0-p0/lib/ruby/gems/2.0.0 (build_info, cache, doc, gems, specifications)
  bigdecimal 1.2.0
  io-console 0.4.2
  json 1.7.7
  minitest 4.3.2
  psych 2.0.0
  rake 0.9.6
  rdoc 4.0.0
  test-unit 2.0.0.0
The Ruby openssl extension was not compiled.
```
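That last line is the real error: the build couldn’t find OpenSSL’s headers. A common fix on OS X, assuming Homebrew and rbenv are in play, is to install OpenSSL and point the build at it:

```
# Install OpenSSL, then tell ruby-build's ./configure where it lives
$ brew install openssl
$ CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl)" rbenv install 2.0.0-p0
```

CONFIGURE_OPTS is passed through by ruby-build to Ruby’s ./configure script, so any other configure flags can ride along the same way.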
Using:

```
nginx version: nginx/1.2.4
```
I manage a few Linode instances as client servers, but I’d be far from calling myself a sysadmin. I received an email from a client regarding some uploading issues. After some testing I concluded that any combination of files over 1 MB would raise Nginx’s 413 Request Entity Too Large error.
I upped the file limit within the virtual host file by changing client_max_body_size:
```
upstream app {
  server unix:/tmp/app.sock fail_timeout=0;
}

server {
  listen 80;

  location / {
    proxy_pass http://app;
    proxy_redirect off;
    client_max_body_size 6m;
  }
}
```

This still did not work.
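Two things are worth ruling out in this situation, neither specific to my setup: the directive only applies to the block it’s declared in, and Nginx won’t pick up any change until the config is reloaded. A sketch of both checks, with the directive hoisted to the server block so every location inherits it:

```
server {
  listen 80;

  # Declared at the server level so all locations inherit it
  client_max_body_size 6m;

  location / {
    proxy_pass http://app;
    proxy_redirect off;
  }
}
```

```
# Validate the config, then reload the running master process
$ sudo nginx -t
$ sudo nginx -s reload
```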