Author Archives: Andrew

Reco: Termux


I’ve been working on Threadstr a lot lately, but I think I’ll take a break to make a reco.

(“Reco” is short for “recommendation,” not “reconnaissance,” by the way.)

There’s a really neat program on Android called “Termux.” It is, more or less, a CLI Linux distro for mobile, complete with its own software repos containing the most common CLI tools you’ll need (its package manager is apt, so it feels a lot like Debian). I’ve found it incredibly useful because it gives me the tools to SSH into my DO droplets, use a full version of Vim, use Git, and so much more. It has a complete BASH implementation with a .bashrc, just like you’d expect from any Linux distro. Thus far, there have only been two things that I haven’t been able to do: compile LaTeX documents (texlive is not in the repo yet) and use the Perl engine for grep (there’s probably a way to enable that, though).
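For example, getting my usual toolkit onto the phone looks something like this (the package names are from memory, and the droplet hostname is just a placeholder):

pkg update
pkg install openssh git vim
ssh andrew@my-droplet.example.com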

I was beginning to seriously consider renting a DO droplet for the sole purpose of being able to SSH into it through Juice because of how useful it would be. That’s now no longer necessary for me.

With a Bluetooth keyboard, I have greatly reduced my need for a laptop by using Termux. (When I am sans physical keyboard, I am able to get by with Hacker’s Keyboard.)

If you have need for a Linux distro on-the-go, I’d definitely suggest you give Termux a try.

Threadstr reliability issues

Published / by Andrew

I am aware that Threadstr is occasionally going down. Even though it’s not currently functional anyway, I want to get this resolved ASAP, because I don’t want it to be an issue once it is fully functional.

The issue seems to be that MySQL is getting an out-of-memory error, and I’m not the only one having this problem on DO.

I’m approaching this on two fronts:

  1. Stop the out-of-memory error from occurring.
  2. Have the DO droplet restart and restart Node.js when it exits abruptly.

Both of these have their own challenges. In the first case, adding RAM doesn’t always seem to make much of a difference, according to the thread linked above. In the second case, Node.js, unlike Apache, doesn’t start in the background by default when the machine boots, and I don’t yet know how to get Node.js to restart the machine if it crashes.
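For the “start Node.js at boot” half of the second front, the approach I’m experimenting with is a small wrapper script that relaunches the app whenever it exits, kicked off from cron at boot. The paths, filenames, and timings here are just placeholders:

#!/bin/bash
# keep-threadstr-running.sh -- relaunch the Node app whenever it exits.
# Directory, entry point, and log file are placeholders.
cd /home/andrew/threadstr || exit 1
while true; do
    node app.js >> threadstr.log 2>&1    # blocks until the process exits
    echo "$(date): node exited, restarting in 5 seconds" >> threadstr.log
    sleep 5
done

Starting it at boot is then a single crontab entry: @reboot /home/andrew/keep-threadstr-running.sh. It doesn’t address the underlying out-of-memory problem, but it should at least shorten the downtime when a crash does happen.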

It’s an ongoing learning experience, but I am definitely determined to make this work as well as I possibly can.

Working(ish) model of Threadstr now online.


I bought the domain name threadstr.com a while ago (before I bought this one, actually). The project isn’t anywhere near completion, but, since DO droplets are a mere $5 a month, I decided to go ahead and put it up, partly because having it out there publicly will be an incentive to get myself working on it some more, but mostly just to show what I have so far.

So, it’s up now.

“Incomplete” is an understatement. At this point, the only thing that a user can do is create an account, but, nonetheless, a lot of work has gone into what’s there so far. Unfortunately, “a lot of work” doesn’t mean the website is usable, so more work is going to be required to make it functional.

Development on this project is slow-moving since it’s an at-home project (and it’s also my first exposure to Node.js). Slow-moving, but not at a standstill.

I’m going to take a quick break from it to work on a timekeeping program that I discussed with my sister. I was telling her about how useful scripting languages can be and, among other things, how I’ve been using a VBScript program to keep track of how long I spend on each project (it would be PowerShell if I wrote it today). She’s expressed interest, and I plan to make it into a full-fledged program to add to my portfolio. I thought about making it a C++ Qt program, but decided that, in the interest of time, I should go with something that I know, so it’ll be a Java program instead. That will, of course, be showing up on this site and GitHub eventually.

Threadstr: My current big project.


I’ve decided to go ahead and open source my current big project (at home, at least): Threadstr.  Soon, I’ll also set up a droplet to host the incomplete project just to show it in action, although it’s not yet functional (or even close to functional).

The basic idea is to create a discussion thread that’s not actually attached to a particular message board.  This might seem unusual at first, but the idea is that it can be shared and organized in a way that’s better than Facebook comments or Twitter replies.

As for the code, I’ll let my in-code documentation speak for itself.

A future plan is to allow the user to create debate threads, which would let two people have a moderated text debate, where the moderator is the server.  It would determine when you can post (opening statement, rebuttal, cross-x, and closing statement), a minimum and maximum post length, and a time window that would determine when someone is allowed to post.

I’ll put up a working (but incomplete) model in the near future– Possibly tomorrow.  I would do it tonight, but it’s getting too late to do much more work.

Oh, and a shout-out to my friend and coworker Cari Landrum for the logo.  My version looked awful, so I took it to her and she made this version that looks fantastic.

UPDATE: I think I’m going to wait just a little while before putting up a working model.  Right now, the only thing you can do is create an account, which will be wiped out eventually anyway.

Setting up a VirtualBox VM for Ubuntu Server testing


Something really useful about virtualization, besides getting those one-off Windows programs running without having to dual-boot, is the ability to set up a virtual server.  I use VirtualBox to test Ubuntu Server projects that I plan to put on DigitalOcean.  This way I can test everything on the virtual machine before actually spending money to spin up a droplet.

For the sake of brevity and getting to bed at a decent hour, I’ll only explain in detail the one part that was tricky for me the first time I did it.  The easy parts are installing VirtualBox, then installing Ubuntu Server Edition (I prefer an LTS version, currently 16.04) onto a new virtual machine.  Though I will say these two things:

  • Installation can take a long time, so I would also suggest making a backup copy of the newly-created VM so that you can make a copy of that backup instead of reinstalling every time you want to make a new one.
  • You’ll want to install OpenSSH so that you can a.) SSH into the VM so you can control it from a terminal (from any machine on your network, no less, when we’re done with it) and b.) SFTP files onto the VM (again, from any machine on your network).  You can do this while installing Ubuntu or, if you accidentally hit Enter instead of Space, which I do every time, you can install it after-the-fact with sudo apt-get install openssh-server.

Now for the part that I had a hard time figuring out the first time (though, with some internet research, I did eventually figure it out): setting the VM to appear as a separate machine on the network. It’s actually really easy once you know what to do, and it can be done before or after the operating system is installed: go to the settings of the VM, go to “Network,” and, for “Attached to,” select “Bridged Adapter.”
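If you prefer the command line, VBoxManage can make the same change while the VM is powered off (the VM name and host adapter below are just examples; substitute your own):

VBoxManage modifyvm "ubuntu-server-test" --nic1 bridged --bridgeadapter1 eth0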

Now when you start up the VM, it will have its own IP address on your LAN that’s independent of the host machine.  You can find it with the command ifconfig from within the VM’s window: it’s the “inet addr” value, which will be something like 192.168.1.111.

Now that this is done, while the VM is powered on, you can use SSH to log into it, SFTP to transfer files to it, install Git and download a project onto it, or do anything else that you need to.  This is a virtual machine on your network.  Use it as a test webserver, use it for Zoneminder, use it for anything that you would use a server for.
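For example, from any other machine on the LAN (the username and address being whatever your VM ended up with):

ssh andrew@192.168.1.111
sftp andrew@192.168.1.111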

The Freedom of Software Development


One of the worst things about using a computer is having to work with tools that you don’t like. Sometimes it’s just because you don’t think the same way that the original devs did, or because the devs are stubbornly holding on to a feature that’s widely acknowledged to be broken.

The most freeing part of software development is the ability to make your own tools. That’s not to say that you’ll never be dependent on other people’s work, of course, but, if you don’t like the tool that you’re using, you have the freedom to make a new one if you’re willing to spend the time to do it. A good example of this would be NeoVim.

Case in point: Google’s autocorrect, which has been driving me absolutely insane for years. I’ve been using DuckDuckGo for a long time because its autocorrect is less aggressive, but it’s still pretty bad.

Then I noticed that the only thing that Google does in the query string (though I don’t know if it’s really a “GET” query, since it starts with a ‘#’ instead of a ‘?’) to disable autocorrect is a simple flag: “nfpr=1”. That’s it. Adding that programmatically is a very simple task.
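As a quick illustration, here’s the flag tacked onto a search URL from a shell (assuming a Linux desktop with xdg-open; the query itself is just an example):

query="some search term"
xdg-open "https://www.google.com/search?q=${query// /+}&nfpr=1"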

So, I set out to create that very simple webpage. Apart from the Rage comic that I added for kicks, it’s just a single HTML file; I didn’t even separate the CSS and JS out into their own files. It was really that easy to do, but I wouldn’t have known that if I didn’t already know how to use JS. When I set Firefox to use this as my default search, it uses Google without autocorrect.

And that webpage is public. There are a couple of reasons for this. The first is that I may as well share it, because I know I’m not the only one aggravated by this. The second is that I can now use it anywhere, from my phone to my office (if I can convince IT to let me through to it).

I relish the freedom that I have as someone with knowledge of software development, and I hope to keep learning across a wide range of topics so that I can continue to create and tweak tools for my own purposes.

Shell languages are the gifts that keep on giving


I am currently downloading every episode of The British History Podcast. I wrote a small BASH script to do this:

function downloadFullPodcast(){
    # Grab every .mp3 link from a podcast RSS feed and download the episodes,
    # numbering the files so that the oldest episode ends up as 0001.mp3.
    xhtml=$(wget -qO- "$1")
    regex='.*?\.mp3'
    readarray -t links < <(grep -oP "$regex" <<<"$xhtml")
    arrLen=${#links[@]}

    i=0
    tempStr=${links[i]}
    while [ "$tempStr" != "" ]; do
        # Keep the part between '>' and '<', then strip those two characters
        # to leave the bare mp3 URL.
        regex='>.*<'
        tempStr=$(grep -oP "$regex" <<<"$tempStr")
        len=${#tempStr}
        mp3url=${tempStr:1:len-2}
        # The feed lists the newest episode first, so count down from the total
        # to number the files in chronological order.
        numDum=$((arrLen - i))
        number=$(printf "%0*d" 4 "$numDum")
        wget -qO "${number}.mp3" "$mp3url"
        ((i++))
        tempStr=${links[i]}
    done
}

downloadFullPodcast "https://feeds.feedburner.com/TheBritishHistoryPodcast"

This isn't exactly a novel idea, but I'm surprised at how few devs I come across who have any interest at all in using command line languages or other scripting languages.

Even before I became a developer, I recognized how powerful and useful the command line could be. When I was an undergrad (with no coding experience whatsoever at the time), someone on a Linux forum helped me to write a BASH script that played a random episode of Scrubs in the Totem player, and I ran this with KAlarm in lieu of an alarm clock. (In fact, I might want to set that up again with a Raspberry Pi or something.)

These days, I live in the command line, whether it's BASH at home or PowerShell at work. I couldn't go without it. My primary tools at work, apart from Firefox to test the code, of course, are PowerShell, Vim, and MySQL Monitor. All three of these are CLI tools. (Also, notably, all three of these are scriptable.) At home, it's the same, except with BASH instead of PowerShell (and having the Perl engine for the grep command is really nice).

My office's database is split across almost 50 separate ports, so it's not uncommon for an inconsistency to appear. For example, one dev may add a column to one database and none of the others, then update the code in SVN to match. This causes mysqli to freak out when it can't find the column. It's not uncommon to see emails to the entire dev team saying "Could whoever is in charge of column XYZ add it to all levels?"

I guess the people sending out these emails must refuse to use the command line. I wrote a simple PowerShell function to run a command on all databases. With that function, it's a two-command process to run "SELECT table_name,column_type FROM information_schema.columns where column_name='XYZ'" across all databases to find what table to add the column to and what type to use, and then add the column to that particular table across all of the databases. This takes about 60 seconds, so I've never felt the need to send out a mass email to every dev to add a column. I just find what needs to be added and add it myself.
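The function itself is PowerShell, but the idea translates directly; a rough BASH equivalent would look something like this (the host, port range, and credentials are placeholders):

# Run a SQL statement against every database instance, one per port.
function runOnAllDatabases(){
    local query=$1
    for port in $(seq 3306 3355); do
        echo "=== port $port ==="
        mysql --host=127.0.0.1 --port="$port" -u devuser -p"$DB_PASS" -e "$query"
    done
}

runOnAllDatabases "SELECT table_name, column_type FROM information_schema.columns WHERE column_name='XYZ'"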

I won't go so far as to say something obnoxious like "You're an idiot if you're not using CLI tools," of course, but I do think that these tools offer advantages that a lot of people seem to miss. The most useful tools that I've built for myself have been PowerShell, BASH, and VimScript functions.

What I find is that having a strong grasp (or even a mediocre grasp) of scripting languages, command line languages included, can really help you complete a lot of tasks that GUI tools just aren't designed to handle, because those tasks are too nuanced for the designers of the GUI tools to have anticipated. The example above with the British History Podcast is a decent one. I did that because the podcatcher on my phone is great for listening to the most recent episode, but not so great for binge-listening to archived episodes, so I decided to grab all of the mp3s and put them in my audiobook reader instead. This could be done in a better scripting language like Python or Ruby, of course, but that's just a different scripting language; the process remains the same.

Tip 21 from The Pragmatic Programmer by Hunt and Thomas is "Use the Power of Command Shells." "Gain familiarity with the shell, and you'll find your productivity soaring," they say. I would extend that to other scripting languages, and I think Hunt and Thomas would, too, because, in the same chapter, they recommend using a text editor that's programmable. I say, if you're interested, give it a shot. You don't really need to buy a book to learn them (the one PowerShell book that I bought turned out to be a complete waste). Just read a few brief tutorials online, and then you can Google everything else you need to learn as you go. It's been really beneficial to me.

Documentation Is Undervalued


<@Logan> I spent a minute looking at my own code by accident.
<@Logan> I was thinking "What the hell is this guy doing?"

It seems that the conventional wisdom of the day is that documentation for code should be minimal, if it exists at all. This may be related to Agile methods, but that seems to be in some dispute. (I have a number of criticisms of Agile in general, but I’ll leave that for another time.) What I hear most often is that “Function names should be clear enough that you know what they do.”

I respectfully disagree, because it requires that everybody on the team think in exactly the same way. Not everybody agrees that functions shouldn’t have side effects. Not everybody encapsulates code into functions. Not everybody adheres to MVC design patterns. Unfortunately, it’s not going to do me much good to scream at my team to use the same standards that I do. Even if that worked, not everyone would interpret those standards and implement them in exactly the same way.

Most of the scripts that I come across are just giant procedures, usually between 400 and 3,000 lines long. (Who knows why they were written that way; probably in a hurry, and they snowballed after a lot of tweaking.) Then someone asks me to add a feature, and I have no idea where to even begin.

Someone may reply, “This isn’t a lack of documentation problem, but a code clarity problem.” This is partly true. However, the problem is that not everyone has the same idea of what constitutes clearly-written code. Many people think those giant procedures are easier to understand.

What eventually ends up happening is someone has to explain those procedures. So, we still end up having documentation, but now it’s word-of-mouth instead of written word. Then begins a new nightmare. (The telephone game in a business setting is not fun.)

Since not everyone is an Uncle Bob clone, it’s better long-term practice to go ahead and encourage documentation. Standardizing a team’s practice is also great; don’t get me wrong. However, not everybody is going to interpret standards the same way and implement them consistently. The idea of “Write what this script does” is much easier to put into practice. It may take a little longer at first, but it really reduces the amount of confusion later on. (And use DocBlocks– They’re really convenient to write and useful to read.)

First Entry


Hi, everyone.  I’ve decided to take the advice of Eli the Computer Guy and make a professional website for myself.  I think it’s a good idea because it combines two things that I really enjoy, code and writing, and it also increases my web presence so that people know more about who I am and what I can do.

In the future, I may do some customization on this WP blog, if for no other reason than to get the experience.  However, for the moment, I’m more concerned with the actual content rather than showing off PHP skillz.

(Is spelling “skills” with a “z” still a thing?  I don’t remember if they did that in Mr. Robot.)

I will be linking to all future posts in my Twitter feed and my LinkedIn account (hopefully automating that process in the near future).  I do have some ideas for blog posts that I’ve written down, and I will probably start creating new entries tomorrow.

These blog entries will be exclusively regarding software development and related technologies.

Thanks for reading and have a nice day.
–Andrew