Tuesday, December 23, 2014

Pre Commit Hook for JS and Go

If you have a web project that has both JavaScript and Go, then you may need to create a pre-commit hook to validate both .js and .go source files.

Here's one way to do it using two script files and a git pre-commit hook.

Create pre-commit scripts for JS and Go


#!/bin/sh
LINTER=jshint

files=$(git diff --cached --name-only --diff-filter=ACM | grep "\.js$")
if [ "$files" = "" ]; then
    exit 0
fi

pass=true

echo "\nValidating JavaScript:\n"

for file in ${files}; do
    result=$($LINTER ${file})
    if [ "$?" = "0" ]; then
        echo "\t\033[32m$LINTER Passed: ${file}\033[0m"
    else
        echo "\t\033[31m$LINTER Failed: ${file}\033[0m"
        pass=false
    fi
done

echo "\nJavaScript validation complete\n"

if ! $pass; then
    echo "\033[41mCOMMIT FAILED:\033[0m Your commit contains files that should pass $LINTER but do not. Please fix the $LINTER errors and try again.\n"
    exit 1
else
    echo "\033[42mCOMMIT SUCCEEDED\033[0m\n"
fi


# Copyright 2012 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

# git gofmt pre-commit hook
# To use, store as .git/hooks/pre-commit inside your repository and make sure
# it has execute permissions.
# This script does not handle file names that contain spaces.

gofiles=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
[ -z "$gofiles" ] && exit 0

unformatted=$(gofmt -l $gofiles)
[ -z "$unformatted" ] && exit 0

# Some files are not gofmt'd. Print message and fail.

echo >&2 "Go files must be formatted with gofmt. Please run:"
for fn in $unformatted; do
    echo >&2 "  gofmt -w $PWD/$fn"
done

exit 1


Then save this one-liner as .git/hooks/pre-commit (and make it executable) so both scripts run on every commit:

js-pre-commit-git-hook && go-pre-commit-git-hook


Put the scripts in your $PATH and make them executable.
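For example (the ~/bin location and the stub script bodies below are illustrative only; use the real scripts from above):

```shell
# Install the two hook scripts somewhere on your $PATH. The ~/bin
# location and the stub script bodies here are examples only.
mkdir -p ~/bin
printf '#!/bin/sh\necho "js hook ok"\n' > js-pre-commit-git-hook
printf '#!/bin/sh\necho "go hook ok"\n' > go-pre-commit-git-hook
chmod +x js-pre-commit-git-hook go-pre-commit-git-hook
mv js-pre-commit-git-hook go-pre-commit-git-hook ~/bin/
export PATH="$HOME/bin:$PATH"
js-pre-commit-git-hook && go-pre-commit-git-hook
```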

Create a .jshintrc file with your desired jshint configuration settings.
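A minimal .jshintrc might look like this; the options enabled here (curly, eqeqeq, undef, unused) are real JSHint settings, but they're just an example starting point, not the post's original file:

```shell
# Create a starter .jshintrc in the repo root.
# The options chosen here are examples only.
cat > .jshintrc <<'EOF'
{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "unused": true
}
EOF
```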

Format a file (or directory)

In case you've already committed files before setting up this git hook, you can gofmt individual files (or directories) like this:

gofmt -w dirname/filename.go



This work is licensed under the Creative Commons Attribution 3.0 Unported License.

Wednesday, December 17, 2014

Implications of the Sony Cyber Attack

When Sony Pictures employees got into the office on Monday, November 24, they discovered that their corporate network had been hacked.

The attackers took terabytes of private data, deleted the original copies from Sony computers, and left messages threatening to release the information if Sony didn't comply with the attackers' demands.

My First Guess

When I first heard that Sony got hacked, my first thought was, "I bet those guys run Windows."

That suspicion has been confirmed.

Possible Attack Vector

The attackers researched Sony's IT infrastructure and knew their victim's vulnerabilities.

Cyber Attack

The attackers could have used email or Microsoft Internet Explorer browser vulnerabilities to gain initial access to a regular employee's workstation.

Gain Admin Access

Knowing that Sony ran Microsoft Windows, the attackers could have used a known vulnerability in Microsoft's Kerberos implementation (MS14-068) to forge a Privilege Attribute Certificate that the Kerberos Key Distribution Center would validate, elevating their privileges to those of any other account on the domain.

Destructive Malware

Once the attackers obtained administrative keys to a Microsoft-based network with unencrypted file systems, they were able to extract that information and expose corporate secrets ***, then destroy the original files using the destructive BKDR_WIPALL malware.

*** Corporate Secrets Exposed

Men are paid more than women. Sony's 17 biggest-earning executives are predominantly white men. According to a spreadsheet called "Comp Roster by Supervisory Organization 2014-10-21," Amy Pascal, the co-chair of Sony Pictures Entertainment, is the only woman earning $1 million or more at the studio.

A series of emails between Pascal and movie producer Scott Rudin showed an ugly side to the beautiful business of Hollywood. Rudin called Angelina Jolie a "minimally talented spoiled brat" in an email exchange with Pascal. Pascal and Rudin also made racially charged jokes about President Obama's taste in movies. As you would expect, Pascal and Rudin apologized, saying they are so sorry for what they said.

For more details, see: http://www.cnet.com/news/13-revelations-from-the-sony-hack/

Why the Hack was so Effective

  • Sony's Employee Workstations and Network Run on Microsoft Windows
  • Private data was not encrypted
  • Woefully Inadequate Network Security Monitoring

Lawsuit Filed

Two former employees of Sony Pictures filed a lawsuit against Sony alleging it didn't do enough to safeguard their personal information and prevent its loss in that cyberattack.

The lawsuit, filed Monday, December 15, 2014, in U.S. District Court for the Central District of California, asks the court to award monetary damages and class-action status. Thousands of past and present Sony employees could join the suit.

The lawsuit alleges, "Sony failed to secure its computer system, servers and databases, despite weaknesses that it has known about for years, because Sony made a business decision to accept the risk of losses associated with being hacked."

How can Sony defend itself against solid claims of negligence?

IT Security Laws for Corporations


Sarbanes-Oxley, or 'Sarbox' as it is sometimes called, was enacted in 2002 to help prevent future Enron-like episodes. It requires companies to be accountable for identifying and mitigating risks to their financial stability, and this includes information security.

Sarbanes-Oxley details a "chain of accountability" where senior executives and board members must sign off on the accuracy of financial reporting, and the managers who report to them must be darned sure that their information is accurate. That applies to the managers who report to them, and the people who report to them, and so on. While the average employee of a public company will most likely not go to jail over a Sarbanes-Oxley violation (C-level executives are not so fortunate), each employee does have an important role in maintaining the security and integrity of corporate data.

When Sarbanes-Oxley mentions "controls," it is talking about the policies, procedures, and guidelines that protect information in your company, with the direct implication that adequate IT security must be enforced.

HIPAA Security Rule

This massive cyberattack constitutes unauthorized access or acquisition of personal information subject to most state and federal data breach notification requirements, including the HIPAA Data Breach Notification Rule. The HIPAA Security Rule contains a number of provisions that require covered entities and business associates to maintain procedures to monitor system activity for potential security incidents and investigate any such potential security incidents.

The HIPAA Security Rule requires covered entities and business associates to “regularly review records of information system activity, such as audit logs, access records, and security incident tracking reports.” 45 C.F.R. § 164.306(a)(1)(ii)(D). HHS guidance materials further state that this specification “should also promote continual awareness of any information system activity that could suggest a security incident.” See CMS, HIPAA Security Series Vol. 2 Security Standards: Administrative Safeguards

The HIPAA Security Rule requires covered entities and business associates to create and maintain appropriate records of system activity. See 45 C.F.R. 164.312(b). However, covered entities and business associates have significant discretion to create and maintain activity records based upon the formal assessment of their security risks.

Breach Notification

Breach notice laws typically define "personal information" as "a user name or email address, in combination with a password or security question and answer that would permit access to an online account."


  • IT security should be taken seriously
  • As a C-level executive, you should know the laws pertaining to safeguarding your company and employees' data.
  • As a C-level executive, you are liable for lax IT security enforcement at your company.

Lessons Learned

  1. If you are a C-level executive and your company runs Windows, change that or get another job.
  2. Hire a professional to thoroughly evaluate your current security policies.
  3. Don't ask for trouble, but if you do, don't run Windows.

SANS Institute Cyber Attack Response Plan

For many organizations today, the question is no longer if they will fall victim to a targeted attack, but when. In such an event, how an organization responds will determine whether it becomes a serious event or stays a mere annoyance.

This requires something of a change of mindset for information security professionals. Previous techniques and many best practices operate under the premise that an attacker can be kept out.

However, that’s no longer the case today. The malware used in targeted attacks is frequently not detected (because it’s been custom-made for specific organizations). A well-crafted social engineering attack can look like a normal business email or engaging click bait.

In short, an attacker with sufficient resources will be able to find their way inside their target, regardless of what the defender does. The defender can raise the price of getting in, but not prevent it entirely.

The SANS Institute provides some guidelines to organizations on how they should react to incidents. Broadly speaking, however, the response can be divided into four steps:


Preparation

This step involves preparing for a targeted attack before it actually takes place. Security professionals need to plan for a response to a targeted attack on their network. System administrators will routinely have plans, for example, for downtime-related events such as a data center going offline.

Similarly, it's important to be aware of the normal, day-to-day threats that an organization faces. Information security professionals must not only deal with these attacks as they happen, but should understand what their "normal" problems are so that abnormal threats like targeted attacks can be quickly spotted. Threat intelligence and analysis are valuable in this step, guiding security professionals toward an understanding of the current situation.

Security professionals must also plan to acquire the right skills to effectively deal with targeted attacks. One of the most important skills to learn is digital forensic techniques, which allow for the proper acquisition and analysis of information from compromised devices.

Many of these techniques are quite foreign to normal IT day-to-day work, but learning these techniques will help organizations gain information and be better prepared to deal with any attack in progress.


Response

Upon identifying a targeted attack in progress, the next step is to respond decisively. Responding to a targeted attack has several components: containing the threat, removing it, and determining the scope of the damage. The immediate priority is to isolate or contain the threat; steps that can be performed here include isolating infected machines or taking compromised services offline. Ultimately, the goal is to prevent the attack from gaining further ground.

To determine any threats in place, working hand in hand with a security vendor that has knowledge of commonly used targeted attack tools and grayware is useful in order to locate the threats within an organization. Similarly, continuous monitoring of existing network activity can help determine the scale and scope of any existing attack.


Restoration

Just as important as responding to an attack is restoring the organization to normal operations. While some disruption is a necessary part of responding to a targeted attack, in the long run the organization has to return to normal operations.

“Restoring” an organization to normal is not only about technical considerations. If necessary, an organization needs to reach out to partners, stakeholders, and customers to clearly communicate the scope of a targeted attack’s damage, and any steps being taken to reduce the damage. In many cases, goodwill and trust are big casualties of a targeted attack, and these must be addressed as well.


Learning

Once an attack is over, organizations need to figure out what can be learned from it. Every attack offers lessons for defenders: What worked? What could we have done better? It may turn out that some of the assumptions and information that went into planning for security incidents were incorrect or incomplete.

However, it is also important not to overreact to any single incident. Overreacting can be just as bad as under-reacting: it can impose burdens on the organization that bring marginal gains in security, if any. Decisions must be made based on careful analysis of the incident.

In today’s world of frequent targeted attacks – when breaches are a matter of when and not if - a carefully crafted strategy to respond to targeted attacks must be part and parcel of the larger defense strategy. This can be the difference between a minor nuisance and a major breach that could spell the demise of an organization.

For original reference to this section see: http://blog.trendmicro.com/trendlabs-security-intelligence/four-steps-to-an-effective-targeted-attack-response/


http://www.whitehouse.gov/issues/foreign-policy/cybersecurity/national-initiative
http://blog.trendmicro.com/trendlabs-security-intelligence/an-analysis-of-the-destructive-malware-behind-fbi-warnings/


Java's verbose, Python's too slow... It's time you know...

Write in Go! Write in Go!


The schedule's tight on the cluster tonight.
So I parallelized my code.
All those threads and continuations.
My head's going to explode.
And all that boilerplate.
That FactoryBuilderAdapterDelegateImpl
Seems unjustified
Give me something simple
Don't write in Scheme
Don't write in C
No more pointers that I forgot to free()
Java's verbose
Python's too slow
It's time you know
Write in Go
Write in Go
No inheritance anymore
Write in Go
Write in Go
There's no do or while, just for
I don't care what your linters say
I've got tools for that
The code never bothered me anyway
dodododo diudiudiu...
It's funny how some features
Make every change seem small
And the errors that once slowed me
Don't get me down at all
It's time to see what Go can do
Cause it seems too good to be true
No long compile times for me
I'm free
Write in Go
Write in Go
Kiss your pointer math goodbye
Write in Go
Write in Go
Time to give GC a try
I don't care if my structures stay
on the heap or stack
My program spawns its goroutines without a sound
Control is spiraling through buffered channels all around
I don't remember why I ever once subclassed
I'm never going back
My tests all build and pass
Write in Go
Write in Go
You won't use Eclipse anymore
Write in Go
Write in Go
Who cares what Boost is for?
I don't care what the tech leads say
oo wow oo...
I'll rewrite it all
Writing code never bothered me, anyway

Sung by ScaleAbility, an a cappella group at Google.


Saturday, December 13, 2014

Set Default File Type to Shell Script (Bash) in TextMate


Run this:

defaults write com.macromates.textmate OakDefaultLanguage DDEEA3ED-6B1C-11D9-8B10-000D93589AF6


All the language grammars are stored under TextMate.app/Contents/SharedSupport/Bundles, in each bundle's Syntaxes folder.

This is in the binary plist format, so you'll need to convert it to readable form first. Let's say we want Shell Script (Bash) to be the new default language; we would do (from the terminal):

$ cd "/Applications/TextMate.app/Contents/SharedSupport/Bundles/Shell Script.tmbundle/Syntaxes"
$ plutil -convert xml1 Shell-Unix-Generic.plist
$ grep -A1 uuid Shell-Unix-Generic.plist

Here “DDEEA3ED-6B1C-11D9-8B10-000D93589AF6” is the UUID for Bash Shell Script. Now we need to tell TM to use that as default by altering its defaults database.

First quit TextMate, then from terminal run:

$ defaults write com.macromates.textmate OakDefaultLanguage DDEEA3ED-6B1C-11D9-8B10-000D93589AF6

Start TextMate, and notice how all new documents are set to be Shell Script (Bash) by default.


This works for TextMate 1.5

Here's the list of supported file types:
  • ActionScript.tmbundle
  • Apache.tmbundle
  • AppleScript.tmbundle
  • Blogging.tmbundle
  • Bundle Development.tmbundle
  • C.tmbundle
  • CoffeeScriptBundle.tmbundle
  • CSS.tmbundle
  • Diff.tmbundle
  • Git.tmbundle
  • HTML.tmbundle
  • Hyperlink Helper.tmbundle
  • Java.tmbundle
  • JavaDoc.tmbundle
  • JavaScript.tmbundle
  • LaTeX.tmbundle
  • Mail.tmbundle
  • Make.tmbundle
  • Markdown.tmbundle
  • Math.tmbundle
  • Objective-C.tmbundle
  • OpenGL.tmbundle
  • Perl.tmbundle
  • PHP.tmbundle
  • Property List.tmbundle
  • Python.tmbundle
  • Ruby on Rails.tmbundle
  • Ruby.tmbundle
  • Shell Script.tmbundle
  • Source.tmbundle
  • SQL.tmbundle
  • Subversion.tmbundle
  • Text.tmbundle
  • Textile.tmbundle
  • TextMate.tmbundle
  • TODO.tmbundle
  • Transmit.tmbundle
  • Xcode.tmbundle
  • XML.tmbundle
  • YAML.tmbundle




Sunday, December 7, 2014

Bash File Testing


Conditional expressions are used by the [[ compound command and the test and [ builtin commands.

Expressions may be unary or binary. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well. If the file argument to one of the primaries is of the form /dev/fd/N, then file descriptor N is checked. If the file argument to one of the primaries is one of /dev/stdin, /dev/stdout, or /dev/stderr, file descriptor 0, 1, or 2, respectively, is checked.

When used with ‘[[’, the ‘<’ and ‘>’ operators sort lexicographically using the current locale. The test command uses ASCII ordering.
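A quick illustration of both forms (the bash -c wrapper just guarantees bash is interpreting the [[ ]] test):

```shell
# '<' inside [[ ]] sorts lexicographically; inside [ ] it must be
# escaped so it isn't treated as an output redirection.
bash -c '
if [[ "apple" < "banana" ]]; then
    echo "[[ ]]: apple sorts before banana"
fi
if [ "apple" \< "banana" ]; then
    echo "[ ]: apple sorts before banana"
fi
'
```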

Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself.

Bash Conditional Expressions

-a file True if file exists.
-b file True if file exists and is a block special file.
-c file True if file exists and is a character special file.
-d file True if file exists and is a directory.
-e file True if file exists.
-f file True if file exists and is a regular file.
-g file True if file exists and its set-group-id bit is set.
-h file True if file exists and is a symbolic link.
-k file True if file exists and its "sticky" bit is set.
-p file True if file exists and is a named pipe (FIFO).
-r file True if file exists and is readable.
-s file True if file exists and has a size greater than zero.
-t fd True if file descriptor fd is open and refers to a terminal.
-u file True if file exists and its set-user-id bit is set.
-w file True if file exists and is writable.
-x file True if file exists and is executable.
-G file True if file exists and is owned by the effective group id.
-L file True if file exists and is a symbolic link.
-N file True if file exists and has been modified since it was last read.
-O file True if file exists and is owned by the effective user id.
-S file True if file exists and is a socket.
file1 -ef file2 True if file1 and file2 refer to the same device and inode numbers.
file1 -nt file2 True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not.
file1 -ot file2 True if file1 is older than file2, or if file2 exists and file1 does not.
-o optname True if the shell option optname is enabled. The list of options appears in the description of the -o option to the set builtin (see The Set Builtin).
-v varname True if the shell variable varname is set (has been assigned a value).
-z string True if the length of string is zero.
-n string True if the length of string is non-zero.
string1 == string2 True if the strings are equal. ‘=’ should be used with the test command for POSIX conformance.
string1 != string2 True if the strings are not equal.
string1 < string2 True if string1 sorts before string2 lexicographically.
string1 > string2 True if string1 sorts after string2 lexicographically.
arg1 OP arg2 OP is one of ‘-eq’, ‘-ne’, ‘-lt’, ‘-le’, ‘-gt’, or ‘-ge’. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. Arg1 and arg2 may be positive or negative integers.


if [ -e "$FNAME" ]; then
    echo "$FNAME exists."
else
    echo "$FNAME does not exist."
fi
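Here's a short demo exercising a few of the primaries on throwaway files (the temp directory is just scratch space):

```shell
# Exercise a few file-test primaries on throwaway files.
dir=$(mktemp -d)
touch "$dir/empty.txt"
echo "data" > "$dir/full.txt"

[ -d "$dir" ]           && echo "-d: directory exists"
[ -f "$dir/empty.txt" ] && echo "-f: regular file"
[ -s "$dir/empty.txt" ] || echo "-s: file is empty"
[ -s "$dir/full.txt" ]  && echo "-s: file has content"
[ -r "$dir/full.txt" ]  && echo "-r: file is readable"

rm -rf "$dir"
```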




Thursday, November 27, 2014

You have mail. (that you don't want from openssl)


Do you see the You have mail. when you open your terminal?

And do you see this in your mail? (in /private/var/mail/`whoami`)

WARNING: can't open config file: /usr/local/etc/openssl/openssl.cnf

If so, then perhaps your openssl is misconfigured.

Prevent the unwanted "You have mail." message by configuring your Mac to use the brew-installed version of openssl.

First, check which version of openssl you are running:

$ openssl version
OpenSSL 0.9.8za 5 Jun 2014

Next, back up the stock version of openssl:

sudo mv /usr/bin/openssl /usr/bin/openssl_v0.9.8za

Now, install openssl using brew:

brew uninstall openssl
brew prune
brew cleanup
sudo brew install openssl

Then, make brew's openssl the system default:

sudo ln -s `find /usr/local/Cellar/openssl -name openssl| grep \/bin` /usr/bin/openssl

And verify that the openssl you are running is from brew:

$ openssl version -a
OpenSSL 1.0.1j 15 Oct 2014
built on: Fri Oct 17 21:14:05 BST 2014
platform: darwin64-x86_64-cc
options:  bn(64,64) rc4(ptr,char) des(idx,cisc,16,int) idea(int) blowfish(idx)
OPENSSLDIR: "/usr/local/etc/openssl"

Create openssl.cnf File

Lastly, if you want to prevent the "WARNING: can't open config file: /usr/local/etc/openssl/openssl.cnf" message, you may need to create that file.

Here's one that should work:

# OpenSSL configuration file.
# Establish working directory.
dir     = .
[ ca ]
default_ca    = CA_default
[ CA_default ]
serial     = $dir/serial
database    = $dir/certindex.txt
new_certs_dir    = $dir/certs
certificate    = $dir/cacert.pem
private_key    = $dir/private/cakey.pem
default_days    = 3650
default_md    = md5
preserve    = no
email_in_dn    = no
nameopt     = default_ca
certopt     = default_ca
policy     = policy_match
[ policy_match ]
countryName    = match
stateOrProvinceName   = match
organizationName   = match
organizationalUnitName   = optional
commonName    = supplied
emailAddress    = optional
[ req ]
default_bits    = 1024   # Size of keys
default_keyfile    = key.pem  # name of generated keys
default_md    = md5    # message digest algorithm
string_mask    = nombstr  # permitted characters
distinguished_name   = req_distinguished_name
req_extensions    = v3_req
[ req_distinguished_name ]
# Variable name    Prompt string
#-------------------------   ----------------------------------
0.organizationName   = Organization Name (company)
organizationalUnitName   = Organizational Unit Name (department, division)
emailAddress    = Email Address
emailAddress_max   = 40
localityName    = Locality Name (city, district)
stateOrProvinceName   = State or Province Name (full name)
countryName    = Country Name (2 letter code)
countryName_min    = 2
countryName_max    = 2
commonName    = Common Name (hostname, IP, or your name)
commonName_max    = 64
# Default values for the above, for consistency and less typing.
# Variable name    Value
#------------------------   ------------------------------
0.organizationName_default  = My Company
localityName_default   = My Town
stateOrProvinceName_default  = State or Providence
countryName_default   = US
[ v3_ca ]
basicConstraints   = CA:TRUE
subjectKeyIdentifier   = hash
authorityKeyIdentifier   = keyid:always,issuer:always
[ v3_req ]
basicConstraints   = CA:FALSE
subjectKeyIdentifier   = hash

Note that I made the certificate life 10 years. The rest is standard stuff.


Wednesday, November 26, 2014

How to Install Any Version of Node and NPM on OSX


Have the need to downgrade your Node and NPM installations?

If so, here's how I downgraded node from v0.10.33 to v0.10.26 and npm from 2.1.9 to 1.3.6.

Run these commands

sudo rm -rf /usr/local/lib/node_modules
echo prefix=~/.node >> ~/.npmrc
brew uninstall node
cd /usr/local
brew versions node|grep 0.10.26   # << this shows the git commit id (bae051d)
git checkout 0901e77 Library/Formula/node.rb
brew unlink node
brew install node
curl -L https://www.npmjs.org/install.sh|pbcopy
# This is when I created the install-npm-1.3.6.sh file
chmod +x install-npm-1.3.6.sh
ln -s `which npm` $HOME/.node/bin/npm

Created install-npm-1.3.6.sh File

Replace "latest" text with "1.3.6"
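A quick way to make that substitution with sed (the install.sh stub here is only for illustration; the real file is the script you copied from https://www.npmjs.org/install.sh):

```shell
# Stub standing in for the downloaded install.sh (illustration only):
printf 'echo "install npm@latest"\nt="latest"\n' > install.sh

# Pin every "latest" to "1.3.6" and save the result:
sed 's/latest/1.3.6/g' install.sh > install-npm-1.3.6.sh
chmod +x install-npm-1.3.6.sh
grep latest install-npm-1.3.6.sh || echo "no 'latest' left"
```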


#!/bin/sh

# A word about this shell script:
# It must work everywhere, including on systems that lack
# a /bin/bash, map 'sh' to ksh, ksh97, bash, ash, or zsh,
# and potentially have either a posix shell or bourne
# shell living at /bin/sh.
# See this helpful document on writing portable shell scripts:
# http://www.gnu.org/s/hello/manual/autoconf/Portable-Shell.html
# The only shell it won't ever work on is cmd.exe.

if [ "x$0" = "xsh" ]; then
  # run as curl | sh
  # on some systems, you can just do cat>npm-install.sh
  # which is a bit cuter.  But on others, &1 is already closed,
  # so catting to another script file won't do anything.
  curl -s https://www.npmjs.org/install.sh > npm-install-$$.sh
  sh npm-install-$$.sh
  ret=$?
  rm npm-install-$$.sh
  exit $ret
fi

# See what "npm_config_*" things there are in the env,
# and make them permanent.
# If this fails, it's not such a big deal.
configures="`env | grep 'npm_config_' | sed -e 's|^npm_config_||g'`"

npm_config_loglevel="error"
if [ "x$npm_debug" = "x" ]; then
  (exit 0)
else
  echo "Running in debug mode."
  echo "Note that this requires bash or zsh."
  set -o xtrace
  set -o pipefail
  npm_config_loglevel="verbose"
fi
export npm_config_loglevel

# make sure that node exists
node=`which node 2>&1`
ret=$?
if [ $ret -eq 0 ] && [ -x "$node" ]; then
  (exit 0)
else
  echo "npm cannot be installed without node.js." >&2
  echo "Install node first, and then try again." >&2
  echo "" >&2
  echo "Maybe node is installed, but not in the PATH?" >&2
  echo "Note that running as sudo can change envs." >&2
  echo ""
  echo "PATH=$PATH" >&2
  exit $ret
fi

# set the temp dir
TMP="${TMPDIR}"
if [ "x$TMP" = "x" ]; then
  TMP="/tmp"
fi
TMP="${TMP}/npm.$$"
rm -rf "$TMP" || true
mkdir "$TMP"
if [ $? -ne 0 ]; then
  echo "failed to mkdir $TMP" >&2
  exit 1
fi

BACK="$PWD"
ret=0


tar="${TAR}"
if [ -z "$tar" ]; then
  tar="${npm_config_tar}"
fi
if [ -z "$tar" ]; then
  tar=`which tar 2>&1`
  ret=$?
fi

if [ $ret -eq 0 ] && [ -x "$tar" ]; then
  echo "tar=$tar"
  echo "version:"
  $tar --version
  ret=$?
fi

if [ $ret -eq 0 ]; then
  (exit 0)
else
  echo "No suitable tar program found."
  exit 1
fi

# Try to find a suitable make
# If the MAKE environment var is set, use that.
# otherwise, try to find gmake, and then make.
# If no make is found, then just execute the necessary commands.

# XXX For some reason, make is building all the docs every time.  This
# is an annoying source of bugs. Figure out why this happens.

if [ "x$MAKE" = "x" ]; then
  make=`which gmake 2>&1`
  if [ $? -eq 0 ] && [ -x "$make" ]; then
    (exit 0)
  else
    make=`which make 2>&1`
    if [ $? -eq 0 ] && [ -x "$make" ]; then
      (exit 0)
    else
      make=NOMAKE
    fi
  fi
else
  make="$MAKE"
fi

if [ -x "$make" ]; then
  (exit 0)
else
  # echo "Installing without make. This may fail." >&2
  make=NOMAKE
fi

# If there's no bash, then don't even try to clean
if [ -x "/bin/bash" ]; then
  (exit 0)
else
  clean="no"
fi

node_version=`"$node" --version 2>&1`
ret=$?
if [ $ret -ne 0 ]; then
  echo "You need node to run this program." >&2
  echo "node --version reports: $node_version" >&2
  echo "with exit code = $ret" >&2
  echo "Please install node before continuing." >&2
  exit $ret
fi

t="${npm_install}"
if [ -z "$t" ]; then
  # switch based on node version.
  # note that we can only use strict sh-compatible patterns here.
  case $node_version in
    0.[01234567].* | v0.[01234567].*)
      echo "You are using an outdated and unsupported version of" >&2
      echo "node ($node_version).  Please update node and try again." >&2
      exit 99
      ;;
    *)
      echo "install npm@1.3.6"
      t="1.3.6"
      ;;
  esac
fi

# need to echo "" after, because Posix sed doesn't treat EOF
# as an implied end of line.
url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
     | sed -e 's/^.*tarball":"//' \
     | sed -e 's/".*$//'`
ret=$?

if [ "x$url" = "x" ]; then
  # try without the -e arg to sed.
  url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
       | sed 's/^.*tarball":"//' \
       | sed 's/".*$//'`
  ret=$?
  if [ "x$url" = "x" ]; then
    ret=125
  fi
fi
if [ $ret -ne 0 ]; then
  echo "Failed to get tarball url for npm/$t" >&2
  exit $ret
fi

echo "fetching: $url" >&2

cd "$TMP" \
  && curl -SsL "$url" \
     | $tar -xzf - \
  && cd "$TMP"/* \
  && (ver=`"$node" bin/read-package-json.js package.json version`
      isnpm10=0
      if [ $ret -eq 0 ]; then
        if [ -d node_modules ]; then
          if "$node" node_modules/semver/bin/semver -v "$ver" -r "1"
          then
            isnpm10=1
          fi
        else
          if "$node" bin/semver -v "$ver" -r ">=1.0"; then
            isnpm10=1
          fi
        fi
      fi

      ret=0
      if [ $isnpm10 -eq 1 ] && [ -f "scripts/clean-old.sh" ]; then
        if [ "x$skipclean" = "x" ]; then
          (exit 0)
        else
          clean=no
        fi
        if [ "x$clean" = "xno" ] \
            || [ "x$clean" = "xn" ]; then
          echo "Skipping 0.x cruft clean" >&2
          ret=0
        elif [ "x$clean" = "xy" ] || [ "x$clean" = "xyes" ]; then
          NODE="$node" /bin/bash "scripts/clean-old.sh" "-y"
          ret=$?
        else
          NODE="$node" /bin/bash "scripts/clean-old.sh" </dev/tty
          ret=$?
        fi
      fi

      if [ $ret -ne 0 ]; then
        echo "Aborted 0.x cleanup.  Exiting." >&2
        exit $ret
      fi) \
  && (if [ "x$configures" = "x" ]; then
        (exit 0)
      else
        echo "./configure $configures"
        echo "$configures" > npmrc
      fi) \
  && (if [ "$make" = "NOMAKE" ]; then
        (exit 0)
      elif "$make" uninstall install; then
        (exit 0)
      else
        make=NOMAKE
      fi
      if [ "$make" = "NOMAKE" ]; then
        "$node" cli.js rm npm -gf
        "$node" cli.js install -gf
      fi) \
  && cd "$BACK" \
  && rm -rf "$TMP" \
  && echo "It worked"

ret=$?
if [ $ret -ne 0 ]; then
  echo "It failed" >&2
fi

exit $ret

Check Your Downgraded Versions

$ node -v
v0.10.26
$ npm -v
1.3.6


To get the correct version of node to check out, you'll need to do something like this:

brew versions node | grep 0.10.26

... and to get brew versions to work you may need to run this:

brew tap homebrew/boneyard

I do not advocate downgrading node and npm.

This was for example purposes.

Reverting Back to Latest Versions

Not sure why anyone would want to remove such a useful command, but for the time being you can use the brew versions command to get the git command used to download a specific node version (0.10.33 in our case).

$ brew versions node | grep 0.10.33
Warning: brew-versions is unsupported and will be removed soon.
You should use the homebrew-versions tap instead:

0.10.33  git checkout 4b9395f /usr/local/Library/Formula/node.rb

$ brew cleanup
$ brew cleanup --cache
$ cd /usr/local/Library/
$ git checkout 4b9395f /usr/local/Library/Formula/node.rb
$ brew install node


No other combination of brew commands would upgrade node from 0.10.26 to 0.10.33.

So, none of these had any other effect, other than keeping node at version 0.10.26:

$ brew uninstall node
$ brew upgrade
$ brew cleanup 
$ brew cleanup --cache
$ brew upgrade node

Verify Versions

Verify that both node and npm are back to the latest versions:

~ $ node -v
v0.10.33
~ $ npm -v
2.1.9


For All Git Repos, Ignore All of This


Are there files that you are always having to add to the .gitignore file for all of your git repositories?

If so, use this technique to create a global ignore file that is automatically applied to all of the git repos on your local workstation.

For all git repos, ignore all of this

First, create the file to contain the global git ignore patterns:


# Compiled source #
*.class
*.o
*.so

# Packages #
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.gz
*.tar
*.zip

# Logs and databases #
*.log
*.sqlite

# OS generated files #
.DS_Store
Thumbs.db


Then, configure git to use the gitignore_global ignore patterns.

git config --global core.excludesfile ~/.gitignore_global


Monday, November 17, 2014

Add Golang Bundle to TextMate


Want Go language syntax highlighting in TextMate v2?

Alan Quatermain's Textmate bundle

Run the following commands and you should be set...

mkdir -p ~/Library/Application\ Support/Avian/Bundles
cd ~/Library/Application\ Support/Avian/Bundles
git clone git://github.com/AlanQuatermain/go-tmbundle.git Go.tmbundle


This also works for TextMate 1.5.


https://github.com/AlanQuatermain/go-tmbundle
https://github.com/rsms/Go.tmbundle


Sunday, November 16, 2014

Upgrade PHP on OSX 10.10 (yosemite)


First, upgrade to the latest version of Xcode and reinstall the Xcode command line tools: xcode-select --install

Even then, you may run into problems upgrading PHP:

Problems Upgrading PHP

~ $ brew upgrade
==> Upgrading 2 outdated packages, with result:
gdal 1.11.1_2, php55 5.5.19
==> Upgrading gdal
==> Downloading https://downloads.sf.net/project/machomebrew/Bottles/gdal-1.11.1_2.yosemite.bottle.tar.gz
######################################################################## 100.0%
==> Pouring gdal-1.11.1_2.yosemite.bottle.tar.gz
🍺  /usr/local/Cellar/gdal/1.11.1_2: 229 files, 34M
==> Upgrading php55
==> Downloading https://www.php.net/get/php-5.5.19.tar.bz2/from/this/mirror
######################################################################## 100.0%
Error: Permission denied - /usr/local/Cellar/php55/5.5.19
~ $ brew doctor
Your system is ready to brew.

Fixing the PHP Upgrade

Run the following commands and you should be set...

$ sudo chmod -R g+w /Library/Caches/Homebrew
$ brew rm zlib
$ brew update && brew upgrade
$ brew install git
$ brew install openssl
$ brew install php56 --homebrew-apxs --with-apache --with-homebrew-curl --with-homebrew-openssl --with-phpdbg --with-tidy --without-snmp


http://lexsheehan.blogspot.com/2013/12/while-testing-new-async-infrastructure.html
http://superuser.com/questions/550305/homebrew-install-problems-permission-denied-library-caches-homebrew-formula
http://stackoverflow.com/questions/25149032/brew-install-php55-intl-fails-cant-install-composer


Script to Install or Update Go on OSX


Here's a bash script to Install or Update Go on OSX.


function goversion {
    go version|cut -d" " -f3|while read n; do echo "${n:2}"; done
}

set -x
brew update
brew doctor
{ set +x; } &>/dev/null
echo "Current Go version: `goversion`"
echo ""
echo "Need to do what brew doctor suggested? (It's safe to rerun this command.)  CTRL+C to stop --or-- Enter to continue..."
read x

if [ "`goversion`" == "" ]; then
    brew install go
    brew link --overwrite go
else
    brew upgrade go
    brew link go
fi
echo "New Go version: `goversion`"

# Following currently enables the IntelliJ Go plugin to use the new Go version
function reset-idea-go-plugin {
    if [ "`which idea`" != "" ]; then
        printf "intellij is installed.  Now, updating its go plugin..."
        # Note: version numbers may change
        rm /usr/local/Cellar/go/1.2.2
        rm /usr/local/Cellar/go/1.3
        ln -s /usr/local/Cellar/go/1.3.3 /usr/local/Cellar/go/1.2.2
        echo "Done."
    fi
}

reset-idea-go-plugin 2>/dev/null


~ $ update-go
+ brew update
Already up-to-date.
+ brew doctor
Your system is ready to brew.
Current Go version: 1.3.3

Need to do what brew doctor suggested? (It's safe to rerun this command.)  CTRL+C to stop --or-- Enter to continue...

Error: go-1.3.3 already installed
Warning: Already linked: /usr/local/Cellar/go/1.3.3
To relink: brew unlink go && brew link go
New Go version: 1.3.3
intellij is installed.  Now, updating its go plugin...Done.
~ $


It's safe to rerun this script.

Assumes you're on a Mac.

Assumes you have Homebrew installed.

If you use the Go plugin in IntelliJ, this script will direct it to use the newly installed version of Golang.




Tuesday, November 4, 2014

Create Executable Jar from IntelliJ Java Project


  • Define Jar file Components
  • Build Jar file
  • Run command

Define Jar file Components

File | Project Structure...
Select Artifacts in left side under Project Settings
  • Click "+"
  • Select "jar"
  • Choose "From modules with dependencies"
  • Select the class with your executable main method.

Build Jar file

Build | Build Artifacts...
  • Under "Build Artifact", choose the jar file that you defined earlier
  • Under "Build", click "Build"
The jar file can be found in the out/artifacts/<project>_jar directory (out/artifacts/ijtest_jar in this example)

Run command

If your project name is "ijtest" the command to execute your jar would look something like this:

$ java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar


You can pass parameters, too.

$ java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar --parm1=ONE --parm2=TWO


First, make sure you are running a recent version of IntelliJ.

I'm running IntelliJ Community Edition v14.0.2

Next, make sure the sdk you've configured in IntelliJ is consistent with the one you use on your console.

This is what happens when your IntelliJ JDK is version 1.7 but the java in your console is version 1.6...

~/dev/java $ java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/company/Main : Unsupported major.minor version 51.0
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
 at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
~/dev/java $ java -version
java version "1.6.0_65"
Java(TM) SE Runtime Environment (build 1.6.0_65-b14-466.1-11M4716)
Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-466.1, mixed mode)
~/dev/java $ find /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home -name java
~/dev/java $ /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre/bin/java  -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar
hello, world!


Here we inspect, from the console, the contents of the jar we just created using the steps outlined above...

~/dev/java/ijtest/out/artifacts/ijtest_jar $ ls -lrt
total 24
-rw-r--r--  1 lex  staff   968 Jan  5 22:56 ijtest.jar
drwx------  4 lex  staff   136 Jan  5 22:57 ijtest
-rw-r--r--@ 1 lex  staff  6148 Jan  5 22:57 .DS_Store
~/dev/java/ijtest/out/artifacts/ijtest_jar $ jar xf ./ijtest.jar
~/dev/java/ijtest/out/artifacts/ijtest_jar $ ls -lrt
total 24
drwxr-xr-x  3 lex  staff   102 Jan  5 22:56 com
drwxr-xr-x  3 lex  staff   102 Jan  5 22:56 META-INF
-rw-r--r--  1 lex  staff   968 Jan  5 22:56 ijtest.jar
drwx------  4 lex  staff   136 Jan  5 22:57 ijtest
-rw-r--r--@ 1 lex  staff  6148 Jan  5 22:57 .DS_Store
~/dev/java/ijtest/out/artifacts/ijtest_jar $ find .

Notice that all files exist in the jar file, including MANIFEST.MF.


