Wednesday, December 17, 2014

Implications of the Sony Cyber Attack



When Sony Pictures employees got into the office on Monday, November 24, they discovered that their corporate network had been hacked.

The attackers took terabytes of private data, deleted the original copies from Sony computers, and left messages threatening to release the information if Sony didn't comply with the attackers' demands.

My First Guess

When I first heard that Sony had been hacked, my first thought was, "I bet those knuckleheads run Windows."

That suspicion has been confirmed.

Possible Attack Vector

The attackers researched Sony's IT infrastructure and knew their victim's vulnerabilities.

Cyber Attack

The attackers could have exploited vulnerabilities in Sony's email system or in the Microsoft Internet Explorer browser to gain initial access to a regular employee's workstation.



Gain Admin Access

Knowing that Sony ran Microsoft Windows, the attackers could have used a known vulnerability in Microsoft's Kerberos implementation (CVE-2014-6324) to forge a Privilege Attribute Certificate that the Kerberos Key Distribution Center would accept as valid, elevating their privileges to those of any other account on the domain.


Destructive Malware

Once the attackers got administrative keys to a Microsoft-based network with unencrypted file systems, they were able to extract that information and expose corporate secrets *** and follow up by destroying the files with the destructive BKDR_WIPALL malware.


*** Corporate Secrets Exposed

Men are paid more than women, and Sony's 17 biggest-earning executives are predominantly white men. According to a spreadsheet called "Comp Roster by Supervisory Organization 2014-10-21," Amy Pascal, the co-chair of Sony Pictures Entertainment, is the only woman earning $1 million or more at the studio.

A series of emails between Pascal and movie producer Scott Rudin showed an ugly side to the beautiful business of Hollywood. Rudin called Angelina Jolie a "minimally talented spoiled brat" in an email exchange with Pascal. Pascal and Rudin also made racially charged jokes about President Obama's taste in movies. As you would expect, Pascal and Rudin apologized for the remarks.

For more details, see: http://www.cnet.com/news/13-revelations-from-the-sony-hack/


Why the Hack was so Effective

  • Sony's employee workstations and network ran on Microsoft Windows
  • Private data was not encrypted
  • Network security monitoring was woefully inadequate


Lawsuit Filed

Two former employees of Sony Pictures filed a lawsuit against Sony alleging it didn't do enough to safeguard their personal information and prevent its loss in that cyberattack.

The lawsuit, filed Monday, December 15, 2014, in U.S. District Court for the Central District of California, asks the court to award monetary damages and to grant class-action status. Thousands of past and present Sony employees could join the suit.

The lawsuit alleges, "Sony failed to secure its computer system, servers and databases, despite weaknesses that it has known about for years, because Sony made a business decision to accept the risk of losses associated with being hacked."

How can Sony defend itself against solid claims of negligence?


IT Security Laws for Corporations

Sarbanes-Oxley

Sarbanes-Oxley, or 'Sarbox' as it is sometimes called, was enacted in 2002 to help prevent future Enron-like episodes. It requires companies to be accountable for identifying and mitigating risks to their financial stability, and this includes information security.

Sarbanes-Oxley details a "chain of accountability": senior executives and board members must sign off on the accuracy of financial reporting, so the managers who report to them must be certain that their information is accurate, and the same goes for the people who report to those managers, and so on. While the average employee of a public company will most likely never go to jail over a Sarbanes-Oxley violation (C-level executives are not so fortunate), each employee does have an important role in maintaining the security and integrity of corporate data.

When Sarbanes-Oxley mentions "controls," it is talking about the policies, procedures, and guidelines that protect information in your company, with a direct implication of adequate IT security enforcement.

HIPAA Security Rule

This massive cyberattack constitutes unauthorized access or acquisition of personal information subject to most state and federal data breach notification requirements, including the HIPAA Breach Notification Rule. The HIPAA Security Rule contains a number of provisions that require covered entities and business associates to maintain procedures to monitor system activity for potential security incidents and to investigate any such incidents.

The HIPAA Security Rule requires covered entities and business associates to "regularly review records of information system activity, such as audit logs, access records, and security incident tracking reports." 45 C.F.R. § 164.308(a)(1)(ii)(D). HHS guidance materials further state that this specification "should also promote continual awareness of any information system activity that could suggest a security incident." See CMS, HIPAA Security Series Vol. 2, Security Standards: Administrative Safeguards.

The HIPAA Security Rule requires covered entities and business associates to create and maintain appropriate records of system activity. See 45 C.F.R. § 164.312(b). However, covered entities and business associates have significant discretion to create and maintain activity records based upon the formal assessment of their security risks.

Breach Notification

Breach notice laws typically define "personal information" as "a user name or email address, in combination with a password or security question and answer that would permit access to an online account."

Implications

  • IT security should be taken seriously
  • As a C-level executive, you should know the laws pertaining to safeguarding your company and employees' data.
  • As a C-level executive, you are liable for lax IT security enforcement at your company.

Lessons Learned

  1. If you are a C-level executive and your company runs Windows, change that or get another job.
  2. Hire a professional to thoroughly evaluate your current security policies.
  3. Don't ask for trouble; if you do anyway, don't run Windows.


SANS Institute Cyber Attack Response Plan

For many organizations today, the question is no longer if they will fall victim to a targeted attack, but when. In such an event, how an organization responds will determine whether it becomes a serious event or stays a mere annoyance.

This requires something of a change in mindset for information security professionals. Previous techniques and many best practices rest on the premise that an attacker can be kept out.

However, that’s no longer the case today. The malware used in targeted attacks is frequently not detected (because it’s been custom-made for specific organizations). A well-crafted social engineering attack can look like a normal business email or engaging click bait.

In short, an attacker with sufficient resources will be able to find their way inside their target, regardless of what the defender does. The defender can raise the price of getting in, but not prevent it entirely.

The SANS Institute provides some guidelines to organizations on how they should react to incidents. Broadly speaking, however, the response can be divided into four steps:

Prepare

This involves preparing to respond to a targeted attack even before the attack actually takes place. Security professionals need to plan for a response to a targeted attack on their network. System administrators will routinely have plans, for example, for downtime-related events such as a data center going offline.

Similarly, it’s important to be aware of the normal, day-to-day threats that an organization faces. Information security professionals must not only deal with these attacks as they happen, but should understand what their “normal” problems are so that abnormal threats like targeted attacks can be quickly spotted. Threat intelligence and analysis are valuable in this step, guiding security professionals toward an understanding of the current situation.

Security professionals must also plan to acquire the right skills to deal effectively with targeted attacks. One of the most important skills to learn is digital forensics, which allows for the proper acquisition and analysis of information from compromised devices.

Many of these techniques are quite foreign to normal IT day-to-day work, but learning these techniques will help organizations gain information and be better prepared to deal with any attack in progress.
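To make "baseline awareness" concrete, here is a deliberately tiny shell sketch that counts failed SSH logins per source address. The log lines are inlined sample data (made up for illustration) so the commands run anywhere; in practice you would point the pipeline at /var/log/auth.log or your log aggregator.

```shell
# Inline sample auth-log data (illustrative only)
cat > /tmp/auth.sample <<'EOF'
Dec 17 02:11:01 host sshd[123]: Failed password for root from 203.0.113.9 port 4000 ssh2
Dec 17 02:11:03 host sshd[124]: Failed password for admin from 203.0.113.9 port 4001 ssh2
Dec 17 02:12:07 host sshd[125]: Accepted password for alice from 198.51.100.7 port 5000 ssh2
EOF

# Count failed logins per source IP; a sudden spike from one address
# is exactly the kind of "abnormal" signal baseline monitoring should surface.
grep 'Failed password' /tmp/auth.sample | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
```

The same one-liner run daily against real logs gives you the "normal" failure rate, which is what makes an abnormal day stand out.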

Respond

Upon identifying a targeted attack in progress, the next step is to respond decisively. Responding to a targeted attack has several components: containing the threat, removing it, and determining the scope of the damage. The first step is to immediately isolate or contain the threat; steps that can be performed here include isolating infected machines or taking compromised services offline. Ultimately, the goal is to prevent the attack from gaining further ground.

To identify any threats in place, it is useful to work hand in hand with a security vendor that knows the commonly used targeted-attack tools and grayware. Similarly, continuous monitoring of existing network activity can help determine the scale and scope of any ongoing attack.

Restore

Just as important as responding to an attack is restoring the organization afterward. While some disruption is a necessary part of responding to a targeted attack, in the long run an organization has to return to normal operations.

“Restoring” an organization to normal is not only about technical considerations. If necessary, an organization needs to reach out to partners, stakeholders, and customers to clearly communicate the scope of a targeted attack’s damage, and any steps being taken to reduce the damage. In many cases, goodwill and trust are big casualties of a targeted attack, and these must be addressed as well.

Learn

Once an attack is over, organizations need to figure out what can be learned from it. Every attack offers lessons for defenders – what worked? What could we have done better? It may turn out that some of the assumptions and information that went into planning for security incidents were incorrect or incomplete.

However, it is also important not to overreact to any single incident. Overreacting can be just as bad as under-reacting: it can impose burdens on the organization that bring marginal gains in security, if any. Decisions must be made based on careful analysis, not on the pressure of the moment.


In today’s world of frequent targeted attacks – when breaches are a matter of when and not if - a carefully crafted strategy to respond to targeted attacks must be part and parcel of the larger defense strategy. This can be the difference between a minor nuisance and a major breach that could spell the demise of an organization.

For original reference to this section see: http://blog.trendmicro.com/trendlabs-security-intelligence/four-steps-to-an-effective-targeted-attack-response/


References

http://ww2.cfo.com/technology/2014/12/sony-says-private-data-stolen-brazen-cyber-attack/
http://www.computerworld.com/article/2860456/lawsuit-filed-against-sony-after-massive-hack.html
http://recode.net/2014/12/02/details-emerge-on-malware-used-in-sony-hacking-attack/
http://www.computerweekly.com/news/2240236006/Sony-hack-exposes-poor-security-practices
http://blogs.technet.com/b/srd/archive/2014/11/18/additional-information-about-cve-2014-6324.aspx
http://www.whitehouse.gov/issues/foreign-policy/cybersecurity/national-initiative
http://blog.trendmicro.com/trendlabs-security-intelligence/an-analysis-of-the-destructive-malware-behind-fbi-warnings/




This work is licensed under the Creative Commons Attribution 3.0 Unported License.

Java's verbose, Python's too slow... It's time you know...

Write in Go! Write in Go!



Lyrics

The schedule's tight on the cluster tonight.
So I parallelized my code.
All those threads and continuations.
My head's going to explode.
And all that boilerplate.
That FactoryBuilderAdapterDelegateImpl
Seems unjustified
Give me something simple
Don't write in Scheme
Don't write in C
No more pointers that I forgot to free()
Java's verbose
Python's too slow
It's time you know
Write in Go
Write in Go
No inheritance anymore
Write in Go
Write in Go
There's no do or while, just for
I don't care what your linters say
I've got tools for that
The code never bothered me anyway
dodododo diudiudiu...
It's funny how some features
Make every change seem small
And the errors that once slowed me
Don't get me down at all
It's time to see what Go can do
Cause it seems too good to be true
No long compile times for me
I'm free
Write in Go
Write in Go
Kiss your pointer math goodbye
Write in Go
Write in Go
Time to give GC a try
I don't care if my structures stay
on the heap or stack
donononododono...
My program spawns its goroutines without a sound
Control is spiraling through buffered channels all around
I don't remember why I ever once subclassed
I'm never going back
My tests all build and pass
Write in Go
Write in Go
You won't use Eclipse anymore
Write in Go
Write in Go
Who cares what Boost is for?
I don't care what the tech leads say
oo wow oo...
I'll rewrite it all
nonononono...
Writing code never bothered me, anyway

Sung by Scaleability, an a cappella group at Google.


Saturday, December 13, 2014

Set Default File Type to Shell Script (Bash) in TextMate

TLDR

Run this:

defaults write com.macromates.textmate OakDefaultLanguage DDEEA3ED-6B1C-11D9-8B10-000D93589AF6

Details

All the default language definitions are stored under TextMate.app/Contents/SharedSupport/Bundles, in the Syntaxes folder of each bundle.

These are in the binary plist format, so you'll need to convert one to readable form first. Since we want Shell Script (Bash) to be the new default language, we would do (from the terminal):


$ cd "/Applications/TextMate.app/Contents/SharedSupport/Bundles/Shell Script.tmbundle/Syntaxes"
$ plutil -convert xml1 Shell-Unix-Generic.plist
$ grep -A1 uuid Shell-Unix-Generic.plist
 <key>uuid</key>
 <string>DDEEA3ED-6B1C-11D9-8B10-000D93589AF6</string>


Here “DDEEA3ED-6B1C-11D9-8B10-000D93589AF6” is the UUID for the Shell Script (Bash) syntax. Now we need to tell TextMate to use that as the default by altering its defaults database.

First quit TextMate, then from terminal run:


$ defaults write com.macromates.textmate OakDefaultLanguage DDEEA3ED-6B1C-11D9-8B10-000D93589AF6


Start TextMate, and notice how all new documents are set to be Shell Script (Bash) by default.
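If you want to double-check the change (or see what the key held before you altered it), you can read it back with defaults. This is only meaningful on the Mac running TextMate, so the sketch falls back to a notice elsewhere:

```shell
# Read back the stored default-language UUID (macOS only).
if command -v defaults >/dev/null 2>&1; then
  defaults read com.macromates.textmate OakDefaultLanguage \
    || echo "OakDefaultLanguage is not set"
else
  echo "defaults(1) not found: run this on the Mac where TextMate is installed"
fi
```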

Notes

This works for TextMate 1.5.

Here's the list of supported file types:
  • ActionScript.tmbundle
  • Apache.tmbundle
  • AppleScript.tmbundle
  • Blogging.tmbundle
  • Bundle Development.tmbundle
  • C.tmbundle
  • CoffeeScriptBundle.tmbundle
  • CSS.tmbundle
  • Diff.tmbundle
  • Git.tmbundle
  • HTML.tmbundle
  • Hyperlink Helper.tmbundle
  • Java.tmbundle
  • JavaDoc.tmbundle
  • JavaScript.tmbundle
  • LaTeX.tmbundle
  • Mail.tmbundle
  • Make.tmbundle
  • Markdown.tmbundle
  • Math.tmbundle
  • Objective-C.tmbundle
  • OpenGL.tmbundle
  • Perl.tmbundle
  • PHP.tmbundle
  • Property List.tmbundle
  • Python.tmbundle
  • Ruby on Rails.tmbundle
  • Ruby.tmbundle
  • Shell Script.tmbundle
  • Source.tmbundle
  • SQL.tmbundle
  • Subversion.tmbundle
  • Text.tmbundle
  • Textile.tmbundle
  • TextMate.tmbundle
  • TODO.tmbundle
  • Transmit.tmbundle
  • Xcode.tmbundle
  • XML.tmbundle
  • YAML.tmbundle

References

http://lists.macromates.com/textmate/2006-February/008276.html


Sunday, December 7, 2014

Bash File Testing

Summary

Conditional expressions are used by the [[ compound command and the test and [ builtin commands.

Expressions may be unary or binary. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well. If the file argument to one of the primaries is of the form /dev/fd/N, then file descriptor N is checked. If the file argument to one of the primaries is one of /dev/stdin, /dev/stdout, or /dev/stderr, file descriptor 0, 1, or 2, respectively, is checked.

When used with ‘[[’, the ‘<’ and ‘>’ operators sort lexicographically using the current locale. The test command uses ASCII ordering.
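A minimal sketch of the difference (run under bash; note that '<' must be escaped for test, or the shell treats it as a redirection):

```shell
# With [ (test), '<' must be escaped and comparison uses ASCII ordering:
if [ "apple" \< "banana" ]; then echo "test: apple sorts before banana"; fi

# With [[ no escaping is needed and the current locale is used:
if [[ "apple" < "banana" ]]; then echo "[[ : apple sorts before banana"; fi
```

With plain ASCII strings like these, both forms agree; the orderings only diverge in locales that sort accented or case-mixed strings differently.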

Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself.

Bash Conditional Expressions

-a file True if file exists.
-b file True if file exists and is a block special file.
-c file True if file exists and is a character special file.
-d file True if file exists and is a directory.
-e file True if file exists.
-f file True if file exists and is a regular file.
-g file True if file exists and its set-group-id bit is set.
-h file True if file exists and is a symbolic link.
-k file True if file exists and its "sticky" bit is set.
-p file True if file exists and is a named pipe (FIFO).
-r file True if file exists and is readable.
-s file True if file exists and has a size greater than zero.
-t fd True if file descriptor fd is open and refers to a terminal.
-u file True if file exists and its set-user-id bit is set.
-w file True if file exists and is writable.
-x file True if file exists and is executable.
-G file True if file exists and is owned by the effective group id.
-L file True if file exists and is a symbolic link.
-N file True if file exists and has been modified since it was last read.
-O file True if file exists and is owned by the effective user id.
-S file True if file exists and is a socket.
file1 -ef file2 True if file1 and file2 refer to the same device and inode numbers.
file1 -nt file2 True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not.
file1 -ot file2 True if file1 is older than file2, or if file2 exists and file1 does not.
-o optname True if the shell option optname is enabled. The list of options appears in the description of the -o option to the set builtin (see The Set Builtin).
-v varname True if the shell variable varname is set (has been assigned a value).
-z string True if the length of string is zero.
-n string True if the length of string is non-zero.
string1 == string2 True if the strings are equal. ‘=’ should be used with the test command for POSIX conformance.
string1 != string2 True if the strings are not equal.
string1 < string2 True if string1 sorts before string2 lexicographically.
string1 > string2 True if string1 sorts after string2 lexicographically.
arg1 OP arg2 OP is one of ‘-eq’, ‘-ne’, ‘-lt’, ‘-le’, ‘-gt’, or ‘-ge’. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. Arg1 and arg2 may be positive or negative integers.

Examples


FNAME='/etc/hosts'
if [ -e "$FNAME" ]; then 
    echo "$FNAME exists."
else 
    echo "$FNAME does not exist."
fi
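A few more of the primaries in action, using hypothetical values:

```shell
STR="hello"
COUNT=3

# -n / -z: non-empty and empty string tests
if [ -n "$STR" ]; then echo "STR is non-empty"; fi
if [ -z "" ]; then echo "detected an empty string"; fi

# arithmetic comparison: -lt (less than)
if [ "$COUNT" -lt 10 ]; then echo "COUNT is less than 10"; fi

# -d: true if the argument exists and is a directory
if [ -d /tmp ]; then echo "/tmp is a directory"; fi
```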

References

http://tldp.org/LDP/abs/html/string-manipulation.html


Thursday, November 27, 2014

You have mail. (that you don't want from openssl)

Summary

Do you see the You have mail. when you open your terminal?

And do you see this in your mail? (in /private/var/mail/`whoami`)

WARNING: can't open config file: /usr/local/etc/openssl/openssl.cnf

If so, then perhaps your openssl is misconfigured.

Prevent the unwanted "You have mail." message by configuring your mac to use the brew installed version of openssl.

First, check which version of openssl you are running:


$ openssl version
OpenSSL 0.9.8za 5 Jun 2014

Next, backup stock version of openssl:


sudo mv /usr/bin/openssl /usr/bin/openssl_v0.9.8za

Now, install openssl using brew:


brew uninstall openssl
brew prune
brew cleanup
sudo brew install openssl

Then, make brew's openssl the system default:


sudo ln -s `find /usr/local/Cellar/openssl -name openssl| grep \/bin` /usr/bin/openssl

And verify that the openssl you are running is from brew:


$ openssl version -a
OpenSSL 1.0.1j 15 Oct 2014
built on: Fri Oct 17 21:14:05 BST 2014
platform: darwin64-x86_64-cc
options:  bn(64,64) rc4(ptr,char) des(idx,cisc,16,int) idea(int) blowfish(idx)
compiler: clang -fPIC -fno-common -DOPENSSL_PIC -DZLIB_SHARED -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
OPENSSLDIR: "/usr/local/etc/openssl"


Create openssl.cnf File

Lastly, if you want to prevent the "WARNING: can't open config file: /usr/local/etc/openssl/openssl.cnf" message, you may need to create that file.

Here's one that should work:

#
# OpenSSL configuration file.
#
 
# Establish working directory.
 
dir     = .
 
[ ca ]
default_ca    = CA_default
 
[ CA_default ]
serial     = $dir/serial
database    = $dir/certindex.txt
new_certs_dir    = $dir/certs
certificate    = $dir/cacert.pem
private_key    = $dir/private/cakey.pem
default_days    = 3650
default_md    = md5
preserve    = no
email_in_dn    = no
nameopt     = default_ca
certopt     = default_ca
policy     = policy_match
 
[ policy_match ]
countryName    = match
stateOrProvinceName   = match
organizationName   = match
organizationalUnitName   = optional
commonName    = supplied
emailAddress    = optional
 
[ req ]
default_bits    = 1024   # Size of keys
default_keyfile    = key.pem  # name of generated keys
default_md    = md5    # message digest algorithm
string_mask    = nombstr  # permitted characters
distinguished_name   = req_distinguished_name
req_extensions    = v3_req
 
[ req_distinguished_name ]
# Variable name    Prompt string
#-------------------------   ----------------------------------
0.organizationName   = Organization Name (company)
organizationalUnitName   = Organizational Unit Name (department, division)
emailAddress    = Email Address
emailAddress_max   = 40
localityName    = Locality Name (city, district)
stateOrProvinceName   = State or Province Name (full name)
countryName    = Country Name (2 letter code)
countryName_min    = 2
countryName_max    = 2
commonName    = Common Name (hostname, IP, or your name)
commonName_max    = 64
 
# Default values for the above, for consistency and less typing.
# Variable name    Value
#------------------------   ------------------------------
0.organizationName_default  = My Company
localityName_default   = My Town
stateOrProvinceName_default  = State or Province
countryName_default   = US
 
[ v3_ca ]
basicConstraints   = CA:TRUE
subjectKeyIdentifier   = hash
authorityKeyIdentifier   = keyid:always,issuer:always
 
[ v3_req ]
basicConstraints   = CA:FALSE
subjectKeyIdentifier   = hash


Note that I made the certificate life 10 years. The rest is standard stuff.


Wednesday, November 26, 2014

How to Install Any Version of Node and NPM on OSX

Summary

Have the need to downgrade your Node and NPM installations?

If so, here's how I downgraded node from v0.10.33 to v0.10.26 and npm from 2.1.9 to 1.3.6.

Run these commands


sudo rm -rf /usr/local/lib/node_modules
echo prefix=~/.node >> ~/.npmrc
brew uninstall node
cd /usr/local
brew versions node|grep 0.10.26   # << this shows the git commit id (bae051d)
git checkout 0901e77 Library/Formula/node.rb
brew unlink node
brew install node
curl -L https://www.npmjs.org/install.sh|pbcopy
# This is when I created the install-npm-1.3.6.sh file
chmod +x install-npm-1.3.6.sh
./install-npm-1.3.6.sh
ln -s `which npm` $HOME/.node/bin/npm


Created install-npm-1.3.6.sh File

Replace "latest" text with "1.3.6"

#!/bin/sh

# A word about this shell script:
#
# It must work everywhere, including on systems that lack
# a /bin/bash, map 'sh' to ksh, ksh97, bash, ash, or zsh,
# and potentially have either a posix shell or bourne
# shell living at /bin/sh.
#
# See this helpful document on writing portable shell scripts:
# http://www.gnu.org/s/hello/manual/autoconf/Portable-Shell.html
#
# The only shell it won't ever work on is cmd.exe.

if [ "x$0" = "xsh" ]; then
  # run as curl | sh
  # on some systems, you can just do cat>npm-install.sh
  # which is a bit cuter.  But on others, &1 is already closed,
  # so catting to another script file won't do anything.
  curl -s https://www.npmjs.org/install.sh > npm-install-$$.sh
  sh npm-install-$$.sh
  ret=$?
  rm npm-install-$$.sh
  exit $ret
fi

# See what "npm_config_*" things there are in the env,
# and make them permanent.
# If this fails, it's not such a big deal.
configures="`env | grep 'npm_config_' | sed -e 's|^npm_config_||g'`"

npm_config_loglevel="error"
if [ "x$npm_debug" = "x" ]; then
  (exit 0)
else
  echo "Running in debug mode."
  echo "Note that this requires bash or zsh."
  set -o xtrace
  set -o pipefail
  npm_config_loglevel="verbose"
fi
export npm_config_loglevel

# make sure that node exists
node=`which node 2>&1`
ret=$?
if [ $ret -eq 0 ] && [ -x "$node" ]; then
  (exit 0)
else
  echo "npm cannot be installed without node.js." >&2
  echo "Install node first, and then try again." >&2
  echo "" >&2
  echo "Maybe node is installed, but not in the PATH?" >&2
  echo "Note that running as sudo can change envs." >&2
  echo ""
  echo "PATH=$PATH" >&2
  exit $ret
fi

# set the temp dir
TMP="${TMPDIR}"
if [ "x$TMP" = "x" ]; then
  TMP="/tmp"
fi
TMP="${TMP}/npm.$$"
rm -rf "$TMP" || true
mkdir "$TMP"
if [ $? -ne 0 ]; then
  echo "failed to mkdir $TMP" >&2
  exit 1
fi

BACK="$PWD"

ret=0
tar="${TAR}"
if [ -z "$tar" ]; then
  tar="${npm_config_tar}"
fi
if [ -z "$tar" ]; then
  tar=`which tar 2>&1`
  ret=$?
fi

if [ $ret -eq 0 ] && [ -x "$tar" ]; then
  echo "tar=$tar"
  echo "version:"
  $tar --version
  ret=$?
fi

if [ $ret -eq 0 ]; then
  (exit 0)
else
  echo "No suitable tar program found."
  exit 1
fi



# Try to find a suitable make
# If the MAKE environment var is set, use that.
# otherwise, try to find gmake, and then make.
# If no make is found, then just execute the necessary commands.

# XXX For some reason, make is building all the docs every time.  This
# is an annoying source of bugs. Figure out why this happens.
MAKE=NOMAKE

if [ "x$MAKE" = "x" ]; then
  make=`which gmake 2>&1`
  if [ $? -eq 0 ] && [ -x "$make" ]; then
    (exit 0)
  else
    make=`which make 2>&1`
    if [ $? -eq 0 ] && [ -x "$make" ]; then
      (exit 0)
    else
      make=NOMAKE
    fi
  fi
else
  make="$MAKE"
fi

if [ -x "$make" ]; then
  (exit 0)
else
  # echo "Installing without make. This may fail." >&2
  make=NOMAKE
fi

# If there's no bash, then don't even try to clean
if [ -x "/bin/bash" ]; then
  (exit 0)
else
  clean="no"
fi

node_version=`"$node" --version 2>&1`
ret=$?
if [ $ret -ne 0 ]; then
  echo "You need node to run this program." >&2
  echo "node --version reports: $node_version" >&2
  echo "with exit code = $ret" >&2
  echo "Please install node before continuing." >&2
  exit $ret
fi

t="${npm_install}"
if [ -z "$t" ]; then
  # switch based on node version.
  # note that we can only use strict sh-compatible patterns here.
  case $node_version in
    0.[01234567].* | v0.[01234567].*)
      echo "You are using an outdated and unsupported version of" >&2
      echo "node ($node_version).  Please update node and try again." >&2
      exit 99
      ;;
    *)
      echo "install npm@1.3.6"
      t="1.3.6"
      ;;
  esac
fi

# need to echo "" after, because Posix sed doesn't treat EOF
# as an implied end of line.
url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
     | sed -e 's/^.*tarball":"//' \
     | sed -e 's/".*$//'`

ret=$?
if [ "x$url" = "x" ]; then
  ret=125
  # try without the -e arg to sed.
  url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
       | sed 's/^.*tarball":"//' \
       | sed 's/".*$//'`
  ret=$?
  if [ "x$url" = "x" ]; then
    ret=125
  fi
fi
if [ $ret -ne 0 ]; then
  echo "Failed to get tarball url for npm/$t" >&2
  exit $ret
fi


echo "fetching: $url" >&2

cd "$TMP" \
  && curl -SsL "$url" \
     | $tar -xzf - \
  && cd "$TMP"/* \
  && (ver=`"$node" bin/read-package-json.js package.json version`
      isnpm10=0
      if [ $ret -eq 0 ]; then
        if [ -d node_modules ]; then
          if "$node" node_modules/semver/bin/semver -v "$ver" -r "1"
          then
            isnpm10=1
          fi
        else
          if "$node" bin/semver -v "$ver" -r ">=1.0"; then
            isnpm10=1
          fi
        fi
      fi

      ret=0
      if [ $isnpm10 -eq 1 ] && [ -f "scripts/clean-old.sh" ]; then
        if [ "x$skipclean" = "x" ]; then
          (exit 0)
        else
          clean=no
        fi
        if [ "x$clean" = "xno" ] \
            || [ "x$clean" = "xn" ]; then
          echo "Skipping 0.x cruft clean" >&2
          ret=0
        elif [ "x$clean" = "xy" ] || [ "x$clean" = "xyes" ]; then
          NODE="$node" /bin/bash "scripts/clean-old.sh" "-y"
          ret=$?
        else
          NODE="$node" /bin/bash "scripts/clean-old.sh"
          ret=$?
        fi
      fi
      if [ $ret -ne 0 ]; then
        echo "Aborted 0.x cleanup.  Exiting." >&2
        exit $ret
      fi) \
  && (if [ "x$configures" = "x" ]; then
        (exit 0)
      else
        echo "./configure $configures"
        echo "$configures" > npmrc
      fi) \
  && (if [ "$make" = "NOMAKE" ]; then
        (exit 0)
      elif "$make" uninstall install; then
        (exit 0)
      else
        make="NOMAKE"
      fi
      if [ "$make" = "NOMAKE" ]; then
        "$node" cli.js rm npm -gf
        "$node" cli.js install -gf
      fi) \
  && cd "$BACK" \
  && rm -rf "$TMP" \
  && echo "It worked"

ret=$?
if [ $ret -ne 0 ]; then
  echo "It failed" >&2
fi
exit $ret
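Incidentally, the "latest"-to-"1.3.6" replacement above doesn't have to be done by hand. Here is a sed sketch, demonstrated on a scratch copy at a made-up path so it touches nothing real (sed -i.bak works on both the GNU and BSD sed shipped with Linux and OSX):

```shell
# Demo file standing in for the downloaded install script
printf 'echo "install npm@latest"\nt="latest"\n' > /tmp/install-npm-demo.sh

# Replace every occurrence of "latest" with "1.3.6", keeping a .bak backup
sed -i.bak 's/latest/1.3.6/g' /tmp/install-npm-demo.sh

cat /tmp/install-npm-demo.sh
```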

Check Your Downgraded Versions


$ node -v
v0.10.26
$ npm -v
1.3.6

Notes

To get the correct version of node to checkout you'll need to do something like this:

brew versions node | grep 0.10.26


... and to get brew versions to work you may need to run this:

brew tap homebrew/boneyard


I do not advocate downgrading node and npm.

This was for example purposes.

Reverting Back to Latest Versions

Not sure why anyone would want to remove such a useful command, but for the time being you can use the brew versions command to get the git command needed to download a specific node version (0.10.33 in our case).

$ brew versions node | grep 0.10.33
Warning: brew-versions is unsupported and will be removed soon.
You should use the homebrew-versions tap instead:
  https://github.com/Homebrew/homebrew-versions

0.10.33  git checkout 4b9395f /usr/local/Library/Formula/node.rb


$ brew cleanup
$ brew cleanup --cache
$ cd /usr/local/Library/
$ git checkout 4b9395f /usr/local/Library/Formula/node.rb
$ brew install node


Notes

No other combination of brew commands would upgrade node from 0.10.26 to 0.10.33.

So, none of these had any other effect, other than keeping node at version 0.10.26:

$ brew uninstall node
$ brew upgrade
$ brew cleanup 
$ brew cleanup --cache
$ brew upgrade node


Verify Versions

Verify that both node and npm are back to the latest versions:

~ $ node -v
v0.10.33
~ $ npm -v
2.1.9



For All Git Repos, Ignore All of This

Summary

Are there files that you are always having to add to the .gitignore file for all of your git repositories?

If so, use this technique to create a global ignore file that is automatically applied to all of your git repos on your local workstation.

For all git repos, ignore all of this

First, create the file to contain the global git ignore patterns:

~/.gitignore_global


# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so

# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip

# Logs and databases #
######################
*.log
*.sql
*.sqlite

# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
Icon?
ehthumbs.db
Thumbs.db
.idea
.metadata

/lex_ignore

Then, configure git to use the gitignore_global ignore patterns.

git config --global core.excludesfile ~/.gitignore_global
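To verify the setup works, git check-ignore is handy. The sketch below does the whole round trip in a throwaway HOME and repo (at made-up /tmp paths), so your real configuration is untouched:

```shell
# Throwaway HOME and repo so nothing real is modified
demo=/tmp/gitignore-demo
rm -rf "$demo" && mkdir -p "$demo/home" "$demo/repo"
export HOME="$demo/home"

# Create the global ignore file and point git at it
printf '*.log\n.DS_Store\n' > "$HOME/.gitignore_global"
git config --global core.excludesfile "$HOME/.gitignore_global"

# In a fresh repo, only the globally ignored file should match
cd "$demo/repo" && git init -q
touch app.log notes.txt
git check-ignore app.log && echo "app.log is ignored"
git check-ignore notes.txt || echo "notes.txt is not ignored"
```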

