If you have a web project that has both JavaScript and Go, then you may need to create a pre-commit hook to validate both .js and .go source files.
Here's one way to do it using two script files and a git pre-commit hook.
Create pre-commit scripts for JS and Go
js-pre-commit-git-hook
#!/bin/bash
# Lint staged .js files with jshint before allowing the commit.
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.js$')
if [ "$files" = "" ]; then
    exit 0
fi

LINTER=jshint
pass=true

echo -e "\nValidating JavaScript:\n"

for file in ${files}; do
    result=$($LINTER ${file})
    if [ "$?" = "0" ]; then
        echo -e "\t\033[32m$LINTER Passed: ${file}\033[0m"
    else
        echo -e "\t\033[31m$LINTER Failed: ${file}\033[0m"
        pass=false
    fi
done

echo -e "\nJavaScript validation complete\n"

if ! $pass; then
    echo -e "\033[41mCOMMIT FAILED:\033[0m Your commit contains files that should pass $LINTER but do not. Please fix the $LINTER errors and try again.\n"
    exit 1
else
    echo -e "\033[42mCOMMIT SUCCEEDED\033[0m\n"
fi
go-pre-commit-git-hook
#!/bin/sh
# Copyright 2012 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
# git gofmt pre-commit hook
#
# To use, store as .git/hooks/pre-commit inside your repository and make sure
# it has execute permissions.
#
# This script does not handle file names that contain spaces.
gofiles=$(git diff --cached --name-only --diff-filter=ACM | grep '.go$')
[ -z "$gofiles" ] && exit 0
unformatted=$(gofmt -l $gofiles)
[ -z "$unformatted" ] && exit 0
# Some files are not gofmt'd. Print message and fail.
echo >&2 "Go files must be formatted with gofmt. Please run:"
for fn in $unformatted; do
echo >&2 " gofmt -w $PWD/$fn"
done
exit 1
.git/hooks/pre-commit
#!/bin/sh
js-pre-commit-git-hook && go-pre-commit-git-hook
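To wire everything up, make the scripts executable (a sketch; ~/bin is just an assumed location that is already on your $PATH):

chmod +x ~/bin/js-pre-commit-git-hook ~/bin/go-pre-commit-git-hook
# run this one from the root of your repository
chmod +x .git/hooks/pre-commit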
Notes
Put the scripts in your $PATH and make them executable.
Create a .jshintrc file with your desired jshint configuration settings.
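For example, a minimal .jshintrc; the specific options below are only an illustration of settings you might choose:

cat > .jshintrc <<'EOF'
{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "unused": true,
  "node": true,
  "browser": true
}
EOF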
Format a file (or directory)
In case you've already committed files before setting up this git hook, you can gofmt individual files (or directories) like this:
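For example (the paths are placeholders):

# format a single file in place
gofmt -w path/to/file.go
# format a directory (gofmt recurses into it)
gofmt -w ./path/to/package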
When Sony Pictures employees got into the office on Monday, November 24, they discovered that their corporate network had been hacked.
The attackers took terabytes of private data, deleted the original copies from Sony computers, and left messages threatening to release the information if Sony didn't comply with the attackers' demands.
My First Guess
When I heard that Sony got hacked, my first thought was, "I bet those guys run Windows."
That suspicion has been confirmed.
Possible Attack Vector
The attackers researched Sony's IT infrastructure and knew their victim's vulnerabilities.
Cyber Attack
The attackers could have used email or Microsoft IE browser vulnerabilities to gain initial access to a regular employee's workstation.
Gain Admin Access
Knowing that Sony ran Microsoft Windows, the attackers could have used a known vulnerability in Microsoft's Kerberos implementation to forge a Privilege Attribute Certificate that the Kerberos Key Distribution Center would accept as valid, elevating their privileges to those of any other account on the domain.
Destructive Malware
Once the attackers had administrative keys to a Microsoft-based network with unencrypted file systems, they were able to extract data and expose corporate secrets ***, then follow up by destroying the files with the destructive BKDR_WIPALL malware.
*** Corporate Secrets Exposed
Men are paid more than women. Sony's 17 biggest-earning executives are predominantly white men. According to a spreadsheet called "Comp Roster by Supervisory Organization 2014-10-21," Amy Pascal, the co-chair of Sony Pictures Entertainment, is the only woman earning $1 million or more at the studio.
A series of emails between Pascal and movie producer Scott Rudin showed an ugly side to the beautiful business of Hollywood. Rudin called Angelina Jolie a "minimally talented spoiled brat" in an email exchange with Pascal. Pascal and Rudin also made racially charged jokes about President Obama's taste in movies. As you would expect, Pascal and Rudin apologized, saying they are so sorry for what they said.
Sony's Employee Workstations and Network Run on Microsoft Windows
Private data was not encrypted
Woefully Inadequate Network Security Monitoring
Lawsuit Filed
Two former employees of Sony Pictures filed a lawsuit against Sony alleging it didn't do enough to safeguard their personal information and prevent its loss in that cyberattack.
The lawsuit, filed Monday, December 15, 2014, in U.S. District Court for the Central District of California, asks the court to award monetary damages and grant class-action status. Thousands of Sony employees, past and present, could join the suit.
The lawsuit alleges, "Sony failed to secure its computer system, servers and databases, despite weaknesses that it has known about for years, because Sony made a business decision to accept the risk of losses associated with being hacked."
How can Sony defend itself against solid claims of negligence?
IT Security Laws for Corporations
Sarbanes-Oxley
Sarbanes-Oxley, or 'Sarbox' as it is sometimes called, was enacted in 2002 to help prevent Enron-like episodes from happening again. It requires companies to be accountable for identifying and mitigating risks to their financial reporting, and this includes information security.
Sarbanes-Oxley details a "chain of accountability" where senior executives and board members must sign off on the accuracy of financial reporting, then the managers that report to them must be darned sure that their information is accurate. That applies to the managers who report to them and the people who report to them and so on. While the average employee of a public company will most likely not go to jail over a Sarbanes-Oxley violation (C-level executives are not so fortunate) each employee does have an important role in maintaining the security and integrity of corporate data.
When Sarbanes-Oxley mentions "controls" it is talking about the policies, procedures, and guidelines that protect information in your company, with a direct implication that adequate IT security must be enforced.
HIPAA Security Rule
This massive cyberattack constitutes unauthorized access or acquisition of personal information subject to most state and federal data breach notification requirements, including the HIPAA Breach Notification Rule. The HIPAA Security Rule contains a number of provisions that require covered entities and business associates to maintain procedures to monitor system activity for potential security incidents and investigate any such potential security incidents.
The HIPAA Security Rule requires covered entities and business associates to “regularly review records of information system activity, such as audit logs, access records, and security incident tracking reports.” 45 C.F.R. § 164.308(a)(1)(ii)(D). HHS guidance materials further state that this specification “should also promote continual awareness of any information system activity that could suggest a security incident.”
See CMS, HIPAA Security Series Vol. 2 Security Standards: Administrative Safeguards
The HIPAA Security Rule requires covered entities and business associates to create and maintain appropriate records of system activity. See 45 C.F.R. 164.312(b). However, covered entities and business associates have significant discretion to create and maintain activity records based upon the formal assessment of their security risks.
Breach Notification
Breach notice laws typically define "personal information" as "a user name or email address, in combination with a password or security question and answer that would permit access to an online account."
Implications
IT security should be taken seriously
As a C-level executive, you should know the laws pertaining to safeguarding your company's and your employees' data.
As a C-level executive, you are liable for lax IT security enforcement at your company.
Lessons Learned
If you are a C-level executive and your company runs Windows, change that or get another job.
Hire a professional to thoroughly evaluate your current security policies.
Don't ask for trouble, but if you do, don't run Windows.
SANS Institute Cyber Attack Response Plan
For many organizations today, the question is no longer if they will fall victim to a targeted attack, but when. In such an event, how an organization responds will determine whether it becomes a serious event or if it stays a mere annoyance.
This requires something of a change of mindset for information security professionals. Previous techniques and many best practices operate under the premise that an attacker can be kept out.
However, that’s no longer the case today. The malware used in targeted attacks is frequently not detected (because it’s been custom-made for specific organizations). A well-crafted social engineering attack can look like a normal business email or engaging click bait.
In short, an attacker with sufficient resources will be able to find their way inside their target, regardless of what the defender does. The defender can raise the price of getting in, but not prevent it entirely.
The SANS Institute provides some guidelines to organizations on how they should react to incidents. Broadly speaking, however, the response can be divided into four steps:
Prepare
This involves preparing for a targeted attack even before the attack actually takes place. Security professionals need to plan their response to a targeted attack on their network. System administrators will routinely have plans, for example, for downtime-related events such as a data center going offline.
Similarly, it’s important to be aware of the normal, day-to-day threats that an organization faces. Information security professionals must not only deal with these attacks as they happen, but should understand what their “normal” problems are so that abnormal threats like targeted attacks can be quickly spotted. Threat intelligence and analysis is valuable in this step, in order to guide security professionals into understanding what the current situation is.
Security professionals must also plan to acquire the right skills to effectively deal with targeted attacks. One of the most important skills to learn is digital forensics, which allows for the proper acquisition and analysis of information from compromised devices.
Many of these techniques are quite foreign to normal IT day-to-day work, but learning these techniques will help organizations gain information and be better prepared to deal with any attack in progress.
Respond
Upon identifying a targeted attack in progress, the next step is to respond decisively. Responding to a targeted attack has several components: containing the threat, removing it, and determining the scope of damage. The first step is to immediately isolate or contain the scope of any threat. Steps that can be performed here include isolating infected machines or taking compromised services offline. Ultimately, the goal is to prevent an attack from gaining further ground.
To determine what threats are in place, it is useful to work hand in hand with a security vendor that has knowledge of commonly used targeted attack tools and grayware in order to locate the threats within an organization. Similarly, continuous monitoring of existing network activity can help determine the scale and scope of any existing attack.
Restore
Just as important as responding to an attack is restoring an organization to normal operations. While some disruption is a necessary part of responding to a targeted attack, in the long run an organization has to "return to normal" and resume business as usual.
“Restoring” an organization to normal is not only about technical considerations. If necessary, an organization needs to reach out to partners, stakeholders, and customers to clearly communicate the scope of a targeted attack’s damage, and any steps being taken to reduce the damage. In many cases, goodwill and trust are big casualties of a targeted attack, and these must be addressed as well.
Learn
Once an attack is over, organizations need to figure out what can be learned from it. Every attack offers lessons for defenders: what worked? What could we have done better? It may turn out that some of the assumptions and information that went into planning for security incidents were incorrect or incomplete.
However, it is also important to not overreact to any single incident. Overreacting can be just as bad as under-reacting: it can impose burdens on the organization that have marginal gains in security, if any. Decisions must be made based on evidence and careful analysis, not on the pressure of the moment.
In today's world of frequent targeted attacks, when breaches are a matter of when and not if, a carefully crafted strategy to respond to targeted attacks must be part and parcel of the larger defense strategy. This can be the difference between a minor nuisance and a major breach that could spell the demise of an organization.
For original reference to this section see: http://blog.trendmicro.com/trendlabs-security-intelligence/four-steps-to-an-effective-targeted-attack-response/
The schedule's tight on the cluster tonight.
So I parallelized my code.
All those threads and continuations.
My head's going to explode.
And all that boilerplate.
That FactoryBuilderAdapterDelegateImpl
Seems unjustified
Give me something simple

Don't write in Scheme
Don't write in C
No more pointers that I forgot to free()
Java's verbose
Python's too slow
It's time you know

Write in Go
Write in Go
No inheritance anymore
Write in Go
Write in Go
There's no do or while, just for

I don't care
what your linters say
I've got tools for that
The code never bothered me anyway

dodododo diudiudiu...

It's funny how some features
Make every change seem small
And the errors that once slowed me
Don't get me down at all

It's time to see what Go can do
Cause it seems too good to be true
No long compile times for me
I'm free

Write in Go
Write in Go
Kiss your pointer math goodbye
Write in Go
Write in Go
Time to give GC a try

I don't care
if my structures stay
on the heap or stack

donononododono...

My program spawns its goroutines without a sound
Control is spiraling through buffered channels all around
I don't remember why I ever once subclassed
I'm never going back
My tests all build and pass

Write in Go
Write in Go
You won't use Eclipse anymore
Write in Go
Write in Go
Who cares what Boost is for?

I don't care
what the tech leads say

oo wow oo...

I'll rewrite it all

nonononono...

Writing code never bothered me, anyway

Sung by ScaleAbility, an acapella group at Google.
All the default languages are stored in TextMate.app/Contents/SharedSupport/Bundles in the Syntaxes folder of the bundle.
This is in the binary plist format, so you'll need to convert it first to a readable form. Let's say we want HTML to be the new default language; we would do the following (from terminal):
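(A sketch of that step; the bundle and file names below assume a stock TextMate 1.x install, so adjust them to the language you actually want.)

cd /Applications/TextMate.app/Contents/SharedSupport/Bundles/HTML.tmbundle/Syntaxes
plutil -convert xml1 -o /tmp/syntax.xml HTML.plist
grep -A 1 '<key>uuid</key>' /tmp/syntax.xml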
Here “DDEEA3ED-6B1C-11D9-8B10-000D93589AF6” is the UUID for Bash Shell Script. Now we need to tell TM to use that as default by altering its defaults database.
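A sketch of that last step, assuming the preference key is OakDefaultLanguage (the hidden setting older TextMate releases used; verify against your version) and using the UUID quoted above:

defaults write com.macromates.textmate OakDefaultLanguage "DDEEA3ED-6B1C-11D9-8B10-000D93589AF6"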
Conditional expressions are used by the [[ compound command and the test and [ builtin commands.
Expressions may be unary or binary. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well. If the file argument to one of the primaries is of the form /dev/fd/N, then file descriptor N is checked. If the file argument to one of the primaries is one of /dev/stdin, /dev/stdout, or /dev/stderr, file descriptor 0, 1, or 2, respectively, is checked.
When used with ‘[[’, the ‘<’ and ‘>’ operators sort lexicographically using the current locale. The test command uses ASCII ordering.
Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself.
Bash Conditional Expressions
-a file
True if file exists.
-b file
True if file exists and is a block special file.
-c file
True if file exists and is a character special file.
-d file
True if file exists and is a directory.
-e file
True if file exists.
-f file
True if file exists and is a regular file.
-g file
True if file exists and its set-group-id bit is set.
-h file
True if file exists and is a symbolic link.
-k file
True if file exists and its "sticky" bit is set.
-p file
True if file exists and is a named pipe (FIFO).
-r file
True if file exists and is readable.
-s file
True if file exists and has a size greater than zero.
-t fd
True if file descriptor fd is open and refers to a terminal.
-u file
True if file exists and its set-user-id bit is set.
-w file
True if file exists and is writable.
-x file
True if file exists and is executable.
-G file
True if file exists and is owned by the effective group id.
-L file
True if file exists and is a symbolic link.
-N file
True if file exists and has been modified since it was last read.
-O file
True if file exists and is owned by the effective user id.
-S file
True if file exists and is a socket.
file1 -ef file2
True if file1 and file2 refer to the same device and inode numbers.
file1 -nt file2
True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not.
file1 -ot file2
True if file1 is older than file2, or if file2 exists and file1 does not.
-o optname
True if the shell option optname is enabled. The list of options appears in the description of the -o option to the set builtin (see The Set Builtin).
-v varname
True if the shell variable varname is set (has been assigned a value).
-z string
True if the length of string is zero.
-n string
True if the length of string is non-zero.
string1 == string2
True if the strings are equal. ‘=’ should be used with the test command for POSIX conformance.
string1 != string2
True if the strings are not equal.
string1 < string2
True if string1 sorts before string2 lexicographically.
string1 > string2
True if string1 sorts after string2 lexicographically.
arg1 OP arg2
OP is one of ‘-eq’, ‘-ne’, ‘-lt’, ‘-le’, ‘-gt’, or ‘-ge’. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. Arg1 and arg2 may be positive or negative integers.
Examples
FNAME='/etc/hosts'
if [ -e "$FNAME" ]; then
    echo "$FNAME exists."
else
    echo "$FNAME does not exist."
fi
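A few more checks built from the primaries above; the file names and values are purely illustrative:

FILE1='/etc/hosts'
FILE2='/tmp/hosts.bak'
COUNT=3

# string length tests
if [ -n "$FILE1" ] && [ -z "$UNSET_VAR" ]; then
    echo "FILE1 is non-empty and UNSET_VAR is empty"
fi

# file comparison: newer-than
if [ "$FILE1" -nt "$FILE2" ]; then
    echo "$FILE1 is newer than $FILE2 (or $FILE2 does not exist)"
fi

# arithmetic comparison
if [ "$COUNT" -ge 1 ] && [ "$COUNT" -le 10 ]; then
    echo "COUNT is between 1 and 10"
fi

# lexicographic comparison with [[ (sorts using the current locale)
if [[ 'apple' < 'banana' ]]; then
    echo "apple sorts before banana"
fi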
Have the need to downgrade your Node and NPM installations?
If so, here's how I downgraded node from v0.10.33 to v0.10.26 and npm from 2.1.9 to 1.3.6.
Run these commands
sudo rm -rf /usr/local/lib/node_modules
echo prefix=~/.node >> ~/.npmrc
brew uninstall node
cd /usr/local
brew versions node|grep 0.10.26 # << this shows the git commit id (bae051d)
git checkout 0901e77 Library/Formula/node.rb
brew unlink node
brew install node
curl -L https://www.npmjs.org/install.sh|pbcopy
# This is when I created the install-npm-1.3.6.sh file
chmod +x install-npm-1.3.6.sh
./install-npm-1.3.6.sh
ln -s `which npm` $HOME/.node/bin/npm
Created install-npm-1.3.6.sh File
Replace "latest" text with "1.3.6"
#!/bin/sh
# A word about this shell script:
#
# It must work everywhere, including on systems that lack
# a /bin/bash, map 'sh' to ksh, ksh97, bash, ash, or zsh,
# and potentially have either a posix shell or bourne
# shell living at /bin/sh.
#
# See this helpful document on writing portable shell scripts:
# http://www.gnu.org/s/hello/manual/autoconf/Portable-Shell.html
#
# The only shell it won't ever work on is cmd.exe.
if [ "x$0" = "xsh" ]; then
# run as curl | sh
# on some systems, you can just do cat>npm-install.sh
# which is a bit cuter. But on others, &1 is already closed,
# so catting to another script file won't do anything.
curl -s https://www.npmjs.org/install.sh > npm-install-$$.sh
sh npm-install-$$.sh
ret=$?
rm npm-install-$$.sh
exit $ret
fi
# See what "npm_config_*" things there are in the env,
# and make them permanent.
# If this fails, it's not such a big deal.
configures="`env | grep 'npm_config_' | sed -e 's|^npm_config_||g'`"
npm_config_loglevel="error"
if [ "x$npm_debug" = "x" ]; then
(exit 0)
else
echo "Running in debug mode."
echo "Note that this requires bash or zsh."
set -o xtrace
set -o pipefail
npm_config_loglevel="verbose"
fi
export npm_config_loglevel
# make sure that node exists
node=`which node 2>&1`
ret=$?
if [ $ret -eq 0 ] && [ -x "$node" ]; then
(exit 0)
else
echo "npm cannot be installed without node.js." >&2
echo "Install node first, and then try again." >&2
echo "" >&2
echo "Maybe node is installed, but not in the PATH?" >&2
echo "Note that running as sudo can change envs." >&2
echo ""
echo "PATH=$PATH" >&2
exit $ret
fi
# set the temp dir
TMP="${TMPDIR}"
if [ "x$TMP" = "x" ]; then
TMP="/tmp"
fi
TMP="${TMP}/npm.$$"
rm -rf "$TMP" || true
mkdir "$TMP"
if [ $? -ne 0 ]; then
echo "failed to mkdir $TMP" >&2
exit 1
fi
BACK="$PWD"
ret=0
tar="${TAR}"
if [ -z "$tar" ]; then
tar="${npm_config_tar}"
fi
if [ -z "$tar" ]; then
tar=`which tar 2>&1`
ret=$?
fi
if [ $ret -eq 0 ] && [ -x "$tar" ]; then
echo "tar=$tar"
echo "version:"
$tar --version
ret=$?
fi
if [ $ret -eq 0 ]; then
(exit 0)
else
echo "No suitable tar program found."
exit 1
fi
# Try to find a suitable make
# If the MAKE environment var is set, use that.
# otherwise, try to find gmake, and then make.
# If no make is found, then just execute the necessary commands.
# XXX For some reason, make is building all the docs every time. This
# is an annoying source of bugs. Figure out why this happens.
MAKE=NOMAKE
if [ "x$MAKE" = "x" ]; then
make=`which gmake 2>&1`
if [ $? -eq 0 ] && [ -x "$make" ]; then
(exit 0)
else
make=`which make 2>&1`
if [ $? -eq 0 ] && [ -x "$make" ]; then
(exit 0)
else
make=NOMAKE
fi
fi
else
make="$MAKE"
fi
if [ -x "$make" ]; then
(exit 0)
else
# echo "Installing without make. This may fail." >&2
make=NOMAKE
fi
# If there's no bash, then don't even try to clean
if [ -x "/bin/bash" ]; then
(exit 0)
else
clean="no"
fi
node_version=`"$node" --version 2>&1`
ret=$?
if [ $ret -ne 0 ]; then
echo "You need node to run this program." >&2
echo "node --version reports: $node_version" >&2
echo "with exit code = $ret" >&2
echo "Please install node before continuing." >&2
exit $ret
fi
t="${npm_install}"
if [ -z "$t" ]; then
# switch based on node version.
# note that we can only use strict sh-compatible patterns here.
case $node_version in
0.[01234567].* | v0.[01234567].*)
echo "You are using an outdated and unsupported version of" >&2
echo "node ($node_version). Please update node and try again." >&2
exit 99
;;
*)
echo "install npm@1.3.6"
t="1.3.6"
;;
esac
fi
# need to echo "" after, because Posix sed doesn't treat EOF
# as an implied end of line.
url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
| sed -e 's/^.*tarball":"//' \
| sed -e 's/".*$//'`
ret=$?
if [ "x$url" = "x" ]; then
ret=125
# try without the -e arg to sed.
url=`(curl -SsL https://registry.npmjs.org/npm/$t; echo "") \
| sed 's/^.*tarball":"//' \
| sed 's/".*$//'`
ret=$?
if [ "x$url" = "x" ]; then
ret=125
fi
fi
if [ $ret -ne 0 ]; then
echo "Failed to get tarball url for npm/$t" >&2
exit $ret
fi
echo "fetching: $url" >&2
cd "$TMP" \
&& curl -SsL "$url" \
| $tar -xzf - \
&& cd "$TMP"/* \
&& (ver=`"$node" bin/read-package-json.js package.json version`
isnpm10=0
if [ $ret -eq 0 ]; then
if [ -d node_modules ]; then
if "$node" node_modules/semver/bin/semver -v "$ver" -r "1"
then
isnpm10=1
fi
else
if "$node" bin/semver -v "$ver" -r ">=1.0"; then
isnpm10=1
fi
fi
fi
ret=0
if [ $isnpm10 -eq 1 ] && [ -f "scripts/clean-old.sh" ]; then
if [ "x$skipclean" = "x" ]; then
(exit 0)
else
clean=no
fi
if [ "x$clean" = "xno" ] \
|| [ "x$clean" = "xn" ]; then
echo "Skipping 0.x cruft clean" >&2
ret=0
elif [ "x$clean" = "xy" ] || [ "x$clean" = "xyes" ]; then
NODE="$node" /bin/bash "scripts/clean-old.sh" "-y"
ret=$?
else
NODE="$node" /bin/bash "scripts/clean-old.sh" &2
exit $ret
fi) \
&& (if [ "x$configures" = "x" ]; then
(exit 0)
else
echo "./configure $configures"
echo "$configures" > npmrc
fi) \
&& (if [ "$make" = "NOMAKE" ]; then
(exit 0)
elif "$make" uninstall install; then
(exit 0)
else
make="NOMAKE"
fi
if [ "$make" = "NOMAKE" ]; then
"$node" cli.js rm npm -gf
"$node" cli.js install -gf
fi) \
&& cd "$BACK" \
&& rm -rf "$TMP" \
&& echo "It worked"
ret=$?
if [ $ret -ne 0 ]; then
echo "It failed" >&2
fi
exit $ret
Check Your Downgraded Versions
$ node -v
v0.10.26
$ npm -v
1.3.6
Notes
To get the correct version of node to checkout you'll need to do something like this:
brew versions node | grep 0.10.26
... and to get brew versions to work you may need to run this:
brew tap homebrew/boneyard
I do not advocate downgrading node and npm.
This was for example purposes.
Reverting Back to Latest Versions
Not sure why anyone would want to remove such a useful command, but for the time being you can use the brew versions command to get the git command used to download a specific node version (0.10.33 in our case).
$ brew versions node | grep 0.10.33
Warning: brew-versions is unsupported and will be removed soon.
You should use the homebrew-versions tap instead:
https://github.com/Homebrew/homebrew-versions
0.10.33 git checkout 4b9395f /usr/local/Library/Formula/node.rb
$ brew cleanup
$ brew cleanup --cache
$ cd /usr/local/Library/
$ git checkout 4b9395f /usr/local/Library/Formula/node.rb
$ brew install node
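After the reinstall, a quick check should confirm the newer version (assuming the 4b9395f formula builds 0.10.33 as shown above):

$ node -v    # should now report v0.10.33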
Notes
No other combination of brew commands would upgrade node from 0.10.26 to 0.10.33.
So, none of the other brew commands I tried had any effect other than keeping node at version 0.10.26.
Here's a bash script to Install or Update Go on OSX.
update-go
#!/bin/bash
function goversion {
go version|cut -d" " -f3|while read n; do echo "${n:2}"; done
}
set -x
brew update
brew doctor
{ set +x; } &>/dev/null
echo "Current Go version: `goversion`"
echo ""
echo "Need to do what brew doctor suggested? (It's safe to rerun this command.) CTRL+C to stop --or-- Enter to continue..."
read x
if [ "`goversion`" == "" ]; then
brew install go
brew link --overwrite go
else
brew upgrade go
brew link go
fi
echo "New Go version: `goversion`"
# Following currently enables intellij Go plugin to use new Go version
function reset-idea-go-plugin {
if [ "`which idea`" != "" ]; then
printf "intellij is installed. Now, updating its go plugin..."
# Note: version numbers may change
rm /usr/local/Cellar/go/1.2.2
rm /usr/local/Cellar/go/1.3
ln -s /usr/local/Cellar/go/1.3.3 /usr/local/Cellar/go/1.2.2
echo "Done."
fi
}
reset-idea-go-plugin 2>/dev/null
Output
~ $ update-go
+ brew update
Already up-to-date.
+ brew doctor
Your system is ready to brew.
Current Go version: 1.3.3
Need to do what brew doctor suggested? (It's safe to rerun this command.) CTRL+C to stop --or-- Enter to continue...
Error: go-1.3.3 already installed
Warning: Already linked: /usr/local/Cellar/go/1.3.3
To relink: brew unlink go && brew link go
New Go version: 1.3.3
intellij is installed. Now, updating its go plugin...Done.
~ $
Notes
It's safe to rerun this script.
Assumes you're on a mac.
Assumes you have homebrew installed.
If you use the Go plugin in IntelliJ, this script will direct it to use the newly installed version of Golang.
First, make sure you are running a recent version of IntelliJ.
I'm running IntelliJ Community Edition v14.0.2
Next, make sure the sdk you've configured in IntelliJ is consistent with the one you use on your console.
This is what happens when your IntelliJ JDK is version 1.7 but your java in your console is version 1.6...
~/dev/java $ java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/company/Main : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
~/dev/java $ java -version
java version "1.6.0_65"
Java(TM) SE Runtime Environment (build 1.6.0_65-b14-466.1-11M4716)
Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-466.1, mixed mode)
~/dev/java $ find /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home -name java
/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/bin/java
/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre/bin/java
~/dev/java $ /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre/bin/java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar
~/dev/java $ /Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre/bin/java -jar /Users/lex/dev/java/ijtest/out/artifacts/ijtest_jar/ijtest.jar
hello, world!
Profit!
Here we run the jar, from the console, that we just created using the steps outlined above...
~/dev/java/ijtest/out/artifacts/ijtest_jar $ ls -lrt
total 24
-rw-r--r-- 1 lex staff 968 Jan 5 22:56 ijtest.jar
drwx------ 4 lex staff 136 Jan 5 22:57 ijtest
-rw-r--r--@ 1 lex staff 6148 Jan 5 22:57 .DS_Store
~/dev/java/ijtest/out/artifacts/ijtest_jar $ jar xf ./ijtest.jar
~/dev/java/ijtest/out/artifacts/ijtest_jar $ ls -lrt
total 24
drwxr-xr-x 3 lex staff 102 Jan 5 22:56 com
drwxr-xr-x 3 lex staff 102 Jan 5 22:56 META-INF
-rw-r--r-- 1 lex staff 968 Jan 5 22:56 ijtest.jar
drwx------ 4 lex staff 136 Jan 5 22:57 ijtest
-rw-r--r--@ 1 lex staff 6148 Jan 5 22:57 .DS_Store
~/dev/java/ijtest/out/artifacts/ijtest_jar $ find .
.
./.DS_Store
./com
./com/company
./com/company/Main.class
./ijtest
./ijtest/com
./ijtest/com/company
./ijtest/com/company/Main.class
./ijtest/META-INF
./ijtest/META-INF/MANIFEST.MF
./ijtest.jar
./META-INF
./META-INF/MANIFEST.MF
Notice that all files exist in the jar file, including MANIFEST.MF.
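To peek at the manifest without extracting the whole jar, you can also print it directly (run from the same directory as the listing above):

~/dev/java/ijtest/out/artifacts/ijtest_jar $ unzip -p ./ijtest.jar META-INF/MANIFEST.MF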