Git Commits

GIT Tutorial: Part 3

This is Part 3 of the Git tutorial. To see the rest, click here

Git tracks its history using a tree structure behind the scenes. Each entry in the tree is uniquely identified with a SHA hash. When playing with Git you may have seen something like:

A list of commits, each with one parent

What you are looking at is a stack of commits where the SHA hash uniquely identifies each commit. It can be thought of as a unique identifier like a primary key in a database.

Try the following to see some commits in your repository.

git log --oneline

The oldest commit is at the bottom and the newest at the top. What is not obvious is that each child commit has a pointer to its parent(s). In the above scenario each child has one parent, so it looks just like a list. Note: in the diagrams below we are using letters of the alphabet instead of SHA hashes.


B has a pointer to its parent A

A commit can have one or more parents. When the commit has two or more parents it is essentially a merge where we have taken content from A and B and made C.


C has a pointer to its parents, A and B
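You can see a two-parent commit for yourself with a few commands. The following is a self-contained sketch in a throwaway repository (the branch names and messages are illustrative):

```shell
# Throwaway repo (created in a temp dir) showing a merge commit with two parents.
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > file.txt
git add file.txt
git commit -qm "A: initial commit"
git branch -M main                 # normalize the branch name across Git versions

git checkout -q -b feature
echo "feature work" >> file.txt
git commit -qam "B: feature work"

git checkout -q main
git merge --no-ff -q feature -m "C: merge A and B"

git cat-file -p HEAD               # the merge commit C lists two 'parent' lines
```

The `--no-ff` flag forces Git to create a real merge commit here; without it, Git would simply fast-forward the branch pointer and no two-parent commit would exist.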


When we download a Git repository we can think of it as almost two separate items. First there are all the files in your project, and then there is the bookkeeping (kept in the hidden .git directory) that tracks the SHA references and which files they each relate to. Not only does it contain these SHA references, it also contains tags (which are human-readable labels applied to a single SHA reference) and branch pointers, known as refs, that identify the SHA reference at the tip of each branch in the repository. (The reflog is something different: a local log of where your references have recently pointed.)
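You can poke at this bookkeeping directly. Here is a self-contained sketch in a throwaway repository (the file names and tag are illustrative):

```shell
# Throwaway repo illustrating the bookkeeping Git keeps under .git/
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > hello.txt
git add hello.txt
git commit -qm "Initial commit"
git tag v1.0                    # a human-readable label applied to a single SHA

cat .git/HEAD                   # the branch that HEAD currently points at
git show-ref --heads --tags     # the SHA at the tip of each branch, and each tag
git reflog -1                   # the actual reflog: where HEAD has recently been
```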

Visualizing Git

GIT Tutorial: Part 2

This is Part 2 of the Git tutorial. To see the rest, click here

Local and Remote

At a high level we will be discussing two separate instances of Git. First there is the instance on your own machine, which we can refer to as the local instance. Then there is the instance on some remote machine, which we can refer to as the remote instance. There aren’t really any differences between the local and the remote other than being on different machines. The remote instance, for example, could be the place where the team pushes their code to. Ultimately there are usually at least two copies of the repository: your local one, and somewhere that you push your copy to. Think of it like Subversion, where you push your changes to a remote repository.

Side note: quite often in documentation you will see the remote referenced as “origin” (a default name). 

Creating a Git Repository

Let’s start off by creating a brand new Git Repository.

mkdir Demo
cd Demo
git init

This creates our Demo folder and then creates a new Git repository. Simple!

Logical sections within Git

Within an instance, Git is best visualized as 3 separate buckets.


Working Directory

The working directory is what you see in your folders and file system. You make code changes here.

In the Working Directory, you might create a new file called A.txt, add some content and save it.

If we run the command

git status

Git will report the current status. Has anything changed? Are there any files waiting to be added to Git?

[Screenshot: git status output showing A.txt as an untracked file]

You can see in this example, Git isn’t tracking A.txt. It also tells us how to add the file.

Staging Area

Next you would tell Git to track this new file.

git add A.txt

Alternatively you could add all files within the directory by specifying the dot syntax.

git add .

Running Git status again we see:

git status

[Screenshot: git status output showing A.txt staged, ready to be committed]

The whole point of the Staging Area is to create a candidate for committing to the database. Take, for example, fixing a bug. It would be far cleaner and easier to read one commit in the database, e.g. “#424 Bug fixed”, rather than 10 piecemeal commits all contributing to the fix. In Git we can stage and manipulate our changes to make history more logical and easier to read.

Once you are happy with your Staged work you would commit it to the actual repository.


git commit -m "#424 Bug fixed"

[Screenshot: git commit output recording the staged changes]

This would move our staged changes into the Repository, essentially freezing those changes in place.

That’s it! That is a very simple example that touches upon all 3 buckets within Git.


To help visualize the bigger picture, the typical day-to-day workflow that you would use would be something like this:

  • Fetch from the remote (this updates your repository and brings down all information from the remote instance)
  • Pull to update your local copy with what is in the remote repository
  • Make some local changes
  • Add them to the Staging Area
  • Commit your Staging Area to the repository
  • Finally when you are happy with your commits, Push them to the remote instance
  • And go to step 1 again…
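The steps above can be sketched as commands. This is a self-contained sketch where a local bare repository stands in for the remote (the paths and messages are illustrative):

```shell
# Simulating the daily loop with a local bare repo standing in for the remote.
remote_dir="$(mktemp -d)"
git init -q --bare "$remote_dir"
work_dir="$(mktemp -d)"
git clone -q "$remote_dir" "$work_dir"
cd "$work_dir"
git config user.email "demo@example.com"
git config user.name "Demo"

git fetch origin                # 1. update your repository from the remote
# (step 2 would be 'git pull'; there is nothing to pull in this empty demo)
echo "work" > work.txt          # 3. make some local changes
git add work.txt                # 4. add them to the Staging Area
git commit -qm "Add work.txt"   # 5. commit the Staging Area to the repository
git push -q origin HEAD         # 6. push your commits to the remote instance
```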

Don’t worry about the other verbs that we haven’t discussed yet, we’ll touch upon them in later topics.

Distributed Version Control System


GIT Tutorial: Part 1

This is Part 1 of the Git tutorial. To see the rest, click here

Git is a Distributed Version Control System (DVCS), which can be seen as a step up from a regular Version Control System (VCS). The “Distributed” in DVCS means that multiple copies of a repository are held rather than just a single repository.

This is good because:

  • Enhanced failover capabilities: If you lose your main repository through a catastrophic incident, you still have copies of the repository held on other machines
  • It opens the door for collaboration: In a centralized version control system, everyone has to commit and merge on the single source of truth. With a DVCS, we can have sub-teams of people working on a long-running sub-project, reading and writing to each other’s repositories on machines separate from the main repository. They can integrate their work over and over again without disrupting people working on the main repository.
  • Offline work: You can take your work home for the night and, rather than having to wait till the next morning to commit all of your work, you can commit there and then, and just re-sync your local repository with your company’s main repository the next day.
  • Encourages Forking: GitHub has exploded in popularity and levelled up Open Source software. The reason is that you can take someone’s work, create a copy (a Fork in GitHub land) and use that as a starting place to improve the original software (or build your own software). A DVCS allows you to download the repository of work, create your own branch and submit it back to the original author, who can include your additions in their own work.
Distributed model for Version Control

An example of a distributed system: one central remote supplying three repositories, and a side project going on.

Node.js sharing JavaScript code between client and server

I first looked at Node.js because it sounded like they had bridged the gap between client side (browser) and server side (Node.js) aka Isomorphic JavaScript. Code re-use is one of the core principles of developing software and for some reason it has taken us this long to get to a point where we can write one language that can be reused on both server and client side.

My initial impressions of Node.js bridging this gap weren’t great. Node.js does a good job of running stuff server side in JavaScript, but finding documentation on how to share code has been a slow process. The documentation on their website doesn’t mention sharing of code between client and server.

Current state of affairs

Node’s server side code exposes public methods/properties with a module pattern

If you want to expose a public method called doCoolStuff in your Awesome.js file you might do this:

exports.doCoolStuff = function() {
    // Cool stuff done here...
};

However when we go to run Awesome.js on the client – it’s not going to know anything about what exports is.


There are various ways around this such as checking whether exports exists, if it doesn’t define it etc… (read more here)

My preferred solution is to use Require.js and kill many birds with one stone. From the web browser point of view, Require.js is really helpful in that it:

  • Allows us to asynchronously load JavaScript resources
  • Allows you to get rid of script includes from your HTML
  • Negates the issue of having to order JavaScript files one after the other
  • Greatly improves the ability to test our code by swapping out concrete implementations with mocks/dummies
  • Cleans up your code a lot
  • Better way of specifying modules

Also we can use the same code on the client and server (Node.js).

Setting up the Server Side (Node.js)

I tried to install via NPM a la – npm install requirejs, however NPM couldn’t download and unpack the files for some reason.

To get around this I downloaded the r.js code for Node directly from the Require.js website, renamed it to index.js and placed it in a folder called Requirejs inside the node_modules folder. I renamed it to index.js because Node will automatically look for a file called index.js in folders under node_modules.

Server Side Code – Server.js

var requirejs = require('requirejs');

// Boilerplate stuff - as per r.js's instructions
requirejs.config({
    //Pass the top-level main.js/index.js require
    //function to requirejs so that node modules
    //are loaded relative to the top-level JS file.
    nodeRequire: require
});

// We are requiring Person, instantiating a new Person and then
// reading the name back

// As Person.js is a module, we don't need the .js at the end
requirejs(["/Path/To/Person"], function(Person) {
	var person1 = new Person("Seb");
	console.log(person1.getFirstName());
});

Code that can live on client or server – Person.js

define(function() {
	return function(firstName) {
		var _firstName = firstName;

		this.getFirstName = function() {
			return _firstName;
		};
	};
});

The HTML Page

<!DOCTYPE html>
<html>
    <head>
        <title>My Sample Project</title>
        <!-- data-main attribute tells require.js to load
             scripts/main.js after require.js loads. -->
        <script data-main="scripts/main" src="scripts/require.js"></script>
    </head>
    <body>
        <h1>My Sample Project</h1>
    </body>
</html>

JavaScript for the HTML Page – Main.js

// We are requiring Person, instantiating a new Person and then
// reading the name back

// As Person.js is a module, we dont need the .js at the end
require(["/Path/To/Person"], function(Person) {
	var person1 = new Person("Seb");
	console.log(person1.getFirstName());
});

More Reading

There are some rules around relative paths for your requires. It’s well worth spending the time and reading over them at the Require.js website.

Require.js talking about what Asynchronous Module Definition is

Tips For Writing Software And General Business Heuristics

  • The simplest route is the best 99% of the time (this might not be the case with life-critical systems such as a space shuttle :P)
  • Think about every possible detail before you write one line of code. Plan out all the method names, what they will interact with, etc… It’s far quicker and cheaper to go through many iterations in your head and on paper than it is to do it on a computer
  • If you use a technology – make sure you go and learn every facet and feature of it
  • When tasked with something challenging that you don’t fully understand and need clarification on – question the Subject Matter Expert (or whoever knows the most about the task at hand) as much as you can early on, until you see the big picture. Personal experience has shown me that asking a question here or there over a long period of time will aggravate people, whereas if you ask the same number of questions in a short period of time, people don’t get wound up 🙂
  • Never ask someone a question without asking yourself first. E.g. you should always have your best answer figured out beforehand
  • If you do ask someone a question, don’t include the answers you suspect within the question itself. Compare their answer with yours. Good example: “What is the best way to create a widget?” Bad example: “Should I create a widget by using a wizard? Or should I download a pre-made widget?” The good example leaves the question open ended. The person replying might give you an answer you had not even thought about; you may even learn something new. The bad example pre-loads the question and narrows the scope of available answers.

  • If something feels too tough – it’s because you don’t know enough about it. Go away and read as much as you can about the subject.
  • If you have any small doubt when approaching a problem, you are likely taking the wrong approach. Tune into your gut feeling
  • If you see repeating code ignore it the first and second time you see it. Third time – refactor it and get rid of the repeating code
  • Don’t ever follow other people’s opinions. Read all about it then make up your own mind. This leads to confidence
  • Use version control systems to check in as often as possible to a branch. It’s way quicker to revert back a few steps than to go and undo code manually
  • Use diff tools such as Beyond Compare to compare two files or two file systems. Don’t ever do it by eye
  • Be dependable. If someone asks you to do something, make it your absolute goal to get it done for them
  • If you are seriously stuck on something, ask for a colleagues help. A lot of the time it can be something very trivial which you have overlooked
  • If you are stuck on a complex problem, try thinking hard about it just before bed. You may well wake up with the problem solved
  • Teaching is the best way of learning. Write a blog or teach your colleagues something new
  • Do software projects that you find fun in your own time. Learn new technologies and processes all the time

Highlight the current line in Visual Studio 2010 with Resharper

Highlighting your current line makes it loads easier to see where you are on the page. I first noticed that I liked the idea of line highlighting when I installed the Productivity Power Tools via NuGet. I ended up turning all the options off except for the highlighted line due to slow performance on my machine. Then I realized that I could get the highlight using ReSharper with Visual Studio.


Step 1) Resharper > Options > Editor > Check the Highlight current line checkbox
Step 2) Tools > Options > Environment > Fonts and Colors > ReSharper Current Line Highlight. Choose the foreground and background colors you want to apply.

Building a Website Like a Boss (Part 1)

Links to tutorial parts

  1. Building a Website Like a Boss


This series of blog posts will cover how I am building my side-project web application. I am documenting it because I am using some technologies and methodologies that are really powerful when used together. And hey, it might be useful to someone :P.

It all started off with me learning ASP.NET MVC and then trying to apply Domain Driven Design (DDD) to it. I conceptually got the ideas talked about in DDD, but found it really troublesome applying the theory to a physical implementation. I spent a good deal of time playing around and researching how this could be achieved. I don’t think I was the only one struggling with this – I found a really good series on MVC by Rob Conery where he also struggled with trying to architect a perfect solution.

Soon after my fruitless attempts at creating a text book example of DDD in a web application, I was pointed to CQRS and Event Sourcing in a passing conversation with a colleague. From initially learning these two new words, I went away and did a lot of reading and learning. Learning these new technologies and processes made everything click together and I think that the architecture should really excite you for the following reasons:

What The Architecture Offers

  • Any interactions with the system are persisted. This means you can replay the entire system to any point in time. Say the business people come to you with a hypothesis that website users are 95% likely to buy an item that they removed from their shopping cart within 5 minutes of placing their order. You can include this logic, replay all the events and see whether it is true or not. It’s business intelligence on ’roids! Or if a client reports a weird bug that you cannot see, you can just replay the events and step into the code at any point in time!
  • Data is stored in the database as JSON, which means it can be handed straight to the UI. E.g. a jQuery AJAX call to a URL, where MongoDB services the request and returns straight-up JSON. No need for request -> back-end code -> more back-end code -> goes through a repeater -> rendered in HTML.
  • Data in the website is all de-normalized, which means there are no queries going on for requests 🙂 Yes you read that right. Begone Get.all.stuff.where.blah.isnot(5).insersects(something).on(lame==true); (Disclaimer: That wasn’t code – but you get the idea).
  • Utilizes a NoSQL document database which is seriously fast
  • The system broadcasts all events, which gives you a true SOA foundational architecture
  • True data ignorance – start modelling the domain first, you can ignore what the data model looks like
  • Using BDD you can communicate what the application will do in a format that anyone can understand (this really helps with documentation too)
  • The domain objects will not have unnecessary CRUD methods/information in them. They will be purely used for modelling business logic. Nothing more.
  • I’ll never need to worry about transactions (in business logic)
  • The underlying Enterprise Service Bus means that no user intention is lost and everything is transactional
  • You implicitly get a full blown audit log from the previously mentioned fact
  • Fully Asynchronous – which means scalability and speed
  • Software becomes more scalable – Faster reads from data lead to a far more responsive website
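To make the replay idea above concrete, here is a minimal sketch of event sourcing in JavaScript. It is purely illustrative (not the series’ actual implementation, which uses NServiceBus and MongoDB): the event names and the shopping-cart domain are assumptions for the example.

```javascript
// Minimal event-sourcing sketch: state is never stored directly;
// it is rebuilt by replaying the persisted stream of events.
const events = [
    { type: "ItemAdded",   item: "book" },
    { type: "ItemAdded",   item: "pen"  },
    { type: "ItemRemoved", item: "pen"  },
];

function replay(events) {
    const cart = [];
    for (const e of events) {
        if (e.type === "ItemAdded")   cart.push(e.item);
        if (e.type === "ItemRemoved") cart.splice(cart.indexOf(e.item), 1);
    }
    return cart;
}

console.log(replay(events)); // the cart rebuilt from history: [ 'book' ]
```

Replaying a prefix of the stream gives you the cart at any earlier point in time, which is what makes the “step into the code at any point” trick possible.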

Sneak Peek/How Do We Do This?

In the upcoming posts I will be covering the following technologies and approaches (don’t worry if you are not familiar with some of them – all will become clear with time).

  • Domain Driven Design – DDD
  • CQRS
  • MSMQ
  • NServiceBus
  • Event Sourcing
  • Behaviour Driven Development – BDD
  • MongoDB & Fluent Mongo
  • JSON
  • Knockout.js
  • JavaScript and jQuery
  • Some form of caching

Personal Goals for the Web Application

  • Implementation of best practices
  • Implementation of some new technologies (might as well try out some of these new technologies that have been coming out)
  • Robust
  • Logging/Audits
  • Scalable
  • Testable
  • Localization
  • Secure
  • Promote synergy – Like a Boss
  • Licensing costs as cheap as possible 😛

We’ll drill down into the details in the second post.