Git Commits

Git Tutorial: Part 3

This is Part 3 of the Git tutorial.

Behind the scenes, Git tracks its history using a graph structure. Each entry in the graph is uniquely identified by a SHA hash. When playing with Git you may have seen something like:

[Screenshot: a list of commits, each with one parent]

What you are looking at is a list of commits, where the SHA hash uniquely identifies each commit, much like a primary key in a database.

Try the following to see some commits in your repository.

git log --oneline
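The output will look something like this (the hashes and messages here are purely illustrative; yours will differ):

f30ab5c Add validation to the login form
a2d41bc Fix typo in README
9fceb02 Initial commit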

The oldest commit is at the bottom and the newest at the top. What is not obvious is that each child commit has a pointer to its parent(s). In the scenario above each child has one parent, so the history looks just like a list. Note: in the diagrams below we are using letters of the alphabet instead of SHA hashes.

[Diagram: Commits]

B has a pointer to its parent A

A commit can have one or more parents. When a commit has two or more parents it is essentially a merge, where we have taken content from A and B and made C.

[Diagram: Merge]

C has a pointer to its parents, A and B
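You can see these parent pointers in your own repository using Git's built-in graph view (--graph draws the ancestry lines and --all includes every branch):

git log --oneline --graph --all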

Reflog

When we download a Git repository we can think of it as almost two separate things. First there are all the files in your project; then there is a hidden database that tracks the SHA references and which content each of them relates to. Alongside these SHA references Git also keeps human-readable pointers to them: tags (labels applied to a single SHA reference) and branches (pointers that identify the SHA reference at the tip of each branch in the repository). The reflog, in turn, is a local log of every SHA reference that HEAD and each branch tip have pointed to, which makes it invaluable for recovering lost commits.
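Both kinds of bookkeeping are easy to inspect yourself: git show-ref lists every branch and tag together with the SHA reference it points to, and git reflog shows where HEAD has recently pointed:

git show-ref
git reflog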

Visualizing Git

Git Tutorial: Part 2

This is Part 2 of the Git tutorial.

Local and Remote

At a high level we will be discussing two separate instances of Git. First there is the instance on your own machine, which we can refer to as the local instance. Then there is the instance on some remote machine, which we can refer to as the remote instance. There isn't really any difference between the local and the remote except that they live on different machines. The remote instance, for example, could be the place where the team pushes its code. Ultimately there are usually at least two copies of a repository: your local one and somewhere that you push your copy to. Think of it like Subversion, where you push your changes to a central repository.

Side note: quite often in documentation you will see the remote referred to as “origin” (a default name).
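You can list the remotes configured for a repository (and see the default origin name) by running this inside any cloned repository:

git remote -v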

Creating a Git Repository

Let’s start off by creating a brand new Git Repository.

mkdir Demo
cd Demo
git init

This creates our Demo folder, moves into it, and initializes a new Git repository. Simple!
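Git confirms with a message along these lines (the path will be wherever you created the folder):

Initialized empty Git repository in /path/to/Demo/.git/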

Logical sections within Git

Within an instance, Git is best visualized as 3 separate buckets.

[Diagram: The three buckets]

Working Directory

The working directory is what you see in your folders and file system. You make code changes here.

In the Working Directory, you might create a new file called A.txt, add some content and save it.

If we run the command

git status

Git will report the current status. Has anything changed? Are there any files waiting to be added to Git?

[Screenshot: git status reporting A.txt as an untracked file]

You can see in this example that Git isn't tracking A.txt. It also tells us how to add the file.

Staging Area

Next you would tell Git to track this new file.

git add A.txt

Alternatively you could add all files within the directory by specifying the dot syntax.

git add .

Running git status again we see:

git status

[Screenshot: git status showing A.txt as a new file, staged and ready to be committed]

The whole point of the Staging Area is to build up a candidate for committing to the repository. Take fixing a bug, for example: it is far cleaner and easier to read one commit, e.g. “#424 Bug fixed”, than ten piecemeal commits all contributing to the same fix. In Git we can stage and shape our changes to make the history more logical and easier to read.
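For example, Git's interactive patch mode lets you stage a file hunk by hunk, so that each commit contains exactly one logical change (shown here with the A.txt file from earlier):

git add -p A.txt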

Once you are happy with your Staged work you would commit it to the actual repository.

Repository

git commit -m "#424 Bug fixed"

[Screenshot: output of the git commit command]

This moves our staged changes into the Repository, essentially freezing those changes in place.

That’s it! That is a very simple example that touches upon all 3 buckets within Git.

Workflow

To help visualize the bigger picture, the typical day-to-day workflow looks something like this (sketched as commands after the list):

  • Fetch from the remote (this updates your local repository with all the new information held in the centralized remote repository)
  • Pull to update your local copy with what is in the remote repository
  • Make some local changes
  • Add them to the Staging Area
  • Commit your Staging Area to the repository
  • Finally, when you are happy with your commits, push them to the remote instance
  • And go to step 1 again…
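As a rough sketch in commands (assuming a remote named origin and a branch called master):

git fetch origin
git pull origin master
# ...make some local changes...
git add .
git commit -m "#424 Bug fixed"
git push origin master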

Don’t worry about the other verbs that we haven’t discussed yet, we’ll touch upon them in later topics.

Distributed Version Control System


Git Tutorial: Part 1

This is Part 1 of the Git tutorial.

Git is a Distributed Version Control System (DVCS), which can be seen as a step up from a regular Version Control System (VCS). The “Distributed” in DVCS means that multiple copies of a repository are held, rather than just a single one.

This is good because:

  • Enhanced failover capabilities: if you lose your main repository through a catastrophic incident, you still have copies of the repository held on other machines
  • It opens the door for collaboration: in a centralized version control system, everyone has to commit and merge on the single source of truth. With a DVCS, sub-teams of people can work on a long-running sub-project, reading and writing to each other's repositories on machines separate from the main repository. They can integrate their work over and over again without disrupting people working on the main repository.
  • Offline work: you can take your work home for the night and, rather than having to wait till the next morning to commit all of your work, commit there and then, re-syncing your local repository and your company's main repository the next day.
  • Encourages forking: GitHub has exploded in popularity and levelled up Open Source software. The reason is that you can take someone's work, create a copy (a Fork in GitHub land) and use that as a starting place to improve the original software (or build your own). A DVCS allows you to download a repository of work, create your own branch, and re-submit it back to the original author, who can include your additions in their own work.
[Diagram: Distributed model for Version Control]

An example of a distributed system: one central remote supplying three repositories, with a side project going on.

Node.js sharing JavaScript code between client and server

I first looked at Node.js because it sounded like it had bridged the gap between the client side (browser) and the server side (Node.js), aka Isomorphic JavaScript. Code re-use is one of the core principles of developing software, and for some reason it has taken us this long to get to a point where code written in one language can be reused on both the server and the client.

My initial impressions of Node.js bridging this gap weren't great. Node.js does a good job of running JavaScript server side, but finding documentation on how to share code has been a slow process. The documentation on their website doesn't mention sharing code between client and server.

Current state of affairs

Node’s server side code exposes public methods/properties with a module pattern

If you want to expose a public method called doCoolStuff in your Awesome.js file you might do this:

exports.doCoolStuff = function() {
    // Cool stuff done here...
};

However, when we go to run Awesome.js on the client, it's not going to know anything about what exports is.

Solution

There are various ways around this, such as checking whether exports exists and defining it if it doesn't.
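A minimal sketch of that guard approach, reusing the Awesome.js example from above (the Awesome global name is just for illustration):

// Wrap the module in a function that receives an exports object.
// On Node.js the real exports is passed in; in the browser, where
// exports is undefined, a global Awesome object is created instead.
(function(exports) {
    exports.doCoolStuff = function() {
        // Cool stuff done here...
    };
})(typeof exports === 'undefined' ? (this.Awesome = {}) : exports);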

My preferred solution is to use Require.js and kill many birds with one stone. From the web browser point of view, Require.js is really helpful in that it:

  • Allows you to asynchronously load JavaScript resources
  • Allows you to get rid of script includes from your HTML
  • Negates the issue of having to order JavaScript files one after the other
  • Greatly improves the ability to test your code by swapping out concrete implementations with mocks/dummies
  • Cleans up your code a lot
  • Gives you a better way of specifying modules

Also we can use the same code on the client and server (Node.js).

Setting up the Server Side (Node.js)

I tried to install via npm, a la npm install requirejs; however, npm couldn't download and unpack the files for some reason.

To get around this I downloaded the r.js code for Node directly from the Require.js website, renamed it to index.js, and placed it in a folder called Requirejs inside the node_modules folder. I renamed it to index.js because Node automatically looks for a file called index.js in folders within node_modules.
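The resulting layout looks like this (paths are illustrative):

node_modules/
    Requirejs/
        index.js    <-- the renamed r.js file
Server.js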

Server Side Code – Server.js

var requirejs = require('requirejs');

// Boilerplate stuff - as per r.js's instructions
requirejs.config({
    //Pass the top-level main.js/index.js require
    //function to requirejs so that node modules
    //are loaded relative to the top-level JS file.
    nodeRequire: require
});

// We are requiring Person, instantiating a new Person and then
// reading the name back

// As Person.js is a module, we don't need the .js at the end
requirejs(["/Path/To/Person"], function(Person) {
    var person1 = new Person("Seb");
    console.log(person1.getFirstName());
});

Code that can live on client or server – Person.js

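// An AMD module: define() registers a factory function, and whatever the
// factory returns (here, a constructor for Person objects) is what
// require/requirejs hands to anyone asking for this module.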
define(function() {
	return function(firstName) {
		var _firstName = firstName;

		this.getFirstName = function() {
			return _firstName;
		};
	}
});

HTML Page

<!DOCTYPE html>
<html>
    <head>
        <title>My Sample Project</title>
        <!-- data-main attribute tells require.js to load
             scripts/main.js after require.js loads. -->
        <script data-main="scripts/main" src="scripts/require.js"></script>
    </head>
    <body>
        <h1>My Sample Project</h1>
    </body>
</html>

JavaScript for the HTML Page – Main.js

// We are requiring Person, instantiating a new Person and then
// reading the name back

// As Person.js is a module, we don't need the .js at the end
require(["/Path/To/Person"], function(Person) {
    var person1 = new Person("Seb");
    console.log(person1.getFirstName());
});

More Reading

There are some rules around relative paths for your requires. It's well worth spending the time to read over them on the Require.js website.
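For example, module names are resolved relative to a configurable baseUrl, and individual modules can be remapped with paths (a sketch; the names here are illustrative):

require.config({
    baseUrl: "scripts",
    paths: {
        Person: "models/Person"
    }
});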

Require.js talking about what Asynchronous Module Definition is

Tips For Writing Software And General Business Heuristics

  • The simplest route is the best 99% of the time (this might not be the case with life-critical systems such as a space shuttle :P)
  • Think about every possible detail before you write one line of code. Plan out all the method names, what they will interact with, etc… It's far quicker and cheaper to go through many iterations in your head and on paper than it is to do it on a computer
  • If you use a technology – make sure you go and learn every facet and feature of it
  • When tasked with something challenging that you don't fully understand and need clarification on, question the Subject Matter Expert (or whoever knows the most about the task at hand) as much as you can early on, till you see the big picture. Personal experience has shown me that asking a question here or there over a long period of time will aggravate people, whereas if you ask the same number of questions in a short period, people don't get wound up 🙂
  • Never ask someone a question without asking yourself first, i.e. you should always have your best answer figured out beforehand
  • If you do ask someone a question, don't include the answers you suspect in the question itself. Compare their answer with yours. Good example: “What is the best way to create a widget?” Bad example: “Should I create a widget by using a wizard? Or should I download a pre-made widget?” The good example leaves the question open ended. The person replying might give you an answer you had not even thought about; you may even learn something new. The bad example pre-loads the question and narrows the scope of available answers.

  • If something feels too tough, it's because you don't know enough about it. Go away and read as much as you can about the subject.
  • If you have any small doubt when approaching a problem, you are likely taking the wrong approach. Tune into your gut feeling
  • If you see repeating code ignore it the first and second time you see it. Third time – refactor it and get rid of the repeating code
  • Don't blindly follow other people's opinions. Read all about the subject, then make up your own mind. This leads to confidence
  • Use version control to check in as often as possible to a branch. It's way quicker to revert back a few steps than to undo code manually
  • Use diff tools such as Beyond Compare to compare two files or two file systems. Don’t ever do it by eye
  • Be dependable. If someone asks you to do something, make it your absolute goal to get it done for them
  • If you are seriously stuck on something, ask a colleague for help. A lot of the time it is something very trivial which you have overlooked
  • If you are stuck on a complex problem try thinking about it pretty hard just before bed. You will fall asleep and wake up with the problem solved
  • Teaching is the best way of learning. Write a blog or teach your colleagues something new
  • Do software projects that you find fun in your own time. Learn new technologies and processes all the time

Highlight the current line in Visual Studio 2010 with ReSharper

Highlighting your current line makes it loads easier to see where you are on the page. I first noticed that I liked line highlighting when I installed the Productivity Power Tools extension. I ended up turning all of its options off except for the highlighted line, due to slow performance on my machine. Then I realized that I could get the same highlight using ReSharper with Visual Studio.

Steps

Step 1) ReSharper > Options > Editor > check the Highlight current line checkbox
Step 2) Tools > Options > Environment > Fonts and Colors > ReSharper Current Line Highlight. Choose the foreground and background colors you want to apply.

Building a Website Like a Boss (Part 1)

Links to tutorial parts

  1. Building a Website Like a Boss

Background

This series of blog posts will cover how I am building my side-project web application. I am documenting it because I am using some technologies and methodologies that are really powerful when used together. And hey, it might be useful to someone :P.

It all started off with me learning ASP.NET MVC and then trying to apply Domain Driven Design (DDD) to it. I conceptually got the ideas talked about in DDD, but found it really troublesome to apply the theory to a physical implementation. I spent a good deal of time playing around and researching how this could be achieved. I don't think I was the only one struggling with this – I found a really good series on MVC by Rob Conery where he also struggled to architect a perfect solution.

Soon after my fruitless attempts at creating a textbook example of DDD in a web application, I was pointed to CQRS and Event Sourcing in a passing conversation with a colleague. From initially learning these two new terms, I went away and did a lot of reading. Learning these new technologies and processes made everything click together, and I think the architecture should really excite you for the following reasons:

What The Architecture Offers

  • Any interactions with the system are persisted. This means you can replay the entire system to any point in time. Say the business people come to you with a hypothesis that website users are 95% likely to buy an item that they removed from their shopping cart within 5 minutes of placing their order. You can include this logic, replay all events and see whether it is true or not. It's business intelligence on 'roids! Or if a client reports a weird bug that you cannot reproduce, you can just replay the events and step into the code at any point in time!
  • Data is stored in the database as JSON, which means it can be handed straight to the UI. E.g. a jQuery AJAX call hits a URL, MongoDB services the request, and straight-up JSON comes back. No need for request -> back end code -> more back end code -> goes through an asp.net repeater -> rendered in html.
  • Data in the website is all de-normalized, which means there are no queries going on for requests 🙂 Yes you read that right. Begone Get.all.stuff.where.blah.isnot(5).insersects(something).on(lame==true); (Disclaimer: That wasn’t code – but you get the idea).
  • Utilizes a NoSQL document database which is seriously fast
  • The system broadcasts all events, which gives you a true SOA foundational architecture
  • True data ignorance – start by modelling the domain; you can ignore what the data model looks like
  • Using BDD you can communicate what the application will do in a format that anyone can understand (this really helps with documentation too)
  • The domain objects will not have unnecessary CRUD methods/information in them. They will be purely used for modelling business logic. Nothing more.
  • I’ll never need to worry about transactions (in business logic)
  • The underlying Enterprise Service Bus means that no user intention is lost and everything is transactional
  • You implicitly get a full blown audit log from the previously mentioned fact
  • Fully Asynchronous – which means scalability and speed
  • Software becomes more scalable – Faster reads from data lead to a far more responsive website

Sneak Peek/How Do We Do This?

In the upcoming posts I will be covering the following technologies and approaches (don’t worry if you are not familiar with some of them – all will become clear with time).

  • ASP.NET MVC
  • Domain Driven Design – DDD
  • CQRS
  • MSMQ
  • NServiceBus
  • Event Sourcing
  • Behaviour Driven Development – BDD
  • MongoDB & Fluent Mongo
  • JSON
  • Knockout.js
  • JavaScript and jQuery
  • Some form of caching

Personal Goals for the Web Application

  • Implementation of best practices
  • Implementation of some new technologies (might as well try out some of these new technologies that have been coming out)
  • Robust
  • Logging/Audits
  • Scalable
  • Testable
  • Localization
  • Secure
  • Promote synergy – Like a Boss
  • Licensing costs as cheap as possible 😛

We’ll drill down into the details in the second post.

How to revise for MCTS 70-515 Web Applications with .NET 4.0

Updated: I’ve re-uploaded my flash cards to WordPress. When you download the file, rename it to .anki (see the Anki application below). I wrote them for my personal use, so some might not make perfect sense, however I’m certain you will find them useful.

To prepare for sitting my MCTS, the first thing I did was ask as many of my colleagues as I could (they have all taken many certification exams) about their approach to passing. The feedback was unanimous: practise tests are the key to passing. This would be the backbone of my studying.

I bought the 70-515 Microsoft Self-Paced Training Kit as a study guide and starting point. I would recommend the training kit as it was concise and covered 95% of the information that you need to know. Buy the book and read it cover to cover a few times.

After I was about halfway through the book, I asked around again as to which practise test provider I should use. The feedback was that Transcender was good quality and, as it happened, at the time there was a 40% off deal through Microsoft Partners. I bought the practise tests and got to work, running over them again and again to get close to 100% correct. What I like about the Transcender package is that it comes with flash cards, which make you dig a bit deeper into your knowledge.

The actual MCTS 70-515 test is multiple choice; with flash cards, however, you are given a question and no options. Therefore you have to think. This thinking really strengthens those neural pathways and solidifies your memory.

I started off the process taking detailed, handwritten notes. I soon found that this was slowing me down and that it was slow to go back over them. Knowing that writing things down further helps memory retention, and that flash cards were beneficial, I found Anki, a free piece of software that allows you to create flash cards. Instead of handwritten notes, I switched to writing them into Anki. I wrote my notes as meaningful questions, so that I could replay the flash cards and test my knowledge. Anki will also shuffle the cards that you constantly get correct to the back of the pack and the ones you get wrong more frequently to the front. /Win

I soon found that the book plus the practise tests were not enough to pass. Reading MSDN proved to be very helpful, as well as various blog posts. In particular, the following series is an awesome listing of a huge swathe of items that you could be tested on.

Part 1
Part 2
Part 3
Part 4
Part 5
Part 6

Make sure you do timed tests so that you have a feeling for how fast you should go. My test was 51 questions and I had 140 minutes to complete it, including the questionnaire and evaluation. It will take you around 10 minutes to fill out the questionnaire and evaluation. Also worth noting: you can go forward and backward in the exam, marking questions you want to come back to later. My approach was to go through all the ones I easily knew, marking down the ones I didn't know or was unsure about. This meant I didn't get anxious about spending too long on a hard question. I came back at the end to tackle all the odds and sods.

A decent approach to answering the multiple choice questions: they usually give you 4 or 5 options. 2 of them will be blatantly wrong and the remaining possibilities will have subtle syntactical differences.

Each question in the exam is weighted differently and some of the questions aren’t even used in your score! The pass mark for the 70-515 was 700 out of 1000.

I hope this helps someone pass!
Good luck!

Difference between ASP.NET Data Controls

Totally lame post, but I'm studying for my MCTS and thought this might help someone. Keep in mind the values in the matrices below are for out-of-the-box ASP.NET; it doesn't mean you can't achieve certain functionality by extending the controls.

Single Item Display

FormView

Multiple Items: No
Templated: Yes
Create: Yes
Read: Yes
Update: Yes
Delete: Yes
Sorting: No. It is a single-item view, so there is no need.
Pagination: Yes, but only one record at a time. Use the AllowPaging="true" attribute to turn it on. Use the PagerSettings-Mode attribute to state the type (NextPrevious, NextPreviousFirstLast, Numeric, NumericFirstLast). Use the PagerSettings element within the FormView to specify paging options such as the text for the previous and next buttons.
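As a quick sketch in markup, turning paging on looks something like this (the data source and bound field are illustrative):

<asp:FormView ID="FormView1" runat="server"
              DataSourceID="SqlDataSource1"
              AllowPaging="true">
    <PagerSettings Mode="NextPreviousFirstLast" />
    <ItemTemplate>
        <%# Eval("Name") %>
    </ItemTemplate>
</asp:FormView>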

References:
http://msdn.microsoft.com/en-us/library/ms227992.aspx

DetailsView

Multiple Items: No
Templated: No
Create: Yes
Read: Yes
Update: Yes
Delete: Yes
Sorting: No. It is a single-item view, so there is no need.
Pagination: Yes, but only one record at a time. Use the AllowPaging="true" attribute to turn it on. Use the PagerSettings-Mode attribute to state the type (NextPrevious, NextPreviousFirstLast, Numeric, NumericFirstLast). Use the PagerSettings element within the DetailsView to specify paging options such as the text for the previous and next buttons.

References:
http://msdn.microsoft.com/en-us/library/s3w1w7t4.aspx

Multiple Item Display

Repeater

Multiple Items: Yes
Templated: Yes
Create: No
Read: Yes
Update: No
Delete: No
Sorting: No
Pagination: No

References:
http://msdn.microsoft.com/en-us/library/x8f2zez5.aspx

ListView

Multiple Items: Yes
Templated: Yes
Create: Yes
Read: Yes
Update: Yes
Delete: Yes
Sorting: Yes. You add a button (e.g. in the LayoutTemplate) and set its CommandName to "Sort"; the CommandArgument specifies the field you want to sort by.
Pagination: Yes, use the DataPager control

References:
http://msdn.microsoft.com/en-us/library/bb398790.aspx

DataList

Multiple Items: Yes
Templated: Yes, but it does wrap the output in a table.
Create: No
Read: Yes
Update: Yes
Delete: Yes
Sorting: Yes, but you have to take care of this manually in your code-behind.
Pagination: Yes, but you have to take care of this manually in your code-behind.

References:
http://msdn.microsoft.com/en-us/library/es4e4e0e.aspx


GridView

Multiple Items: Yes
Templated: No
Create: Yes
Read: Yes
Update: Yes
Delete: Yes
Sorting: Yes
Pagination: Yes

References:
http://msdn.microsoft.com/en-us/library/2s019wc0.aspx

Netduino Transistor Switch

A transistor switch allows you to use a smaller flow of electrons to control a much larger flow of electrons. In this case, we are using the Netduino's 3.3V output to turn a 9V supply on and off.

It took me a while to figure out how this should be wired up, but after a lot of looking around I found some Arduino examples. This one in particular was very helpful.

My takeaway is that you should have your circuit set up like this: effectively, your transistor blocks the current from flowing to ground until some input to the base is given from the Netduino.

Some notes on the switch: there is a formula for figuring out the value of the resistor going to the base, but it is outside the scope of this tutorial; I've just used 1k and it works fine. Also be sure to connect both the ground from your power supply and the ground from your Netduino. For the longest time I didn't have my Netduino grounded like in the images above and could not figure out why it was not working!