Archive for December, 2012

DOM and jQuery

Sunday, December 16th, 2012

You may have seen references to the DOM, or Document Object Model.  This is a programmatic way of accessing elements within a document.  It most commonly refers to HTML, but there are other DOMs as well.

Traditionally in Javascript, DOM objects are accessed using functions such as document.getElementById.  These days, however, script libraries such as jQuery provide easier and more flexible ways to access elements.  jQuery allows you to use CSS style selectors to identify a set of elements which meet your criteria, and the whole library is structured primarily around dealing with a set of elements rather than a single one.  Because most functions return the selected elements, you can chain a number of calls together into a single line of (surprisingly readable) code.  For example, to select all anchor tags on a page, give them a CSS class of “test” and make them all fire an alert box when clicked, you would do something like this…

$('a').addClass('test').click(function () { alert('Link clicked!'); });

E-commerce frustrations in real life

Sunday, December 16th, 2012

Google have posted a brilliant set of YouTube videos on the Analytics blog showing what kind of problems to look out for when considering usability of your online shop.  Well worth watching!

Do you know what a Nybble is?

Thursday, December 13th, 2012

It isn’t really referred to very often, but a nybble is half a byte, or 4 bits.
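As a quick sketch in Javascript, you can split a byte into its two nybbles with a shift and a mask:

```javascript
// Split the byte 0xA7 (binary 1010 0111) into its two nybbles
var b = 0xA7;
var hiNybble = (b >> 4) & 0x0F; // top 4 bits: 0xA (1010)
var loNybble = b & 0x0F;        // bottom 4 bits: 0x7 (0111)
```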

Linq and IQueryable

Thursday, December 13th, 2012

An IQueryable (and the generic IQueryable(Of T)) in .NET is the main type used to store a Linq query.  It is particularly useful because it also implements IEnumerable (and the generic equivalent), so it is enumerable.

It has three members:

  • ElementType
  • Expression
  • Provider

ElementType is very simply the type of each individual item/record returned by the query.

Expression represents the query as an expression tree.  I have already touched on that in my previous posts.

Provider is the object which is able to take the expression and use it to retrieve data.  Typical providers include Linq to SQL, Linq to Entities (for the Entity Framework), Linq to XML etc.  There are many other implementations of Linq providers for all sorts of uses, including Linq to Amazon, Linq to Google etc.

A little about Linq

Sunday, December 9th, 2012

Linq is a creation of Microsoft.  It is designed to solve a number of problems.  Part of the idea of Linq is to be able to abstract queries from the type of data source.  This enables all sorts of useful things.

So, for example, if we have a query

From c In Customers Where c.Tel <> ""

We don’t actually know what Customers is – it could be from a database, XML file, a set of objects in memory, from a web service, a text file, a third party API or any number of different sources.  The point is that we can write that query without worrying about that, and if we decide later to change what we are querying, or to use that same query for multiple sources, it should work just as well.

So how does that work then?  What happens when you write a Linq query is that the query is built into an expression tree.  Expressions are the basic building blocks of the query.  An expression may be something like c.Tel <> "", which would be broken down into…

  1. Access a property on the parameter c called Tel
  2. A constant, which is an empty string
  3. Take 1. and 2. (above) and use as the 2 sides of a binary operator, where the operator is not equals
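As an illustrative sketch (plain Javascript objects with made-up node names, not the actual .NET Expression types), the three pieces above could be written out as a tree like this:

```javascript
// Toy expression tree for c.Tel <> "" (names are illustrative only)
var expr = {
  kind: 'NotEqual',                                  // 3. the binary operator
  left: {                                            // 1. property access on the parameter c
    kind: 'MemberAccess',
    target: { kind: 'Parameter', name: 'c' },
    member: 'Tel'
  },
  right: { kind: 'Constant', value: '' }             // 2. the empty string constant
};
```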

What then happens is that when the query is executed, the provider for the particular type of data source will take those expressions and convert them into a form that it can execute (or possibly execute them directly).  For example, the Linq to SQL provider will take the parameter c and convert it into the string "[c]".  The property access will then get tacked on the end as ".[Tel]", giving "[c].[Tel]".  The empty string constant will be another string – '' (that's two single quotes).  The not equals operator would be "<>" with the above stuck on either side, giving [c].[Tel] <> ''.  The rest of the query would be compiled this way into SQL and then run to get the results, which would then be translated back into appropriate objects.
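To sketch the idea of a provider (in Javascript, with made-up node names rather than the real .NET types), walking such a tree and emitting SQL text might look like this:

```javascript
// Toy expression tree for c.Tel <> "" (illustrative names, not the .NET API)
var expr = {
  kind: 'NotEqual',
  left: { kind: 'MemberAccess', target: { kind: 'Parameter', name: 'c' }, member: 'Tel' },
  right: { kind: 'Constant', value: '' }
};

// A toy "provider": recursively convert each node into a SQL fragment
function toSql(e) {
  switch (e.kind) {
    case 'Parameter':    return '[' + e.name + ']';
    case 'MemberAccess': return toSql(e.target) + '.[' + e.member + ']';
    case 'Constant':     return "'" + e.value + "'";
    case 'NotEqual':     return toSql(e.left) + ' <> ' + toSql(e.right);
  }
}
```

Calling toSql(expr) produces the text [c].[Tel] <> '' described above.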

Now, if the same query was run on a set of objects in memory, the Linq provider would probably convert the expression into a lambda function, which would look something like Function(c As Customer) c.Tel <> "".  It would then run a loop with some code a bit like the following (this is simplified a little)…

Dim WhereFunc = Function(c As Customer) c.Tel <> ""
Dim ret As New List(Of Customer)
For Each c In Customers
    If WhereFunc(c) Then ret.Add(c)
Next c
Return ret

As I said, this is a little over-simplified.  In actual fact, Linq queries are not executed until you explicitly try to access the results.  This typically means calling one of the extension methods such as ToArray, ToList, Single, SingleOrDefault, First or FirstOrDefault, or simply starting to iterate through the data with a loop.
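The same deferred behaviour can be sketched in Javascript with a generator – nothing below does any filtering until the results are iterated (the customer data is just made-up sample data):

```javascript
// Deferred execution sketch: the predicate only runs when results are requested
function* where(source, pred) {
  for (var item of source) {
    if (pred(item)) yield item;  // evaluated lazily, one item at a time
  }
}

var customers = [
  { Tel: '555-0100' },
  { Tel: '' },
  { Tel: '555-0199' }
];

var query = where(customers, function (c) { return c.Tel !== ''; }); // nothing executed yet
var results = Array.from(query); // iterating here is what actually runs the filter
```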

One of the more interesting things that you can do with Linq queries is that you can actually build queries on top of queries.  What this does behind the scenes is to actually create a new expression tree, usually containing a copy of the old one.

So, for example…

Dim cs = From c In Customers Where c.Tel <> ""
If MobileRequired Then cs = From c In cs Where c.Mobile <> ""

This is a very useful way of implementing a search form.  The effective query at the end, if MobileRequired=True, will be similar to (in SQL)

SELECT * FROM Customer c WHERE c.Tel <> '' AND c.Mobile <> ''
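A rough Javascript equivalent of layering one query on another looks like this (note that Array.filter is eager, unlike a real Linq provider, so this only illustrates the shape; the data is made up):

```javascript
// Conditionally layering a second filter on top of a first, as in a search form
var customers = [
  { Tel: '555-0100', Mobile: '' },
  { Tel: '555-0101', Mobile: '555-0201' }
];
var mobileRequired = true;

var cs = customers.filter(function (c) { return c.Tel !== ''; });
if (mobileRequired) {
  cs = cs.filter(function (c) { return c.Mobile !== ''; });
}
```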

It is also possible to manipulate expression trees in order to create new modified versions of them to execute.  This can be useful in all sorts of ways, but it is too large a topic to include in this post, so I will hopefully come back to it at some point in the future.

Another benefit of Linq that I haven't mentioned is that, because it is part of the language itself, the compiler can check whether a query is well formed and queries fields that actually exist.  This also means that Linq works well with IntelliSense, so Visual Studio can provide you with suggestions as you type a query, which is a very useful time saving feature.

Anything you think I’ve missed?  Please let me know!

Minimising downtime when transferring websites from one server to another

Friday, December 7th, 2012

Over the years, I’ve needed to do a lot of transfers of websites from one server to another.  I’ve found the following is the best way to do it for critical sites where you want to minimise downtime.  The cut-over itself (from step 12 onwards) is best done late at night or when the site is quiet, and as quickly as possible, so as to minimise disruption.

  1. Set up space on new server
  2. Copy website files to new server
  3. If necessary, update config files on new server so that the website accesses data from the old one
  4. Update hosts file to test website on new server
  5. Test website on new server thoroughly
  6. Set up database user(s) on the new server, then copy the database over and attach it to test.  On SQL Server you may need to re-map orphaned users (eg. with sp_change_users_login)
  7. Update the config file on the new server to point to the new database(s)
  8. Test that it is working again
  9. You may need to redo some of steps 2-8 a couple of times if things get changed on the original server while you are testing
  10. Remove hosts file entry used for testing
  11. Ensure that the firewall(s) allow access to the DB on the new server from the old one
  12. Once you are ready to finalise the transfer: Detach the database(s) on the old server or backup and detach
  13. Copy the database(s) to the new server (you may wish to zip them for transfer)
  14. While the database(s) are transferring, update the config on the site on the old server to connect to the new one
  15. Re-attach or restore the database on the new server.  You may need to re-attach users again
  16. Test that site is working on old server
  17. Re-set up hosts file entry for testing
  18. Re-test that site is working on new server
  19. Update DNS entries/name servers
  20. Remove hosts entry again
  21. If possible, test on another machine and connection which won’t have the DNS settings cached.  If not, wait for propagation and re-test
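For reference, the hosts file entry used for testing (steps 4, 10 and 17) looks like the following – the IP address and domain here are placeholders.  On Windows the file is at C:\Windows\System32\drivers\etc\hosts, on Linux at /etc/hosts:

```
# Point the live domain at the NEW server for this machine only
203.0.113.10    www.example.com
```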

Whilst I primarily use this these days for IIS/MS SQL Server sites, it works equally well for PHP/MySQL or any other platform.  Also, although I am referring to sites on a single server, the same approach should also work for transferring between clusters.

jQuery can’t find element

Friday, December 7th, 2012

I keep having an annoying problem with jQuery where it doesn’t appear to be able to find an element.  This is actually a silly mistake on my part.  eg.

<div id="msg">Hello World</div>

<script type="text/javascript">
$('msg').show();
</script>

This is actually not a bug in jQuery at all – it is functioning correctly.  It should be $('#msg').show(); since jQuery uses CSS style selectors.  $('msg') would be looking for a tag named msg (ie. <msg>).

I figure that a lot of other people make the same mistake as it is easy to think of $() as being like document.getElementById, when it isn’t!  The reason that I’ve bothered posting about it is that I’ve ended up spending quite a bit of time trying to spot what I’ve done wrong!

Tagging and organisation of posts

Friday, December 7th, 2012

I made a conscious decision today to forget bothering to keep the posts organised here.  I feel that I’ve had a long break from the blog because I felt under pressure to tag and cross reference everything neatly.  Whilst it is no doubt better to, I think it is more important that I get around to posting at all, so I apologise for any inconvenience, but you’ve probably found this from Google anyway!

The importance of page speed

Friday, December 7th, 2012

Apparently, one of the factors that affects your Google ranking is how long it takes for a page to load.  I doubt this will make an enormous difference to your ranking unless it is very slow, but it is worth considering, especially if you are in a very competitive industry and you are working hard to claw your way to the top.

Google don’t just use statistics from their bots though, they actually use data on load times from their toolbar that runs inside people’s browsers.  That gives them a better idea of how long the page takes to load.

People do care how long it takes for a page to load.  If your website is responsive and pages load instantly, they are more likely to hang around.  If they have to sit and wait, even for a few seconds, they lose patience easily.  That could make quite a difference to your conversion rate.  Equally, Google monitor your bounce rate and how long people stay on your website.  I’m not sure if they use that for ranking, but it is definitely worth bearing in mind.

In addition to worrying about the server hardware and how heavily it is loaded with other websites/services, bear in mind that the following also affect page load times:

  • server side code that runs when the page renders, especially if you access external systems including databases
  • HTML size
  • size of images
  • CSS size
  • Javascript size
  • number of images/css files/script references on the page
  • distribution of images/css files/script references across different domains (browsers usually limit how many files they will read from a single domain at a time)
  • speed of data loading from other services
  • caching policies and loading common references from CDNs or locally
  • the position of css and script references on the page

OK, so what can you do about all these? There are plenty of articles on these topics, so I’m just going to go through the basics and leave it to you to Google for the latest info.


Minification

If you haven’t already minified your HTML, Javascript and CSS then do so if possible.  This is simply a process of removing all unnecessary whitespace to reduce the size.  You can do this dynamically as the page loads, but if you do, make sure your server can handle the load – it is probably sensible to have some sort of caching system.  If you minify code and CSS statically, make sure that you keep a copy of the original for when you need to change it later!

Javascript packers

These are javascript libraries that pack your code up smaller and unpack it for use.  I’ve not used any myself, but they can save a lot of space in some cases.


Compression

Most web servers can compress HTML, CSS and Javascript, including dynamically generated pages.  Make sure you have this configured (usually gzip/deflate).
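As a sketch, on Apache (assuming mod_deflate is enabled) this might look like the following; IIS has equivalent static/dynamic compression settings:

```
# Compress text-based responses with deflate/gzip
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```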


Simplification

Have a good long think about whether your HTML is as simple as it can be.  It really shouldn’t need to be a tag soup in order to look good.  You might have to make some minor visual sacrifices to save a lot of complexity.  Think about whether clever use of CSS can save a lot of HTML.

Have a think about whether your CSS is organised well and logically.  Can rules be combined?

Do you need all that javascript still?  Can it be expressed in shorter form?


Images

Can you reduce the number of images?  Is it worth simplifying the design?  Do you actually need all the pixels you have on repeating images (eg. gradients only need to be 1 pixel wide/high, and covered areas of images might be croppable)?  Make sure you are saving images in an appropriate format and at an appropriate compression level.

Combining files

See if you can combine multiple javascript files together.  Same applies to CSS.  This often has the additional effect of improving overall compression efficiency.

Look into image sprites.

Loading from multiple domains

If you absolutely have to have those 25 images (or javascript/css references) on the page, see if you can spread them across different domains.  Even subdomains work, although a separate domain is a little better as you can make it cookieless and save a few extra bytes on each request.  In fact, loading all references from another domain can save a bit on the cookie overhead.

Local/remote references

Work out whether to have referenced items stored locally (even if on another domain) or loaded from an external source.  Many common libraries, such as jQuery, are hosted on public CDNs, so there is a reasonable chance that a visitor already has the file cached in their browser.  On the other hand, some external references may be slow, or may not be minified/compressed, and it may be worth considering whether you can host these yourself (factoring in the pain of having something else to update unless you come up with a clever caching system).  Don’t forget to test this, and don’t forget that your visitors are likely to be coming from all over the world – a visitor on the other side of the world will often be served faster by a CDN than by your local server.


Caching

Make sure that your visitors aren’t needlessly reloading files that haven’t changed.  Ensure that you have appropriate headers set up to control caching.  This is a complex subject and you might want to spend some time reading up on it.
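As a sketch, Apache’s mod_expires can set far-future cache headers like this (the lifetimes here are just examples – pick values to suit how often each file type changes):

```
# Send Expires/Cache-Control headers so unchanged files are served from cache
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css  "access plus 1 week"
```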

Position of CSS/Javascript references

It isn’t obvious, but where you place the CSS and javascript references on the page makes quite a difference to load time, because of the way that the browser handles rendering.  CSS should probably be near the top, as the browser needs to retrieve it as soon as possible in order to render the page layout correctly.  Javascript can usually be left until nearer the bottom, and you may even want to consider deferring it and adding the references dynamically after the page has loaded, so that users can start reading while the javascript kicks in (depending on what it does).  Again, other people have written a lot more on this subject.


Testing

There are loads of things that you should test.  Monitor the sizes of the various files that load, the number of files, the load time etc.  Do this using browser tools (many browsers have them built in) and external web services.  Don’t forget that different browsers respond differently, as do different hardware, different platforms (don’t forget mobiles and tablets these days!), different ISPs and different locations.

Some other tools that are worth looking at are YSlow and Google Page Speed.

This can all be pretty time consuming, so the question is whether it is worth it.  It really depends on whether you are having major problems at the moment, how big and important your site is and how much you think it is affecting your visitors.  That’s something that you need to think about and decide on.

Have I missed anything?  Let me know!

MVC, Entity Framework and jQuery

Friday, December 7th, 2012

I’ve just started using ASP.NET MVC4 with Entity Framework 5 and jQuery.  I’m finding this is quite a different way of developing web based apps.  I do like jQuery though – it saves a lot of time with Javascript work.

Personally, I would recommend using Code First development with Entity Framework.  It gives you a lot more control over things than letting the designer generate the code.  The ability to use attributes from the System.ComponentModel.DataAnnotations and System.ComponentModel.DataAnnotations.Schema namespaces to automatically generate validation is invaluable to me.  I’m always looking for time saving measures.