Review of a Year with a Linux Laptop

There are many posts across the web about converting to Linux, either from Windows or a Mac.  This post is similar, but it is really more a review of what it was like for me to use a Linux laptop as my primary machine after being a long-time Mac user.

Most people who come across this blog and know me already know that my primary work is writing software, managing and estimating such projects, or both.  Much of my work has been in the Apple ecosystem over the last 10 years or so, but not all.  At times, I work on Windows-only projects, backend projects usually deployed on Linux, and the occasional Android project.  Even on projects where I am hired for iOS work, I often end up heavily involved with backend systems running on another platform.

A year ago I was looking to purchase a new laptop.  My MacBook Pro at the time was over three years old and, even though it was still fast enough, was beginning to become a constraint from a storage and memory perspective.  The laptop was configured with the maximum 16GB of RAM available when I purchased it but, given the variety of projects that I work on, I often need to run one or even multiple VMs simultaneously, and that becomes an issue when trying to do anything else at the same time.  With 1TB of SSD storage, the space required for many VMs was also a large issue, and I hated having to carry around an external drive for that purpose.

Like many others, I was disappointed when Apple did not release a new laptop configuration with 32GB of RAM as an option.  While there was now the option of 2TB of storage, the RAM limitation was the larger issue for my work.  The TouchBar feature of the new high-end MacBook Pro laptops was not something I was the least bit interested in either, but that has been covered at length by many others.

Because I work on iOS projects, replacing my MacBook Pro completely with another machine was not an option.  I decided that I would look for another laptop, which would become my primary laptop for everything except iOS work, and keep the old laptop, which was more than sufficient for Apple platform development.

The New Laptop

I purchased the Serval WS from System76.  I probably over-configured it, but I typically buy the highest-end laptop offered in the hopes that I can keep it longer.  I hate moving to a new laptop, so the longer I can use one the better.

The new laptop was configured with 64GB of RAM and three 1TB SSDs.  It also came with a significantly more powerful GPU than the old MacBook Pro, but that was not a deciding factor in the purchase.  The laptop shipped with Ubuntu 16.04 LTS (long-term support) installed.

I chose a Linux laptop (or GNU/Linux if you prefer) because the environment is closer to the Mac than Windows and much of my work is deployed on Linux.  The availability of many open source software alternatives to what I had been using was also attractive.

After using the laptop for a year, I would admit that 32GB of RAM would have been more than enough for my usage, but it was nice to have the extra buffer for VMs.

The New Software

I use VMware Fusion on the Mac and purchased a VMware Workstation Pro license for the Linux laptop.  The Workstation UI is not as nice, in my opinion, as the Fusion UI, but I was able to move VMs back and forth between the laptops without an issue.  I use virtual machines quite a bit, and my clients appreciate the fact that I can hand them a copy of my complete development environment after the project is over.

Sublime Text is the programmer’s editor that I use, and it works with the same license key on Mac, Windows, and Linux, so there was no issue at all converting.  Other tools that fall into the same category are Firefox, Chrome, the Slack desktop client, and all of the JetBrains IDEs that I use for various development platforms.  The Hopper disassembler is also available on both Mac and Linux.  I occasionally use that tool in debugging scenarios, so it is handy to have available.

LibreOffice is the office suite that I adopted when I went to using Linux full-time.  The suite is fine for my use, though I am not a heavy user of the advanced features of office suites.  Consulting work means that I have to share and exchange files with people from other companies quite a bit.  I never had any issue converting to or from other file formats but, as I said, I am not an advanced user of office suite functionality.  I do prefer LibreOffice over Pages and Numbers on the Mac, but I typically use MS Office for Mac and prefer that over LibreOffice.  This might be because of my long-term use of Office on both Windows and Mac, though.  Keynote is preferable to me over both LibreOffice and PowerPoint, but I think that is personal preference more than anything.  All three will get the job done if needed.

For email, I chose Thunderbird on Linux.  This is one area where I liked the features better than the Mac client I had been using.  Thunderbird has nicer and more detailed features for handling and displaying spam.  At times, I may have up to 5 mail accounts to check for work, personal, and clients.  Both email clients are capable of handling that without an issue.  Thunderbird does lack the threaded email view that the Mac client provides, which was a feature that I did miss.

VPN software is necessary if you are going to work outside of your own office.  I was a user of Cloak (now named EncryptMe) VPN on Mac and iOS.  That VPN service was not supported on Linux, so I chose to go with an OpenVPN provider called CryptoStorm.  Going with an OpenVPN provider gave me the option to use one service for Mac, iOS, and Linux.  On Linux, I used the OpenVPN tools to configure the connections, and on the Mac I used the open source Tunnelblick product for configuration.  On iOS, I used the OpenVPN app to perform the configuration.  The CryptoStorm service, and OpenVPN in general, is much more complicated to set up than EncryptMe, but it is about half the ongoing yearly cost.  A normal user might have given up trying to configure this.
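
For reference, a minimal OpenVPN client configuration looks something like the sketch below.  The host name, port, and certificate file names are placeholders here, not CryptoStorm’s actual values:

```
# client.ovpn -- minimal client-mode sketch; remote host and
# cert/key file names are placeholders for your provider's values
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
remote-cert-tls server
verb 3
```

The connection is then started from a terminal with something like `sudo openvpn --config client.ovpn`.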

The Software That I Could Not Replace

There were some solutions that I had on the Mac for which I never, at least up to this point, found a replacement that I was happy with.

Vienna is the product that I use for reading RSS on the Mac.  It is free to use and integrates with several of the Google Reader replacement services that appeared in the years after Reader was shut down.  I use one of those services, The Old Reader.  On Linux I found a product called Liferea, which worked the best of the products I looked at.  It was almost as good as Vienna, with the exception that it did not pull down all unread articles when there were more unread articles than currently appeared in the site’s RSS feed.  Vienna handles this by reading the unread list from The Old Reader rather than consulting the site’s RSS feed.  This is an issue for me because I read many high-volume feeds, can’t always keep up consistently, and tend to binge-read some of them.  As a result, I often used the FeedlerPro app on my iPad to read feeds that I had not caught up on in a while.  That app has the same unread functionality as Vienna on the Mac desktop.

Calendar app integration on the Mac is outstanding.  I continued to use the calendaring on my iPhone and iPad because my family shares calendars with each other, and I was never able to find a solution for viewing these on my Linux laptop.  There is the iCloud website, but I found it unstable and unresponsive much of the time, so I eventually gave up and used my iPad most of the time.  Fantastical is the app that I use on the Mac, and it is outstanding.  This is an area where desktop Linux isn’t close to the other platforms.

For todo lists, OmniFocus is the app that I used on the Mac and iOS prior to using the Linux laptop full-time.  The app is excellent, and a backend server to synchronize across the installs that you use is provided free of charge.  I was unable to find anything like it on Linux.  I ended up using Sublime to edit simple text files stored in Dropbox, which works on Linux just as it does on other platforms.  This was an acceptable solution, and one that I already employed to take notes on projects, but not a complete replacement for the notifications and nice organization of OmniFocus.

For blog posts, which I have sadly been failing to produce very often, I use MarsEdit on the Mac.  I was unable to find anything similar on Linux and ended up using the online editor provided with WordPress installs.  That is a suitable replacement, and I would guess it is what most people use even on the Mac, but MarsEdit is quite a nice app to use.  So nice that I am using it now to write this post.

I work in my home office over 80% of the time.  That means I need an office phone solution, and I use Skype for that.  Some clients also prefer to use the Skype chat functionality, although that is rarer.  I subscribe to the Skype service that gives you a phone number that can be called and that appears on others’ caller ID.  There is a version of Skype available on Linux, and I was able to use it with somewhat limited success.  The Mac version is more usable and has more convenient features, like integration with iCloud contacts for phone numbers.  I also found the Linux version to be much less stable, with more crashes and issues with headsets plugged into the laptop.

The Laptops Themselves

The System76 laptop weighs a good bit more than the MacBook Pro.  I would estimate more than 4 times the weight of the Mac, in fact.  It is closer in weight to an old 17” MacBook Pro with the magnetic drive, which was no lightweight.  Its footprint reminds me of old Dell laptops.  It is thicker by at least 2x.  The power supply is equally hefty, coming in at least 3x as heavy and 2x the size of a MacBook Pro adapter.

The keyboard on the System76 is very nice.  One of the nicest I have used on a laptop, in fact.  It is also a full-sized keyboard, complete with number pad.  This is possible because it is slightly wider than the Apple, and the extra thickness doesn’t hurt either.

The screen on the Serval WS is nice for a laptop, but not as nice as the MacBook Pro’s even though it is several years newer.  The support for external monitors on both laptops is excellent.  Both handle 2 DisplayPort monitors (Dell 26”), but the Apple seems to do this more easily even though its graphics card is not quite as substantial.

The System76 is quite a bit louder.  The fan kicks in much more often, probably due to the heat of the faster processor and most certainly the GPU.

I prefer the System76 for having more ports that do not require dongles.  It is painful to carry around the number of dongles that you might need with the MacBook Pro, and Apple is not making this better with the newer models.  One could argue that the extra weight of the Serval WS more than cancels out the dongles that need to be carried with an Apple laptop, and it would be difficult to refute that.

The build quality is a hands down victory for Apple.  The MacBook Pro is nicer to use and the aluminum body is nicer to look at and feel.  The MacBook Pro screen is much easier on the eyes although the System76 screen is not bad.  I have read a lot of complaining about the keyboards on the newer MacBook Pros but I do not have any extended usage time with one beyond the Apple Store and Best Buy so I can’t really comment on that.  I do wish that the TouchBar was optional on all models or at least something that appears above the standard function key line so that physical keys were still available.

The Verdict

The System76 laptop is perfectly usable for everything I do except iOS development work.  Linux has become reasonably easy to use.  Not as easy as Windows or macOS, but easy enough for a power user, or even a computer user of many years, to get along with and be productive.  I was pleasantly surprised to find it worked flawlessly with my NAS and wireless printer.  About halfway through the last year, my printer gave out, and I went down to Office Depot and purchased another wireless printer (HP OfficeJet 8740).  I brought the printer home, unboxed it, plugged it in, and the Linux laptop saw it and was able to use it without an issue.  My daughter thought this was completely normal and could not understand why I was stunned.

I enjoy using the laptop and will continue to use it but the rough edges around a lot of software on Linux still leave something to be desired.  When Apple finally gets around to introducing a laptop configuration with 32gb RAM, I will probably buy one and go back to using a Mac as my full-time laptop.  It is just a little too inconvenient to have two laptops for work (at least for me).  I hope when that happens, Apple will also provide an option to remove the TouchBar or at least make it an additional thing above the function keys that can be ignored for those of us who have no use for it.

Impressions of Win7 Beta Upgrade from Vista Ultimate SP1

Over this past weekend, I upgraded a laptop from Vista Ultimate SP1 to the new Windows 7 Beta (both 64-bit).  I decided to throw caution to the wind and forget the whole VM thing.  The upgrade lasted just under 7 hours but, before you think that might be a long time, the same laptop when upgraded from Vista Home Premium SP1 to Vista Ultimate SP1 took a little over 6 hours 20 minutes.  The laptop is an HP Pavilion dv5 (4GB RAM, 2.1GHz AMD dual-core).

There have been no unworkable problems up to this point.  The issues that I have had so far are:

  • Skype 3.8 did not work.  The Win7 installer warned about this and I uninstalled before restarting the upgrade.  After the upgrade, I attempted to install the app (ignoring the compatibility warning) and it did install but crashes without an error dialog shortly after.  The Skype 4 beta 3 release seems to be working properly so far on Windows 7 64-bit so I will see how that goes.
  • Virtual PC did not upgrade properly.  I received a message stating that “Virtual PC could not open the Virtual Machine Network Services driver”.  The VMs did run but without any network access.  Uninstall/Reinstall of VPC fixed this issue for me.

I thought I might have a problem with TortoiseSVN but it has worked after the upgrade without any problems at all.

Visual Studio 2008 and 2005 both work fine after the upgrade.  I was expecting that I might have a problem with the Experimental Hive entries for the VS SDKs after the upgrade, but I didn’t have any problem with those either.

My first impressions of Windows 7 are good, but I’m not overwhelmingly impressed in the way that some blogs have expressed.  Memory usage seems slightly better, but nothing to get excited about.  Boot time has improved for me quite a bit, which is nice.  It is encouraging that performance of the beta appears to be at least the same as Vista Ultimate SP1 in the worst case and slightly better in some areas.

Some things have moved around in the Control Panel again, which is a little annoying.  One example is the Startup Applications entry that used to be found under Programs.  This applet appears to have gone away altogether, which is puzzling, but it is possible that I have overlooked it.  Power users know to go directly to the Startup folder, but a regular user would be stuck with whatever the installers or OEMs decide should run at startup.

The new taskbar/quickstart toolbar combo is nice but I can’t figure out a way to start a second instance of an application, short of going back to the start menu.  This was a feature of the old quickstart toolbar I would like to have back.  An option on a right-click menu would be good enough.  Hopefully I’m overlooking an option somewhere.  [Update 1/13/09 11:35am – Shift+Click will launch a second instance.  That is about the only combo I didn’t try.  Thanks to Markus Egger on Twitter for that one.]

IconLover – my icon tool of choice

I am absolutely terrible at editing graphics.  There are things in life that I am good at and the task of creating graphics for my applications is simply not one of them.  I’ve been in a continual search for an image editing program that makes the task easy for me.  Most of the time, I either contract with a graphics designer or buy image packs, but often the images are still not the exact size or format I need.

If you’ve done any work with Visual Studio add-ins or VSPackages, you know that just getting the background color correct can be a chore.  The Visual Studio graphics editor doesn’t make the task any easier either.

This search has led me to a tool that I plan on sticking with for quite a while:  IconLover.  The tool is easy to use, even for a no-talent designer like me.  My favorite feature is the ability to create image lists in a quick, easy fashion.

VSX developers will appreciate this feature a great deal.

By the way, the creator of IconLover, Aha-Soft, didn’t give me a free license or pay me anything for posting this.  I just like the tool and thought I would pass it along.

D2Sig in Houston

This past Thursday, I attended the first meeting of the D2Sig in Houston.  The D2 stands for "Developer 2 Designer" and the group will be focused on the XAML technologies of WPF and Silverlight as well as any area where developers and designers might need to work more closely than they have in the past.

Markus Egger from EPS was the presenter and, as always, he gave a great presentation which included a general overview of WPF, Silverlight, and some demo video of a Surface table in action.

I would recommend this new group to designers or developers in the Houston area who are interested in these new up-and-coming technologies.

I’m guessing around 30 or so people showed up to the first meeting so I think that is a pretty good start. 

J Sawyer has an official announcement here for the first meeting with a little more detail.  The upcoming meetings will be on the first Tuesday of each month at the Microsoft offices in Houston.

I have a confession: I sort of like Vista

Apparently Microsoft has revived the marketing tactics of Folgers Instant Coffee from back in the ’80s.  According to this article on CNET News, folks from the MS marketing team have been rounding up Vista skeptics under the guise that they will be shown a new OS code-named Mojave.  All of the subjects seem to love the new OS and afterwards are told that they have been shown Vista.

This is sort of like dining in a fine restaurant only to find out that you have been drinking Folgers Instant instead of the fine coffee normally sold there.

Until the last few months or so, I was one of those skeptics as well.  I had only used Vista a small amount and all of my primary machines were still XP Pro.  In fact, I had really only significantly used Vista in a Virtual PC image which gives a very poor impression.

My latest laptop runs Vista x64 SP1, shipped that way from HP, with a modest 2.1GHz AMD dual-core processor and 4GB of RAM, which is becoming more common on laptops even at a general retailer like Best Buy.  The laptop was reasonably priced, I thought, ringing up at just under $1100, tax included.

I sort of like it.  Your mileage may vary, but there are a few key areas that really make a difference for me.  The first is that it just plain looks nicer and is more pleasant to use.  I know Mac folks are saying that OS X looks better and has for many years.  I agree with that (I have a few Macs myself), but my business is based on Windows development, so Macs are not an option right now.

The other noticeable difference is how much better networking performs under Vista.  I often have to copy a lot of files between desktops and laptops, and XP is terrible at this.  I haven’t done any timings, but Vista is dramatically faster.

It isn’t all roses, however.  Vista does require significantly more hardware than XP.  I purchased an even cheaper laptop (~$700, tax included) this last November to run XP Pro.  The machine actually came with Vista Home Basic, but the combination of the underpowered machine and the crapware fiesta installed on it was unbearable.  That machine is a 1.8GHz dual-core with 2GB of RAM and seems to run XP SP2 at about the same speed as my new 2.1GHz AMD with 4GB of RAM runs Vista SP1.  Also, the Vista machine does take a little longer to boot up, but not significantly longer.

Of course, there is also that little problem of every setting being moved to a new dialog, or a different path to get to the same dialog.  Honestly though, I haven’t been quite as annoyed by that as I thought I would be, based on all of the complaining I have heard from other folks.  It takes a minute to find something, and then I know where it is and go on with my work.  I had the same problem when I tried OS X for the first time.

As far as 64-bit goes, I haven’t had any significant problems yet.  Since the machine is a laptop, it obviously came with drivers for the hardware on it, and my external device needs are modest.  The one crash I have had came from trying to run a memory analysis tool that didn’t claim to work on 64-bit machines.  That was ugly but, so far, has been the only hiccup.

FYI – On XP, I use TortoiseSVN with the VisualSVN package for Visual Studio integration.  This also works on Vista x64.  You will need to install the 64-bit version of TortoiseSVN (I installed the latest version at the time) and then the normal, and only, install of VisualSVN (1.5.1).  I wasn’t sure whether I was supposed to install the 32-bit or 64-bit TortoiseSVN client, since Visual Studio itself is 32-bit, but I finally found a discussion related to this on Google Groups.  My actual repo is still svn 1.4.x and I haven’t had any problems yet.

I Need to Try Firefox

I was looking over the traffic profile for the last week on this site and was surprised at the share of traffic that Firefox had.

  1. Firefox – 55.4%
  2. Mozilla Compatible Agent – 10.99%
  3. IE (what I have been using) – 8.7%
  4. Safari – 7.18%
  5. Opera – 3.18%
  6. Mozilla – 3.16%
  7. Everything else was bots and various RSS readers

I had tried Firefox a few years ago and didn’t really see any benefit, but it clearly has the dominant share among people who went to the effort of at least visiting my blog, so it must be worth looking at again.  Within the Firefox category, 3.0 had about a 30% share, 1.5.x had ~1%, and 2.0.0.x had the rest.

Do you really need to know C? I think so.

I’ve been following the podcasts that Jeff Atwood and Joel Spolsky have been doing for StackOverflow.  The podcasts are not really technical in nature; in fact, they do not have much to do with what will ultimately be the purpose of the site they are building.  Rather, they document the discussions and decisions the two are making while creating the site.

I’ve only been through the first few so far, but an interesting discussion has come up in both podcasts about whether or not programmers should know the C programming language.  Jeff does not know C and seems to come down on the side of the argument that this knowledge is not necessary.  Jeff hasn’t specifically said this out loud, that I’ve heard, but I gather this is his opinion based on how the conversation seems to flow.  Joel, on the other hand, is of the opinion that programmers should have knowledge of the lower levels of programming even though it is not part of their daily job.  His thinking on this is that the lower level knowledge gives programmers an edge even when programming with the higher level popular programming languages of today.

I have to agree with Joel on this.  In my experience working with programmers in both categories, those who have a background of knowledge of the lower level programming languages always seem to be quicker at solving more complicated problems.  Of course, there are exceptions to this rule but I would say 98% of the time this is true.

It is interesting also that when this topic comes up with colleagues, opinion splits almost exactly along that line: those who do not know C believe it is not necessary, while those who do have experience with C believe that experience and knowledge make them much better developers.

One good example supporting my argument (actually Joel’s argument) is garbage collection related issues.  I’ve seen programmers spend a huge amount of time attempting to understand why the runtime memory size of their program is continuing to grow when, in their minds, the garbage collector should be coming to the rescue.  Of course, the problem is usually that they somehow have a reachable reference to a huge collection of objects or something of this nature (usually several in fact).  Programmers with the lower level knowledge seem to pick up on these sorts of problems much quicker.
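
The reachable-reference problem above can be sketched in a few lines.  I am using Java here, and the class and cache names are purely illustrative:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // A long-lived, never-cleared list: everything added here stays
    // strongly reachable, so the garbage collector can never reclaim it.
    static final List<byte[]> CACHE = new ArrayList<>();

    // Returns true if the block was collected after we dropped our
    // local reference.  It never is, because CACHE still reaches it.
    static boolean isCollected() {
        byte[] block = new byte[1024 * 1024];
        CACHE.add(block);                         // the hidden strong reference
        WeakReference<byte[]> ref = new WeakReference<>(block);
        block = null;
        System.gc();                              // only a hint, and irrelevant here
        return ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println(isCollected());        // prints "false"
    }
}
```

The weak reference is never cleared because the list still holds a strong reference, which is exactly why the runtime memory size keeps growing even though the programmer believes the objects are garbage.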

Another area I have seen many issues with is threading.  Languages like C# and Java make threading a reachable concept for the programming masses.  This is a good thing unless you do not understand the underlying concepts of threading.  I cannot begin to calculate how many conversations I have had with programmers concerning the thread safety of their methods.  I also cannot count the number of blank stares I have received when I ask about the concept of thread safety in interviews.
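
As a minimal illustration of what thread safety means in practice, here is a Java counter whose increment is made atomic with `synchronized`; the names are illustrative:

```java
public class SafeCounter {
    private long count = 0;

    // synchronized makes the read-modify-write of count++ atomic;
    // remove it and the two threads below can interleave and lose increments.
    public synchronized void increment() { count++; }
    public synchronized long get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());   // always 200000 with synchronization
    }
}
```

Without the `synchronized` keyword, the count routinely comes out short, which is the kind of result that produces those blank stares.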

I know that most will say that I am bringing up edge case problems that are not normal in business programming.  I am willing to concede that.  However, I also agree with Joel’s approach in that I tend not to hire programmers that do not have this knowledge because it happens often enough to be a problem.  There are exceptions, but those programmers are “exceptional”.


Heading to VSLive

Next week, I’ll be heading to VSLive.  The pre-conference workshops actually begin Sunday, March 30, but I will be arriving a few days early for a short vacation with the family for Spring Break.

If you are going to be there and want to meet up, reply to this post or send me an email.

Also, if you are interested, I can give you a demo of the product I have been working on.  It is called Reference Assistant and is a Visual Studio add-in that helps resolve issues around project references and type information expressed in configuration files.  Reference Assistant allows you to specify multiple sets of reference directories and dynamically switch between them, make sure that particular reference directories are always present for certain projects, add a reference directory to multiple projects at the same time, and helps resolve conflicts between different versions of assembly references.  It also parses app.config section handler types and configuration files for the major Dependency Injection containers and helps to resolve those references.  Reference Assistant supports its own method of extensibility, allowing developers to add functionality that is not supported out of the box.  For instance, developers could add parsing for their own custom configuration file formats or support for a custom-written plug-in discovery framework.

This is a brief description of the major features but there are many other smaller nice things as well.  Reference Assistant is in closed beta currently but the public beta is coming very soon along with the launch of the website and much more information.

Even if you aren’t interested in a demo, we can get together and talk about .NET, Visual Studio, the sinking value of the US dollar, or whatever.

Hope to see you there!


Reasons to like NHibernate and ORM

Ayende Rahien, a contributor to NHibernate along with many other open source projects, has written up a post listing the features he likes about the ORM tool.

I am a fan of ORM and have used NHibernate extensively on a large project over the last few years.  Prior to that, I had used TopLink and Hibernate on Java projects.

Ayende’s list pretty much summarizes why I like NHibernate, and his points apply to many other ORM products as well:

  • Caching.  NHibernate is especially flexible here, but many ORM products excel in this area over SQL-based DAL implementations.
  • Multi-database support.  Using an ORM insulates your code from the differences in database dialects.  This might sound insignificant, but if you have ever had to migrate an application from Oracle to SQL Server, for instance, you know what a chore it can be.
  • A dramatic reduction in plumbing code for your application.  Although he does not say this directly in his list, he does mention writing applications with no SQL in them.  What I’ve noticed is that a ton of code gets written in an application just to put data into the domain model and pull it back out.  Using an ORM removes this code from your application and puts it in the domain of the ORM, which is code someone else writes and tests.
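
To illustrate the multi-database point: in NHibernate the database dialect is a single configuration property, so a migration between databases can begin with a one-line change.  This is only a sketch; the dialect class names are real NHibernate 1.x-era classes, but the connection details are placeholders:

```xml
<!-- hibernate.cfg.xml (sketch): swap one property to change databases -->
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.MsSql2000Dialect</property>
    <!-- For Oracle, use instead:
    <property name="dialect">NHibernate.Dialect.Oracle9Dialect</property> -->
    <property name="connection.connection_string">...</property>
  </session-factory>
</hibernate-configuration>
```

Mapping files may still need minor adjustments, but the generated SQL changes automatically with the dialect.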

Using an ORM seems to encounter a great deal of resistance from some enterprise developers.  Below are a few of the reasons that are usually given against using an ORM product on a project:

  • Even though you are not writing SQL, you still have to perform the mapping.
  • The SQL generated by the ORM cannot be optimized for specific cases and will be slower.
  • Using reflection is slow.
  • Using an ORM makes the data access problems more difficult to debug.
  • Everyone knows SQL and nobody knows <ORM product>.

Even though you are not writing SQL, you still have to perform the mapping

This is a good point.  It is true that the work of mapping data to columns in a database moves from SQL to the mapping layer of the ORM.  In the case of NHibernate, that means XML files.  So in that sense, no work or time is saved.  The time savings come into the picture when the mappings are actually used: NHibernate takes those mapping files and performs the domain object instantiation and population for you.
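
For a feel of what that mapping layer looks like, a class-to-table mapping in NHibernate’s XML format might read as follows.  The class, table, and column names here are hypothetical, not from a real project:

```xml
<?xml version="1.0"?>
<!-- Customer.hbm.xml (sketch): maps a Customer class to a CUSTOMERS table -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Customer" table="CUSTOMERS">
    <id name="Id" column="CUSTOMER_ID">
      <generator class="native" />
    </id>
    <property name="Name" column="NAME" />
    <property name="Email" column="EMAIL" />
  </class>
</hibernate-mapping>
```

Once this file exists, loading, populating, and saving Customer objects requires no hand-written SQL or population code at all.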

Detractors of the ORM approach might counter that data constructs such as the typed dataset also remove the need to write object population code.  Although that might be true, I believe that passing datasets around your application leads to poor application architecture, not to mention performance problems.  Typed datasets are notoriously slow when it comes to serialization.

The SQL generated by the ORM cannot be optimized for specific cases and will be slower

This might be true for certain ORM products, but it has not been my experience with NHibernate.  The generated SQL is quite good.  However, I have encountered situations where the SQL was unacceptably slow, and in those cases I can almost always place the blame on a poorly designed database model.

NHibernate has the ability to pass raw SQL to the database if needed.  SQL optimizations can also be placed in the mapping files to help in situations where NHibernate cannot generate the optimal queries.  Named query support, added in NHibernate 1.2, has helped as well for specific query situations that fall outside the normal path for loading a given object type.

Using reflection is slow

This is always an interesting discussion to have with a stubborn developer.  Yes, it is true that a method invocation via reflection is slower than a method invocation compiled directly into the IL.  The tradeoff you are making is a little speed for flexibility.  The reality of the situation is that the slowdown from reflection is dwarfed by the I/O wait for the database, and you will not notice it.
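
For the curious, this is all that a reflective invocation amounts to; I am sketching it in Java, and the method choice is arbitrary:

```java
import java.lang.reflect.Method;

public class ReflectSketch {
    // Invokes String.toUpperCase reflectively: the lookup and invoke
    // go through extra runtime machinery, which is why this is slower
    // than a direct call -- but the overhead is tiny next to a
    // database round trip.
    public static String upper(String s) throws Exception {
        Method m = String.class.getMethod("toUpperCase");
        return (String) m.invoke(s);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(upper("hello"));  // prints "HELLO"
    }
}
```

An ORM does essentially this (plus caching of the `Method` lookup) for every mapped property, which is the cost the reflection optimizer discussed below is designed to remove.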

NHibernate also has functionality called the reflection optimizer, which dynamically generates classes for populating your domain objects at startup time.  This trades startup time for removing the cost of reflection on each domain object population.  Early versions of the reflection optimizer were noticeably slower at startup with large numbers of mapped classes, but this has been corrected in newer versions of NHibernate.

In 99% of cases, performance problems in enterprise applications are caused by poor database design or poor approaches to data retrieval.  I should lump screens that display too much data into this category as well.  The other 1% of the time, the performance problems are caused by poorly written threaded code, poor algorithms, or poor integration choices with third-party applications.  I have never encountered an enterprise application that was too slow because of reflection usage.

Using an ORM makes the data access problems more difficult to debug

I hear this complaint a lot at the beginning of projects, specifically from developers who have never used an ORM before.  I would say that the real statement that should be made on the subject is:

Using an ORM makes the data access problems different to debug.

Most of the time, a developer simply needs to become familiar with the debugging facilities available in the ORM tool they are using.  There is a cost to learning what is available, just as there is an up-front cost to learning any new tool or concept, but the productivity that ORM gives a project overall will make up for the learning curve.
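For example, NHibernate can be told to echo every SQL statement it generates, which is usually the first debugging facility to reach for when a query misbehaves.  A minimal sketch of the relevant configuration property (the surrounding app.config section is omitted and varies by NHibernate version):

```xml
<!-- Echo every generated SQL statement to standard output. -->
<add key="hibernate.show_sql" value="true" />
```

Seeing the exact SQL the ORM emits usually turns a "mysterious ORM problem" back into an ordinary query-tuning problem.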

Everyone knows SQL and nobody knows <ORM product>

I also hear this complaint a lot at the beginning of projects where I propose using an ORM.  It is difficult to convince someone who has never used an ORM product that it will pay off once the learning curve is overcome.  There are many blog posts on the web complaining that ORM does not pay off in the long run, but that simply has not been my experience.

The response I usually give is to try the product in a pilot or proof of concept to weigh the pluses and minuses.  Most clients are won over when they see the benefits of caching in the ORM and the ability to migrate databases with only minor mapping file changes (or maybe none).  Also, when they actually try using NHibernate or another ORM product, they find that it is not nearly as unapproachable as they thought when they first heard of the concept.


Beautiful T1ness

This is my first post utilizing my freshly installed T1 line.  It is everything I thought it could be.

The image above is from SpeakEasy.  I also tried another site that uses a Java applet, and it reported about the same upload speed.

The whole process worked like this:

  • I searched Google for "T1" and my hometown, which got me a list of brokers.  I filled out the online forms for a few and received pricing via email.  I only looked at firms that would give me pricing without speaking to me first.
  • You can get service either with or without a router included.  I wanted a router included because I know next to nothing about networking hardware, and the provider configures the whole thing for you if you get one through them.  You can also get voice service included in the quote, or data only.  I went with data only and a full T1; I also saw quotes for 1/2 and 1/3 T1 service.
  • I chose Access2Go as my provider.
  • They fax you paperwork to sign and fax back, and then the scheduling process begins.  It took about a month from when I signed the paperwork until I had a working line.  I assume it took this long because I live in the middle of nowhere, but maybe it always takes this long even in the city.
  • About once a week, Access2Go emailed me some configuration information and a status update.  In the end, they give you the external IP address for the router, called the serial address, and a block of IP addresses that the router is pre-configured to route inside your network.  The rest is information about the line itself and is only useful if something goes wrong.  When you activate, they give you the addresses of the DNS servers and other information like that.
  • The line is actually through Qwest, although AT&T is responsible for putting the line in.  Two days ago, the installer showed up and told me that I already had adequate lines run up to my house, which was nice because we had received quite a bit of rain and I was concerned about giant trench marks across my property.  Normally, AT&T only runs the wire to the outside of the building, and it is your responsibility to run it to where you need it; in my case, that was the closet in my office.  Since I also know nothing about pulling wire, I was going to pay them to run it to my closet, but was pleasantly surprised when the installer told me the wire I needed was already there.  He put in a jack, did a bunch of testing, and left.  He also told me that some rather well-off folks down the road have two T1 lines running to their house: one for data, and one for monitoring their wine cellar.  I am not sure why a high-speed line is needed to monitor wine, but I thought that was interesting.
  • Later the same day the installer was here, I received the T1 router, a Cisco 1721 with a WAN card, via UPS.  No documentation came with it, so I registered on the Cisco website and downloaded everything.  Reading it all helped me understand the configuration information Access2Go had emailed me, though in reality you don't need the documentation except to know which cables plug in where.  I also tried out the serial port interface to the router, just because I thought it was cool.
  • This morning I called Access2Go to hook everything up.  A patch cable (a regular Ethernet cable) goes from the jack to the WAN port on the T1 router.  A crossover cable goes from the Ethernet port on the T1 router out to my network; in my case, that is a business-class router from Linksys that supports secure VPN and has a good firewall.  The router on your network gets assigned one of the LAN IP addresses from the block they give you.  Access2Go got a Qwest rep on the line, we tested everything, and I was up and running.  The call lasted less than 10 minutes and was painless.

This is a good option for telecommuters who live somewhat out in the country like I do.  It is much more expensive than the DSL or cable you can get in the city, and the download speed is not as good as the higher end of those services.  Upload speed is outstanding, however.  Also, a T1 is dedicated and has an SLA of 4 hours for someone to arrive on the scene if it quits working.  You won't get that with DSL or cable, at least not at the consumer rate.
