Impressions of Win7 Beta Upgrade from Vista Ultimate SP1

Over this past weekend, I upgraded a laptop from Vista Ultimate SP1 to the new Windows 7 Beta (both 64-bit).  I decided to throw caution to the wind and forget the whole VM thing.  The upgrade took just under 7 hours but, before you conclude that is a long time, the same laptop took a little over 6 hours 20 minutes to upgrade from Vista Home Premium SP1 to Vista Ultimate SP1.  The laptop is an HP Pavilion dv5 (4 GB RAM, 2.1 GHz AMD dual-core).

There have been no unworkable problems up to this point.  The issues that I have had so far are:

  • Skype 3.8 did not work.  The Win7 installer warned about this, so I uninstalled it before restarting the upgrade.  After the upgrade, I installed the app anyway (ignoring the compatibility warning); it installed but crashes shortly after launch without an error dialog.  The Skype 4 beta 3 release seems to be working properly so far on Windows 7 64-bit, so I will see how that goes.
  • Virtual PC did not upgrade properly.  I received a message stating that “Virtual PC could not open the Virtual Machine Network Services driver”.  The VMs did run, but without any network access.  Uninstalling and reinstalling VPC fixed the issue for me.

I thought I might have a problem with TortoiseSVN but it has worked after the upgrade without any problems at all.

Visual Studio 2008 and 2005 both work fine after the upgrade.  I was expecting that I might have a problem with the Experimental Hive entries for the VS SDKs after the upgrade, but I didn’t have any problem with those either.

My first impressions of Windows 7 are good, but I’m not overwhelmingly impressed in the way that some blogs have expressed.  Memory usage seems slightly better, but nothing to get excited about.  Boot time has improved quite a bit for me, which is nice.  It is encouraging that the beta’s performance appears to be at worst on par with Vista Ultimate SP1 and slightly better in some areas.

Some things have moved around in the Control Panel again, which is a little annoying.  One example is the Startup Applications applet that used to be found under Programs.  It appears to have gone away altogether, which is puzzling, though it is possible that I have overlooked it.  Power users know to go directly to the startup folder, but a regular user would be stuck with whatever the installers or OEMs decide should run at startup.
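For reference, the per-user startup folder I go to directly is at the path below (the same location Vista uses; I’m assuming the beta hasn’t moved it):

```text
%AppData%\Microsoft\Windows\Start Menu\Programs\Startup
```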

The new taskbar/Quick Launch combo is nice, but I can’t figure out a way to start a second instance of an application, short of going back to the Start menu.  This was a feature of the old Quick Launch toolbar I would like to have back; an option on a right-click menu would be good enough.  Hopefully I’m overlooking an option somewhere.  [Update 1/13/09 11:35am – Shift+Click will launch a second instance.  That is about the only combo I didn’t try.  Thanks to Markus Egger on Twitter for that one.]

IconLover – my icon tool of choice

I am absolutely terrible at editing graphics.  There are things in life that I am good at and the task of creating graphics for my applications is simply not one of them.  I’ve been in a continual search for an image editing program that makes the task easy for me.  Most of the time, I either contract with a graphics designer or buy image packs, but often the images are still not the exact size or format I need.

If you’ve done any work with Visual Studio add-ins or VSPackages, you know that just getting the background color correct can be a chore.  The Visual Studio graphics editor doesn’t make the task any easier either.

This search has led me to a tool that I plan on sticking with for quite a while: IconLover.  The tool is easy to use, even for a no-talent designer like myself.  My favorite feature is the ability to create image lists in a quick, easy fashion.

VSX developers will appreciate this feature a great deal.

By the way, the creator of IconLover, Aha-Soft, didn’t give me a free license or pay me anything for posting this.  I just like the tool and thought I would pass it along.

D2Sig in Houston

This past Thursday, I attended the first meeting of the D2Sig in Houston.  The D2 stands for "Developer 2 Designer" and the group will be focused on the XAML technologies of WPF and Silverlight as well as any area where developers and designers might need to work more closely than they have in the past.

Markus Egger from EPS was the presenter and, as always, he gave a great presentation which included a general overview of WPF, Silverlight, and some demo video of a Surface table in action.

I would recommend this new group to designers or developers in the Houston area who are interested in these up-and-coming technologies.

I’m guessing around 30 or so people showed up to the first meeting, so I think that is a pretty good start.

J Sawyer has an official announcement here for the first meeting with a little more detail.  The upcoming meetings will be the first Tuesday of each month at the Microsoft offices in Houston.

I have a confession: I sort of like Vista

Apparently Microsoft has revived the marketing tactics of Folgers Instant Coffee from back in the ’80s.  According to this article on CNET News, folks from the MS marketing team have been rounding up Vista skeptics under the guise that they will be shown a new OS code-named Mojave.  All of the subjects seem to love the new OS and are told afterwards that they have been shown Vista.

This is sort of like dining in a fine restaurant only to find out that you have been drinking Folgers Instant instead of the fine coffee normally sold there.

Until the last few months or so, I was one of those skeptics as well.  I had only used Vista a small amount, and all of my primary machines were still on XP Pro.  In fact, the only significant time I had spent with Vista was in a Virtual PC image, which gives a very poor impression.

My latest laptop runs Vista x64 SP1, shipped that way from HP, with a modest AMD 2.1 GHz dual-core processor and 4 GB of RAM, a configuration that is becoming more common on laptops even at a general retailer like Best Buy.  The laptop was reasonably priced, I thought, ringing up at just under $1100 tax included.

I sort of like it.  Your mileage may vary, but there are a few key areas that really make a difference for me.  The first is that it just plain looks nicer and is more pleasant to use.  I know Mac folks will say that OS X looks better and has for many years.  I agree (I have a few Macs myself), but my business is based on Windows development, so Macs are not an option right now.

The other noticeable difference is how much better networking performs under Vista.  I often have to copy a lot of files between desktops and laptops, and XP is terrible at this.  I haven’t done any timings, but Vista is dramatically faster.

It isn’t all roses, however.  Vista demands significantly more hardware than XP.  I purchased an even cheaper laptop (~$700 tax included) this last November to run XP Pro.  That machine actually came with Vista Home Basic, but the combination of underpowered hardware and the crapware fiesta installed on it was unbearable.  It is a 1.8 GHz dual-core with 2 GB of RAM, and it seems to run XP SP2 at about the same speed as my new AMD 2.1 GHz with 4 GB runs Vista SP1.  The Vista machine also takes a little longer to boot, but not significantly so.

Of course, there is also that little problem of every setting being moved to a new dialog, or a different path to get to the same dialog.  Honestly, though, I haven’t been as annoyed by that as I expected to be, based on all of the complaining I have heard from other folks.  It takes a minute to find something, and then I know where it is and go on with my work.  I had the same problem when I tried OS X for the first time.

As far as 64-bit goes, I haven’t had any significant problems yet.  Since the machine is a laptop, it obviously came with drivers for its hardware, and my external device needs are modest.  The one crash I have had came from trying to run the memory analysis tool on Crucial.com, which never claimed to work on 64-bit machines.  That was ugly but, so far, has been the only hiccup.

FYI – On XP, I use TortoiseSVN with the VisualSVN package for Visual Studio integration.  This also works on Vista x64.  You will need to install the 64-bit version of TortoiseSVN (I installed version 1.5.0.13316, which was the latest) and then the normal, and only, install of VisualSVN (1.5.1).  I wasn’t sure whether I was supposed to install the 32-bit or 64-bit TortoiseSVN client, since Visual Studio itself is 32-bit, but I finally found a discussion about this on Google Groups.  My actual repository is still svn 1.4.x and I haven’t had any problems yet.

I Need to Try Firefox

I was looking over the traffic profile for the last week on this site and was surprised at the share of traffic that Firefox had.

  1. Firefox – 55.4%
  2. Mozilla Compatible Agent – 10.99%
  3. IE (what I have been using) – 8.7%
  4. Safari – 7.18%
  5. Opera – 3.18%
  6. Mozilla – 3.16%
  7. Everything else was bots and various RSS readers

I tried Firefox a few years ago and didn’t really see any benefit, but it clearly has the dominant share among people who went to the effort of at least visiting my blog, so it must be worth looking at again.  Within the Firefox category: 3.0 had about a 30% share, 1.5.x had ~1%, and 2.0.0.x had the rest.

Do you really need to know C? I think so.

I’ve been following the podcasts that Jeff Atwood and Joel Spolsky have been doing for StackOverflow.  The podcasts are not really technical in nature; in fact, they do not have much to do with what will ultimately be the purpose of the site they are building.  They mostly document the discussions and decisions the two are making while creating the site.

I’ve only been through the first few so far, but an interesting discussion has come up in both podcasts about whether programmers should know the C programming language.  Jeff does not know C and seems to come down on the side that this knowledge is not necessary.  He hasn’t specifically said so out loud, that I’ve heard, but I gather this is his opinion from how the conversation flows.  Joel, on the other hand, is of the opinion that programmers should understand the lower levels of programming even though it is not part of their daily job.  His thinking is that this low-level knowledge gives programmers an edge even when working in the popular higher-level languages of today.

I have to agree with Joel on this.  In my experience working with programmers in both categories, those with a background in lower-level programming languages always seem to be quicker at solving complicated problems.  Of course, there are exceptions to this rule, but I would say it holds 98% of the time.

It is also interesting that when this topic comes up with colleagues, opinion splits almost exactly along the same line: those who do not know C believe it is not necessary, and those who have experience with C believe that experience makes them much better developers.

One good example supporting my argument (actually Joel’s argument) is garbage collection.  I’ve seen programmers spend a huge amount of time trying to understand why the runtime memory footprint of their program keeps growing when, in their minds, the garbage collector should be coming to the rescue.  The problem is usually that they have somehow kept a reachable reference to a huge collection of objects, or something of that nature (usually several, in fact).  Programmers with the lower-level knowledge pick up on these sorts of problems much more quickly.
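A minimal C# sketch of the kind of leak I mean (the OrderAudit cache and its sizes are hypothetical): the collector is working fine, but a static list keeps every allocation reachable, so nothing is ever eligible for collection.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical audit cache: a static field is a GC root, so everything
// added to this list stays reachable for the life of the process.
static class OrderAudit
{
    public static readonly List<byte[]> History = new List<byte[]>();
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 10000; i++)
        {
            byte[] payload = new byte[100000];   // ~100 KB per "order"
            OrderAudit.History.Add(payload);     // the leak: still reachable
        }

        // The collector runs, but none of the payloads can be reclaimed
        // because the static list still references every one of them.
        GC.Collect();
        Console.WriteLine("Managed heap: {0} MB",
            GC.GetTotalMemory(true) / 1000000);
    }
}
```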

Another area where I have seen many issues is threading.  Languages like C# and Java make threading a reachable concept for the programming masses.  This is a good thing, unless you do not understand the underlying concepts.  I cannot begin to count how many conversations I have had with programmers concerning the thread safety of their methods.  I also cannot count the number of blank stares I have received when I ask about thread safety in interviews.
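As a minimal C# sketch of what I mean (the Counter class is hypothetical): the classic unsafe read-modify-write next to its locked counterpart.  Run it with UnsafeIncrement and the final count will usually come up short of 400000.

```csharp
using System;
using System.Threading;

// Hypothetical counter shared between threads.
class Counter
{
    private int _count;
    private readonly object _gate = new object();

    // Not thread safe: the read-modify-write can interleave across
    // threads, losing increments.
    public void UnsafeIncrement() { _count = _count + 1; }

    // Thread safe: the lock serializes access to the shared field.
    public void SafeIncrement() { lock (_gate) { _count = _count + 1; } }

    public int Count { get { lock (_gate) { return _count; } } }
}

class Program
{
    static void Main()
    {
        Counter counter = new Counter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                    counter.UnsafeIncrement(); // swap in SafeIncrement() to fix
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        // Four threads x 100000 increments should print 400000;
        // with UnsafeIncrement it will usually print less.
        Console.WriteLine(counter.Count);
    }
}
```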

I know that most will say that I am bringing up edge case problems that are not normal in business programming.  I am willing to concede that.  However, I also agree with Joel’s approach in that I tend not to hire programmers that do not have this knowledge because it happens often enough to be a problem.  There are exceptions, but those programmers are “exceptional”.


Heading to VSLive

Next week, I’ll be heading to VSLive.  The pre-conference workshops actually begin Sunday, March 30, but I will be arriving a few days early for a short vacation with the family for Spring Break.

If you are going to be there and want to meet up, reply to this post or send me an email.

Also, if you are interested, I can give you a demo of the product I have been working on.  It is called Reference Assistant, a Visual Studio add-in that helps resolve issues around project references and type information expressed in configuration files.  Reference Assistant lets you specify multiple sets of reference directories and dynamically switch between them, make sure that particular reference directories are always present for certain projects, add a reference directory to multiple projects at the same time, and resolve conflicts between different versions of assembly references.  It also parses app.config section handler types and the configuration files of the major Dependency Injection containers and helps resolve those references.  Reference Assistant supports its own extensibility mechanism, allowing developers to add functionality that is not supported out of the box.  For instance, developers could add parsing for their own custom configuration file formats, or support for a custom-written plug-in discovery framework.

This is a brief description of the major features but there are many other smaller nice things as well.  Reference Assistant is in closed beta currently but the public beta is coming very soon along with the launch of the website and much more information.

Even if you aren’t interested in a demo, we can get together and talk about .NET, Visual Studio, the sinking value of the US dollar, or whatever.

Hope to see you there!


Reasons to like NHibernate and ORM

Ayende Rahien, a contributor to NHibernate along with many other open source projects, has written up a post listing the features he likes about the ORM tool.

I am a fan of ORM and have used NHibernate extensively on a large project over the last few years.  Prior to that, I had used TopLink and Hibernate on java projects.

Ayende’s listing pretty much summarizes why I like NHibernate and also has points that apply to many other ORM products as well:

  • Caching.  NHibernate is especially flexible here, and many ORM products excel in this area over SQL-based DAL implementations.
  • Multi-database support.  Using an ORM insulates your code from the differences in database dialects.  This might sound insignificant, but if you have ever had to migrate an application from Oracle to SQL Server, for instance, you know what a chore it can be.
  • A dramatic reduction in plumbing code for your application.  Although he does not directly say this in his list, he does mention writing applications with no SQL in them.  What I’ve noticed is that a ton of code gets written in an application just to put data into your domain model and pull it back out.  Using an ORM removes this code from your application and puts it in the domain of the ORM, which is code someone else writes and tests (see the sketch after this list).
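To make that last point concrete, here is a minimal sketch of what the remaining data-access code looks like with NHibernate.  The Customer class and RenameCustomer method are hypothetical, and building the ISessionFactory (normally done once at startup via Configuration.BuildSessionFactory()) is omitted:

```csharp
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class CustomerRepository
{
    public static void RenameCustomer(ISessionFactory sessionFactory,
                                      int id, string newName)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // No SQL and no DataReader-to-object copying: NHibernate
            // hydrates the Customer from its mapping.
            Customer customer = session.Get<Customer>(id);
            customer.Name = newName;
            tx.Commit(); // dirty checking issues the UPDATE for us
        }
    }
}
```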

Using an ORM seems to encounter a great deal of resistance from some enterprise developers.  Below are a few of the reasons that are usually given against using an ORM product on a project:

  • Even though you are not writing SQL, you still have to perform the mapping.
  • The SQL generated by the ORM cannot be optimized for specific cases and will be slower.
  • Using reflection is slow.
  • Using an ORM makes data access problems more difficult to debug.
  • Everyone knows SQL and nobody knows <ORM product>.

Even though you are not writing SQL, you still have to perform the mapping

This is a good point.  It is true that the work of mapping data to columns in a database moves from SQL to the mapping layer of the ORM; in the case of NHibernate, this means XML files.  So in that respect, no work or time is saved.  The time savings come into the picture when the mappings are actually used: NHibernate takes those mapping files and performs the domain object instantiation and population for you.
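For those who haven’t seen one, here is a minimal sketch of an NHibernate .hbm.xml mapping file; the Customer class, table, and column names are hypothetical:

```xml
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="MyApp" namespace="MyApp.Domain">
  <class name="Customer" table="Customers">
    <!-- Primary key, generated by the database -->
    <id name="Id" column="CustomerId">
      <generator class="native" />
    </id>
    <!-- Simple property-to-column mappings -->
    <property name="Name" column="Name" />
    <property name="State" column="State" />
  </class>
</hibernate-mapping>
```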

Detractors of the ORM approach might counter that ADO.NET data constructs, for instance the typed DataSet, also remove the need to write object population code.  Although this might be true, I believe that passing ADO.NET data types around your application leads to poor application architecture, not to mention performance problems; typed DataSets are notoriously slow when it comes to serialization.

The SQL generated by the ORM cannot be optimized for specific cases and will be slower

This might be true for certain ORM products, but it has not been my experience with NHibernate.  The generated SQL is quite good.  I have encountered situations where the SQL was unacceptably slow, but in those cases I can almost always place the blame on a poorly designed database model.

NHibernate can also pass raw SQL to the database if needed, and SQL optimizations can be placed in the mapping files for situations where NHibernate cannot generate the optimal query.  Named query support, added in NHibernate 1.2, has helped as well for specific query situations outside the normal path for retrieving a given object type (see the example below).
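As a sketch of what a named query looks like (the query name and Customer entity are hypothetical), the HQL lives in the mapping file:

```xml
<!-- In the .hbm.xml mapping file -->
<query name="Customer.ByState">
  from Customer c where c.State = :state
</query>
```

and is invoked by name from code, assuming an open ISession named session:

```csharp
// Fetch the named query and bind its parameter
IList<Customer> customers = session.GetNamedQuery("Customer.ByState")
                                   .SetString("state", "TX")
                                   .List<Customer>();
```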

Using reflection is slow

This is always an interesting discussion to have with a stubborn developer.  Yes, a method invocation via reflection is slower than a method invocation compiled directly into the IL.  The tradeoff you are making is flexibility for a little speed.  The reality is that the slowdown from reflection is dwarfed by the I/O wait for the database, and you will not notice it.
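A minimal C# illustration of the two invocation styles being compared (the Customer class is hypothetical); the reflective set costs more per call, but that cost is trivial next to a database round trip:

```csharp
using System;
using System.Reflection;

public class Customer
{
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        Customer customer = new Customer();

        // Direct, compiled property set.
        customer.Name = "Acme";

        // The same assignment via reflection, as an ORM might do it.
        PropertyInfo prop = typeof(Customer).GetProperty("Name");
        prop.SetValue(customer, "Acme Corp", null);

        Console.WriteLine(customer.Name); // prints "Acme Corp"
    }
}
```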

NHibernate also has functionality called the reflection optimizer, which dynamically generates classes at startup for populating your domain objects.  This trades startup time for removing the per-object cost of reflection.  Early versions of the reflection optimizer were noticeably slower at startup with large numbers of mapped classes, but this has been corrected in newer versions of NHibernate.

In 99% of cases, performance problems in enterprise applications are caused by poor database design or poor approaches to data retrieval; I lump screens that show too much data into this category as well.  The other 1% of the time, the problems are caused by poorly written threaded code, poor algorithms, or poor integration choices with 3rd-party applications.  I have never encountered an enterprise application that was too slow because of reflection.

Using an ORM makes data access problems more difficult to debug

I hear this complaint a lot at the beginning of projects, specifically from developers who have never used an ORM before.  I would say the statement that should really be made on the subject is:

Using an ORM makes data access problems different to debug.

Most of the time, a developer simply needs to become familiar with the debugging facilities available in the ORM tool they are using.  There is a cost to learning what is available, just as there is an up-front cost to learning any new tool or concept, but the productivity an ORM gives a project overall will make up for the learning curve.

Everyone knows SQL and nobody knows <ORM product>

I also hear this complaint a lot at the beginning of projects where I propose using an ORM.  It is difficult to convince someone who has never used an ORM product that it will pay off once the learning curve is overcome.  There are many blog posts on the web complaining that ORM does not pay off in the long run, but that simply has not been my experience.

The response I usually give is to try the product in a pilot or proof of concept to weigh the pluses and minuses.  Most clients are won over when they see the benefits of caching in the ORM and the ability to migrate databases with just minor mapping file changes (or maybe none).  Also, once they actually try NHibernate or another ORM product, they find it is not nearly as unapproachable as they assumed when they first heard of the concept.


Beautiful T1ness

This is my first post utilizing my freshly installed T1 line.  It is everything I thought it could be.

The image above is from Speakeasy.  I also tried another site that uses a Java applet, and it reported about the same upload speed.

The whole process worked like this:

  • I searched Google for T1 and my hometown, which got me a list of brokers.  I filled out the online forms for a few and received pricing via email.  I only looked at firms that would give me pricing without speaking to me first.
  • You can get service either with or without a router included.  I wanted a router included because I know next to nothing about networking hardware, and the provider configures the whole thing for you if you get one through them.  You can also get voice service included in the quote, or data only.  I went with data only and a full T1; I saw quotes for 1/2 and 1/3 T1 service as well.
  • I chose Access2Go as my provider.
  • They fax you paperwork to sign and fax back, and then the scheduling process begins.  It took about a month from when I signed the paperwork until I had a working line.  I assume it took this long because I live in the middle of nowhere, but maybe it always takes this long, even in the city.
  • About once a week, Access2Go emailed me configuration information and a status update.  In the end, they give you the external IP address for the router, called the serial address, and a block of IP addresses that the router is pre-configured to route inside your network.  The rest is information about the line itself and is only useful if something goes wrong.  When you activate, they give you the addresses of the DNS servers and other information like that.
  • The line is actually through Qwest, although AT&T is responsible for putting the line in.  Two days ago, the installer showed up and told me that I already had adequate lines run up to my house, which was nice because we had received quite a bit of rain and I was concerned about giant trench marks across my property.  Normally, AT&T only runs the wire to the outside of the building, and it is your responsibility to run it to where you need it; in my case, that was the closet in my office.  Since I also know nothing about pulling wire, I was going to pay them to run it to the closet, but was pleasantly surprised when the installer told me the wire I needed was already there.  He put in a jack, did a bunch of testing, and left.  He also told me that some rather well-off folks down the road have two T1 lines running to their house: one for data, and one for monitoring their wine cellar.  I’m not sure why a high-speed line is needed to monitor wine, but I thought that was interesting.
  • Later the same day, the T1 router arrived via UPS: a Cisco 1721 with a WAN card.  It came with no documentation, so I registered with the Cisco website and downloaded everything.  Reading it all helped me understand the configuration information Access2Go had emailed me, though in reality you don’t need the documentation except to know which cables plug in where.  I also tried out the serial port interface to the router, just because I thought it was cool.
  • This morning I called Access2Go to hook everything up.  A patch cable (regular Ethernet cable) goes from the jack to the WAN port on the T1 router.  A crossover cable goes from the Ethernet port on the T1 router out to my network; in my case, that is a business-class router from Linksys that supports secure VPN and has a good firewall.  The router on your network gets assigned one of the LAN IP addresses from the block they give you.  Access2Go got a Qwest rep on the line, we tested everything, and I was up and running.  The call lasted less than 10 minutes and was painless.

This is a good option for telecommuters who live somewhat out in the country like I do.  It is much more expensive than the DSL or cable you can get in the city, and the download speed is not as good as the higher end of those services.  Upload speed is outstanding, however.  Also, a T1 is dedicated and comes with an SLA of 4 hours for someone to arrive on the scene if it quits working.  You won’t get that with DSL or cable, at least not at the consumer rate.

Having trouble getting out of bed in the morning?

For those having trouble getting out of bed in the morning when you really should, maybe you could give this a try.

You have to solve a math problem correctly to get the alarm to shut off.
