dotNET Connections

Wednesday, August 27, 2008

Heterogeneous Data Access in .NET – Approach #1: Coding Directly to ADO.NET

Guest blogging once again for Jonathan, this is Mike Frost with part 1 of my series on Heterogeneous Data Access in .NET: Coding Directly to ADO.NET.

Before I begin, though, I should make a comment about the reason for this series. After many conversations with .NET developers and with development organizations using .NET, we have heard a lot of confusion over the different approaches to accessing multiple databases in .NET. In some cases, what we heard was misinformation; in other cases it was a lack of information. In most cases, though, people simply had different sets of needs and experiences, and those differences often dictated a different set of requirements for each organization we spoke with. So what I'm offering isn't intended to be the "final" answer on this matter - merely a set of advice and guidelines based on our experience with this subject and with others who have wished to learn more about it for themselves.

With that said, the most obvious place to start when discussing how to develop an application that requires access to multiple relational data stores is to talk about ADO.NET. Historically (and even now, given the many announcements concerning the ADO.NET Entity Framework), building code that accesses the ADO.NET interfaces directly is easily the most prevalent approach in .NET applications. If you’ve ever built an ASP.NET application that connected to Oracle and used Visual Studio tooling to do it, then the database access code that was generated used ADO.NET under the covers.
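To make that concrete, here is a minimal sketch (in C#) of what coding directly to a single ADO.NET provider typically looks like. The SQL Server provider and the Northwind-style connection string and table names are placeholder assumptions chosen purely for illustration:

```csharp
using System;
using System.Data.SqlClient;

class DirectAdoNetExample
{
    static void Main()
    {
        // Placeholder connection string and table names (Northwind-style sample data).
        string connectionString =
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
                   "SELECT CustomerID, CompanyName FROM Customers", connection))
        {
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Note how the code is tied to SqlClient types from top to bottom.
                    Console.WriteLine("{0}: {1}", reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}
```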

The Pros – Coding directly to ADO.NET is perhaps the best option for experienced developers who have a strong background in ADO.NET, want to maintain full control of their data, and require the power of the database to be at their immediate disposal. Coupled with this, writing to ADO.NET allows for very granular control of database access code, which can be leveraged to ensure the most efficient database access for a particular application. Put simply, if you know what you’re doing with ADO.NET, you can probably do a very good job coding your application with this approach.

The Cons – All of that granular control means that a significant amount of application development time will likely be required. In addition, the developer will need to know which ADO.NET providers will be used ahead of time. Finally, unless the developer is a thorough planner and careful coder, he or she can fall into the trap of using provider-specific code. Unfortunately many developers are unaware of these factors during the initial development phase. As a result, it isn’t until after a significant investment of time and effort has been made that the impact of these factors becomes apparent.

The use of Visual Studio tooling during development isn’t inherently bad, nor are most developers coding without some foresight into what they are doing. Unfortunately, there is a natural tendency to take the path of least resistance. This tendency, combined with a certain level of code abstraction that Visual Studio tooling can create, often leads to applications that are mired down in huge masses of provider-specific code.

While this result might be acceptable for single data source data access (e.g. SQL Server only), it does lock the application to the provider it was originally written to work with. Consequently, adding support for additional database providers ultimately becomes a huge headache as application code and SQL statements must all be rewritten to account for differences in provider code, semantics, and SQL statement formatting. Ask anyone who has been faced with this situation and they will tell you that it’s a nightmare to deal with!
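To illustrate the kind of rewriting involved, here is a small sketch of how even a simple query changes from provider to provider; the table, columns, and provider names are assumptions made purely for illustration:

```csharp
using System;

class QuerySyntaxExample
{
    // The same logical "latest five orders" query, expressed for two different databases.
    static string BuildLatestOrdersQuery(string providerName)
    {
        switch (providerName)
        {
            case "System.Data.SqlClient":
                // SQL Server syntax.
                return "SELECT TOP 5 OrderID, OrderDate FROM Orders ORDER BY OrderDate DESC";
            case "Oracle":
                // Oracle syntax for the same result.
                return "SELECT * FROM (SELECT OrderID, OrderDate FROM Orders " +
                       "ORDER BY OrderDate DESC) WHERE ROWNUM <= 5";
            default:
                throw new NotSupportedException(providerName);
        }
    }

    static void Main()
    {
        // Parameter markers differ as well: "@name" for SqlClient versus ":name" for Oracle providers.
        Console.WriteLine(BuildLatestOrdersQuery("System.Data.SqlClient"));
        Console.WriteLine(BuildLatestOrdersQuery("Oracle"));
    }
}
```

Multiply that by every query and parameter marker in an application, and the scale of the rework becomes clear.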

It is worth noting that the use of connection factories (link) can reduce the amount of provider-specific code with this approach. While this does not eliminate all provider-specific coding, it can help mitigate some of the hassle of trying to support multiple providers and data sources.
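For reference, here is a minimal sketch of that factory-based style using the standard System.Data.Common.DbProviderFactories API; the provider name, connection string, and SQL text are placeholder values:

```csharp
using System;
using System.Data.Common;

class FactoryExample
{
    // The provider name and connection string would normally come from configuration;
    // the values used in Main below are placeholders.
    static void PrintFirstColumn(string providerName, string connectionString, string commandText)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);

        using (DbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString = connectionString;
            connection.Open();

            using (DbCommand command = connection.CreateCommand())
            {
                command.CommandText = commandText;
                using (DbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetValue(0));
                    }
                }
            }
        }
    }

    static void Main()
    {
        PrintFirstColumn("System.Data.SqlClient",
                         "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True",
                         "SELECT CompanyName FROM Customers");
    }
}
```

Because the provider name can come from configuration, swapping providers no longer requires touching this code, although the SQL text itself still has to be written in a portable way.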

So, to summarize:

Coding Directly to ADO.NET

Pros:
  • Available today
  • Allows granular control of database access code
  • Best option for developers with a strong background in coding to ADO.NET spec
Cons:
  • Requires careful coding on the part of the developer to avoid provider-specific code where possible
  • Requires more development time as compared with other approaches
  • Requires prior knowledge of what providers will be used or recoding to add support for additional providers
  • Applications tend to get locked to a specific provider
My next post will cover an approach specifically designed to help architect .NET applications requiring heterogeneous data access – programming with the Microsoft Data Access Application Block.


Tuesday, August 12, 2008

Heterogeneous Data Access in .NET – The Introduction

Things have been fairly quiet around here since Microsoft’s Tech Ed event, so I’ve invited Mike Frost, our Product Marketing Manager for ODBC and ADO.NET technologies, to add some of the thoughts we’ve been discussing over the past while. This is the first of a series of guest postings in which Mike will share his thinking in this venue. If you have a chance, check out his blog as well.

So, welcome Mike!



Thanks, JB. Comments and descriptions of the advantages and benefits of developing software using Microsoft’s .NET environment are easy to find – there are websites and blogs galore that go into these details ad nauseam. Unfortunately, finding a clear, concise comparison of the different approaches to developing or modifying a .NET application to access multiple data sources has been next to impossible…until now. ^_^

This is part one of a series of posts aimed at untangling the web of options and technologies available for heterogeneous data access in .NET. In each posting, I will introduce an approach and explain its benefits and drawbacks, as well as its options and limitations. By the time this series is concluded, we will have a nice summary of information all in one place that will allow anyone to make an informed assessment of the right approach for any project or organization, no matter how great or small.


Monday, August 11, 2008

ADO.NET Entity Framework goes RTM

RTM is a big milestone for any product - especially a first release of final bits that is available, by default, in the platform. So I tip my hat to everyone in building 35 for what must be a very gratifying day.
It's been quite a journey since Microsoft flew us up to Seattle for a three-day immersion in their road map.

We were delighted to be able to participate in today's announcements too. I'll leave it to John Goodson to sum it up:

""DataDirect Technologies is delighted to see the ADO.NET Entity Framework RTM," said John Goodson, vice president and general manager of DataDirect Technologies. "We are firmly committed to the ADO.NET Entity Framework and look forward to offering Oracle connectivity in the near future."

I expect to be able to talk a lot more about the ADO.NET Entity Framework and Connect for ADO.NET on this blog very soon.
