Thursday, May 12, 2005

Currently, one of my colleagues is working on a small assignment to investigate the possibilities of the Microsoft Information Bridge Framework (IBF) when it comes to exposing web service functionality to end users. The main goal of this investigation is to determine whether IBF is able to communicate with the web services that conform to our internal reference architecture.

 

Personally I am not very impressed by IBF yet, but that’s probably because I know very little about this tool. What I do know is that Microsoft is positioning this tool as a key player in its “information worker” marketing strategy, so it must be good! For me this tool is just another way to make sure the end user (or should I say Information Worker) continues using the Microsoft Office suite, which is probably one of the most important products for Microsoft’s profit. Another thing I know is that many people (especially managers :-)) see a lot of opportunities for this tool, so we decided to give IBF a fair chance.

 

Therefore my colleague was asked to test if IBF is capable of communicating with “secure” web services. In our case this means that IBF has to communicate with web services that use SOAP headers to carry “user token” information. After some investigation it turned out that IBF cannot handle SOAP headers (or at least cannot populate the headers in a “flexible” way). Because I found this very hard to believe I did a quick search on the internet, and all I found was the capability of IBF to handle “transport layer” security. I noticed that there is very little information available on the internet related to IBF and security. In my opinion using SOAP headers is fairly straightforward when it comes to web services, but I might be wrong.
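
To illustrate what we are asking of IBF, here is a minimal sketch (all names are made up) of the kind of “secure” service involved; the caller has to populate a custom SOAP header with the user token, and populating that header is exactly the part IBF apparently cannot do in a flexible way:

using System.Web.Services;
using System.Web.Services.Protocols;

// Custom SOAP header that carries the user token.
public class UserTokenHeader : SoapHeader
{
    public string Token;
}

public class CustomerService : WebService
{
    // ASP.NET populates this field from the incoming SOAP header.
    public UserTokenHeader UserToken;

    [WebMethod]
    [SoapHeader("UserToken", Direction = SoapHeaderDirection.In)]
    public string GetCustomerName(int customerId)
    {
        if (UserToken == null || UserToken.Token == null)
            throw new SoapException("Missing user token",
                                    SoapException.ClientFaultCode);
        // ... authorize the caller based on the token, then do the work ...
        return "ACME Corp";
    }
}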

 

Is it true that IBF doesn’t support SOAP headers? If so, will this be supported in a future release? If not, IBF is definitely not an option for us (at this moment).

 

If there is anybody out there who can point me to some more information about this, please let me know!

posted on 5/12/2005 12:59:11 PM UTC  #   
 Wednesday, May 11, 2005

It’s been some time since I have written anything on this blog. I have been busy with the start-up phase of some projects. For one of the projects I will be assigned to in the near future, interoperability and web service security will be a big issue. To refresh my memory I decided to go through the Basic Profile 1.0 (BP) specification once again. This profile provides implementation guidelines that help you build web services with maximum interoperability. The Web Services Interoperability organization (WS-I) provides sample implementations (built by different vendors) that demonstrate web service interoperability across the different platforms.

 

Currently the WS-I is working on the Basic Security Profile (BSP), which covers building interoperable secure web services. The BSP provides guidance for both transport-level security and (SOAP) message-level security. Personally I am more interested in the message-level security issues. To understand the BSP better I am also spending some time again on the WS-Security specs, which describe in detail how to secure web services at the message level. Of course, when thinking of web services and security in a Microsoft world we cannot forget WSE, so that is also on my to-do list again.
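
To give an idea of what message-level security looks like in code, here is a minimal WSE 2.0 sketch (the service and proxy names are made up, and the proxy is assumed to be generated with WSE support so that it derives from WebServicesClientProtocol) that attaches a WS-Security UsernameToken to the outgoing SOAP message:

using Microsoft.Web.Services2;
using Microsoft.Web.Services2.Security.Tokens;

public class SecureClient
{
    public static void CallSecureService()
    {
        // WSE-enabled proxy; exposes RequestSoapContext.
        CustomerServiceWse proxy = new CustomerServiceWse();

        // The token ends up in the wsse:Security header of the request.
        UsernameToken token =
            new UsernameToken("jdoe", "secret", PasswordOption.SendHashed);
        proxy.RequestSoapContext.Security.Tokens.Add(token);

        string name = proxy.GetCustomerName(42);
    }
}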

 

At first, reading through these WS-* specs and profiles doesn’t seem very interesting, but having a detailed look at the implementations that are available (WSE and the BP 1.0 sample application) makes life a little more interesting. After having spent some time on it I am even starting to like it. So, maybe I’ll be back with more info about web service security soon. (I promise I’ll try to make this blog no more boring than it already is.)

posted on 5/11/2005 7:36:47 PM UTC  #   
 Thursday, March 31, 2005

One of the new buzzwords in IT is “Software Factories”. Although this methodology is at a very early stage, I think it is important to understand the concepts behind it. Because Software Factories is a “new concept” and the topic covers a lot of new stuff, introducing these concepts can be hard (at least for me). Therefore I am looking for a way to get familiar with Software Factories bit by bit. Maybe mapping Software Factories to concepts that I am already using at this moment can help. At least it gives me a starting point from where I can think about Software Factories and explore the possibilities of this new methodology. Let me share some of my thoughts about the roadmap I think is best for me to get familiar with Software Factories and start working with it in the near future.

 

Software Factory

In short, a Software Factory can best be described as a fully configured development environment for developing a particular type of software. This new methodology was introduced because software development currently isn’t as successful as we want it to be. The methodology is built on experiences gathered from more successful and much more productive industries, such as the car industry. Within these industries it is very common to use highly standardized processes, predefined designs and reusable components to rapidly manufacture products. This results in a very productive manufacturing process with predictable quality of the manufactured products. Lessons learned from these industries are now used to create a new methodology, called Software Factories. Software Factories targets the software development industry and aims to provide the same kind of advantages achieved in other industries.

 

Although Software Factories are still evolving and the methodology isn’t fully supported by tools yet, I think it is possible to start using some of these concepts in software development today. To explain what I think can be done today, let me try to map the concepts of Software Factories to concepts that are more common (for me) within software development. For this purpose, let’s assume we have a logical architecture defined within our organisation that we are using for software development. Let me explain what I mean by a logical architecture (I will not cover this in detail at this moment, just enough to make my point; this is just my imaginary architecture :-)).

 

Logical Architecture

A standardized logical architecture can help architects break down the requested business functionality into the different responsibility domains. Well-defined responsibility domains provide guidance to IT project members during the design and implementation phases of a software development project. Some of the keys to success in this area are standardization, patterns and frameworks. Having these in place makes software development more manageable and can guarantee a constant level of quality. Most of the time there is some guidance available, very often in the form of a Word document or some standard components (a framework) that can be used. Unfortunately the usage of these guidelines isn’t embedded in the whole development process and is not supported by development tools at all. Let’s have a look at how Software Factories can help solve this “problem”. To make things clearer, let’s assume we have a logical architecture defined within our company. Within this architecture we make a distinction between different domains:

 

  • Presentation domain; this domain hosts all user-interaction related features, such as dialog management, multi-channel support, etc.
  • Process domain; this domain is responsible for controlling the actual business process. The business process can best be described as the way an organisation handles its procedures, workflow, tasks and information. The business process coordinates the tasks and makes sure they get executed, either by communicating with the end user (through the presentation domain) or with the business information services.
  • Business information domain; this domain maintains all information within the organisation. Information can be grouped together in logical areas, resulting in one or more business information services. Business information services are autonomous services that deliver information, or perform a business action on behalf of the business process.
  • Infrastructure domain; this domain represents the infrastructure that will be used for the application. This includes the messaging infrastructure and other generic services providing the so-called ‘cross-cutting concerns’ (e.g. security, logging, instrumentation, etc.). Services and features included in this domain are not application specific and can serve multiple applications at a time.
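
To make these domains a little more concrete, here is a purely illustrative sketch (all names are invented) of the kind of service interface that could live in each domain:

// Presentation domain: user interaction.
public interface IOrderDialog
{
    void ShowOrderDialog(int orderId);
}

// Process domain: coordinates the business process.
public interface IOrderProcess
{
    void StartOrderProcess(int customerId);
}

// Business information domain: autonomous information services.
public interface ICustomerInformation
{
    string GetCustomerName(int customerId);
}

// Infrastructure domain: cross-cutting concerns such as logging.
public interface ILoggingService
{
    void Log(string message);
}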

 

From logical architecture to Software Factory

So, now we have our logical architecture defined, what’s next? The reason we defined separate domains within our logical architecture is to group together services with similar responsibilities or capabilities. In the ideal world there would be guidelines available that help architects and developers implement services in each of the above described domains. These guidelines can be in the form of a simple Word document, but can also be represented by a framework or a set of patterns that must be used by the development team. This is exactly where I think Software Factories come in!

 

Software Factories aim to provide a fully described and configured development environment for a specific type of application (read: domain). Thinking a little further and having a look at our logical architecture, I can define at least four Software Factories (one for every domain in the logical architecture). Although four fully implemented Software Factories should be sufficient for providing the necessary guidance for implementing services or components in each of the four domains, I am not there yet! I still need a standardized way of translating the business needs into components or services in one of our four domains. To solve this issue I can create another Software Factory that provides guidance in mapping the business requirements to one of the described domains in our logical architecture. This overall Software Factory can be my factory that targets web-based distributed applications and is composed of the other four Software Factories. The five Software Factories together cover all concerns for building web-based distributed applications.

 

Because I defined five separate Software Factories (instead of one big one), each factory targets exactly one group of people in an ordinary IT project. The overall factory, for example, targets the business analyst and the architect, who work together to gather the necessary information from the end user. This Software Factory describes the process of translating and grouping the requirements from the end user into services in one of our four domains. The factory that targets the presentation domain provides the web developers in the project with the necessary patterns, frameworks and guidance to implement the user interface services, and so on for the rest of the factories.

 

So, now that I have defined my Software Factories, what’s next on my to-do list?

  

Software Factory schema and template

One of the important things in Software Factories is the software factory schema. Basically this is a document that summarizes all the artifacts we need to produce to successfully develop a piece of software. This schema doesn’t only tell us what products we have to deliver and what they look like, it also tells us in what order we have to produce them and what the relations between the various products are. The software factory schema contains both fixed and variable products, which makes the software factory more flexible. Having a well-defined software factory schema isn’t actually new (except for the name). Some other methodologies (like RUP) also summarize and describe the artifacts that must be created during a software development project. However, I think there are some differences between these methodologies and the Software Factories methodology. Within Software Factories the list of artifacts and the description of the artifacts are much more targeted at a specific type of software. Other methodologies tend to describe the artifacts in a more general way; they don’t really target the specific domains in our logical architecture, for example. I think the advantage of the software factory is that in the end it provides much more guidance!

 

Having the full list of artifacts (and their descriptions) in place doesn’t do the job on its own. To make it really useful we need the implementation of all these artifacts and, more importantly, we have to make them available to the development team. This is where the Software Factory template comes in. A Software Factory template is a collection of patterns, frameworks and a Domain Specific Language (DSL) that can be loaded into a tool (for example Visual Studio 2005 Team System). The combination of the software factory template and the tool helps us automate the software development process. One of the main purposes of the template is to configure the tool and “guide” the development team through the process of building a specific type of software (in our case, software that fits in one of the domains in our logical architecture).

 

So, where are we? If we think Software Factories are promising enough to invest in, I now have a possible approach to start implementing (or at least thinking about) this methodology step by step within my daily work. I can simply pick one of the responsibility domains within the logical architecture and start describing the artifacts that we need to successfully deliver software within that domain. Although I think this still is a lot of work (and challenging), focussing on one responsibility domain can help to reduce the complexity. By the time I have defined and implemented the artifacts for the first responsibility domain, Microsoft might be ready to provide me with the (stable) tools that make it possible to write my own Domain Specific Languages and support our Software Factory from within our development environment (VSTS).

 

The above are my first steps in the world of Software Factories. If any of you have a better approach to get familiar with this, please let me know!

 

posted on 3/31/2005 7:36:18 PM UTC  #   
 Wednesday, March 23, 2005

Team Foundation Server provides the core functionality for integrating all of the Visual Studio 2005 Team System parts/tools. One of the services available in Team Foundation Server is the “Source Code Control Service”. This service makes it possible to interact with the core source control features of Team Foundation Server.

 

A very cool feature of Team Foundation Source Code Control is the check-in policy framework it provides. This framework makes it possible to set validation rules (policies) that must be satisfied before the developer is allowed to check in his code. By using this feature you can make sure unit tests are available (and run) for a piece of code, or that the code meets the FxCop rules. The framework also provides some extension points that we can use to write our own policies.

 

Unfortunately a policy can be overridden by the developer, but in that case an e-mail is sent to our good old project leader. This gives him the possibility to make sure the developer sticks to the policy the next time. The main purpose of the check-in policy framework is to support some sort of process that ensures a certain level of quality of the code that is checked in by the developers.

 

Two important principles related to this check-in policy feature are policy definition and policy evaluation.

 

Policy definition; (for the control freaks among us) in this process it is decided which rules must be satisfied before the developer is allowed to check in his code. Policy definition is very likely done by the architect or lead developer on the team.

 

Policy evaluation; this is the process that actually validates a developer’s check-in against the policies that are defined.

 

Policy definition can be done from within Visual Studio Team System. Policies can be set on a per-project basis by using the Team Explorer. Policies can be added, deleted or edited for a specific project.

 

The policies that are set for a project are evaluated while the user is interacting with the Visual Studio IDE. This makes sure that every time a developer checks in a piece of code, it is validated against the set of rules/policies. Every policy (represented by a plug-in in the policy framework) receives the necessary information to determine if the code meets the policy. The policy plug-in can return information that the policy framework uses to display messages telling the user why the code doesn’t meet the policy.

If you want more control, it is possible to write your own policies that can be used by the policy framework. Basically all you have to do is implement a few interfaces and there you go. One thing to remember is that the new policy plug-in has to be installed on all the client machines that must comply with this new policy. Team Foundation Server doesn’t provide a mechanism for distributing policies yet!

 

To create a new policy plug-in you can simply open a new Visual Studio project to create one or more classes that implement the necessary interfaces. To create a policy the following interfaces must be implemented:

 

IPolicyDefinition; this interface exposes some methods that are used in the policy definition process. The information provided by the methods of this interface is displayed in the user interface that helps define a set of policies (within the Team Explorer) for a project.

 

IPolicyEvaluation; this interface exposes methods that are used to evaluate if the code that is checked in meets the policy. Methods in this interface accept the content of the actual check-in so this can be validated against the rules in the policy.

 

The policy framework makes sure that all classes that implement the IPolicyEvaluation interface receive the following information when a file (fileset) is checked in:

  • The check-in comment
  • The currently selected list of files
  • The currently selected work items
  • The release note information

 

The policy plug-in can use all of this information to decide if the check-in satisfies the policy.
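
As a rough sketch of what such a plug-in could look like, here is a policy that requires a check-in comment. The member names below follow what I could find in the CTP-era documentation and may well change before release:

using System;
using Microsoft.TeamFoundation.VersionControl.Client;

[Serializable]
public class RequireCommentPolicy : IPolicyDefinition, IPolicyEvaluation
{
    [NonSerialized] private IPendingCheckin pendingCheckin;

    // IPolicyDefinition: this information shows up in the Team Explorer
    // dialogs used to define the policies for a project.
    public string Type { get { return "Require Comment"; } }
    public string TypeDescription { get { return "Requires a check-in comment."; } }
    public string Description { get { return "Check-in comments are mandatory."; } }
    public string InstallationInstructions
        { get { return "Install this assembly on every client machine."; } }
    public bool CanEdit { get { return false; } }
    public bool Edit(IPolicyEditArgs args) { return true; }

    // IPolicyEvaluation: called by the framework around each check-in.
    public event PolicyStateChangedHandler PolicyStateChanged; // unused here

    public void Initialize(IPendingCheckin pendingCheckin)
    {
        this.pendingCheckin = pendingCheckin;
    }

    public PolicyFailure[] Evaluate()
    {
        // The pending check-in exposes the comment, files, work items, etc.
        string comment = pendingCheckin.PendingChanges.Comment;
        if (comment == null || comment.Trim().Length == 0)
            return new PolicyFailure[]
                { new PolicyFailure("Please supply a check-in comment.", this) };
        return new PolicyFailure[0];
    }

    public void Activate(PolicyFailure failure) { }
    public void DisplayHelp(PolicyFailure failure) { }
    public void Dispose() { }
}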

 

All assemblies that contain classes that implement the above interfaces will be picked up by the check-in policy framework. It is possible to have more than one policy plug-in in one assembly. Before the policy framework can pick up new policies it must know where to look for policy assemblies. For this purpose we have the good old registry, which helps the Team Foundation Source Control service solve this problem. All we have to do is store a value (the location of the policy plug-in) under a special registry key. At this moment the key is “HKEY_LOCAL_MACHINE\Software\Microsoft\Hatteras\Checkin Policies” or “HKEY_CURRENT_USER\Software\Microsoft\Hatteras\Checkin Policies”. This will probably change, because “Hatteras” is the codename Microsoft is using for the source control system!
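
For example, registering the policy above could look like this (assuming the value name is the policy name and the value data is the path to the assembly, which is how it appears to work in the current CTP):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Hatteras\Checkin Policies]
"RequireCommentPolicy"="C:\\Policies\\RequireCommentPolicy.dll"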

 

If you are interested in the extensibility possibilities of the Team Foundation Source Control system (or any other Team Foundation Server services) go download the Visual Studio Team System Extensibility Kit and have a look!

posted on 3/23/2005 11:17:26 AM UTC  #   
 Thursday, March 10, 2005

Now that I have been busy with Visual Studio Team System for a while, I suddenly realize that working on IT projects can become really different when working with this new tool. This makes me wonder: will IT projects ever be the same?

 

When we have a look at an ordinary IT project, there are a few things that often go wrong. First of all we have the project leader; according to the development team the project leader is most of the time a pain in the ass. He isn’t really involved in the development process and most of the time he isn’t even aware of what the system really looks like. All he cares about is his Excel sheet (or MS Project file) that indicates whether the development team is working hard enough to meet the deadline. To gather the necessary information to update his Excel sheets he drops by the project room once or twice a week to have a quick talk with the developers. When things go wrong, all he does is blame the development team. When the project is delivered successfully, it’s the project leader who gets all the credit.

 

To stay out of trouble, the development team informs the project leader during their weekly status meeting that development is right on schedule and everything is going as planned. “Just a few more weeks of development before we can start system testing,” the lead developer says, just to make the project leader feel good. At the end of the status meeting the project leader reminds the development team of “their” motto: Work Harder!

 

After the project leader has left the room, the developers thank the lead developer for not telling the project leader they are a bit behind schedule. The lead developer says it is no problem, but also indicates that from now on everybody in the team (including himself, of course) has to work a little harder to keep a chance of actually meeting the planning. This means a little less playing darts, browsing the internet and writing blog entries during working hours. They all have a laugh while remarking that if they don’t meet the deadline, they will just do some development work during the system testing phase that is about to begin. So, no worries yet!

 

This is actually the moment that the eyes of the tester are opened! Till now the tester was just one of the guys in the team. While writing “his” test scripts using his own set of tools (like Word, Excel and some SQL scripts) he didn’t really bother the developers, so he was totally accepted. Every time the tester had a question about some functionality he was writing test scripts for, the developers told him that piece of functionality wasn’t implemented yet (but, almost done!). Therefore the only option the tester has is to base his test scripts on the functional requirements that the functional designer of the team put in a special folder for him on the server.

 

Having heard the developers say that they will use “his” testing time to continue developing makes the tester pissed. From that moment on he realizes he isn’t a full member of the team anymore and swears to himself that he will do everything necessary to find a lot of bugs. That will force the developers to work harder to keep the almighty project leader happy.

 

In the meantime, in his constant battle to keep the users happy, the functional designer is working hard on his own set of functional specifications. Just some weeks before the deadline he sends the final version of the specs to the development team. Of course without notifying that poor tester, who is working hard to create all kinds of test cases that will definitely fail during the system test phase that is about to start. When the lead developer reads the updated specs he starts complaining that the functional specifications include new functionality that is not part of the initial planning. Of course it is possible to include this in the release, but this will affect the available testing time. After a small talk with the project leader it is decided not to change the deadline. The project leader has learned never to deviate from his project plan, so he thinks changing the deadline is evil! Feeling the pressure of the project leader and the functional designer, the lead developer decides to implement the new functionality within this release. Because there is little time left he doesn’t change the class diagrams and all the other related models and documentation that were initially created by the architect on the project. There will be plenty of time for this when the project is finished!

 

When the new functionality is finally included in the system, there is almost no time left for system testing. Some quick tests show that there are some bugs in the system, but because of the lack of time they are all marked as “low priority” so they can be fixed in the next release (of course this frustrates the tester even more). While some of the developers are helping the tester test the software, others are building the install packages. When the packages are finally ready they are handed over to the operations department.

 

During the installation of the software all kinds of problems occur. The architect decided to use some cool new framework and technology that, strangely enough, isn’t installed on the production environment yet. After some discussion with the operations department and some minor changes in the code, the software can be successfully installed.

 

The very proud project leader informs the user that the software is successfully installed, and of course right on schedule. This is the moment the user has his first look at the system. Because of the small delay in the development phase there was no time left for user acceptance testing. Luckily the project leader and the functional designer had convinced the user (based on the functional design and some models from the architect) that the team was building exactly what the user wants. After playing with the system for a few hours, however, the user realizes that this software isn’t what he wants. The system doesn’t correspond with his day-to-day job at all. One of the good things about this is that because the system isn’t very useful to the user, he will not find the bugs in the system, and the project leader gets his chance to solve these issues in the next release of this beautiful piece of software.

 

So, what will Visual Studio Team System change in all of the above?

 

Unfortunately VSTS cannot change anything about the project leader. The good thing, however, is that VSTS brings us a totally integrated environment. This will dramatically improve communication between the different types of team members. The new set of tools will help improve the overall quality of the software.

 

First of all, the architect can use the different modellers available in Visual Studio Team Architect to model his applications. Together with the models provided by the operations department he can test the deployment of his applications at a very early stage of the project, which of course saves a lot of time at the end of the project.

 

After that the architect can use the class designer to model his classes. These models can be used by the developers to generate code from. If the developers make changes in the structure of the classes, VSTS immediately updates the models initially created by the architect. So now the models and the software are always in sync, saving a lot of time updating the models at the end of the project.

 

When using the MSF Agile methodology within Visual Studio, all tasks are related to work items, which can be assigned to team members. Team Foundation Server gives the project leader a way to monitor progress on work items. Also, very nice reports can be generated from this data, giving the project leader the possibility to impress his users even more. Fortunately Microsoft will provide a separate tool for “project leaders” so they don’t need to use Visual Studio for that. This saves them from becoming a real team member.

 

The next good thing about MSF Agile is that it provides a more flexible methodology, which leaves some space for making changes during the project, for example to react to change requests from users.

 

The new integrated test facilities and some support for Test Driven Development try to enforce that there is always a “working version” of the software. Maybe only a small piece of functionality is available, but it works (because it is tested!). This gives the user the possibility to drop by the project room at any moment and ask for a quick demo of the software. Because the user sees what the system actually looks like at an early stage, it is much easier to verify that this is what he really wants.

 

Because within VSTS testing is fully integrated within the Visual Studio environment, communication between developers and testers (and all other team members) will improve. The tester might even have some .NET experience to validate or write the unit tests for the software. Further, VSTS provides tools to force unit tests to be available before it is possible to store the code in the version control system. The tester can use these unit tests to immediately start testing the software (and not wait till the end of the project).

 

The fully integrated version control system allows the team members to store all project-related documents, code, etc. in the “project repository”, which helps prevent different team members from using different versions of the specifications.

 

So where are we?

 

All of these and a lot of other VSTS features will help us improve the overall quality of the IT projects we participate in, starting in the very near future. I don’t have real experience with this tool yet, but I think it has enough potential.

 

For all the people who ever worked with me on a project:

Of course all of the above isn’t based on my own experiences. This is just what I heard from colleagues! I do like project leaders! For me Visual Studio doesn’t actually change anything in the projects I am participating in. The only difference is that when using Visual Studio Team System I have a totally new and cool tool to deliver these very successful projects to our clients!

posted on 3/10/2005 1:22:54 PM UTC  #   
 Sunday, March 06, 2005

To help developers (projects) deliver robust, industrial-strength code, Microsoft created its “defence in depth” strategy. This strategy tests for bugs at every stage in a project, aiming to catch them as early as possible. Microsoft thinks this can be done by:

  • Unit testing, to test the basic functionality of every method
  • Code reviews, to analyse code for coding errors prior to checking it in
  • Frequent builds, to make sure that no changes have broken the application

The good thing about Visual Studio (Team System) 2005 is that it helps projects deliver quality code by providing a new set of tools.

Let us look a little more closely at the new testing features of Visual Studio Team System:

Unit testing

Visual Studio 2005 not only includes a set of tools that better supports testing; it also supports the Test Driven Development (TDD) methodology. This methodology lets tests drive the production of code, ensuring that tests reflect what the code is supposed to do and that all methods in the public interface are tested.

Some of the essentials of TDD are:

  • Tests reflect the specifications and are written before any code that implements the requirements
  • The aim is to pass the test
  • Once all the tests pass, the code is complete

One of the good things about unit tests is that they provide the possibility to execute regression tests during the life cycle of the project. This makes it fairly easy to ensure a change to one piece of code didn’t break any other piece of code. Another advantage of writing unit tests (before the actual code!) is that this improves the overall design of the software. When using TDD, tests are always written before the code itself. The first time, only a stub is created, just to satisfy the compiler for the test. After that, the body (implementation) of the code under test is written, just until the test passes.

After the unit test passes, there is time for any refactoring of the code (if necessary) to improve the design.

TDD aims to create tests during development and not at the end of the project (if there is time left).

To make writing unit tests easier and more common, Visual Studio 2005 provides a unit testing framework out of the box. For those of you who worked with NUnit in the past, this framework will look very familiar. (Of course this is purely coincidence and has nothing to do with the inventor of NUnit now working for Microsoft.)
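
A minimal sketch of what such a test looks like (the class under test is made up; the attribute and namespace names are what I found in the CTP and may still change):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// The class under test; in TDD style it is written only after the test
// below exists (first as a stub, then implemented until the test passes).
public class PriceCalculator
{
    public decimal GetPrice(int quantity, decimal unitPrice)
    {
        decimal total = quantity * unitPrice;
        return quantity >= 100 ? total * 0.9m : total; // 10% bulk discount
    }
}

[TestClass] // NUnit users: think [TestFixture]
public class PriceCalculatorTests
{
    [TestMethod] // NUnit users: think [Test]
    public void DiscountIsAppliedToLargeOrders()
    {
        PriceCalculator calculator = new PriceCalculator();
        decimal price = calculator.GetPrice(100, 10m);
        Assert.AreEqual(900m, price, "expected a 10% discount for 100 items");
    }
}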

This unit testing framework is available in both the Team Developer and Team Tester editions of Visual Studio, so unit tests created by developers can be executed by the testers in the team. The testing framework and testing tools are fully integrated within the Visual Studio IDE, so testing can be done without leaving Visual Studio.

When using Team Foundation Server (TFS), it all gets even better. TFS integrates the activities of the developers and testers; testers are notified when a developer checks in a piece of code (of course with the necessary unit tests!) and developers are notified when the testers encounter a bug in the code. Without TFS it is not possible to check in unit tests, and test results cannot be stored in the project’s repository.

When using the unit test framework within Visual Studio, all test classes are stored in a separate assembly (at least this is what I found in the December CTP of VSTS). Personally I like the “strategy” of Enterprise Library a little better. Within Enterprise Library all test classes are provided within the implementation assembly, in a separate folder within the project called “.Tests”. Maybe there is a very obvious reason to store the unit tests in a separate assembly, but I haven’t found one yet.

The good thing is that there is a Test Explorer available within Visual Studio that makes it very easy to run a set of tests. This tool even makes it possible to execute only one or more categories of tests.


Another new and very useful feature of Visual Studio Team System is the integrated code coverage. This makes it very easy to validate whether the unit tests for a piece of code cover all code paths. The actual code coverage can be displayed as a percentage or within the code editor itself.

 

Knowing that Team System makes it possible to reject code that isn’t covered by unit tests for at least ‘x’ percent, it becomes more and more difficult to produce poor code!


posted on 3/6/2005 2:09:39 PM UTC  #   
 Wednesday, February 23, 2005

During the first day of the Visual Studio 2005 Ascend training, a basic overview was given of the Visual Studio Team System and .NET Framework 2.0 features. The main purpose of this day was to create a mindset that prepared all attendees for the next three days.

 

Although the first day turned out to be pretty interesting, my main focus was on the three other days on which I planned to follow the “Tools and Integration” track, which covered Visual Studio Team System (VSTS) in more detail. 

 

It is my intention to write some more detailed postings about a few of the Visual Studio Team System features in the coming days. First, a short list of some of the topics (without any detail) that were covered during the “overview day” of the Ascend training:

 

VB.NET/C#

Although it has nothing to do with VSTS or Whidbey, I cannot forget the following statement I heard during the training. It is definitely not my intention to start the VB.NET versus C# debate again (and hurt anybody’s feelings) but I have to admit, I totally agree with this statement :-)

 

“The only reason VB.NET is there, is to make C# shine!”

 

Visual Studio 2005 Team System

Core Tenets:

  • Quality
  • Productivity
  • Connectivity

Software lifecycle silos:

  • Infra architect
  • Testers
  • Project leader
  • Business analyst
  • Developer
  • Etc.

VSTS solves the communication problems between the different silos.

 

VSTS brings you:

  • Design for operations
  • Increased reliability
  • Predictability
  • Quality easily and often

 

Main goal: see how you are doing DURING the project on quality, performance, time, etc.

 

ASP.NET 2.0

The ASP.NET team set a few goals for their 2.0 release:

  • Development productivity; current claim: reduce by 2/3 the lines of code needed to solve common issues compared to ASP.NET 1.1
  • Administration and management; provide easy management and administration functionality (instrumentation, performance counters, etc.)
  • Totally extensible platform; make it possible to replace and extend all built-in functionality (by using the provider model)
  • Make it the world’s fastest app server

 

Smart Client

Benefits both the rich-client and the thin-client world.

It is NOT a product, it is an architecture!

Two main designs:

  • Data centric
  • Service centric

 

Data centric: it is all about replicating the data. For example, replicate the data when the laptop is connected, using SQL CE.

 

Service centric: it is all about “storing the actions” (instead of the data alone). Store the XML messages on the smart client and “play” them against the server when connected.

 

System.XML

Design goals for System.XML:

  • Performance improvements
  • Enhanced security; in v1.0 there was no security support, now there is code access security for System.XML
  • Enhanced schema support; infoset available in v2.0
  • Improved XSLT processing; 4 times faster

 

System.Data

Programming model:

  • Provider factories (to write provider-independent code; see the sketch below)
  • Provider enum
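
A small sketch of what that provider-independent code could look like (the connection string and provider name would normally come from configuration):

using System.Data.Common;

public class ProviderFactoryDemo
{
    public static int CountCustomers()
    {
        // Ask for a factory by provider name instead of new-ing up
        // SqlConnection/SqlCommand directly.
        DbProviderFactory factory =
            DbProviderFactories.GetFactory("System.Data.SqlClient");

        using (DbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString =
                "Server=.;Database=Northwind;Integrated Security=SSPI";
            connection.Open();

            DbCommand command = connection.CreateCommand();
            command.CommandText = "SELECT COUNT(*) FROM Customers";
            return (int)command.ExecuteScalar();
        }
    }
}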

 

DataSet Enhancements:

  • Performance and scalability improvements
  • Standalone datatable instances
  • XML datatype
  • User defined datatype in DataSet

Binary serialization of content (in v1.0 content was always serialized to XML)


A pretty interesting first day, but… I will be back with more info on Visual Studio Team System!

posted on 2/23/2005 8:43:19 PM UTC  #   
 Wednesday, February 16, 2005

It’s been very quiet on my blog lately. One of the reasons for this is the birth of my (first) daughter. I took some time off to fully enjoy taking care of her and get used to the fact that from now on there is another girl in my life :-)

 

Next week I start working again and I think it will be a very interesting week. The company I am working for is participating in a Visual Studio 2005 Ascend program. As part of this program I am one of the lucky ones who can attend a (four-day) VS 2005 training. I will follow the “Tools and Integration” track of the course, covering a lot of Visual Studio Team System related issues.

 

I will post some of my experiences, thoughts and other Visual Studio Team System related issues in the coming week!

posted on 2/16/2005 8:50:05 PM UTC  #   
 Friday, January 21, 2005

This week I had to write a Software Architecture Document for a new project that I am assigned to. The document presents the high-level architecture that will be used for the solution created in this project. The high-level architecture identifies the services that are needed. These will be described in more detail (interface, messages, and implementation) at a later stage in a separate document.

 

When reading through the functional requirements I noticed the large number of reference tables described in the specs. All of this reference data is needed by (at least one of) the services, and of course it has to be possible to maintain this reference data. This made me think about the best way to design this reference data within the high-level architecture. After a talk with my colleague Gerke, I decided to do it in the following way:

 

All reference data that is relevant for only one service will be maintained by the service itself. All reference data that is relevant for more than one service will be logically grouped into one service. For example, the requirements for this project describe country, region and sub-region reference data. For this, a ‘geography’ service (it’s only a name) is identified that maintains the country, region and sub-region reference data.

 

For this kind of reference data service, I identified two different interface types: one ‘query’ interface and one ‘update’ interface. Most of the consumers of a reference data service are only interested in the query interface. The query interface is the one that will be used by a business information service to access the reference data. The ‘update’ interface will implement the so-called CRUD service actions. This interface is only relevant for the one responsible for maintaining the reference data and is therefore a little less important.

 

The same applies to reference data that is relevant to only one service (and is therefore maintained as part of the service itself). For this situation an update interface is also defined that implements the CRUD service actions for the reference data part of the service.

 

When implementing the reference data services, I think it is best to implement the query and the update interface in separate ‘asmx’ files (when using .NET web services). This gives a nice and clean interface that is easy to consume for the service consumers, providing them with only the service actions they are interested in. It also simplifies the authorization part of the two different types of service actions provided by the two interfaces.
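
A minimal sketch of what this could look like for the ‘geography’ service (all method names are illustrative):

using System.Web.Services;

// GeographyQuery.asmx: the query interface most service consumers use.
public class GeographyQueryService : WebService
{
    [WebMethod]
    public string[] GetCountries()
    {
        // ... read the country reference data ...
        return new string[] { "Netherlands", "Belgium" };
    }
}

// GeographyUpdate.asmx: the CRUD interface for whoever maintains the
// reference data; authorization can be configured separately per endpoint.
public class GeographyUpdateService : WebService
{
    [WebMethod]
    public void AddCountry(string name) { /* ... */ }

    [WebMethod]
    public void DeleteCountry(string name) { /* ... */ }
}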

 

posted on 1/21/2005 10:30:34 PM UTC  #   
 Wednesday, January 12, 2005

During a recent project it became very clear to me that standardization (naming conventions, coding guidelines, etc.) is also a very important issue within a service-oriented environment. Probably because of the service orientation hype, everybody is so busy with all the service-oriented issues within the project that they forget very basic things (like coding standards and guidelines).

 

Because of the autonomous nature of a service, it is very likely that services are developed by different people (or teams). From what I saw in practice, communication between the several build teams of the different services within a solution is very limited, simply because the interfaces are well defined (if the designer understands his job). In case there is communication between the different service teams, it is mostly about the service interface and message definitions, rather than internal implementation (coding) issues. In the absence of standards, guidelines, etc., the end result is a solution (application) that consists of several services all implemented in very different ways, using different naming conventions, exception handling strategies, etc.

 

Of course, missing standards and guidelines can cause the same issues (poor quality of code) in an “old fashioned” n-tier application, but because of the tighter integration between the different parts or components of the application, developers tend to communicate more and are more often (forced to) have a look at each other’s code. This will result in at least the same “standards and coding techniques” being used within the whole application.

 

I always assumed that every developer at least applied the Microsoft naming guidelines (MSDN) and used FxCop to check the quality of their code. A bit naïve, I know!
posted on 1/12/2005 2:15:40 PM UTC  #