Last week at Microsoft's Architects Forum in Munich, Beat Schwegler, an architect at Microsoft EMEA in London, showed an awesome demo of an internally developed guidance package for web service development named GAT4WS.
It was by far the most convincing GAT demo I have seen so far. The recipes make use of Christian Weyer's design tool WSContractFirst and promote an interesting solution organisation with several logical layers. The main goal is to abstract the transport and web service aspects away from your business logic. It's a nice piece of automated guidance.
The only problem: it hasn't been officially released yet...
(I think they are waiting for the next GAT release.)
Wednesday, October 12, 2005
Guidance Automation Toolkit
I just finished my first experiments with Microsoft's Guidance Automation Toolkit, a.k.a. GAT. This toolkit is an interesting way for architects or chief programmers to offer guidance and best practices to their team - not just as written documentation like that found on Microsoft's patterns & practices site. Using GAT you can create your own guidance packages containing a set of executable recipes for repetitive design and programming tasks in Visual Studio 2005.
GAT recipes are built to appear in a strictly context-sensitive fashion: they offer their functionality only in those situations where they should be used - that's the main idea behind the concept. Recipes can be invoked via context menus or from the task pane and work in several steps. First they collect the input data needed for their task, either through extensible wizard dialogs or without any user interaction at all (supplied by your own code instead). Once the required input arguments are available, one or more actions are invoked. These actions might manipulate your current project settings or - more typically - add pieces of code to your project. Code injection is done through a source file template mechanism. Recipes themselves are defined declaratively as XML templates.
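To make the recipe cycle a bit more concrete, here is a small conceptual sketch in C#. It deliberately does not use the real RecipeFramework types - the IRecipeAction interface and the action class below are purely hypothetical - it only illustrates the steps a recipe goes through: bind the collected arguments, execute one or more actions against your solution, and keep enough state to undo them again.

using System;

// Hypothetical stand-in for a recipe action; the real GAT/GAX types differ.
public interface IRecipeAction
{
    void Execute();   // apply the change to the solution or project
    void Undo();      // roll the change back if a later action fails
}

// Example action: add a generated source file to the current project.
public class AddGeneratedFileAction : IRecipeAction
{
    private readonly string projectPath;
    private readonly string fileName;
    private string createdFile;

    // In GAT these arguments would be collected by the recipe's wizard
    // pages or supplied by hosting code; here they arrive via the constructor.
    public AddGeneratedFileAction(string projectPath, string fileName)
    {
        this.projectPath = projectPath;
        this.fileName = fileName;
    }

    public void Execute()
    {
        // A real action would expand a source file template and use the
        // Visual Studio automation model to add the result to the project.
        createdFile = projectPath + "\\" + fileName;
        Console.WriteLine("Would add " + createdFile + " to the project.");
    }

    public void Undo()
    {
        Console.WriteLine("Would remove " + createdFile + " again.");
    }
}

The real toolkit wires such actions together declaratively in the recipe's XML definition and binds their arguments to the wizard fields for you.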
GAT builds on GAX (the Guidance Automation eXtensions), which extends the possibilities of the Visual Studio API. I tried the latest Technology Preview from May 2005, which works with Visual Studio 2005 Beta 2 and can be downloaded here. It all works quite nicely. My only major problem was editing the code templates: the VS editor isn't yet familiar with the template markup and tries to reformat your code in situations where you don't appreciate it...
Check out the community site and its Hands-On Lab from TechEd 2005, which uses GAT to automate certain data access tasks and explains the main concepts.
In my opinion this is another step in the right direction - building the foundations for Software Factories.
Saturday, September 17, 2005
Keep your models in sync!
During my last three years working for Zühlke as a consultant and software architect I've learnt to appreciate modelling in different projects. To be honest: there are several situations in which you'd like to express your thoughts in a way your tool doesn't support. But if you are really lucky and have one of the few usable modelling tools, your whole team will gain architectural overviews that are hard (or maybe impossible) to provide without graphical representations and abstractions of your work.

Models are perfect for sketching and documentation - this is a well-known fact - but they can do more for you. Typically models provide structural and behavioural views on your project as a whole. Rational defines several views in its Rational Unified Process (logical, process, component and deployment views) - you might need others for your specific project. One of the most important issues only a few people think and talk about is model integration. Models tend to grow, especially if they're used efficiently, and every team member uses parts of the model for his or her daily work. Sooner or later you need to split your model into several parts. Or you distinguish analysis and design models: first you create some high-level analysis classes or components and add some typical sequences and interactions to get a better understanding of your target domain.
Later you drill down into some technical issues in your design model. You literally "dive into" parts of your analysis model. You spot errors and change your design because of technical restrictions, optimizations, or simply to correct mistakes you made during high-level analysis, because you're recognizing new details about the problem at hand.
This is one of the most critical moments in iterative software design: most teams forget to update the good old analysis model they used throughout the initial project iterations. The architectural overview in the analysis model becomes unclear - and, as time passes, simply false.
You need to attack this problem during the early days of your project. Don't divide your team into those using the analysis model and others using the design model and code. Don't create roles for architects who never see the design model or code. These guys are known as "PowerPoint architects" - living in their own world. Beware of them. You certainly need team members who are responsible for the overall architectural issues - usually both technical and business-focussed architects. But keep them in touch with your daily development issues. You need them to regularly extract important design information back into their analysis model. The modern term for this extraction is "harvesting" (see IBM/Rational Software Modeller).
Most modelling tools aren't capable of automating this important harvesting step. And it is no simple task: how should a tool decide which design aspects are architecturally relevant and which are not? Sure - you can use stereotypes or something similar to annotate the relevance of certain classes. But it is more difficult for the behavioural aspects of your design. Imagine your high-level analysis model showed some business-critical component and service interactions. What should happen if you change certain design aspects of this interaction down in your design model? Could any tool tell with any certainty whether your overview is affected by those changes? Can we define these rules? Do we have patterns for these decisions? I think we need much more experience in model integration, updating and cross-model referencing before we will be able to automate those tasks…
Pragmatic Programming - the way towards your own DSL
I myself am a passionate coder and usually prefer lightweight tools. It's always fun to read books from the Pragmatic Bookshelf (see www.pragmaticprogrammer.com). Dave Thomas and Andy Hunt really hit the mark with most of their opinions, which are drawn from real-life projects. Good programming skills are the most important foundation if you want to develop and engineer larger software projects.
You want to automate certain aspects of your design and development process, as proposed by the ideas behind Software Factories? Well, first you need to know how to accomplish these tasks manually in an elegant way. I think it's an important prerequisite nobody talks about…
Remember these steps:
- Learn to program
- Learn to design
- Learn to model
- Learn to automate
- Design your automation through DSLs
Saturday, September 03, 2005
Software Factories
I've finally made it - I'm through more than 600 pages of this interesting book by Jack Greenfield and Keith Short. And it was worthwhile taking the time to understand the motivation behind (Microsoft's) Software Factory initiative.
The main point behind the term Software Factories is to use modern software development environments to industrialize software development for product lines. Software Factories use techniques like modelling with Domain Specific Languages, combined with patterns, frameworks, guidance and code generation, to automate tedious, repetitive (!) and error-prone hand-coding. The book describes the theory and ideas behind Software Factories, while Microsoft tries to deliver the proof of concept: the upcoming versions of Visual Studio Team System will contain numerous tools that give you the opportunity to create and visualize your own domain-specific languages, synchronize models through transformations, and automate and enforce your software development process through the new Guidance Automation Toolkit.
I’ll use my next blog entries to investigate some of the key points touched on by Software Factories and try to illustrate them with issues from my own practical experience.
Wednesday, August 31, 2005
SOA versus component design
Scott Bellware published a very interesting article on theserverside.net. Driven by several discussions and presentations at Microsoft’s TechEd in Orlando in June, he discusses today’s difficulties in application architecture, which is drifting between modern SOA ideas and the RAD tooling reality. I share many of his critical thoughts.
Let’s look at an example: Visual Studio .NET is a really powerful development environment with lots of features, but it doesn’t guide developers towards solid design decisions.
Many of the decoupling lessons learnt from the good old COM days seem to be lost in modern .NET development. Many .NET developers aren’t even aware of the possibilities and responsibilities that come with using .NET assemblies as a modern component model. It has never been as easy as it is today to create component-based applications, using interfaces to decouple implementations and following best practices from Test-Driven Development (TDD).
It was so much harder to create the very same effect using COM interfaces, C++ and ATL in the late 90s. The benefit was that, although we had to design interfaces in IDL and express shared data types in a strange type system (beware of VARIANT and BSTR), we knew the component boundaries! These boundaries were so extremely and painfully visible that they became one of the main reasons why Microsoft created .NET. But now that those problems are gone, nobody seems to be interested in components any more. Everybody talks about services.
I think most designers and application architects should take a step back: design your applications using strictly separated components. Use small and distinct interfaces to establish controlled communication between these components. Define and model an application-specific layered architecture that goes beyond the physical deployment tiers found in many articles. The result will be an application consisting of many components that you can develop and test concurrently, and understand and maintain more easily.
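As a small illustration (all type names below are made up for this sketch, not taken from any real project or library), a component boundary in .NET can be as simple as a small interface, an implementation hidden behind it, and a consumer that depends only on the interface - which is also exactly what makes the consumer easy to unit test with a fake implementation:

using System;

// A small, distinct interface that defines the component boundary.
public interface ICustomerRepository
{
    string GetCustomerName(int customerId);
}

// One possible implementation, hidden behind the interface. It could live
// in its own assembly - the physical unit of a .NET component.
public class SqlCustomerRepository : ICustomerRepository
{
    public string GetCustomerName(int customerId)
    {
        // Real data access would go here.
        return "Customer #" + customerId;
    }
}

// A component in another layer that depends only on the interface,
// so it can be developed and tested concurrently with a fake repository.
public class BillingService
{
    private readonly ICustomerRepository repository;

    public BillingService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string CreateInvoiceHeader(int customerId)
    {
        return "Invoice for " + repository.GetCustomerName(customerId);
    }
}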
These applications and their components will be first-class citizens in the SOA future of your enterprise, because it will be fairly easy to create coarse-grained service interfaces on top of these well-defined components.
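Continuing the hypothetical sketch from above, such a coarse-grained service interface can be a thin ASMX facade (the web service model of today's .NET Framework) that merely translates between the transport and the BillingService component, keeping all business logic out of the service layer:

using System.Web.Services;

// A thin, coarse-grained web service facade. It only delegates to the
// component layer sketched above; no business rules live here.
[WebService(Namespace = "http://example.org/billing")]
public class BillingFacade : WebService
{
    private readonly BillingService billing =
        new BillingService(new SqlCustomerRepository());

    [WebMethod]
    public string CreateInvoiceHeader(int customerId)
    {
        return billing.CreateInvoiceHeader(customerId);
    }
}

If the service boundary technology changes later - say, towards Indigo - only this thin facade needs to be touched, not the components behind it.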
What I want to say is: SOA will not completely change the world of a component developer. It’s just the next evolutionary step, using well-designed components in a bigger scenario. Don’t forget the lessons learnt from the days when component design was state of the art. SOA won’t change those principles - but it will change the way your components are accessed remotely.
Risks behind the SOA hype
Hi all,
I just published my first critical article about the SOA hype in the German IT magazine "Computerwoche", which can be found here - caution: it's written in German.
I thought this event should finally be the trigger to start my own blog and share my personal thoughts on architecture, model-driven development and software factories with the rest of the world. Hopefully somebody out there is interested...