Sunday, December 23, 2007

Why MVP/MVC?

There are several good reasons for using these patterns, and most projects would benefit from applying them along with other patterns.

First, it helps to clarify that Model View Presenter (MVP) and Model View Controller (MVC) are two entirely different patterns that solve the same problem. Both patterns have been in use for several years and focus on separating the view (UI) from the model (business classes).

Using the MVC pattern, developers create controller classes that are responsible for responding to UI events and updating the model according to the action (event) invoked. Using the MVP pattern, developers create presenter classes that do essentially the same thing, but also use a .Net interface to talk to the view. The use of this interface is the major difference between the patterns and generally makes a presenter more loosely coupled than a controller. In some advanced scenarios you need both presenters and controllers; in those cases the classes work together to isolate the view from the model.
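To make the interface idea concrete, here is a minimal sketch of the MVP arrangement described above; ILoginView, LoginPresenter, and their members are hypothetical names, not from any particular framework:

public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowError(string message);
}

public class LoginPresenter
{
    private readonly ILoginView _view;

    public LoginPresenter(ILoginView view)
    {
        _view = view;
    }

    // Called by the view when the user clicks "Log in". The presenter
    // reacts to the UI event, talks to the model, and reports back
    // through the interface; it never references a concrete form or page.
    public void Login()
    {
        if (_view.UserName.Length == 0)
        {
            _view.ShowError("User name is required.");
            return;
        }
        // ... authenticate against the model here ...
    }
}

Because the presenter depends only on ILoginView, a WinForms form, an ASP.Net page, or a test stub can all play the role of the view.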

Benefits of using either pattern

· Loose coupling – The presenter/controller is an intermediary between the UI code and the model

· Clear separation of concerns/responsibility

o UI (Form or Page) – Responsible for rendering UI elements

o Presenter/controller – Responsible for reacting to UI events and interacting with the model

o Model – Responsible for business behaviors and state management

· Test Driven – By isolating each major component (UI, presenter/controller, and model) it is easier to write unit tests. This is especially true when using the MVP pattern, where the presenter interacts with the view only through an interface.

· Code Reuse – By using a separation of concerns/responsibility design approach you will increase code reuse. This is especially true when using a full-blown domain model and keeping all the business/state management logic where it belongs.

· Hide Data Access – Using these patterns forces you to put the data access code where it belongs: in a data access layer. There are a number of other patterns that typically work with MVP/MVC for data access. Two of the most common are Repository and Unit of Work (see Martin Fowler – Patterns of Enterprise Application Architecture for more details, and the repository sketch after this list).

· Adaptable to change – I have been in this business for over 12 years and two things have changed more than any other: UI and data access. For example, in .Net today we have several UI technologies (WinForms, ASP.Net, AJAX, Silverlight, and WPF) and several data access technologies (DataSets, DataReaders, XML, LINQ, and Entity Framework). By using MVP or MVC it is easier to plug in any of these UI or data access technologies, and even possible to support more than one at a time.
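As promised in the Hide Data Access bullet, here is a minimal repository sketch; the names (Customer, ICustomerRepository, SqlCustomerRepository) are hypothetical, and the data access bodies are elided:

public class Customer
{
    public int Id;
    public string Name;
}

// The presenter/controller and model depend only on this interface,
// so the data access technology behind it can change freely.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// One implementation might use DataReaders, another LINQ or the
// Entity Framework; callers never know the difference.
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // ADO.NET lookup would go here.
        return new Customer();
    }

    public void Save(Customer customer)
    {
        // ADO.NET insert/update would go here.
    }
}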

To recap, MVP and MVC are two separate patterns that address the same problem. Both focus on isolating the UI from the model, and both are more adaptable to change than traditional approaches. I hope you found this helpful.

Wednesday, October 10, 2007

Object Relational Metadata Mapping Patterns

Metadata Mapping

Query Object

Repository

Object Relational Structural Patterns

Identity Field

Foreign Key Mapping

Association Table Mapping

Dependent Mapping

Embedded Value

Serialized LOB

Single Table Inheritance

Class Table Inheritance

Concrete Table Inheritance

Inheritance Mappers

Object Relational Behavioral Patterns

Unit of Work

Identity Map

Lazy Load

Data Source Architectural Patterns

Table Data Gateway

Row Data Gateway

Active Record

Data Mapper

Domain Logic Patterns

Transaction Script

Domain Model

Table Module

Service Layer

Offline Concurrency Patterns

Optimistic Offline Lock (by David Rice)

Pessimistic Offline Lock (by David Rice)

Coarse-Grained Lock (by David Rice and Matt Foemmel)

Implicit Lock (by David Rice)

Distribution Patterns

Remote Facade

Data Transfer Object (DTO)

Web Presentation Patterns

MVC - Model View Controller

Page Controller

Front Controller

Template View

Transform View

Two Step View

Application Controller

Tuesday, October 09, 2007

Book: Patterns of Enterprise Application Architecture (PoEAA).

Link to a few more books by him:

http://martinfowler.com/books.html

Tuesday, October 02, 2007

Application Frameworks

Why Use an Application Framework?


There are five major benefits of using application frameworks: Modularity, Reusability, Extensibility, Simplicity, and Maintainability.



Modularity
Modularity, the division of an application into structural components, or modules, allows developers to use the application framework in a piece-by-piece fashion. Developers who want to use one component of the application framework are shielded from potential changes to other parts of the framework. As they build applications on top of the framework, their development is better insulated from changes occurring in other parts of the application framework, resulting in a significant boost to their productivity and a reduction in the amount of time spent on fixing code affected by other parts of the application. By dividing the framework into modules, we can maximize productivity by assigning a developer the specific part of the application that would benefit most from that developer's expertise. The advantage that accrues from modularity can be seen, for example, in Web applications: developers who are expert in presentation user interfaces can be more productive when assigned to the front-end portion of the application, while developers who are expert in the development of application business logic can be more productive when assigned to the middle tier and back-end portion of the application. Similarly, developers can leverage the framework module related to the user interface during their development of the presentation tier of the application, while other developers can leverage the framework module related to the development of business objects during their development of the middle and back-end tiers of the application.



Reusability
Reusability of code is one of the most important and desirable goals of application development. An application framework provides such reusability to the application built on top of it not only by sharing its classes and code, but by sharing its designs as well. Applications usually contain many tasks that are similar in nature. However, different developers on the team often create their own implementations of these similar tasks. The result of such duplicated implementation is not only the unnecessary waste of resources on the duplicated code, but also the problem of maintainability further down the road, since any change to the task must be duplicated in multiple places throughout the application to ensure its integrity. On top of that, each developer might use a different design approach during implementation. This opens the application to risks of poor software design, which could lead to unforeseen issues down the road. With an application framework, however, we can move much of the duplicated code and commonly used solutions from the application layer to the framework components. This reduces the amount of duplicate code developers have to write and maintain, and significantly boosts their productivity. The application framework is also the place where we can bake many well-tested software designs into the components. Developers may not always be experts in software design, yet as they start using these framework components to build their applications, they unavoidably reuse many good software design approaches, such as design patterns that underlie the framework components.



Extensibility
Extensibility, the ability to add custom functionality to the existing framework, allows developers not only to use the framework components "out of the box," but also to alter the components to suit a specific business scenario. Extensibility is an important feature for the framework. Each business application is unique in its business requirements, architecture, and implementation. It is impossible for a framework to accommodate such variation by itself, but if a framework is designed in such a way that it leaves room for some customization, then different business applications can still use the generic features of the framework, yet at the same time developers will have the freedom to tailor their applications to the unique business requirements by plugging the customized logic into the framework. With a high degree of extensibility, the framework itself can become more applicable to different types of business applications. However, in creating a framework, its extensibility should always be determined in the context and assumptions of the application you are trying to develop. Each time you increase the extensibility of the framework, your developers may need to write more code and require more detailed knowledge about how the framework operates, which will have a negative impact on their productivity. An extreme scenario of a highly extensible framework is Microsoft's .NET framework itself, which is designed for development of a wide variety of applications. Indeed, there are few constraints in developing applications using the .NET framework, but as a result, you lose the benefits of what an application framework can provide. The key is to add the flexibility and extensibility to the places in the framework that are more likely to change in the particular type of application you are developing.



Simplicity


The term "simplicity" here means more than just being simple. Simplicity refers to the way the framework simplifies development by encapsulating much of the control of process flow and hiding it from the developers. Such encapsulation also represents one of the distinctions between a framework and a class library. A class library consists of a number of ready-to-use components that developers can use to build an application. However, developers must understand the relationships between various components and write process flow code to wire many components together in the application. On the other hand, a framework encapsulates the control of such process flow by prewiring many of its components so that developers do not have to write code to control how the various components interact with each other.



Maintainability
Maintainability, the ability to effectively support changes to the application as a result of changes to the business requirements, is a welcome side effect of code reuse. Framework components are commonly shared by multiple applications and multiple areas within a single application. Having a single copy of the framework code base makes the application easy to maintain, because you need to make a change only once when a requirement changes. The application framework may also contain many layers. Each layer makes certain assumptions about the business the application is intended to serve. The bottom layer consists of framework components that make no assumptions about the business. They are also the most generic components in the framework. As you move higher up the stack of the layers, its components depend on more business assumptions than do the previous layers, and hence are more susceptible to change when business requirements and rules change. When changes do occur, only the components at the layer where the business assumption is broken need to be fixed and tested. Therefore, by injecting different levels of business knowledge into different levels of the framework layers, you can reduce the cascade effect of changing business rules and requirements to the application. This also leads to the reduction of maintenance costs, since you need to touch only the code that is affected by the business rule change.





To better understand how we can develop an application framework, we need first to understand what goes in an application framework and its relationship to other parts of the system.

Framework Layers








Framework Development Techniques
In order to develop an effective application framework, you need to know some common techniques for framework development. The following list shows some useful techniques and approaches that can help you develop a framework that is both easy to use and extensible.

Common spots

Hot spots

Black-box framework

White-box framework

Gray-box framework

Design patterns

Common spots, hot spots, and design patterns are some of the techniques used in framework development. Black-box, white-box, and gray-box represent the approaches you can take to developing the framework, as the sketch below illustrates for a hot spot.
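Here is a minimal sketch of a hot spot, with hypothetical names; the framework owns the fixed processing flow (a common spot) and exposes one abstract method (the hot spot) for applications to override, which is the essence of the white-box approach:

using System;

public abstract class RequestProcessor
{
    // Common spot: the framework prewires the overall flow.
    public void Process(string request)
    {
        Console.WriteLine("Validating: " + request);
        Handle(request); // hot spot invoked at a fixed point in the flow
        Console.WriteLine("Finished: " + request);
    }

    // Hot spot: each application plugs in its own logic by subclassing.
    protected abstract void Handle(string request);
}

// Application code extends the framework only at the hot spot.
public class OrderRequestProcessor : RequestProcessor
{
    protected override void Handle(string request)
    {
        Console.WriteLine("Processing order: " + request);
    }
}

A black-box framework would expose the same hot spot through an interface or delegate supplied by the application, rather than requiring inheritance.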






Tuesday, September 25, 2007

WCF Architecture

At the heart of WCF is a layered architecture that supports many distributed application development styles. The figure illustrates the layered architecture of Windows Communication Foundation.




Contracts
WCF contracts are much like a contract that you and I would sign in real life. A contract I may sign could contain information such as the type of work I will perform and what information I might make available to the other party. A WCF contract contains very similar information. It contains information that stipulates what a service does and the type of information it will make available.

Given this information, there are three types of contracts: data, message, and service.



Data
A data contract explicitly stipulates the data that will be exchanged by the service. The service and the client do not need to agree on the types, but they do need to agree on the data contract. This includes parameters and return types.
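A minimal sketch of a data contract, using hypothetical names; only the members marked [DataMember] become part of the contract that the service and client agree on:

using System.Runtime.Serialization;

[DataContract]
public class BookOrder
{
    [DataMember]
    public string Title;

    [DataMember]
    public int Quantity;
}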

Message
A message contract provides additional control over that of a data contract, in that it controls the SOAP messages sent and received by the service. In other words, a message contract lets you customize the type formatting of parameters in SOAP messages.

Most of the time a data contract is good enough, but there might be occasions when a little extra control is necessary.
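For those occasions, here is a minimal sketch of a message contract, again with hypothetical names; it controls how the order is laid out in the SOAP envelope, placing a license key in the header and the order in the body:

using System.ServiceModel;

[MessageContract]
public class BookOrderMessage
{
    [MessageHeader]
    public string LicenseKey;

    [MessageBodyMember]
    public BookOrder Order;
}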

Service
A service contract is what informs the clients and the rest of the outside world what the endpoint has to offer and how to communicate with it. Think of it as a single declaration that basically states: "here are the data types of my messages, here is where I am located, and here are the protocols that I communicate with."
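A minimal sketch of a service contract, reusing the hypothetical BookOrder type from above; [OperationContract] marks the operations the endpoint offers to the outside world:

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string PlaceOrder(BookOrder order);
}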


Policy and binding

Policies and bindings specify important information such as security, protocol, and other requirements. These policies are interrogated to find the things that must be satisfied before the two services start communicating.




Service Runtime
The Service Runtime layer is the layer that specifies and manages the behaviors of the service that occur during service operation, or service runtime (thus "service runtime behaviors"). Service behaviors control service type behaviors. They have no control over endpoint or message behaviors. Likewise, endpoint and message behaviors have no control over service behaviors.

The following lists the various behaviors managed by the Service Runtime layer:

· Throttling Behavior: The Throttling behavior controls how many messages are processed concurrently.

· Error Behavior: The Error behavior specifies what action will be taken if an error occurs during service runtime.

· Metadata Behavior: The Metadata behavior controls whether or not metadata is exposed to the outside world.

· Instance Behavior: The Instance behavior drives how many instances of the service will be available to process messages.

· Message Inspection: Message Inspection gives the service the ability to inspect all or parts of a message.

· Transaction Behavior: The Transaction behavior enables transacted operations; that is, if a process fails during the service runtime, it has the ability to roll back the transaction.

· Dispatch Behavior: When a message is processed by the WCF infrastructure, the Dispatch behavior determines how the message is to be handled and processed.

· Concurrency Behavior: The Concurrency behavior determines how each service, or instance of the service, handles threading. This behavior helps control how many threads can access a given instance of a service. (See the sketch after this list.)

· Parameter Filtering: When a message is acted upon by the service, certain actions can be taken based on what is in the message headers. Parameter Filtering filters the message headers and executes preset actions based on them.
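As noted in the Instance and Concurrency bullets, several of these behaviors are commonly set declaratively in code. The following is a sketch only, reusing the hypothetical IOrderService contract from the earlier section; the values shown are illustrative, not recommendations:

using System.ServiceModel;

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerCall, // Instance behavior
    ConcurrencyMode = ConcurrencyMode.Single)]         // Concurrency behavior
public class OrderService : IOrderService
{
    public string PlaceOrder(BookOrder order)
    {
        return "Order received for " + order.Title;
    }
}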




Messaging
The Messaging layer defines what formats and data exchange patterns can be used during service communication. Client applications can be developed to access this layer and control messaging details and work directly with messages and channels.

The following lists the channels and components that the Messaging layer is composed of:

· WS Security Channel: The WS Security channel implements the WS-Security specification, which enables message security.

· WS Reliable Messaging Channel: Guaranteed message delivery is provided by the WS Reliable Messaging channel.

· Encoders: Encoders let you pick from a number of encodings for the message.

· HTTP Channel: The HTTP channel tells the service that message delivery will take place via the HTTP protocol.

· TCP Channel: The TCP channel tells the service that message delivery will take place via the TCP protocol.

· Transaction Flow Channel: The Transaction Flow channel governs transacted message patterns.

· NamedPipe Channel: The NamedPipe channel enables inter-process communication.

· MSMQ Channel: If your service needs to interoperate with MSMQ, this is the channel that enables that.


Activation and Hosting
The Activation and Hosting layer provides different options in which a service can be started as well as hosted. Services can be hosted within the context of another application, or they can be self-hosted. This layer provides those options.

The following list details the hosting and activation options provided by this layer:

· Windows Activation Service: The Windows Activation Service enables WCF applications to be started automatically when running on a computer that is running the Windows Activation Service.

· .EXE: WCF allows services to be run as executables (.EXE files), that is, self-hosted (see the sketch after this list).

· Windows Services: WCF allows services to be run as a Windows service.

· COM+: WCF allows services to be run as a COM+ application.
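Here is a minimal self-hosting sketch (the .EXE option above); the address and the OrderService/IOrderService types are hypothetical carry-overs from the earlier contract sketches, and the binding pairs the endpoint with the HTTP channel and a text encoder from the Messaging layer:

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        Uri address = new Uri("http://localhost:8000/orders");
        using (ServiceHost host = new ServiceHost(typeof(OrderService), address))
        {
            // BasicHttpBinding selects an HTTP transport channel and a
            // text encoder for the messages.
            host.AddServiceEndpoint(typeof(IOrderService),
                new BasicHttpBinding(), address);
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}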


This section lists a number of the great focus points that WCF has to offer. Think of it as the personality of WCF:

· Programming model

· Scalability

· Interoperability

· Enhanced communication

· Enterprise enabled



Programming Model
The great thing about WCF is that there is no "right way" to get from point A to point B. In fact, WCF lets users start at point A and go to point B any way they see fit. This is because the programming model in WCF lets developers control how and when they want to code things, and yet gives them the ability to do that with a minimum amount of code.

As you have seen from the architecture, there are only a small handful of major components that a developer will need to work with to build high-class services. However, WCF also lets developers drill down to lower-level components if they desire to get more granular with their options. WCF makes this very simple. The WCF programming model lets a developer take whichever approach he or she desires. There is no single "right" way.

The programming model also combines many of the earlier technologies, such as the ones mentioned earlier in the chapter (MSMQ, COM+, WSE, and so on), into a single model.

Scalability
WCF services scale, and they scale in all directions. Not just up or out, but in all directions. They scale out via routing mechanisms and farms. Remember the book publisher example? The Order Process service was scaled out by providing an Order Process router, which routed orders to multiple Order Process services.

Services scale up by not being tied to a single OS or processor. Services scale up by the pure ability to deploy them on bigger and better servers and taking advantage of the new processor technologies that are starting to appear.

Services scale in by way of cross-process transports, meaning on-machine and cross-machine messaging and Object-RPC.

Services scale down by interfacing and communicating with devices such as printers, scanners, faxes, and so on.

Interoperability
How sweet is it to be able to build high-class services using a single programming model and at the same time take advantage of earlier technologies (see "Programming Model"), irrespective of the OS, environment, or platform? WCF services operate independently of all of these.

WCF services also take advantage of the WS architecture utilizing the already established standards as far as communication and protocols are concerned.

Enhanced Communication
Services aren't picky as far as transports, formats, or much else. You as a developer can choose from a handful of transports, different message formats, and a surplus of message patterns.

Along these same lines, WCF is like the country of Switzerland (nice segue, heh?), in that services are neutral as far as transports and protocols are concerned. A service can use TCP, HTTP, Named Pipes, or any other protocol to communicate. The same goes for transports. In fact, if you want to build and use your own, feel free to do so.

The reason it is this way is because, as you hopefully have figured out by now, communication is completely separate from the service. They are completely independent from one another.

Enterprise Enabled
A lot of times there is a give-and-take relationship when dealing with web services, interoperability, and other important features geared toward enterprises. As a developer you have to weigh performance against security or reliability. What cost does adding transactional capabilities add to your solution? Up until now, having the best of all worlds was a mere pipe dream.

Well, now it is time to wake up and smell the technology because WCF provides the ability to have security and reliability without sacrificing performance. And you can throw transactions into the mix as well.

A lot of this comes from the standards of the web service architecture, allowing you to build enterprise-class applications.

Now that you know what makes WCF tick, the chapter wraps up by discussing some of the great things you can do with WCF.

Service-Oriented Architecture Principles

Streams of information have been flowing from Microsoft in the form of articles and white papers regarding its commitment to SOA, and in all of this information one of the big areas constantly stressed is the set of principles behind service orientation:

· Explicit boundaries

· Autonomous services

· Policy-based compatibility

· Shared schemas and contracts


Explicit Boundaries


SOA is all about messaging—sending messages from point A to point B. These messages must be able to cross explicit and formal boundaries regardless of what is behind those boundaries. This allows developers to keep the flexibility of how services are implemented and deployed. Explicit boundaries mean that a service can be deployed anywhere and be easily and freely accessed by other services, regardless of the environment or development language of the other service.

The thing to keep in mind is that there is a cost associated with crossing boundaries. These costs come in a number of forms, such as communication, performance, and processing overhead costs. Services should be called quickly and efficiently.


Autonomous Services


Services are built and deployed independently of other services. Systems, especially distributed systems, must evolve over time and should be built to handle change easily. This SOA principle states that each service must be managed and versioned differently so as to not affect other services in the process.

In the book publisher example, the Order Process service and Order Fulfillment service are completely independent of each other; each is versioned and managed completely independently of the other. In this way, when one changes it should not affect the other. It has been said that services should be built not to fail. In following this concept, if a service is unavailable for whatever reason, or should a service depend on another service that is not available, every precaution should be taken to allow such services to survive, for example through redundancy or failover.


Policy-Based Compatibility


When services call each other, it isn't like two friends meeting in the street, exchanging pleasantries, and then talking. Services need to know a little more about each other. Each service may or may not have certain requirements before it will start communicating and handing out information. Each service has its own compatibility level and knows how it will interact with other services. These two friends in fact aren't friends at all. They are complete and total strangers. When these two strangers meet in the street, an interrogation takes place, with each person providing the other with a policy. This policy is an information sheet containing explicit information about the person. Each stranger scours the policy of the other looking for similar interests. If the two services were to talk again, it would be as if they had never met before in their life. The whole interrogation process would start over.

This is how services interact. Services look at each other's policy, looking for similarities so that they can start communicating. If two services can't satisfy each other's policy requirements, all bets are off. These policies exist in the form of machine-readable expressions.

Policies also allow you to move a service from one environment to another without changing the behavior of the service.


Shared Schemas and Contracts

Think "schemas = data" and "contracts = behavior." The contract contains information regarding the structure of the message. Services do not pass classes and types; they pass schemas and contracts. This allows for a loosely coupled system where the service does not care what type of environment the other service is executing on. The information being passed is 100 percent platform independent.

In other words:

· Services are platform and location independent. A service does not care where another service is located, and it does not care about the environment of another service to be able to communicate with it.

· Services are isolated. A change in one service does not necessitate a change in other services.

· Services are protocol, format, and transport neutral. Service communication information is flexible.

· Services are scalable.

· Service behavior is not constrained. If you want to move the location of the service, you only need to change the policy, not the service itself.

Saturday, September 22, 2007

Prototype Pattern

What Is a Prototype Pattern?

The Prototype pattern gives us a way to deal with creating new instances of objects from existing object instances. We basically produce a copy or clone of a class that we already have created and configured. This allows us to capture the present state of the original object in its clone so we can modify the cloned object without introducing those changes to the original. We might do this if we needed to duplicate a class for some reason but creating a new class was not appropriate.

Perhaps the class we wanted to clone had a particular internal state we wanted to duplicate. Creating a new class from scratch would not reproduce the appropriate internal variable values in the new class we desired without violating rules of encapsulation of the class. This might occur because we might not be able to directly access private variables inside the class. Simply constructing a new class would not get us the class in its current state. Making a clone of the existing class would allow the clone to be modified and used without changing the original and would allow us to capture the originating class's state in the new class. This can be accomplished because the prototype method is internal to the originating class, and has access to its class's internal variables. This gives the method direct access to both the originating and the new class's internal state.

Another reason to use a prototype would be because we cannot create a new class in the current scope of the code or because allowing a constructor on the class in the current scope violates the rules of encapsulation of our application. A situation like this could occur if the class's constructor was marked internal to a domain that is not the current domain. We could not call the constructor because of its encapsulation properties. This sometimes happens in cases where a facade is used. Since you cannot call the constructor outside of the facade's domain, the only way to construct a new instance of a class would be to provide a prototype method on the class.

The Prototype pattern has one main component: the Prototype. The prototype is really an ordinary class that has a method implemented to create a copy (or clone) of itself.

This is an interesting and useful pattern. Let's take a look at some examples of how it can be used.





Problem: A class that cannot be constructed outside its assembly or package needs a copy made that keeps its current state intact
For our example, we start with a class that can only be constructed internally to an assembly or package. We need to create a new instance of the class in a scope that is outside of the assembly or package of the class. Since the constructor is marked internal to its domain, assembly, or package, we cannot call it in the current scope.


The Stone class needs to be added to another class outside its package or assembly without sharing the current reference. The only way to do that is to call the factory method again and get a new instance. This might be inappropriate since the Stone class may have changed attributes that we wish to maintain in the new class. We have tried to fix this problem by creating a new class and filling its attributes with the values of the original:

Stone stone = StoneFactory.GetGranite();
stone.Color = System.Drawing.Color.DimGray;
stone.Hardness = 5;
stone.Shape = System.Drawing.Drawing2D.WarpMode.Bilinear;
Calling the factory to get a new class will give us a new instance, but we have to be careful to write the code so we can get an exact replica of the original:

Stone nonClonedStone = StoneFactory.GetGranite();
nonClonedStone.Color = stone.Color;
nonClonedStone.Hardness = stone.Hardness;
nonClonedStone.Shape = stone.Shape;

This will only work as long as we can set the internal variables of the class through methods providing external access to these variables. If we had attributes that we could not set inside the new class, this method would not work. Our problem is that we do have such variables; we just cannot modify the internal state of the class easily from outside the class. We need a way to get a new instance of the class with its complete state maintained in the new class.

Solution: Create a prototype method on the class to return a cloned instance with its state copied to the needed depth
Our solution to this dilemma is to build a method on the class that will produce a prototype of the original. In other words, we clone or copy the class with a method that has internal access to the class without violating the class's encapsulation rules.





Figure 2-12: UML for Prototype pattern example
We start by looking at the abstract Stone class. We provide an abstract method on the class named Clone(). This method will be implemented on the concrete implementations of the class to provide a way to return the particular instance of the class with its current state at the time of the call to the method.

abstract class Stone
{
    public abstract Stone Clone();
}
Now let's look at the implementation class Granite and its Clone() method. We use the .NET MemberwiseClone() method to render a shallow copy of the attributes of the class in its current state:

class Granite : Stone
{
    public override Stone Clone()
    {
        return (Granite)this.MemberwiseClone();
    }
}

In the case of having data that lives deeper in the object, we might have to capture the internal state directly. This could occur, for instance, in an object containing collections of object instances; the collections would not necessarily get cloned because the objects in them are reference types instead of value types. In this case, you might have to add each object manually. This is referred to as a deep copy. A deep copy occurs when you copy new reference types from existing ones, in addition to using MemberwiseClone() to copy the value types, making a completely disconnected new class instance. This results in an object whose internal reference types are not shared but are new instances of the original reference types. This ensures changes to the cloned object's reference types do not affect the object from which it was cloned. In the following example, we copy all the value types from the current object to a new object, then loop through the current object's internal collection and call Clone() on each reference type in the collection:

public override Stone Clone()
{
    // Shallow-copy the value-type fields first.
    Granite ret = (Granite)this.MemberwiseClone();

    // Give the clone its own collection (assumed here to be an ArrayList
    // field) and fill it with clones of the contained reference types.
    ret._collection = new ArrayList();
    foreach (Stone item in _collection)
        ret._collection.Add(item.Clone()); // Reference type is also cloned
    return ret;
}
Now when we call the method it produces an exact copy with the same internal state as the original class:

Stone clonedStone = stone.Clone();
Our test of the new class confirms this:

Cloned
Color:Color [DimGray]
Hardness:5
Shape:Bilinear
Comparison to Similar Patterns
Depending on the scope and purpose of the creational methods, either a Factory or a Singleton pattern might be a better solution than the Prototype pattern. If you need a global instance of a class that cannot be instantiated more than once, then a Singleton might be more appropriate. A Factory might also be another option for a more global management site for the object's state. The Factory could retain created objects and their states, and render them as needed.

What We Have Learned
The Prototype pattern gives us another way to deal with creating a class when copying the original object's state is important. It is also useful when the object cannot be created in its current context without violating the object's encapsulation rules. The pattern basically provides a clone of the original object, maintaining all of the original object's current state.

Related Patterns

Factory pattern

Singleton pattern

Template pattern

Design Your Soccer Engine, and Learn How To Apply Design Patterns (Observer, Decorator, Strategy and Builder Patterns) - Part I and II

http://www.codeproject.com/gen/design/applyingpatterns.asp

What Is Extreme Programming?

http://www.codeproject.com/gen/design/XP.asp

SCRUM

Introduction
SCRUM is a loose set of guidelines that govern the development process of a product, from its design stages to its completion. It aims to cure some common failures of the typical development process, such as:

· Chaos due to changing requirements - the real or perceived requirements of a project usually change drastically from the time the product is designed to when it is released. Under most product development methods, all design is done at the beginning of the project, and then no changes are allowed for or made when the requirements change.

· Unrealistic estimates of time, cost, and quality of the product - the project management and the developers tend to underestimate how much time and resources a project will take, and how much functionality can be produced within those constraints. In actuality, this usually cannot be accurately predicted at the beginning of the development cycle.

· Developers are forced to lie about how the project is progressing - when management underestimates the time and cost needed to reach a certain level of quality, the developers must either lie about how much progress has been made on the product, or face the indignation of the management.

SCRUM has been successfully employed by hundreds of different companies in many different fields, with outstanding results.

You will find many similarities between SCRUM and Extreme Programming, but one of the major differences is that SCRUM is a fairly general set of guidelines that govern the development process of a product. For this reason, it is often used as a "wrapper" for other methodologies, such as XP or CMM (Capability Maturity Model) - that is, it is used to guide the overall process of development when using these other methodologies





http://www.codeproject.com/gen/design/scrum.asp

Introduction to Test Driven Design (TDD)

Test-driven design (TDD) (Beck 2003; Astels 2003) is an evolutionary approach to development that combines test-first development, where you write a test before you write just enough production code to fulfill that test, with refactoring. What is the primary goal of TDD? One view is that the goal of TDD is specification and not validation (Martin, Newkirk, and Kess 2003). In other words, it's one way to think through your design before you write your functional code. Another view is that TDD is a programming technique. As Ron Jeffries likes to say, the goal of TDD is to write clean code that works. I think that there is merit in both arguments; I lean towards the specification view, but I leave it for you to decide.
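To make that concrete, here is a minimal test-first sketch; it assumes the NUnit test framework, and the Calculator class is a hypothetical example written only after (and only to satisfy) the test:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // The test is written first and fails until Calculator exists.
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}

// Just enough production code to make the test pass; refactoring comes next.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}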

Model-View-Controller Pattern

Model-View-Controller (MVC) is a classic design pattern often used by applications that need the ability to maintain multiple views of the same data. The MVC pattern hinges on a clean separation of objects into one of three categories — models for maintaining data, views for displaying all or a portion of the data, and controllers for handling events that affect the model or view(s).
Because of this separation, multiple views and controllers can interface with the same model. Even new types of views and controllers that never existed before can interface with a model without forcing a change in the model design.
How It Works
The MVC abstraction can be graphically represented as follows.




Events typically cause a controller to change a model, or view, or both. Whenever a controller changes a model’s data or properties, all dependent views are automatically updated. Similarly, whenever a controller changes a view, for example, by revealing areas that were previously hidden, the view gets data from the underlying model to refresh itself.
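A minimal sketch of that update mechanism, with hypothetical names; the model raises an event when it changes, every subscribed view refreshes itself, and the controller only ever touches the model:

using System;

public class CounterModel
{
    private int _value;
    public event EventHandler Changed;

    public int Value
    {
        get { return _value; }
        set
        {
            _value = value;
            // Notify all dependent views that the model changed.
            if (Changed != null) Changed(this, EventArgs.Empty);
        }
    }
}

public class CounterView
{
    public CounterView(CounterModel model)
    {
        // The view refreshes automatically whenever any controller
        // changes the model.
        model.Changed += delegate { Console.WriteLine("Value: " + model.Value); };
    }
}

public class CounterController
{
    private readonly CounterModel _model;

    public CounterController(CounterModel model)
    {
        _model = model;
    }

    // Invoked in response to a UI event, e.g. a button click.
    public void Increment()
    {
        _model.Value++;
    }
}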

Front Controller

Context

You have decided to use the Model-View-Controller (MVC) pattern to separate the user interface logic from the business logic of your dynamic Web application. You have reviewed the Page Controller pattern, but your page controller classes have complicated logic, are part of a deep inheritance hierarchy, or your application determines the navigation between pages dynamically based on configurable rules.

Problem

How do you best structure the controller for very complex Web applications so that you can achieve reuse and flexibility while avoiding code duplication?

Forces

The following are specific aspects of the forces from Model-View-Controller that apply to the Front Controller pattern.

If common logic is replicated in different views in the system, you need to centralize this logic to reduce the amount of code duplication. Removing the duplicated code is critical to improving the overall maintainability of the system.

The retrieval of data is also best handled in one location. A series of views that use the same data from the database is a good example. It is better to implement the retrieval of this data in one place as opposed to having each view retrieve the data and duplicate the database access code.

As described in MVC, testing user interface code tends to be time-consuming and tedious. Separating the individual roles enhances overall testability. This is true not only for the model code, which was described in MVC, but also applies to the controller code.


The following forces might persuade you to use Front Controller as opposed to Page Controller:

A common implementation of Page Controller involves creating a base class for behavior shared among individual pages. However, over time these base classes can grow with code that is not common to all pages. It requires discipline to periodically refactor this base class to ensure that only common behavior is included. For example, you do not want a page to examine a request and decide (based on request parameters) to transfer control to a different page, because this type of decision is more specific to a particular function, rather than common among all the pages.

To avoid adding excessive conditional logic in the base class, you could create a deeper inheritance hierarchy to remove the conditional logic. For example, in an application that has three functional areas, it might be useful to have a single base class that has common functionality for the application. There might also be another class for each functional area, which inherits from the overall application base class. This type of structure, at first glance, is straightforward, but often leads to a very brittle design and implementation, and to a morass of code.

The Page Controller solution describes a single object per logical page. This solution breaks down when you need to control or coordinate processing across multiple pages. For example, suppose that you have complex configurable navigation, which is stored in XML, in your Web application. When a request comes in, the application must look up where to go next, based on its current state.

Because Page Controller is implemented with a single object per logical page, it is difficult to consistently apply a particular action across all the pages in a Web application. Security, for example, is best implemented in a coordinated fashion. Having security handled by each view or page controller object is problematic because it can be inconsistently applied and can lead to security breaches. An additional solution to this problem is also discussed in Intercepting Filter.

The association of the URL to the particular controller object can be constraining for Web applications. For example, suppose your site has a wizard-like interface for gathering information. This wizard consists of a number of mandatory pages and a number of optional pages based on user input. When implemented with Page Controller, the optional pages would have to be implemented with conditional logic in the base class to select the next page.
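A minimal sketch of the centralized dispatch that Front Controller gives you, using ASP.NET's IHttpHandler; the handler name and actions are hypothetical:

using System.Web;

public class FrontControllerHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Every request funnels through this single point, so security
        // checks and navigation rules live here rather than in each page.
        string action = context.Request.QueryString["action"];

        switch (action)
        {
            case "listOrders":
                context.Response.Write("Order list view");
                break;
            default:
                context.Response.Write("Home view");
                break;
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}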



http://msdn2.microsoft.com/en-us/library/ms978723.aspx

Model-driven architecture (MDA™)

Model-driven architecture (MDA™) is a software design approach launched by the Object Management Group (OMG) in 2001.

MDA supports model-driven engineering of software systems. MDA provides a set of guidelines for structuring specifications expressed as models. The MDA approach defines system functionality using a platform-independent model (PIM) written in an appropriate domain-specific language. Then, given a platform definition model (PDM) corresponding to CORBA, .NET, the Web, etc., the PIM is translated to one or more platform-specific models (PSMs) that computers can run. The PSM may use different domain-specific languages, or a general-purpose language like Java, C#, Python, etc.

Automated tools generally perform these translations, for example tools compliant with the new OMG standard named QVT. The OMG documents the overall process in a document called the MDA Guide. MDA principles can also apply to other areas such as business process modeling, where the PIM is translated to either automated or manual processes.

The MDA model is related to multiple standards, including the Unified Modeling Language (UML), the Meta-Object Facility (MOF), XML Metadata Interchange (XMI), Enterprise Distributed Object Computing (EDOC), the Software Process Engineering Metamodel (SPEM), and the Common Warehouse Metamodel (CWM). Note that the term "architecture" in Model-driven architecture does not refer to the architecture of the system being modeled, but rather to the architecture of the various standards and model forms that serve as the technology basis for MDA.

The Object Management Group holds trademarks on MDA, as well as several similar terms including Model Driven Development (MDD), Model Driven Application Development, Model Based Application Development, Model Based Programming, and others. The main acronym that OMG has not yet registered is MDE. As a consequence, the research community uses MDE to refer to general model engineering ideas, without committing to strict OMG standards.

OMG focuses Model-driven architecture on forward engineering, i.e. producing code from abstract, human-elaborated specifications. OMG's ADTF (Analysis and Design Task Force) group leads this effort. With some humour, the group chose ADM (MDA backwards) to name the study of reverse engineering; ADM stands for Architecture-Driven Modernization. The objective of ADM is to produce standards for model-based reverse engineering of legacy systems. The Knowledge Discovery Metamodel (KDM) is the furthest along of these efforts, and describes information systems in terms of various assets (programs, specifications, data, test files, database schemas, etc.).

The Model-Driven Architecture (MDA)

The Model-Driven Architecture (MDA) defines an approach to modeling that separates the specification of system functionality from the specification of its implementation on a specific technology platform. In short, it defines a set of guidelines for structuring specifications expressed as models. The MDA promotes an approach where the same model specifying system functionality can be realized on multiple platforms through auxiliary mapping standards, or through point mappings to specific platforms. It also supports the concept of explicitly relating the models of different applications, enabling integration and interoperability and supporting system evolution as platform technologies come and go.

In other words:

Model Driven Architecture (MDA). The MDA is based on the idea that a system or component can be modeled via two categories of models: Platform Independent Models (PIMs) and Platform Specific Models (PSMs). PIMs are further divided into Computation Independent Business Models (CIBMs) and Platform Independent Component Models (PICMs). As the name implies, PIMs do not take technology-specific considerations into account, a concept that is equivalent to logical models in the structured methodologies and to essential models within usage-centered techniques. The CIBMs represent the business requirements and processes that the system or component supports, and the PICMs are used to model the logical business components, also called domain components. PSMs bring technology issues into account and are effectively transformations of PIMs to reflect platform-specific considerations.

Architecture Tradeoff Analysis Method (ATAM)

Abstract


If a software architecture is a key business asset for an organization, then architectural analysis must also be a key practice for that organization. Why? Because architectures are complex and involve many design tradeoffs. Without undertaking a formal analysis process, the organization cannot ensure that the architectural decisions made, particularly those which affect the achievement of quality attributes such as performance, availability, security, and modifiability, are advisable ones that appropriately mitigate risks. In this article, we will discuss some of the technical and organizational foundations for performing architectural analysis, and will present the Architecture Tradeoff Analysis Method (ATAM), a technique for analyzing software architectures.

Tuesday, September 04, 2007

Microsoft® Connected Services Framework (CSF)

Microsoft® Connected Services Framework (CSF) is an integrated, server-based software product that provides the common service capabilities needed to connect and manage content services and networks. It builds, delivers, and manages services using a service-oriented architecture (SOA). For telecommunications operators and service providers, Connected Services Framework allows them to aggregate, provision, and manage converged communications services for their subscribers across multiple networks and a range of device types. For media and entertainment organizations, Connected Services Framework provides a service-oriented infrastructure to manage how disparate applications work together to create, manipulate, share, and distribute digital content.