Overview
Now that you have designed your application, you will need to evaluate that design. Design evaluation involves examining the properties of your application to determine whether design decisions meet non-functional requirements. In other words, design evaluation requires you to review your design to determine whether it meets common system requirements such as performance, maintainability, scalability, and so on. Evaluating the design at this early phase will enable you to find and correct design mistakes before they become expensive to fix.
Objectives
Evaluate the logical design of an application.
Evaluate the logical design for performance.
Evaluate the logical design for maintainability.
Evaluate the logical design for extensibility.
Evaluate the logical design for scalability.
Evaluate the logical design for availability.
Evaluate the logical design for security.
Evaluate the logical design against use cases.
Evaluate the logical design for recoverability.
Evaluate the logical design for data integrity.
Evaluate the physical design of an application. Considerations include the design of the project structure, the number of files, the number of assemblies, and the location of these resources on the server.
Evaluate the physical design for performance.
Evaluate the physical design for maintainability.
Evaluate how the physical location of files affects the extensibility of the application.
Evaluate the physical design for scalability.
Evaluate the physical design for availability.
Evaluate the physical design for security.
Evaluate the physical design for recoverability.
Evaluate the physical design for data integrity.
-------------
Evaluation of the Logical Design
Once you have a logical design for a proposed system, you need to evaluate the design based on a set of standard evaluation criteria. In general, this means evaluating the design for performance, maintainability, extensibility, scalability, availability, security, recoverability, data integrity, and use-case correctness. Typically, you can group these evaluations into run-time evaluation (performance, scalability, availability, recoverability, and security), architectural evaluation (maintainability and extensibility), and requirements evaluation (business use case).
Performance Evaluation
Although the logical design of a system is high-level, there are performance considerations that can be evaluated. For instance, you need to evaluate the system tiers and abstraction layers in the design.
As you review the logical design, also ensure that the design has not been over-designed into too many tiers. Typically, designing an enterprise application into three logical tiers is sufficient. Creating additional logical tiers is usually an indication of a poor design unless there is very well-thought-out reasoning for the additional tiers.
The levels of abstraction for particular entities should be reviewed to ensure that there are very specific reasons to abstract out particular parts of the design. In particular, you should be looking for extraneous levels of abstraction. Additional levels of abstraction can affect performance by forcing the flow of data across too many objects. By removing unnecessary levels of abstraction, you can ensure that the design has a high level of performance.
The level at which you can do a performance evaluation of the logical design is generally limited to finding redundancies. The level of detail required to determine other performance problems is just not available in the logical design.
Tip Creating multiple levels of abstraction beyond the three tiers of an application is usually an indication of a performance issue in a logical design.
Scalability Evaluation
The logical design review is also when you should be evaluating the design for scalability. Scalability simply refers to the ability to adapt to an increasing load on the system as the number of users increases. In a logical design, the most important piece of handling scalability is to ensure that you have a separate, logical middle (or data) tier. Remember, the logical design does not specify how you will actually deploy an application on a physical computer but, instead, is a robust design that can accommodate scalability concerns. You can address these concerns by ensuring that the logical design keeps the entire middle (or data) tier as a discrete part of the design.
Availability and Recoverability Evaluations
Your logical design should also take into account the availability and recoverability of your project. High availability is the characteristic of a design that allows for failover and recovery from catastrophic failures. This includes failover ability, reliable transactions, data synchronization, and disaster preparedness. Because the logical design is a fairly high-level view, not all of these availability concerns can be dealt with at the logical level. Instead, try to ensure that your entities can deal with availability solutions.
Your entities should be able to handle high-availability solutions in several ways:
By using reliable transactions (for example, database transactions, Microsoft Message Queue [MSMQ] transactional messaging, or distributed transactions such as Enterprise Services or distributed transaction coordinator [DTC] transactions); a sketch follows this list
By supporting the rebuilding of corrupted files and configuration after a catastrophic failure, in case data saved outside a transaction is lost
By allowing for failover to different databases, data centers, or other data services (for instance, Web services) in case of catastrophic hardware failure
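For example, here is a minimal sketch of the first approach, using the System.Transactions API so that both database writes commit or roll back together. The table names, connection string, and the UpdateOrder helper are hypothetical and only illustrate the idea of a reliable transaction.

using System;
using System.Data.SqlClient;
using System.Transactions;

public static class OrderWriter
{
    // Hypothetical helper: wraps two related writes in one reliable transaction.
    public static void UpdateOrder(int orderId, decimal total)
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // The connection string is a placeholder for this sketch.
            using (SqlConnection connection = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
            {
                connection.Open();

                SqlCommand update = new SqlCommand(
                    "UPDATE Orders SET Total = @total WHERE OrderId = @id", connection);
                update.Parameters.AddWithValue("@total", total);
                update.Parameters.AddWithValue("@id", orderId);
                update.ExecuteNonQuery();

                SqlCommand audit = new SqlCommand(
                    "INSERT INTO OrderAudit (OrderId, ChangedOn) VALUES (@id, @when)", connection);
                audit.Parameters.AddWithValue("@id", orderId);
                audit.Parameters.AddWithValue("@when", DateTime.UtcNow);
                audit.ExecuteNonQuery();
            }

            // If Complete is never called (for example, because an exception is thrown),
            // the transaction rolls back and neither write is persisted.
            scope.Complete();
        }
    }
}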
Security Evaluation
In evaluating the security of an enterprise application’s logical design, you will need to ensure that the application will be able to protect its secrets. Enterprise applications typically need to access security information to do their work. If you have an enterprise application that uses a database, you will need to ensure that the connection information the application uses is stored securely. This is important because you do not want ill-intentioned people to have access to your application’s data. For example, let’s say that you have an enterprise application that is used to access sales information for salespeople. A copy of this application might be installed on the laptop of a salesperson. If this laptop is subsequently stolen, how do you ensure that the data the application uses is secure?
You can secure data in your enterprise application in several ways: by encrypting sensitive data such as configuration information, by keeping local caches of data only when absolutely necessary, and by using authentication to prevent unauthorized access to the software itself.
For almost every application, sensitive data such as logon information to a Web service or database server that allows access to sensitive data can be dangerous in the wrong hands. By using Microsoft Windows Data Protection (more often called the data protection application programming interface, or DPAPI), you can encrypt data so that it is decipherable only by a specific user. DPAPI allows you to encrypt data without managing shared secrets, as is common with System.Security.Cryptography.
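In the .NET Framework, DPAPI is exposed through the ProtectedData class in System.Security.dll. The following is a minimal sketch only; the SecretStore name and the entropy value are assumptions made for illustration.

using System;
using System.Security.Cryptography;
using System.Text;

public static class SecretStore
{
    // Optional entropy mixes an application-specific value into the encryption;
    // this particular string is an illustrative assumption.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("MyAppEntropy");

    public static byte[] Protect(string secret)
    {
        // DataProtectionScope.CurrentUser ties the ciphertext to the current Windows user,
        // so no shared secret has to be stored or exchanged by the application.
        return ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret), Entropy, DataProtectionScope.CurrentUser);
    }

    public static string Unprotect(byte[] encrypted)
    {
        byte[] clear = ProtectedData.Unprotect(encrypted, Entropy, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(clear);
    }
}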
More Info The data protection API
To learn more about the data protection API, visit Microsoft.com at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsecure/html/windataprotection-dpapi.asp.
In addition to encrypting data, minimize local caches of data on client computers. Keeping local copies of data is dangerous because those caches are susceptible to access by people who do not have permission through your application. Caches can also be copied out to easily hidden pieces of hardware, such as USB memory devices, for use outside of your control. If you do need to keep local caches, such as for an application that is not always connected to a company’s servers, then protecting any sensitive data in the cache becomes paramount. Let’s say you have a cache of data that contains medical record information. Protecting that data from inappropriate users is your responsibility.
Finally, review the logical design to ensure that unauthorized usage of your application is not possible. Depending on what kind of application you are designing, this can be very important. Suppose you are writing a medical application. Even if you encrypt the sensitive data and never keep a local cache, if you allow people to use your application without verifying their identity, your application will be open to divulging information to which some people should not have access.
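For example, a simple sketch of checking the caller’s Windows identity before exposing sensitive functionality might look like the following; the CONTOSO\MedicalStaff group and the DemandMedicalStaff method are hypothetical.

using System.Security;
using System.Security.Principal;

public static class AccessGuard
{
    public static void DemandMedicalStaff()
    {
        // Windows (domain) authentication: the operating system has already verified the identity.
        WindowsPrincipal principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());

        // The group name is an assumption for this example.
        if (!principal.IsInRole(@"CONTOSO\MedicalStaff"))
        {
            throw new SecurityException("The current user is not authorized to view medical records.");
        }
    }
}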
Quick Check
How should you handle securing sensitive configuration data?
What kind of authentication should you use in enterprise applications?
What should be your policy about caching sensitive data?
Quick Check Answers
You should use encryption to protect configuration data such as database connection strings from unauthorized access. Your applications should be able to decrypt the data but should not allow users to view the configuration information directly.
Using Microsoft Windows authentication (that is, domain authentication) in your applications is preferable to manual authentication methods because the security of identities and passwords can be maintained in a single place.
Sensitive data should not be cached if at all possible because getting access to those caches—through a variety of means—might compromise your data.
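As one way to apply the first answer, the .NET configuration API can encrypt the connectionStrings section with a DPAPI-based provider so that the application can still read it but a casual viewer of the file cannot. This is a sketch, and the ConfigProtector name is hypothetical.

using System.Configuration;

public static class ConfigProtector
{
    public static void ProtectConnectionStrings()
    {
        // Opens the configuration file for the current executable.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("connectionStrings");
        if (section != null && !section.SectionInformation.IsProtected)
        {
            // DataProtectionConfigurationProvider uses DPAPI under the covers.
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}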
Maintainability Evaluation
Ninety cents out of every development dollar are used to maintain a system, not to build it. That makes the maintainability of a system crucial to its long-term success. Evaluation for maintainability starts here, in the logical design.
Maintainability in the logical design comes from segmenting elements of the logical design into specific tiers. Specifically, each element of the logical design should belong to one—and only one—tier of the design. The most common maintainability problem in a logical design is entities that cross the data–user interface boundary. For example, if you have an entity in the logical design that is meant to store data about a customer, that same component should not also know how to expose the customer as a Windows Forms control. Separating each of those pieces of functionality will ensure that changes to the user interface and the data layer remain discrete. Intermingling the user interface and data tiers inevitably creates code that becomes increasingly difficult to maintain.
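A minimal sketch of that separation follows; the Customer and CustomerEditorControl types are hypothetical. The entity carries only data, and a separate Windows Forms control is responsible for displaying it, so either can change without touching the other.

using System.Windows.Forms;

// Data-tier entity: knows nothing about how it is displayed.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// User-interface tier: knows how to display a Customer but holds no data-access logic.
public class CustomerEditorControl : UserControl
{
    private readonly TextBox nameBox = new TextBox();

    public CustomerEditorControl()
    {
        Controls.Add(nameBox);
    }

    public void Bind(Customer customer)
    {
        nameBox.Text = customer.Name;
    }
}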
Extensibility Evaluation
While reviewing the logical design, it is important to evaluate extensibility from two distinct perspectives: Can my design build on existing components? Are my own components extensible?
You should evaluate the logical design and determine which entities in your design can be built on top of other components. Usually, this means determining which classes to extend from the Microsoft .NET Framework itself. Look also at what classes you could use in your own code to build these new objects upon. For example, if you look at a customer entity in the logical design, you might have a common base class that performs data access for your entity objects. On the other hand, you might derive those classes from a class in the .NET Framework (for instance, the Component class) to get built-in behaviors.
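As a rough sketch of the two options just described (the EntityBase and CustomerEntity names are assumptions for illustration):

using System.ComponentModel;

// Option 1: a project-specific base class that centralizes shared data-access behavior.
public abstract class EntityBase
{
    public abstract void Load(int id);
    public abstract void Save();
}

// Option 2: derive from a .NET Framework class such as Component
// to pick up built-in behaviors (designer support, containment, IDisposable).
public class CustomerEntity : Component
{
    public int Id { get; set; }
    public string Name { get; set; }
}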
It is important for you to look for ways to reuse existing code to complete your design instead of rebuilding everything from scratch. Finding components inside the .NET Framework, as well as in your existing code base (if any) to use as the basis of your components, will improve the quality of your project (that is, old code usually means better code) as well as decrease development costs.
In your logical design, look for ways to ensure that the code you write is extensible. One of the reasons you write object-oriented code is to enable code reuse. The more of your application that can be designed for reuse, the better the return on the investment you are making in the technology.
Data Integrity Evaluation
Your logical design should also suggest how the data that the enterprise application works with retains its integrity during the full life cycle of the application. This means you will need to ensure not only that the database has a complete schema, including primary keys, foreign keys, and data constraints, but also that the client code uses the correct type of data concurrency for your application.
The decision you make about which type of concurrency to use (optimistic versus pessimistic) will affect the overall safety and performance of your data tier. Typically, optimistic concurrency will perform better but will increase the chance that data has changed between updates. Optimistic concurrency assumes that data will remain unchanged between retrieving the data and saving changes. If the data has changed during that time, you will need to determine the best way of handling those changes. Optimistic concurrency generally performs better because there are fewer database and logical locks on the data, so more clients can access data concurrently.
Alternatively, choosing pessimistic concurrency ensures that the data a client is changing cannot be changed by other clients during the time that the client is working with that data. In all but the most severe cases, optimistic concurrency is the right decision because it scales out better and performs well.
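A rough sketch of an optimistic-concurrency update follows; the table and column names are assumptions. The UPDATE succeeds only if the row still holds the value that was originally read, so a change made by another client in the meantime is detected rather than silently overwritten.

using System.Data.SqlClient;

public static class CustomerRepository
{
    public static bool UpdatePhone(SqlConnection connection, int id, string newPhone, string originalPhone)
    {
        SqlCommand command = new SqlCommand(
            "UPDATE Customers SET Phone = @newPhone " +
            "WHERE CustomerId = @id AND Phone = @originalPhone", connection);
        command.Parameters.AddWithValue("@newPhone", newPhone);
        command.Parameters.AddWithValue("@id", id);
        command.Parameters.AddWithValue("@originalPhone", originalPhone);

        // Zero rows affected means another client changed the row after it was read;
        // the caller must decide how to reconcile the conflict.
        return command.ExecuteNonQuery() == 1;
    }
}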
Business Use-Case Evaluation
At the point of the logical design review, you will need to review the business use cases to ensure that what you have designed continues to meet those needs. You might assume that, because the design was initiated from the use cases, this evaluation is not necessary, but that would be wrong. Much like a message passed across a room changes with each listener, it is very common and easy for a designer to make assumptions about what the use cases mean. This review of the use cases against the design will almost always find inconsistencies (or ambiguities) that need to be addressed in the logical design.
Evaluation of the Physical Design
The physical design of an enterprise application includes how the project is going to look when deployed. The big difference between this evaluation and the evaluation of the logical design is the number of concrete details in the design, which include the client deployment (for instance, how the application will be delivered to the client computers) as well as the network topology of what code and data will exist on which type of computer.
Much like the logical design, the evaluation of the physical design is broken up into a series of evaluation categories. These include performance, maintainability, extensibility, scalability, availability, and security evaluations.
Performance Evaluation
The performance evaluation of the physical design starts with a review of the different aspects of the design. These aspects include content, network, and database implementations. Each of these design aspects can adversely affect the final performance of the enterprise application.
You should review the network implementation of the project. The performance of an enterprise application can be greatly improved or destroyed based on how the data tier is implemented in the physical design. Look at how the middle tier is implemented to determine whether access to the tier is helping or hurting performance. There are no firm rules about the right implementation, but separating your data tier into a separate class of computer in the physical design is not always necessary. Typically, you would choose to separate the user interface tier and the middle tier into separate computers if the middle tier will tax the application by using a lot of memory or processor cycles. Because of the added expense of remotely accessing the data across computer boundaries, it is often more economical performance-wise to keep the middle tier on the same computer as the application.
You need to review the database implementation to ensure that it is performing adequately. As part of your performance evaluation, check the database operations to make sure they are performing as expected, both in isolation and under load testing. If this evaluation finds fault, there are many solutions for tuning the database, too numerous to explain in this training kit.
Scalability Evaluation
The scalability evaluation of the physical design is much like the evaluation of the logical design; you need to determine whether the system can adapt to handle larger loads. You do this by reviewing the physical design to ensure that all components—custom components written for the project as well as first- and third-party components used in the project—are compatible with moving from an in-process usage to a middle-tier scenario. This usually entails ensuring that all components can handle being moved across process boundaries in an efficient manner.
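One small sketch of the kind of check this review implies: entities that might move from in-process use to a separate middle-tier process should at least be serializable so they can cross process boundaries. The OrderSnapshot type is a hypothetical example.

using System;

[Serializable]
public class OrderSnapshot
{
    // Plain data fields serialize efficiently when the object crosses a process boundary.
    public int OrderId;
    public DateTime PlacedOn;
    public decimal Total;
}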
Availability and Recoverability Evaluation
In reviewing the availability of your enterprise application, you will need to determine what level of availability is required for the application. For example, if you are running a critical customer relationship management (CRM) system, it becomes very important for you to handle failover to different computers and even data centers if you have a catastrophic failure such as hardware failure, interruption of Internet access, and so on. Alternatively, if you are building a small internal application, availability might not be crucial to your success. The actual availability requirements should be evaluated at this time. This includes more than just ensuring that the deployment strategy takes availability into account; you should also consider how to keep backup databases and Web servers available with the correct version of the code and data. There are different strategies, but usually you will want to use a failover clustered database server for local availability.
The other side of availability is recoverability. Even if you do not need to support failover to new computers, data centers, and so on, you will likely need to support recovering from a failure. This means you need a strategy for backing up any data in the system. This includes database data, event data such as MSMQ and Event Logs, and any other transient data that is crucial to your business.
Security Evaluation
When reviewing the security of your physical design, be aware of the physical design of any communication between your enterprise application and any servers. For example, if your application will access a database server, you will need to ensure that access to that server is secure. Within your organization, this might not be a problem because firewalls and other security measures should keep unapproved people out. But as people are becoming more and more mobile, you will need to deal with an application that can be run outside of your network. In that case, you should use a virtual private network (VPN) to provide safe access to your internal servers. You should never expose your servers to the bare Internet just to allow these remote applications to work. If using a VPN isn’t possible, creating proxies to the server, for instance through Web services, is acceptable but often incurs a performance penalty and the expense of securing the Web servers.
Real World
Shawn Wildermuth
Securing enterprise applications in this world of mobile professionals is becoming increasingly difficult. Many organizations I have dealt with have tried to avoid creating a VPN for mobile professionals by using all sorts of other solutions, such as Web services, terminal services, and so on. In almost every case, it was easier simply to support a VPN.
If your design uses ClickOnce deployment, you will need to ensure that the Web servers exposing the application are secured like any other Internet-facing server. If you think that a server with a ClickOnce application is not apt to be hacked, you are just inviting trouble.
Maintainability Evaluation
In reviewing the maintainability of the physical design, pay attention to whether the code base is organized in a common-sense way. Components should use common directory structures, and the project’s directory structure should mirror namespace usage as much as possible. The key to maintainability is making the code base easy to navigate.
Extensibility Evaluation
The physical makeup of any enterprise application can strongly affect how extensible it is. In general, your review of extensibility should include a review of what controls and other components are written as part of the project. These controls and components should be located as centrally as possible so that they can be reused in other applications if necessary. Writing the same control again for different applications is simply tedious. Alternatively, copying components from one project to another prevents each application that uses a particular component from benefiting from bug fixes and improvements to that component.
Data Integrity Evaluation
Finally, do an evaluation of the data integrity for your physical design. Unlike the evaluation of the logical design, this evaluation should include an appraisal from the user interface down to the database. Data constraints that are included in the database schema will ensure that the data stays consistent, but you should also enforce the same constraints higher in the code to reduce the need to go to the database just to find a data inconsistency. For example, if you have a check constraint in the database to ensure that Social Security numbers are nine digits, your user interface should validate that rule as well, so that an invalid Social Security number can be reported to the user immediately rather than waiting for the database to reject the data with a failure message.
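As a sketch of mirroring that database check constraint in the user interface (the nine-digit rule matches the example above; the SsnValidator name is hypothetical):

using System.Text.RegularExpressions;

public static class SsnValidator
{
    // Mirrors a database CHECK constraint requiring exactly nine digits,
    // so bad input is reported to the user before a round trip to the database.
    public static bool IsValidSsn(string candidate)
    {
        return !string.IsNullOrEmpty(candidate) && Regex.IsMatch(candidate, @"^\d{9}$");
    }
}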
Tip Data validation is not the same as data integrity. Whereas data validation might be required in the user interface of an application, the integrity of relationships between different entities in your data should be maintained both in the data tier of your application and at the database.
Lesson Summary
Evaluating the physical design ensures that the deployed application meets the full requirements of the project.
This physical design evaluation will review the performance, scalability, availability, recoverability, security, maintainability, extensibility, and data integrity of the designed system.
A Talk with Microsoft Architect Pat Helland
AJ: As an architect for many years, what kind of advice would you give to someone who wants to become an architect?
PH: Architecture is a very interesting area. I liken it to building architecture. If you stand back and think about what a building architect has to do, they first of all have to think about a business need and understand how to create a building that fulfills that business need. Even more, the building needs to have a feel to it. When someone walks up and looks at the building, some emotion is conveyed.
At the same time, we have to deal with a myriad of pragmatics. How is the air going to flow through the building? How will the people move through the elevators? How do you make it comfortable and meet all other environmental considerations? You have the functioning of the building—the utilitarian object—combined with the fulfillment of the business needs, combined with the effect that particular structure will have on human beings.
Looking at that from the standpoint of software, you see exact parallels. The primary goal of a software architect is to understand and work out how to fulfill the business need. At the same time, you want the software to relate to people and their feelings about using it. While you are doing that, you have to deal with all of the transcendent aspects of the darn thing functioning correctly and pragmatically.
A building architect dealing with a large building may not be an absolute expert on elevators or airflow. But he has to know enough about those areas to interact with the experts and bring it together into a cohesive whole. Similarly, a software architect needs to know enough about the different aspects of the larger system that they are putting together to interact with the respective specialists.
Pat Helland has almost 30 years of experience in scalable transaction and database systems.
In 1978, Pat worked at BTI Computer Systems, where he built an implementation language, parser generator, Btree subsystem, transaction recovery, and ISAM (Indexed Sequential Access Method).
Starting in 1982, Pat was chief architect and senior implementor for TMF (Transaction Monitoring Facility), which implemented the database logging/recovery and distributed transactions for Tandem’s NonStop Guardian system. This provided scalable and highly available access to data with a fault-tolerant, message-based system including distributed transactions and replication for datacenter failures.
In 1991, Pat moved to HaL Computer Systems, where he discovered he could work on hardware architecture. He drove the design and implementation of a 64-MMU (Memory Management Unit) and a CC-NUMA (Cache Coherent Non-Uniform Memory Architecture) multi-processor.
By 1994, Microsoft called and asked Pat to work on building middleware for the enterprise. He came and drove the architecture for MS-DTC (Distributed Transaction Coordinator) and MTS (Microsoft Transaction Server). Later, Pat became interested in what today is called SOA (Service Oriented Architecture) and started a project to provide high-performance, exactly-once-in-order messaging deeply integrated with the SQL database. This shipped in SQL Server 2005 as SQL Service Broker. Later, Pat worked on WinFS and did a stint in DPE evangelizing to our largest enterprise customers.
For the past two years, Pat has been working at Amazon on the catalog, buyability, search, and browse areas. He also started and ran a weekly internal seminar series. He is excited to return home to Microsoft and to work on the Visual Studio team!