Let's rokk! [Tudor Cret's blog]

January 25, 2010

Behind ciripescu.ro, a Windows Azure cloud application (part 1)

Filed under: Technologies — Tudor Cret @ 2:18 pm


January 4th marked an important step toward enabling Microsoft's customers and partners to build and grow businesses on the Windows Azure platform: Microsoft announced that upgrades from Community Technology Preview (CTP) accounts of the Windows® Azure™ platform (i.e., Windows Azure, SQL Azure and/or Windows Azure platform AppFabric) to paid commercial subscriptions are available. For the moment this applies only to a list of countries, and Romania is not among them. CTP accounts can still be used until February 1, 2010; after that date the storage accounts will become read-only.

The good news is that we finally have more than a CTP version. The bad news is that it is no longer free. How much does it cost? Just check the pricing list here. And before computing the total, take a pen and paper or try the TCO and ROI calculator. In our case, for ciripescu.ro with one worker role and one web role, we start from a minimum of $50 per month.

But what services do we get? In three words: availability, durability, scalability. That means:

  • Data in the storage (I'm not talking about the database here, but about the storage service) is always available. Data is replicated across at least 3 nodes, so no request to the storage should fail. Uptime for my worker and web roles is around 99.99%, because each worker or web role is served by at least 3 machines.
  • Data in the storage is never lost. As I said, data is replicated over 3 nodes; at least one node is always available to serve the requested data.
  • I can scale my application as easily as clicking a button, using a pay-as-you-go service. Et voilà, I can add a new role to my application. With one click I have a new machine working for me, which means no hardware, no administrative tasks, etc.

Of course, these are not all the benefits. More can be found here.      

The purpose of this post is to summarize what is behind ciripescu.ro, a simple micro-blogging application built around the Twitter concept. The platform can be extended and customized for any domain.

So our intention is to develop a simple one-way communication and content delivery network using Windows Azure. Let’s define some entities:      

  • User – of course we can do nothing without usernames, passwords and emails 🙂 Additional info can be whatever you want: an avatar, full name, etc.
  • Cirip – an entry of at most 255 characters, visible to everybody. User content is delivered through this entity.
  • Private Message – a private message between two users.
  • Invitation – an invitation a user can send to bring other users into the application.

I won’t enter in more details about what the application does, you can watch it in action (at least until the end of January L) at www.ciripescu.ro. For the next sections I will concentrate on how the application does all its functionalities. Configurations and deployment aren’t so interesting since they are exposed to any hello world azure application video or tutorial.      

High level view

First let’s take a look at the architecture:      

High level design of Ciripescu.ro


The web role is what people traditionally view as a web application. The VS project contains everything a normal ASP.NET web application would: aspx pages, user controls, web services, code-behind (including the business layer or data access layer), etc. In fact it can be converted to a normal web application with two clicks from the context menu, and it may even be an ASP.NET MVC application. All HTTP requests are dispatched by Windows Azure to one of these web roles. I say "one of them" because scaling such an application up involves running more web roles, and possibly more worker roles. If one web role has "enough" requests to process, a new incoming request will be handled by one of the other web roles. With just one click you can have more web roles and more worker roles. Just one click... and a bigger bill at the end of the month. Yes, but what if you have your own datacenter? More web roles and worker roles there mean electricity, hardware and manpower, and the hardware counts among the company's assets.

The worker role can best be compared to a Windows service. It's an application that runs in the cloud and performs tasks that are not suitable for a web application. Anything that takes too long to process and would unreasonably delay the loading of a page should clearly execute asynchronously, outside the normal IIS pipeline. For instance, ciripescu.ro sends all its emails using such a worker role. Why? Well, like most other social networking sites, ciripescu.ro can import contacts, for instance from a Yahoo Messenger account, and then let the user select which of them he wishes to invite. This means that with one click a user could make ciripescu.ro send out a few hundred emails. Such a task could take from a few minutes up to an hour, and obviously the user can't wait that long for a page to load. Another example: let's say I have to send an SMS from my web application, and sending it through an SMS gateway takes more than 10 seconds. Et voilà, I already have the worker; it does the job while the user is free to walk through the rest of the application. So we can have as web roles any web application (website, ASP.NET MVC application) and as worker roles any class library project (C# or VB). I don't see a reason why we couldn't have multiple worker or web roles written in different languages. It would be interesting to see a mix of worker and web roles developed on different platforms (PHP, Java, .NET) working together in the same cloud application. What would that mean? That Azure is flexible and not dedicated only to .NET? I haven't tested that an ASP.NET MVC web role can use a Java worker role, but it's a challenge for the future. In any case, Azure is programmatically flexible: you can easily integrate existing web applications with new ones, and integration with on-premises applications can be done via the Service Bus.
More about the Service Bus and Windows Azure flexibility can be found in the Windows Azure training kit or by watching videos from PDC 2009.

More available in part 2, here.

Behind ciripescu.ro, a Windows Azure cloud application (part 2)

Filed under: Technologies — Tudor Cret @ 2:14 pm


Let’s go deeper(1) – web role and storage

As the high-level design diagram suggests, the communication between web roles and worker roles is done through the various features of the storage account, such as queues. Queues are normally used to send messages/tasks between roles, as they implement all the safety features needed for asynchronous communication between multiple applications running on different servers. This includes locking and recovery in case one of the roles crashes while still processing a message: roles are automatically restarted when they crash, and items popped from the queue are made visible again. In fact this is the beauty of Windows Azure from the software developer's perspective: you write the code as if only one role were running, and the cloud makes it work automatically with any number of roles, scaling it up without limit. The downside is that Azure Tables are somewhat limited: there are no sorting or grouping options. Of course there is SQL Azure (the classic SQL relational database, cloud version), but sometimes even with a relational database you denormalize for speed, so it would have been nice to have at least a sort option for Azure Tables.

While message passing between roles is mainly done using Azure Queues, other information is still passed the traditional way: shared access to resources such as a database. Azure Storage offers two different types of non-relational cloud-based storage for this: Blobs (which store entire files) and Azure Tables (which store entities).

Ciripescu.ro uses Azure Tables to store all its business entities, instead of a classic SQL database. We are still talking about business objects, a business layer and a data access layer, as in any other application. The only difference is that the data access layer stores the objects in a non-relational storage designed to be used in a cloud environment. What makes Azure Tables special is that it allows you to have very large tables that can still be searched very fast. In Ciripescu's case, the table containing messages sent between users (Cirips) could have millions of items, and querying a SQL table with a few million rows can take several seconds. If we think of Twitter, a few million is more like a joke. So how do you search a table with a billion entities? The answer is simple: split that table across a cloud of servers using a relevant partition key, and either search only the partition where the object lives, or search all of them in parallel. This is exactly what Azure Tables does: for each object you define a partition key, which allows Windows Azure to transparently split the table, and a row key, which allows fast searching within a single partition.

User entity

A business object has to define those two fields and map relevant properties to them. Take note of the User class from ciripescu.ro, which inherits from TableStorageEntity (the class that defines the said properties) and maps the username as PartitionKey and String.Empty as RowKey through its constructor. The TableStorageEntity class is defined in the Windows Azure SDK.
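As a rough sketch, such a class might look like the following (assuming the StorageClient sample library that shipped with the early SDK, where TableStorageEntity defines PartitionKey, RowKey and Timestamp; the extra property names are illustrative, not necessarily those of ciripescu.ro):

```csharp
public class User : TableStorageEntity
{
    public User() { }

    // The username doubles as the PartitionKey; each user has a single
    // profile entity, so String.Empty suffices as the RowKey.
    public User(string username)
        : base(username, String.Empty) { }

    public string Email { get; set; }
    public string FullName { get; set; }
}
```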

In order to query an entity from Azure Tables, one must first create a DataServiceContext. This is a class that must inherit from TableStorageDataServiceContext, which in turn inherits from the ADO.NET Data Services class DataServiceContext. The conceptual object model resembles NHibernate's, but it is not the same thing.

User entity objects retrieval
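A minimal sketch of such a context class (assuming the SDK sample types TableStorageDataServiceContext and StorageAccountInfo; the class and table names here are assumptions):

```csharp
public class CiripDataContext : TableStorageDataServiceContext
{
    public CiripDataContext(StorageAccountInfo accountInfo)
        : base(accountInfo) { }

    // Exposes the "Users" Azure Table as a LINQ-queryable collection.
    public DataServiceQuery<User> Users
    {
        get { return CreateQuery<User>("Users"); }
    }
}
```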

Everything else is plain LINQ: the programmer creates a DataServiceQuery object for each table and runs LINQ queries on it:

var users = from u in Users select u;

foreach (User u in users) { ... }

This is how a data access layer class would query items from the storage:  

Quering items from the storage

Inserting, updating and deleting objects is done the way LINQ programmers are used to. Let's not forget that the data access classes from the Azure SDK are built on LINQ classes:

CRUD operations on User entity
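A hedged sketch of the corresponding CRUD calls (the method names come from the underlying ADO.NET Data Services API; the context class and table name are assumptions carried over from the sketches above):

```csharp
var context = new CiripDataContext(accountInfo);

// Insert
var user = new User("tudor") { Email = "tudor@example.com" };
context.AddObject("Users", user);
context.SaveChanges();

// Update: the context must already be tracking the entity
user.Email = "other@example.com";
context.UpdateObject(user);
context.SaveChanges();

// Delete
context.DeleteObject(user);
context.SaveChanges();
```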

The other entities in the application are built using the same object model.  

Now let us get to the main advantage of a social networking platform built on cloud technology: the table that holds all the messages (Cirip). Social networking websites can get very popular. Think of Twitter: millions of tweets per day, which not only have to be stored somewhere but also queried. How do you query a table with 1 billion entries? Azure Tables is the answer. Let's take a look at our Cirip table and assume two users, Tudor and Pitagora, had the following message exchange:

Tudor: This is the first post on ciripescu   

Pitagora: hello world  

Pitagora: Windows Azure rocks  

Pitagora: What a good day for a Microsoft presentation  

Tudor: A barbeque would go better 🙂  

Tudor: I want a vacation  

PartitionKey | RowKey | Content
pitagora | 9 | What a good day for a Microsoft presentation
pitagora | 12 | Windows Azure rocks!
pitagora | 20 | Hello world
tudor | 7 | I want a vacation
tudor | 8 | A barbeque would go better 🙂
tudor | 25 | This is the first post on ciripescu

PartitionKey and RowKey are two mandatory fields for any entity stored in Azure Tables. The programmer has to choose carefully what to store there, because all the power of the cloud depends on these two fields. Windows Azure splits tables into partitions and stores them on different nodes, using the PartitionKey field. For redundancy and speed, each partition has 3 copies in the cloud. All partitions and their copies are managed transparently by the cloud; the user doesn't even have to know they are there. His only control over this process is the choice of PartitionKey. Entities with the same PartitionKey belong to the same partition and are stored on the same node. The fastest possible query is one that searches only a single partition, so this choice varies from application to application. One has to ask: what is the most frequent query my application will run? In the case of ciripescu.ro, a microblogging application, the answer lay in the blog profile of each user. We decided that displaying all messages of a single user has to be the fastest query, so we chose the sender's username as PartitionKey. In the example above, messages from pitagora can be stored on a different node than those from tudor. The order in which they are listed in the table also suggests that: Windows Azure sorted those messages by PartitionKey and RowKey.

The second field, RowKey, is similar to the primary key of a SQL table. Entities are indexed by it, and the fastest query within a partition is one that uses the RowKey as search criterion. In the context of Windows Azure, the RowKey has another very important function: sorting. You have to remember that Azure Tables is not a relational database; it's just a store of entities, so you can't run complex queries that involve counting, grouping and sorting. Entities are sorted ascending by RowKey, and there is no way to change that. In the case of the Cirip table we want the latest cirip to be displayed first, so we chose DateTime.Max – DateTime.Now as RowKey. The reason I didn't use real values in the example is that these are 11-digit numbers; the only thing I kept is their order: the latest message always has the smallest RowKey.
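One common way to implement such a reverse-chronological key (a sketch; the exact format used on ciripescu.ro may differ) is to subtract the current ticks from DateTime.MaxValue and zero-pad the result, so that string ordering matches numeric ordering:

```csharp
// A later cirip yields a smaller key, so it sorts (and displays) first.
string rowKey = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks)
                    .ToString("d19");
```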

Using a non-relational database has some disadvantages too. Besides the lack of sorting and grouping capabilities, sometimes you simply need relational entities, and you end up writing more code in the DAL. You can't write:

Select c from Cirip c, User u, Urmarire urm
Where c.PartitionKey = u.PartitionKey
and u.RowKey = urm.RowKey
and urm.PartitionKey = 'current user'

Anyway, why Azure Tables and not SQL Azure? Because in our case it is faster for searching. Also, in the beginnings of Windows Azure there was no SQL Azure; there was a kind of storage, but non-relational. After some months, based on feedback from developers, Microsoft announced SQL Azure (summer 2009), which was officially released in the fall of 2009.

Worker roles and queues are covered here.

Behind ciripescu.ro, a Windows Azure cloud application (part 3)

Filed under: Technologies — Tudor Cret @ 2:13 pm

Let’s go deeper(2) – worker role and queues

As mentioned above, a worker role can be compared to a Windows service: a little thing that sits quietly and waits for its job. You tell it what to do, it knows how; you give it some input, or not, and it does its job. A worker can also be seen as a background unit ready to execute a task; how worker roles are used depends on everyone's imagination and needs. Currently ciripescu.ro uses a worker role for email sending. It is a "smart" email sender. Why smart? First of all, we import a list of Yahoo contacts to which an invitation has to be sent. A person has 200 or 300 contact emails in his or her list, and you can't block the user until all those emails are sent. Also, with 10 users online you would have to send 2,000-3,000 emails at the same time; what do you do when 500 users are online at once? Another problem appears as well: Yahoo, for example, bans you if you send a large number of emails per minute from the same address, even if you sign those emails; it treats them as spam. To avoid all this we built an email-sending scheduler into the worker role, giving priority, of course, to urgent mails such as the welcome email or the password reset. We used queues to "tell" the worker the recipient(s) and the body of each email. More about queues and communication can be found in the Windows Azure training kit and the Windows Azure SDK.

Azure Queues are a wonderful mechanism. They allow communication between process threads or roles and are based on a producer-consumer model; in a way, they solve the concurrency problems of a distributed system. Ciripescu.ro uses one queue for urgent emails and another for the rest, the non-urgent ones that are scheduled on a timetable. In the figure below you can see the defined queues and how a new item (an invitation message) is added to the queue in the web role:

Adding a new item to Azure Queue
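In code, enqueueing such an invitation from the web role might look like this (a sketch against the SDK sample QueueStorage API; the queue name and message format are assumptions):

```csharp
QueueStorage queueService = QueueStorage.Create(accountInfo);
MessageQueue invitations = queueService.GetQueue("invitationemails");
if (!invitations.DoesQueueExist())
    invitations.CreateQueue();

// Pack recipient and body into one message; any format works as long
// as the worker knows how to parse it back.
invitations.PutMessage(new Message(recipient + "|" + body));
```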

Now the worker consumes the recently added item and delivers the invitation email according to a well-known delivery timetable, as in the figure below:

Consuming a queue item


First we get the queue used for sending invitation emails (service.GetQueue(...)), we check whether the queue exists and has items (queue.DoesQueueExist()), and then we consume the item by sending an email and deleting the item from the queue (queue.DeleteMessage(...)).
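Sketched against the same sample API (SendInvitationEmail is a hypothetical helper, and the queue name is assumed), the consuming side looks roughly like:

```csharp
MessageQueue invitations = queueService.GetQueue("invitationemails");
if (invitations.DoesQueueExist())
{
    Message msg = invitations.GetMessage();
    if (msg != null)
    {
        string[] parts = msg.ContentAsString().Split('|');
        SendInvitationEmail(parts[0], parts[1]); // hypothetical helper

        // Delete only after a successful send; if the role crashes
        // first, the message becomes visible again and is retried.
        invitations.DeleteMessage(msg);
    }
}
```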

In the beginnings of Windows Azure (winter 2009), once started, a worker would enter a sleep state if it had nothing to process, but we needed an up-and-running worker at all times in order to process emails. For this an infinite loop ('while(true)') was created and triggered when the worker started. In the latest release of Azure this is resolved: all the code that used to live inside the infinite loop goes into the Run() method defined by the role.
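In the newer SDK the pattern looks roughly like this (a sketch; ProcessPendingEmails is a hypothetical helper):

```csharp
public class WorkerRole : RoleEntryPoint
{
    // Run() is expected never to return; returning ends the role.
    public override void Run()
    {
        while (true)
        {
            ProcessPendingEmails();                  // hypothetical helper
            Thread.Sleep(TimeSpan.FromSeconds(10));  // avoid a busy loop
        }
    }
}
```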

Azure costs a lot of money from a developer's point of view. And because in the beginning the number of workers was limited to one, we found a trick to limit the number of workers, which is still good for saving money: when a worker starts, start several threads, each thread processing what a separate worker would normally do. Of course you lose some performance, but it is a solution.
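The trick can be sketched like this (assuming the Run()-style entry point; ProcessQueueForever and the queue names are hypothetical):

```csharp
public override void Run()
{
    // One thread per logical worker, each draining its own queue.
    var urgent = new Thread(() => ProcessQueueForever("urgentemails"));
    var bulk   = new Thread(() => ProcessQueueForever("scheduledemails"));
    urgent.Start();
    bulk.Start();
    urgent.Join(); // keep Run() from returning
}
```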

Outro or instead of conclusions

Microsoft announced that upgrades from Community Technology Preview (CTP) accounts of the Windows® Azure™ platform (i.e., Windows Azure, SQL Azure and/or Windows Azure platform AppFabric) to paid commercial subscriptions are available starting January 4th. And IT COSTS. The problem of cost is relative: how much an Azure application can cost you can be calculated here using the TCO and ROI calculator. There is probably a long discussion about which companies are going to pay for Azure and who is going to develop applications on it (since testing in the real cloud costs money), and more pages would be needed for that analysis. But Azure is clearly dedicated to enterprise applications, to large companies that don't want to invest more money in their own hardware and datacenters and don't want to invest more in sysadmins; those companies may well see Azure as a storm in their activity. It's certain that in the next years the biggest applications in the world will be in the cloud, and not only Microsoft's cloud but the others too. It's also certain that for small companies it will be hard to develop Azure applications. We would like to see Microsoft come and say: "No, it's not certain, because... at least for the pricing...". On the other hand, Microsoft invested a lot in datacenters and must recover that money. And you don't need the cloud to host a simple web application with a small database in the backend; there are enough companies that do this almost for nothing.

 Also it’s sure that www.ciripescu.ro will live as long as Azure will be in CTP. It’s sure that we liked to work with azure and to develop this application. I say “WE” because we are two of us who plays with Azure for almost one year. And here I want to thanks to my  colleague and friend Tudor Carean for his work at this project. 

In the end I would like to point out what we liked most about Windows Azure, but also what we didn't like so much, while building http://www.ciripescu.ro:

  • Azure costs... a lot. For ciripescu.ro it costs too much to keep paying after the CTP account is disabled; it's cheaper to develop a non-cloud version and host it on-premises.
  • We don't like the Azure Tables storage model (used by ciripescu.ro), because sometimes it is too rigid. It is a good model, but it could have been better.
  • Deploying the application to the cloud takes some time, and the whole deployment mechanism is a black-box process; you are never sure how much time to budget for deployment.
  • Azure is a new paradigm (at least for us); it introduces the concepts of cloud and cloud computing.
  • We like the idea of "worker roles" and "web roles" and the possibility of building hybrid applications.
  • Azure adoption was easy; only once did we have to reinstall Windows because of the Azure CTP kits 🙂
  • We like the idea of scalability and how it is implemented: "Azure is one click scalable".
  • Availability and durability are also strong points of Azure.

There are probably others too…

The source code of the entire application is available on CodePlex at http://ciripescu.codeplex.com/. For additional information please contact me using the contact details on the about page, or send an email to contact [at] bitstarsolutions.com.

November 13, 2009

Developing ASP.NET web applications with SharePoint 2007 and CSLA.NET (part 2)

Filed under: Technologies — Tudor Cret @ 3:50 pm

In the first part we saw the requirements for developing an ASP.NET application for SharePoint 2007 and we built a demo application using CSLA.NET. We'll continue by customizing and deploying the application we've created.

Customize ASP.NET application for SharePoint 2007

Before we start, it is good to know our purpose, what we have so far and what we are going to do:

  • We want to deploy our ASP.NET application created earlier using CSLA.NET on SharePoint 2007
  • We'll have to use Visual Studio extensions for Windows SharePoint Services (VSeWSS) in order to create an Empty project that will help with automatic packaging and deployment to the SharePoint server
  • Link the pages from the Web Application we already have with the newly created SharePoint project
  • Do some configurations on the projects and config files to work with SharePoint

 A good resource about developing custom ASP.NET pages with SharePoint 2007 is Jamil Haddadin’s blog.

We’ll start by creating a new Empty project that I named CSLADemo:


Figure 5 Creating VSeWSS project

At this moment the solution contains the application projects and the empty VSeWSS project:


Figure 6 Solution explorer before customization

The next step is adding a new web form to the MyShop2 web application. I called this web form MyDemo.aspx. It will implement the demo functionality, plus some configuration specific to SharePoint.


Figure 7 Adding a new web form

Having created our demo web form, a new Module project item must be added to the CSLADemo project, and the recently created MyDemo.aspx web form added to this module. To avoid any duplication, we'll add it using the Add as Link feature. This module is specific to SharePoint and is the element through which our custom web forms are deployed to the server. Be sure to change the namespace in MyDemo.aspx.cs, MyDemo.aspx.designer.cs and MyDemo.aspx to CSLADemo.


Figure 8 Creating a new module


Figure 9 Add MyDemo.aspx as link

Now our solution looks like:


Figure 10 Solution state

As can be observed, I used a key to sign the DLLs we created earlier: libkey.snk, dallibkey.snk. Without signing, SharePoint throws a runtime error. Csla.dll is already signed. These DLLs must also be added to the CSLADemo project as links, otherwise they will not be copied to the deployment target directory.

In Module.xml a File element has to be included. Update its Path and Url attributes to the file name of the .aspx file; this deploys the page when the SharePoint solution is deployed. In my simple case Module.xml looks like this:

The Path attribute refers to the path of the file in the Visual Studio project, and Url refers to the URL of the page in the SharePoint site; in this case our page will be deployed at http://ServerName:port/MyDemo.aspx
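For reference, a minimal Module.xml of this shape might look like the following (the module name is a placeholder; Type="Ghostable" is a common choice for pages served through SharePoint):

```xml
<Module Name="CSLADemoModule" Url="">
  <File Path="MyDemo.aspx" Url="MyDemo.aspx" Type="Ghostable" />
</Module>
```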


Figure 11 Module.xml editing

In the Configuration Manager of the solution, clear the Build check box for CSLADemo; see the picture below.


Figure 12 Build configuration

Add a SafeControl entry for each deployed DLL in manifest.xml. The file can be found in the solution's WSP View.


Figure 13 Adding safe controls

To deploy the solution, change Start browser with URL in the CSLADemo properties page (Debug pane) to your SharePoint root site, in my case http://vpc2003/


Figure 14 Project starting settings

Before deploying and testing the application we must comment out the SqlDataSource tagMapping in the web.config of the SharePoint application, the one associated with http://vpc2003/. It can be found in the virtual directory root.


Figure 15 Comment data source tag mapping

More about customizing ASP.NET pages can be found on Jamil Haddadin's blog.

Hoping that no errors occur, deploy the solution and test it by calling http://vpc2003/MyDemo.aspx


Figure 16 Running application(1)


Figure 17 Running application(2)

 The solution is available for download here: http://bitstarsolutions.com/FirstCSLA.rar


Developing ASP.NET web applications with SharePoint 2007 and CSLA.NET (part 1)

Filed under: Technologies — Tudor Cret @ 3:49 pm

I will try to summarize my experience with SharePoint development. We'll see how to develop a custom ASP.NET web application built using CSLA.NET. CSLA.NET is an application development framework intended to help reduce the cost of building and maintaining applications, though how much it helps depends on where and when it is used. With CSLA.NET we can easily use the same business tier for different presentation tiers (Windows, web, web services).

 SharePoint Development

SharePoint is nothing but a smart ASP.NET 2.0 application which extends the ASP.NET framework by installing a couple of custom components, such as an HTTP module, an HTTP handler and a virtual path provider.

The end result? We have a provisioning engine that can be used to create web sites based on the templates that come with it. Of course SharePoint offers more features, but those are for another day.

So what exactly do we mean by "SharePoint development"? It means we want to write some code to extend the out-of-the-box functionality that comes with SharePoint. We'll focus on adding custom ASPX pages that help users accomplish a business function; in other words, on integrating our custom business pages, with their business rules and processes, into SharePoint. An example would be a set of order entry pages which walk the user through a series of custom ASPX pages gathering data. In the example I chose to implement, I simulated the minimal functionality of a shopping cart, concentrating on the code-behind and the business processes.

 CSLA Development

CSLA stands for Component-based, Scalable, Logical Architecture; it is a software development framework that helps in building a maintainable business logic layer for Windows, web, service-oriented and workflow applications. CSLA.NET enables you to create an object-oriented business layer that abstracts and encapsulates your business logic and data. The business objects support data binding for all the major UI technologies: WPF, Web Forms, Windows Forms, Silverlight, etc. CSLA.NET also makes it easy to keep the business, presentation and data access tiers loosely coupled. It is perfect when you need the same business tier for multiple presentation tiers, or when your code runs across multiple machines in a distributed environment. But all these benefits cost time: CSLA.NET is not easily adopted, and becoming efficient with it takes a while, time in which you could use alternatives, depending on the business process and environment you work with. More about CSLA.NET can be found on the official site, here. The best resource for learning CSLA development is Expert C# 2008 Business Objects by Rockford Lhotka (the creator of CSLA.NET), also available in a VB.NET version. The demo application presented in the book ships with the CSLA.NET distribution.

Tools, technologies and environment preparations:

Before proceeding with development, be sure that SharePoint is up and running on your server and that Windows SharePoint Services works fine. You can check this by opening the Central Administration panel from the SharePoint start menu item and accessing your installed sites.

Also be sure that you have installed the CSLA.NET Visual Studio templates. They come with the CSLA distribution and help in creating CSLA projects and entities.

  Developing ASP.NET applications using CSLA.NET

In the next sections I will focus on developing an ASP.NET application that uses CSLA.NET and will be deployed on SharePoint. The application simulates the functionality of a shopping cart (adding products, ordering and submitting orders) for an online store. We'll start by building our database:


Figure 1 Tables diagram

The Product table contains data about the available products: size, price, name and description. A product can be ordered by a user, in which case it is associated to an order (represented by the Order table) through the OrderDetail table. The shopping cart is represented by an Order with status "N". The Order table cannot hold more than one order with status "N", since we have only one shopping cart. When the products from the shopping cart are submitted, we just change the status of the order to "P".

I did not include any reference to users and user roles here because I wanted to simplify the business model, in order to better understand the new paradigms provided by CSLA.NET; the model can easily be extended at any time.

With the database done, we'll continue by creating a new Web Application within a new solution in Visual Studio 2008. Let's call the solution FirstCSLA and the web application MyShop2. We'll also add two class libraries, MyShop.Lib and MyShop.DalLinq, which represent the business layer and the data access layer respectively. We'll try to obtain a low-coupling relationship between those layers. In the figure below you can see the solution and the projects we've created so far:


Figure 2 Created solution and projects

Data Access Layer(DAL)

This time I've chosen to implement the DAL using LINQ, by simply adding a new LINQ to SQL Classes item to the MyShop.DalLinq project, as in the figure below:


Figure 3 DAL Linq implementation

To generate MyShop.dbml, create a new database connection from Server Explorer and just drag and drop tables and stored procedures into the designer window.

Business Layer(BL)

This layer contains one of the more interesting parts and uses the new CSLA.NET elements and paradigms. So far we have the DAL and the database, and we know that we need a low-coupling relationship between the BL, the DAL and the presentation layer. CSLA.NET solves this problem: the framework is created in such a way that it helps developers and architects split the architecture of the entire application to meet their needs and requirements with minimum cost. As an example, suppose the BL is so complex that we need to run it on multiple machines, in a distributed environment, while keeping the DAL and presentation layer on the same or different machines. More about the applicability of CSLA.NET can be found in the first chapters of the Expert C# 2008 Business Objects book.

So I started creating the classes needed for the ordering process and the shopping cart. First of all we need to retrieve the list of products. Without CSLA.NET we would probably have used the MVC pattern and created a controller to retrieve this list, or, more easily, a SqlDataSource or LinqDataSource in the presentation layer. With CSLA we create a read-only list that contains the business objects, in our case the products that exist in the database. I chose a read-only list since I had no intention of implementing product editing; a simple read-only business object works here. So I created the ProductList class that handles ProductInfo business objects, as you can see below.


The GetProductList() method returns all the existing products in the database, and looking at the fetch method we can see how the DAL is used to retrieve them. We also observe that in the class definition

"public class ProductList : ReadOnlyListBase<ProductList, ProductInfo>"

we have specified the type of the business objects, ProductInfo, which contains the properties associated with a product.
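A hedged sketch of such a read-only list (CSLA 3.6-era API; the data context name and the fetch body are illustrative, not the actual MyShop code):

```csharp
[Serializable]
public class ProductList : ReadOnlyListBase<ProductList, ProductInfo>
{
    // Factory method: the UI never calls the constructor directly.
    public static ProductList GetProductList()
    {
        return DataPortal.Fetch<ProductList>();
    }

    // Called by the data portal; uses the LINQ DAL to fill the list.
    private void DataPortal_Fetch()
    {
        RaiseListChangedEvents = false;
        IsReadOnly = false;
        using (var ctx = new MyShopDataContext()) // hypothetical LINQ context
        {
            foreach (var p in ctx.Products)
                Add(ProductInfo.GetProductInfo(p)); // hypothetical factory
        }
        IsReadOnly = true;
        RaiseListChangedEvents = true;
    }
}
```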


I continued by creating the Order class and the list that handles the order business objects. This time we have to edit an Order, because we must implement the submission process, which involves an order update. For this reason it is necessary to register its properties, in order to let CSLA maintain the object's state:
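Managed property registration in that era of CSLA.NET looks roughly like this (a sketch; the Status property comes from the order model above, and the exact RegisterProperty overloads vary between CSLA versions):

```csharp
[Serializable]
public class Order : BusinessBase<Order>
{
    private static PropertyInfo<string> StatusProperty =
        RegisterProperty<string>(o => o.Status);

    // GetProperty/SetProperty let CSLA track state (dirty, valid, etc.).
    public string Status
    {
        get { return GetProperty(StatusProperty); }
        set { SetProperty(StatusProperty, value); }
    }
}
```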


More details about property registration can be found in Expert C# 2008 Business Objects book.

In the end I implemented the classes that hold the relation between an order and its products. I named the business class OrderProduct and the list class OrderProducts; logically it maps onto the many-to-many relation between the Product and Order tables in the database.

The complete class diagram for the BL is shown below:


Figure 4 BL classes


Presentation Layer(PL)

I will concentrate on data binding and how we use classes from the BL to display and operate on data. First of all, it's important to mention that CSLA.NET provides a built-in data source control, CslaDataSource, that extends the functionality of the built-in .NET ObjectDataSource. More about the CslaDataSource object can be found in the Expert C# 2008 Business Objects book.

But why does CSLA provide this built-in data source object? As you have seen, I implemented in the BL several lists that handle different business objects. With CslaDataSource it is very easy to bind these lists to controls: DataGrids, DataViews, DataLists. I chose a DataGrid. For example, the binding of the products to a grid view looks like this:


Binding a list of business objects to a grid is as simple as two drag-and-drops and setting the type name and assembly of the list on the data source. If we look closer at the data source definition, we notice that TypeName="MyShop.Lib.ProductList, MyShop.Lib" "tells" it what type of objects it has to handle. The OnSelectObject event binds the list of business objects to the data source. Beyond the designer work, the only thing we must ensure is that during page load we call the grid's DataBind method and supply the business object to the data source.
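The code-behind side of that wiring can be sketched as follows (the handler name and control ID are assumptions; SelectObjectArgs is the CSLA event argument type, and the TypeName comes from the markup above):

```csharp
// Supplies the business list whenever the CslaDataSource needs data.
protected void ProductsDataSource_SelectObject(object sender,
    Csla.Web.SelectObjectArgs e)
{
    e.BusinessObject = MyShop.Lib.ProductList.GetProductList();
}
```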


I used the CslaDataSource for the other bindings too. The rest of the code, with the entire solution, is available for download here.

In the second part we’ll see how to customize and how to deploy the application we’ve just created.
