Modern Software Architecture: Domain Models, CQRS, and Event Sourcing
by Dino Esposito
Hi everybody and welcome to Pluralsight. My name is Dino Esposito. Today I'll be guiding you through the maze of modern software architecture, and hopefully I will provide insights and some good advice for you to build solutions for your company and your customers. The world in which we live often doesn't give us the opportunity or even the time to build a rock-solid knowledge base out of experience. More often than not, all that executives want from solution architects and lead developers is the right architecture right away. So, what about this course? I don't wear any white coat, I'm not a doctor, so this course is not a prescription, but my goal here is aggregating various positions, adding my own annotations and comments to those positions, up to generating a summary of facts and perspectives. The key takeaway is therefore a ready-made software architect's digest that serves as the starting point for further investigation and for building up your own judgment. This course has three main purposes. The first is repositioning domain-driven design, DDD for short, as the ideal methodology to conduct domain analysis. The second purpose is exploring a few supporting architectures that today help with the actual implementation of software systems. I'm talking about things like domain models, CQRS, and event sourcing. Finally, the course presents a relatively new approach to the overall process of designing a software system. I call it UX-first, but honestly, in spite of the fancy name, it's a plain, top-down approach where some big design effort is made up front, even though it's limited to the user experience rather than spanning the entire domain. There are a couple of fundamental references that I want to provide to take your understanding further. I suggest in particular you keep an eye on the developments being done at http://naa4e.codeplex.com, where Andrea Saltarello and I will be posting bits and pieces of what we consider a CQRS-based reference application. 
And if you are looking for a more traditional offline resource on modern software architecture, then I can only recommend our book, Microsoft .NET: Architecting Applications for the Enterprise, Second Edition, from Microsoft Press.
DDD in History
It turns out that software is subject to two opposed forces: the force of just doing things and the force of doing things right. As long as the final code works, there's nearly no difference, but any code that just works may derive from the wrong approach. If this happens, it's a sort of technical debt that you are going to pay at some point. Domain-driven design is a particular approach to software design and development that Eric Evans introduced over a decade ago, and since then the details of DDD have been captured in the blue book that Evans wrote back in 2003. The book's subtitle transmits a much clearer message about the intended purpose of DDD: Tackling Complexity in the Heart of Software. In the beginning, DDD was perceived as an all-or-nothing approach to application design. It gave you a method, quite innovative guidelines, and overall the promise that it would work. The complexity that software is called to address is just the complexity of the real world, the complexity of describing what's before our eyes using only the formal terms of a programming language. A team that knows nothing about DDD will probably write software following this flowchart. First, they will try to acknowledge user requirements and then build a data model, most likely a relational data model. Then, the team will try to identify the tasks the system has to accomplish and build a user interface for each task. The steps of the identified tasks will be detailed to form pieces of what usually goes under the all-encompassing umbrella name of business logic. In the end, the team may find out that the artifact is close to what users wanted, but not close enough to pass acceptance tests. When this happens, the cycle repeats over and over. 
In a world that doesn't know DDD, architects typically identify relevant database tables, then relevant functional tasks to implement, and algorithms that, mixing together user input with stored information, produce the effect that users expect. Any software addresses a business domain, that is, how a company does its own business. So a business domain is about processes, practices, people, even language. Software development comes easy if you've crunched enough knowledge about the domain, but it may become painful, really painful, if you lack knowledge and maybe missed some relevant tasks, and even more so if you missed important business rules. Lack of domain knowledge makes writing software an unmanageable process, at least beyond a certain level of complexity. Past a given threshold, even the smallest change has significant implementation costs. A common expression to reference such problematic software projects is the Big Ball of Mud, BBM for short: a system that's largely unstructured, padded with hidden dependencies between parts, with a lot of data and code duplication, and an unclear identification of layers and concerns. To use the words of the authors who coined the term Big Ball of Mud, a spaghetti-code jungle. Software architects have recognized domain analysis and modeling as a critical topic since the very early days of the software industry. But only a decade ago did DDD come out, and DDD has been the first attempt to organize and systematize a set of principles and practices to build software systems, and especially sophisticated software systems, reliably and hopefully even more easily. DDD has been inspired by ordinary development stories, some successful, some not.
Welcome back to Modern Software Architecture. This is still the first module, titled DDD at a Glance. Since the very early days, DDD sounded intriguing to people because it captures well common elements of the design process, things like domain analysis and modeling, for example. At the same time, those elements are organized in DDD into a set of principles and techniques. Domain modeling is central to the process of software design, and overall, DDD results in a new way of doing the same old thing, that is, organizing the business logic of a software system. So, DDD is still about business logic. It just suggests you follow a different approach, more focused on the domain, which is data and behavior, instead of just mostly data. Here is the typical DDD flowchart. First, learn as much as possible about the domain, not simply a flat list of rules and not simply a bunch of entity names, but try to capture the essence of the domain, the internal dynamics, how things go in that segment of the real world. You learn this from domain experts, but you won't simply get specifications and requirements from domain experts. In doing so, you will likely see the big domain split into multiple subdomains, and you should not be afraid of splitting the big thing into smaller and more manageable pieces. For each recognized subdomain, you then design a rich object model that, regardless of concerns like persistence and databases, describes how the involved entities behave and are acted on by users. Remember the popular principle tell, don't ask? Well, this is what you should be doing once the model is in place. You should write your code by telling objects in the domain just what to do. In a way, DDD inspired and represented the secret dream of any developer: working to create an all-encompassing model to describe the whole domain of the system. Now, let's see how the DDD approach compares to the more traditional and database-centric way of building software. 
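The tell-don't-ask idea can be sketched in a few lines of code. This is a hypothetical example (the Booking class and its cancellation rule are invented for illustration, shown here in plain Java): the calling code tells the domain object what to do, and the business rule lives inside the object rather than in the caller.

```java
// Hypothetical domain class illustrating "tell, don't ask".
// Ask-style code to avoid:
//   if (booking.state().equals("Confirmed")) { /* mutate it from outside */ }
// Tell-style: the object enforces its own rule.
class Booking {
    private String state = "Confirmed";

    void cancel() {
        // The cancellation rule belongs to the domain object itself.
        if (!state.equals("Confirmed"))
            throw new IllegalStateException("Only confirmed bookings can be cancelled");
        state = "Cancelled";
    }

    String state() { return state; }
}
```

The caller never inspects the state to decide whether cancellation is allowed; it simply tells the booking to cancel itself.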
Imagine you have complexity on the X axis and time and costs on the Y axis. According to Martin Fowler and his Patterns of Enterprise Application Architecture, the popular book he wrote about a decade ago, the curve you see onscreen now shows that beyond a certain level of complexity, following data-centric design patterns, even a small increase of complexity results in a significant spike of costs. On the other hand, time and costs of a project designed according to a domain-centric view tend to have a linear growth with complexity, but, and this is not a secondary point at all, if you take a domain-centric approach, you have to face quite high startup costs. The supreme goal of DDD's inventors, and specifically Eric Evans, was just providing tools, better tools, to tackle complexity. A wonderful idea, not just a mere promise, a doable idea. Well, over the years, it proved not really hard to do right, but just easier to do wrong. At any rate, I would say that it's just how theory goes in the real world, and theory usually goes differently from practice. Like it or not, however, DDD represents a turning point in software development. It started taking root in the Java space, and only in recent years has it conquered the .NET world. DDD offers techniques with a strong theoretical background, but if DDD is just another way of organizing the business logic, why should I change my consolidated way of doing things? This is what many developers still think and wonder. Sure, DDD code is elegant, but it's code in the end, and code exists to fulfill specs. If my architecture works and my customers are happy, why should I ever consider DDD? Now, suppose that I'm in trouble with my architecture. In which way would using objects instead of, say, stored procedures help me out? If the problem is, I don't know, performance or scalability, well, that's nothing that objects can fix just because they're objects. 
And if the problem is requirements, again, how would using objects instead of stored procedures have helped me get them right? I would say that DDD was the right hammer, but bundled and sold with the wrong set of nails. The original object model devised to express the business logic of the application is not the most relevant part of DDD, or rather, it's still relevant, but other parts of it are more crucial for software design. DDD is more about analysis than it is about actual coding and coding strategies. In 2009, some years after the book was published, that perception of DDD started changing. DDD is now clearly considered more a tool for discovering the domain architecture than a pattern for architecting systems. The domain model vision remains valid, but it's just one of many other patterns that can be used. So, object-oriented models are okay, but the same can be said for functional models, for CQRS architectures, for classic 3-tier designs, and for even more classic client/server 2-tier designs. If you want to hear Eric Evans in person talk about the shift of focus in DDD, well, then have a look at the talk he gave at QCon 2009 in London.
If you ask around about DDD, you'd probably come to the conclusion that DDD is all about building an object model for the business domain, called a domain model, and consuming the model in the context of a multilayered architecture. However, as the same author has in some way recognized over the years, the best part of DDD is in the tools it provides to understand and make sense of the domain. On the other hand, DDD just stands for domain-driven design, that is, you, the architect, design a system, after having understood it, driven by your knowledge of the domain. It turns out then that there are two distinct parts in DDD. You always need one and may sometimes happily ignore the other. DDD has an analytical part that essentially sets an approach to express the top-level architecture ideal for the business domain you are considering. The top-level architecture is expressed in terms of constituent elements, subdomains, that are referred to as bounded contexts in the specific DDD jargon. In addition, DDD has a strategic part that instead relates to the definition of a supporting architecture for each of the identified bounded contexts. The analytical part is valuable to everybody and every project. The strategic part, instead, the domain model pattern, that is, one all-encompassing object model for the entire system, is just one of many other possible supporting architectures. So, in the end, the shift of focus that occurred in DDD in the past years is all about inverting the roles of the analytical part and the strategic part. In the beginning, the primary aspect of DDD was considered to be the multilayered architecture centered on the domain model vision. This particular aspect became secondary compared to the predominant role of the analytical part of DDD, the real value of DDD. This is what people essentially found out after years of DDD experience: the analytical part is where most of the value lies. 
In the end, we could say DDD is just what its name suggests: design driven by the nature of the domain. Now it all boils down to the following core question: How do you do design driven by the domain? Which tools and patterns should you use to effectively conduct the analysis of a business domain? Well, this is just what we will find out in the next module of the course.
Discovering the Domain Architecture through DDD
Welcome back to Pluralsight. It's module two of Modern Software Architecture and this is Dino Esposito. As discussed in the first module, today most of the value of DDD is in the tools it provides for domain analysis. Domain-driven design is about tackling complexity in the very heart of software and it's about organizing the business logic. To achieve this goal, it's crucial to focus first on the analysis of the domain and keep things like layers, object models, and persistence for later. This module provides a walkthrough of the DDD analysis practices. In particular, we'll focus on concepts like the ubiquitous language, the bounded context, and context mapping, the pillars of domain-driven design. The ubiquitous language defines a common vocabulary to express models and processes. Bounded contexts are segments of the domain that it is preferable to treat separately, and finally, all identified contexts and their explicit relationships form a map, which is where your domain analysis efforts come to an end.
The ubiquitous language is the part of domain-driven design that aims at building a common, business-oriented language. The primary goal of the language is avoiding misunderstandings and bad assumptions. As a language, it's expected to give people the certainty that what's being said, if part of the language, is well understood by everybody. Concretely, the ubiquitous language is a vocabulary of terms shared by all parties involved in the project: developers, project managers, users, domain experts, analysts, coders. The language is made of nouns, verbs, adjectives, idiomatic expressions, and even adverbs. Once defined, the language should be used regularly in all forms of spoken and written communication, thus becoming the universal language of the business as that business is done in that organization. It's a matter of fact that each business domain has its own lingo, so domain experts speak a language that people outside of that business domain may have a hard time understanding. Have you ever spoken to a doctor or a lawyer? Well, in that case, you know how convoluted and thorny their language can be if you are a patient or a client. A common terminology, strictly business oriented, that acts as the official language of the project makes translation between jargons unnecessary and mitigates the risk of misunderstandings. In other words, a common terminology helps make better sense of user requirements. The ubiquitous language is the language of the domain model being built, but it's very close to the natural language of the business domain. It's not artificially created; it just comes out of interviews and brainstorming sessions. It's iteratively composed along the way and changed over time to better reflect the understanding of the mechanics of the domain. It is rigorous, fluent, and unambiguous so that it meets the expectations of both domain experts and technical people. 
To be really effective, the language must be used in all forms of communication and interaction. First and foremost, user stories are written and then edited and revised in the language, and so are requests for changes, meetings, and emails. Finally, the ubiquitous language supplies the vocabulary used for technical documentation, schedules and plans, and last, but not least, source code. The glossary is typically saved to some office document, and it becomes an official part of the project documentation. The ubiquitous language is not static, and it is continuously refined by the development team as more is learned about the domain. Changes to the ubiquitous language are not minimal changes. The scope of the language is so big that a change to the language often implies changes to the source code as well.
Defining the Ubiquitous Language
The initial focus of DDD was on the model that describes the business domain, and the ubiquitous language was mostly seen as a language to describe an already understood and known model. Over the years, this perspective has changed. The common understanding today is that discovering the ubiquitous language out of user requirements and interviews helps build the knowledge you need to better craft the domain model. Notice that a domain model is just any model that works. It doesn't necessarily have to be an object-oriented model. It can be that, of course, but it can also be, say, a functional model where no classes are used at all, but functions and stored procedures deal with data. Discovery of the terms that make up the ubiquitous language starts from user requirements. Nouns and verbs are found along the way. Let's try to read a plausible user story. As a registered customer of the I-Buy-Stuff online store, I can redeem a voucher for an order I place so that I don't actually pay for the ordered items myself. Now, in this example, it turns out that voucher is the official name used to refer to a prepaid token useful to buy things. Synonyms, like coupon or gift card, are just not allowed in this business context. In summary, the ubiquitous language is made of nouns and verbs that truly reflect the semantics of the business domain. In a meeting scheduled to share the ideas and purposes of the project, there are typically two groups of people: domain experts and software experts. Software experts tend to speak using technical concepts like submit the order, delete the booking, update the job order, create the invoice, or even set the state of the game. Domain experts, possibly coming from various departments in the organization, would rather use different wording for the same concepts, and maybe say things like cancel the booking, or checkout, or extend the job order, or maybe register, or accept the invoice, or start and pause the game. 
Ambiguity and synonyms should be promptly removed as they are discovered along the way. Consider for a moment the following expression: extra threshold costs should be emphasized in the user account. Now, what's the intended purpose? Is it show costs when the user logs in? Or list costs in a detail page? Or maybe mark extra costs in the bill? Other? In summary, different concepts should be named differently; at the same time, matching concepts should be named equally. The ubiquitous language is neither the raw language of the business nor the language of development. Both are forms of jargon, and both, if taken literally, may lack essential concepts, generate misunderstandings, and create communication bottlenecks. This is where the ubiquitous language comes in. It's a combination of business and technical jargon. The ubiquitous language is expected to contain mostly business terms, but some technical concepts may be allowed, just to make sure that the final language faithfully expresses the system behavior.
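Once a term like voucher enters the ubiquitous language, it should surface verbatim in the code. Here is a minimal, hypothetical sketch of the voucher user story above (class and method names are invented for illustration, shown in plain Java):

```java
// Hypothetical sketch: the ubiquitous term is "voucher", so the code
// says Voucher and redeemVoucher, never Coupon or GiftCard.
class Voucher {
    final double value;
    private boolean redeemed;

    Voucher(double value) { this.value = value; }

    boolean isRedeemed() { return redeemed; }
    void markRedeemed() { redeemed = true; }
}

class Order {
    private double total;

    Order(double total) { this.total = total; }

    // "I can redeem a voucher for an order I place so that I don't
    // actually pay for the ordered items myself."
    void redeemVoucher(Voucher voucher) {
        if (voucher.isRedeemed())
            throw new IllegalStateException("Voucher already redeemed");
        total = Math.max(0, total - voucher.value);
        voucher.markRedeemed();
    }

    double total() { return total; }
}
```

Anyone reading the user story can find the matching code by searching for the very same words, which is exactly the point of the shared vocabulary.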
Ubiquitous Language Tips
Sometimes it happens that the very language domain experts use among themselves is ambiguous and unclear. Yet, it's their lingo. A good example of this is acronyms. In some business scenarios, most notably the medical industry, acronyms are very, very popular and widely used. Acronyms, however, are hard to remember and understand. You should avoid acronyms in the ubiquitous language and introduce new words that retain the original meaning the acronyms transmit. This is not a violation of the spirit of the ubiquitous language; more simply, it makes the language easier to use and understand. The ubiquitous language is a glossary, but in which language? Is it English or the language of the customer? If the customer speaks, say, Italian, then the ubiquitous language can hardly be English. This is not a secondary point, especially in the case of international teams working on the same project. The ubiquitous language is written in the official, natural language of the project. If translations are necessary for development purposes, then a word-to-word table can be created. It definitely helps, but the risk of introducing some ambiguity remains. Hiring programmers with expertise in a given business domain reduces the trouble with building and maintaining the ubiquitous language, but it won't eliminate the problem at the root, because out there in the real world, you will never find the same business domain twice. And finally, keep in mind that a change in the ubiquitous language is a change to the model, and subsequently a change to the code. Software experts are responsible for keeping the ubiquitous language and the code in sync. There are two main scenarios where the analytical part of DDD really excels. One is when there is really a lot of domain logic to deal with that is tricky to digest and organize. 
A ubiquitous language here is key, as it ensures that all terms used are understood and no other terms are used to express requirements, discuss features, and write code. Another scenario is when the business logic is not completely clear. Because the language of the business is being built, the business itself is being built, and the software is just part of the initial effort. Startups are an excellent example of this scenario. In this case, the domain logic is being discovered and refined along the way, making the availability of a ubiquitous language a great benefit to understand where one is and where one can move forward. When it comes to turning the ubiquitous language into code, naming conventions are absolutely critical. Names of classes, methods, and namespaces should reflect terms in the vocabulary. Using extension methods in languages that support them, such as C# and Kotlin, helps immensely to keep the code fluent and strictly domain specific. Tools also help because they enforce coding rules and conventions. I'm thinking about code assistants like ReSharper, but also other refactoring tools, and even gated check-ins like those you find in TFS. A gated check-in is a feature of TFS and similar products that allows you to check in only code that passes validation. And finally, the ubiquitous language is agnostic to technologies and paradigms. In just one sentence: in the ubiquitous language, missing a domain-specific point likely means creating a bug.
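To show how naming conventions keep code close to the language: plain Java has no extension methods, but chained methods named after vocabulary terms give a similar fluent reading. The JobOrder class and its methods below are hypothetical, a sketch rather than anything from the course.

```java
import java.time.LocalDate;

// Hypothetical sketch: method names mirror the domain vocabulary
// ("extend the job order", "register"), so a statement reads almost
// like the requirement it implements.
class JobOrder {
    private LocalDate deadline;
    private boolean registered;

    JobOrder(LocalDate deadline) { this.deadline = deadline; }

    JobOrder extendUntil(LocalDate newDeadline) {
        if (newDeadline.isAfter(deadline)) deadline = newDeadline;
        return this; // returning this enables the fluent chain
    }

    JobOrder register() {
        registered = true;
        return this;
    }

    boolean isRegistered() { return registered; }
    LocalDate deadline() { return deadline; }
}
```

A statement such as `new JobOrder(end).extendUntil(laterEnd).register()` then reads much like the corresponding requirement, which makes it easier to check code against user stories.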
The ubiquitous language, in sync with model and code, changes constantly to represent both the evolving business and the architect's growing understanding of the domain. The ubiquitous language, however, cannot change indefinitely. You cannot just keep on extending the language and stretch it to include just any new concept you discover. If you do so, you risk having an ambiguous language with duplicates, bloated, and less rigorous than it should be. This is where domain-driven design introduces a brand-new concept, the bounded context. The bounded context is the delimited space within the domain that gives any element in the ubiquitous language a unique, unambiguous, well-defined meaning. A bounded context represents a subdomain with its own ubiquitous language, and subsequently, outside the boundaries of the bounded context, the language is slightly different. On the other hand, if the language is the same, the context is the same. The business domain is then split into a web of interconnected contexts. Each context may deserve its own architecture and implementation. Bounded contexts in domain-driven design serve three main purposes. First, they fight ambiguity and duplication of concepts. Second, by dividing the domain into smaller pieces, you simplify the design of software modules. And finally, a bounded context is the ideal tool to integrate legacy code and external components into the system. Sometimes you find out that the same term has different meanings when used by different people, especially in a large organization. When this happens, you have probably crossed the invisible boundaries of a different context. So the business domain you assumed indivisible then, at some point, needs to be split. The constituent contexts are not isolated and totally independent. They are likely connected in some way, and in some way they communicate and interact. Subtly related terms are sometimes used to describe bounded contexts. Let's try to clarify. 
So let's say that you have a business domain, and therefore you are in the problem space. When you work out a software solution for the business domain, you create a model for the domain. Again, it doesn't have to be an object-oriented model, just a software model. Hence, you are in the solution space. When you discover subdomains in the problem space, then you split the software model for the domain into bounded contexts.
Discovering Bounded Contexts
A bounded context is an area of the domain model that has its own ubiquitous language, its own independent implementation based on a supporting architecture, such as CQRS, and a public, documented interface to interact with other bounded contexts. Imagine a large organization and a large project. Imagine a model for the domain that grows bigger and bigger and multiple development teams at work, each with its own area of influence. It may happen at some point that the same word, say, account, is used with different meanings by domain experts from different departments. When two or more models overlap, each can be stretched to some extent with the purpose of getting just one, more flexible, model that serves multiple purposes. More often than not, however, overlapping contexts just denote the risk of breaking the conceptual integrity of the overall model for the domain. There are a few aspects that put the model at risk of losing conceptual integrity. The most common scenario is when the same term means slightly different things to different people. Different experts may use the word payment, but each would describe the payment entity in a different way, with a different set of properties and different behavior. Another scenario is when the same term is used to refer to completely different things. A good example here is account. Some experts may mean the user account for login, others may intend the financial account of a given customer. This is one of those cases in which renaming entities in some way helps. Dependency on external subsystems, including blocks of legacy code, is another scenario that suggests the use of a bounded context. More in general, a bounded context can be identified in those situations in which you see a functional area of the application that is better treated separately. As an example, let's consider the booking application for a sporting club. 
Distinct functional areas may be the club website, the login subsystem, the back office, and maybe the reporting module. In addition, you can have some external systems, for example, PayPal for online payments, and other systems for weather forecasts and text and email messaging. A term like account may be used with different meanings in the login and club-site contexts. For example, in the login scenario it will mean something related to authentication, while from the club-site perspective it will be something related to billing. Similarly, payment is a term shared between the club-site and PayPal contexts. What each module means may be different and touch on different functional areas, for example, internal booking and money transactions. Overlapping concepts that would lead to ambiguity, duplication, and increased complexity can be managed in domain-driven design by introducing bounded contexts. Defining the contours of each bounded context, however, may not be trivial. You can manage to have a single, all-encompassing model. In this case, you only have one version of potentially ambiguous concepts like member or payment, and the implementation is then responsible for handling and managing the differences. Typically this happens at the cost of more complexity. In some cases, you can isolate shared elements in a separate kernel and load the right one on demand. Complexity is still present, but now it is limited to smaller components. Yet another solution, probably the most reliable, is giving each context the implementation and the names it needs. In this case, member and payment unambiguously mean just what they are supposed to mean in the context in which they are used. Is this a form of code duplication? No, I wouldn't say so. I wouldn't call it duplication. I would rather say this is clarification. 
On the other hand, forcing abstraction and thus having different functionalities fall under the umbrella of the same class would likely be a violation of the single responsibility principle.
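The "give each context its own implementation and names" option can be sketched as follows. The Java types here are hypothetical: each bounded context models account with only the data and behavior that context needs, instead of one stretched class serving both.

```java
// Login bounded context: "account" means credentials for authentication.
class LoginAccount {
    final String userName;
    private final String passwordHash;

    LoginAccount(String userName, String passwordHash) {
        this.userName = userName;
        this.passwordHash = passwordHash;
    }

    boolean authenticate(String passwordHashAttempt) {
        return passwordHash.equals(passwordHashAttempt);
    }
}

// Club-site bounded context: "account" means a member's billing position.
class BillingAccount {
    final String memberId;
    private double balance;

    BillingAccount(String memberId) { this.memberId = memberId; }

    void charge(double amount) { balance += amount; }
    double balance() { return balance; }
}
```

Neither class knows about the other, and each keeps a single responsibility; within its own context, each is simply "the account".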
It is quite common that the whole functionality of a software system results from the composition of multiple modules, each mapping to the concept we have just covered, the bounded context. So, for example, in an ecommerce website, you can have the web store, accounting, delivery, and shipment designed as distinct modules. The number of bounded contexts often reflects the physical structure of the organization that commissions the project, and typically you have one bounded context for each business department. In domain-driven design, a context map is just a diagram that provides a comprehensive view of the system being designed. Each bounded context may have its own independent implementation, which means that each bounded context has its own supporting architecture and implements it using its own set of technologies and programming languages. This means that, for example, you can have a bounded context built following the CQRS architecture, another one built following the guidelines of the domain model pattern and Entity Framework, and another one built using plain two-tier data binding via ADO.NET and old-fashioned Web Forms. Bounded contexts are independent from an implementation perspective, but they are related to one another in some way and communicate. This means that the next challenge to face is recognizing which type of relationship is set between bounded contexts. There are quite a few aspects to discuss in the context map you see now onscreen. First and foremost, what's the role of those weird letters U and D in the map? U and D indicate the direction of the relationship: U stands for upstream context and D stands for downstream context. An upstream context influences the downstream one, but the vice versa is not true. Influence may take various forms. 
For sure it means that the code in the upstream context is available as a reference to the downstream, but it also means that the schedule of work in the upstream context cannot be changed, and won't be changed, from within the downstream context, and also that the responsiveness of the team behind the upstream context to requests for changes may not be as prompt as desired. Domain-driven design defines quite a few types of relationships for the architect to set between bounded contexts. One type of relationship is conformist. A conformist relationship indicates that the downstream context totally depends on the upstream and that no negotiation is possible between the parties. This typically happens when the upstream context is some sort of legacy code or an external service. Similar to conformist is the customer/supplier relationship. The customer context depends on the supplier context, which is upstream. However, in this case, some negotiation between the teams managing the involved contexts is possible. So the customer team can share concerns and expect the supplier team to address them in some way. Essentially, the difference between conformist and customer/supplier is all in the margin that exists for negotiation and talk between the involved parties: no negotiation in conformist; some room left for negotiation in customer/supplier. The partner relationship instead refers to a mutual dependency set between the two involved contexts, which depend on each other for the actual delivery of the code. Shared kernel refers instead to a piece of the model shared by multiple contexts, and this means that no part of the shared kernel can be changed by its owners without first consulting the teams in charge of the contexts that depend on it. And finally, the anti-corruption layer is just an additional layer that gives the downstream context a fixed interface to deal with, no matter what happens in the upstream context. 
Having an anti-corruption layer is essentially like having a proxy that provides translation between distinct, but interfacing models.
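Just to fix ideas, here is a minimal sketch of an anti-corruption layer in Java (the course's own samples are C#/.NET; every name here, including the LegacyBillingService and its semicolon-separated record format, is hypothetical). The downstream context codes against its own fixed interface, and the layer translates the upstream's legacy model into the downstream's model:

```java
// Model the downstream context wants to work with.
record Invoice(String customerId, double total) {}

// Fixed interface the downstream context codes against.
interface BillingGateway {
    Invoice invoiceFor(String orderId);
}

// Upstream (legacy) service with its own, different model.
class LegacyBillingService {
    // Returns a record in the legacy "customer;amountInCents" format.
    String fetchBillingRecord(String orderId) {
        return "CUST-42;1999";
    }
}

// The anti-corruption layer: a proxy that translates between the two models.
class BillingAntiCorruptionLayer implements BillingGateway {
    private final LegacyBillingService upstream = new LegacyBillingService();

    @Override
    public Invoice invoiceFor(String orderId) {
        String[] parts = upstream.fetchBillingRecord(orderId).split(";");
        return new Invoice(parts[0], Integer.parseInt(parts[1]) / 100.0);
    }
}
```

If the upstream team changes the legacy format, only the translation inside the layer is touched; the rest of the downstream context keeps calling the same BillingGateway interface.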
Event storming is an emerging practice whose primary purpose is exploring the business domain to understand how it works and identify key events and commands. Originally developed by Alberto Brandolini, the technique consists in getting developers and domain experts in a room to ask questions and find answers all together. The classic two-pizza rule sets the right number of invited people. The two-pizza rule says that there should never be in a meeting more people than one could feed with two pizzas. Generally this limits the number of participants to fewer than eight. The meeting room must have special equipment, in particular a very large whiteboard or at least a very long paper roll. Even an empty wall works. The point here is having a lot of modeling space to build a long enough timeline of events and commands, draw sketches, and jot down notes. But the most characterizing aspect of event storming is the use of some colorful tape and, more importantly, sticky notes. An event storming session consists in talking about observable events in the business domain and listing them on the wall or whiteboard. A sticky note of a given color is pinned to the modeling surface when an event is identified, as well as for other domain-specific information, such as critical entities that aggregate commands and events together. Let's see how it works. The primary goal of event storming is identifying relevant domain events. So, for example, in an ecommerce scenario, you will put a sticky note on the wall, say, for the order created event. And you will iteratively explore the domain, trying to name all the events that the implemented system will be required to process. The next step is figuring out what caused each discovered event. A domain event, for example, can be caused by a command requested by some user, and this is the case for the checkout command, which generates the order created event.
But the event can also be caused by some asynchronous happening, including a timeout or an absolute time. In both cases, you add a sticky note for the cause. The event can also be the follower of a previously raised event. In this case, you just add another event note close to that of the originating event. In the end, the modeling surface works as a timeline, and quite a bit of space is necessary to contain sticky notes for all the identified events, the commands that caused events, following events, user interfaces, sketches, notes, and more. Business people in the meeting room should not be left alone. An external figure is helpful, and event storming calls this figure the facilitator. The facilitator leads the session, stops long discussions, ensures that focus is not lost, and advances the meeting through the various phases. The facilitator doesn't have to be a domain expert. He is there to understand and, more importantly, to guide others to understand and formalize the domain. The facilitator asks the first questions to kick off the meeting and sticks the first notes on the wall. Along the way, the facilitator ensures that the emerging model is appropriate and accurate for everyone involved, and at the end, the facilitator might even want to take a picture of the final work. Event storming brings several benefits. First and foremost, it represents a quick way to gain a comprehensive vision of the business domain. A valuable output of event storming is the list of bounded contexts and aggregates found in each context. Here an aggregate is essentially the software component that handles commands and events and controls persistence. Event storming also helps form an idea about the types of users in the system, and about where the user experience is particularly important, even more important than the actual tasks performed. In this case, sketches of the critical screens from a user experience perspective can be added to the event storming output.
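The command/event pairing just described can be rendered in code quite directly. Here's a minimal sketch in Java (hypothetical names; the course itself targets C#/.NET): a command captures the user's intent, and handling it is what raises the domain event, just like the checkout command producing the order created event:

```java
import java.time.Instant;

// A command captures the user's intent to do something.
record CheckoutCommand(String cartId) {}

// An event captures an observable fact that already happened.
record OrderCreated(String orderId, Instant occurredAt) {}

class CheckoutHandler {
    // Handling the command is what raises the domain event.
    OrderCreated handle(CheckoutCommand command) {
        String orderId = "ORD-" + command.cartId();   // simplistic id scheme for the sketch
        return new OrderCreated(orderId, Instant.now());
    }
}
```

Note the grammar of the names: commands are imperative (checkout), events are past tense (order created), which mirrors the sticky notes on the event storming wall.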
Event storming is a surprising experience, sometimes even a mind-blowing experience, but it's a new thing, so you may not be able to find detailed instructions out there, at least not in the amount that you may expect. In that case, just point your favorite search engine to event storming, run a search, and, as always, make sense of what you find.
The DDD Layered Architecture
Hi everybody, and welcome to module three of the Pluralsight course on Modern Software Architecture, Dino Esposito speaking. This module is essentially another view of the layered architecture, the multilayer segmentation of a software system that Eric Evans first introduced in the book that presented domain-driven design to the world. At the end of the day, the layered architecture is a revision of the classic, and in a certain way iconic, 3-tier architecture, the one made of presentation, business, and data access. In this module, I'll explain the various segments of a layered architecture and how they relate to the aforementioned, but so far quite indefinite, concept of a domain model. So we'll first talk about layers and tiers and the separation of concerns. Next, we'll focus on common patterns for organizing the business logic. And finally, we'll discuss the four layers of the architecture: presentation layer, application layer, domain layer, and infrastructure.
The Layers of a Software System
Hi everybody, and welcome back. It was about the late 60s when the complexity of the average program crossed the threshold that marked the need for a more systematic approach to software development. Messy code was belittled and infamously labeled as just spaghetti code. So we learned to use subroutines to break code into more reusable pieces. In culinary terms, we evolved from spaghetti to lasagna. And the analogy makes sense. Spaghetti is notably a long type of pasta and requires special tools to be served effectively and also some experience to be consumed. Lasagna is a layered block of noodles and toppings, easy to cut in pieces and consume. Trying to extract the essence of spaghetti and lasagna code, one could say that spaghetti code is a messy tangle of instructions leading nowhere near to solid software. At the same time, lasagna is about layers, relatively easy to cut vertically and/or horizontally, and so easy to deploy. So far, I've used the two terms layers and tiers almost interchangeably. This is not unusual, and really many people out there use these terms nearly as synonyms. To be precise, instead, the term layer should be used to identify a logical container for a portion of code, while the term tier should indicate a physical container for code, that is, its own process space or its own machine. All layers are actually deployed to a physical tier, but different layers can go to different tiers. In the 90s, most computing scenarios consisted of one insanely powerful server, well, insanely powerful for the time it was, and a few slower personal computers. Software architecture was essentially a client/server thing. Any business logic was stuffed in stored procedures to further leverage the capacity of that insanely powerful server machine. Over the years, we all managed to take larger and larger chunks of the business logic out of stored procedures and place them within some new components. This originated the classic 3-tier model.
I'd say, however, that the largest share of systems inspired by the 3-tier architecture is actually implemented as a layered system deployed over 2 physical and distinct tiers. For a website, for example, one tier is represented by the ASP.NET application running within the IIS process space, and another tier is the tier that hosts the SQL Server service that provides data. Domain-driven design leads to a slight variation of the classic 3-tier model. The presentation layer is the same in both architectures, and the infrastructure layer includes the data layer, but is not limited to that. The infrastructure layer in general includes anything related to concrete technologies, and also cross-cutting concerns, such as security, logging, caching, and even dependency injection. The business layer exploded into the application and domain layers, so in the end, the layered architecture results from a better application of the separation of concerns principle. In 3-tier systems, the actual business logic is often sprinkled everywhere: business, but also presentation and data layers. The layered architecture just attempts to clear up such gray areas. In light of DDD, we can then summarize the application architecture of today through the following diagram. First, you build a model for the domain, and you do that leveraging the strategic patterns of DDD, such as ubiquitous language and bounded context, which we examined in module two. Next, you plan to have some layered architecture with standard segments: presentation, application, domain, and infrastructure. However, at a deeper look, you find out that you can map each of the layers to a very specific concern, which is user experience for the presentation, ad hoc use-cases for the application layer, business logic for the domain layer, and persistence for the infrastructure layer. The business logic concern is then resolved in an appropriate design pattern to organize and process business rules.
The persistence concern by contrast is resolved by picking an appropriate medium and format for storing data.
The Presentation Layer
Welcome back to Pluralsight. Let's now start going through the various pieces that form a layered architecture and find out more. We'll start with the presentation layer. The presentation layer is responsible for providing some user interface to accomplish any required tasks. Whatever command is issued from the presentation layer hits the application layer and from there is routed through the various remaining layers of the system. Presentation can be seen as a collection of screens, and each screen is populated by a set of data, and any action that starts from the screen forwards another well-designed set of data back to the originating screen or even some other screen. Generally speaking, we'll refer to any data that populates the presentation layer as the view model, and we'll refer to any data that goes out of the screen and triggers an action as the input model. Even though a logical difference exists between the two models, most of the time view model and input model just coincide. Today the presentation is the most critical part of an application. And notice that this is a key change that is taking place under our eyes. We are evolving, slowly, but still evolving, from a bottom-up way of building applications towards a more effective top-down approach. Presentation has always been the part of the system directly exposed to users; however, for many years we just had a mass of quiet users basically accepting any enforced user interface. Today is different. Today we have a mass of users actively requiring a much more interactive and effective experience while working with applications. So today an effective experience is really crucial for the success of any software. The presentation layer is then responsible for providing the user interface to accomplish any required tasks. At the same time, it is responsible for providing an effective, smooth, and even pleasant user experience.
User experience is then defined as the experience the user goes through when she interacts with the application screens. Good attributes of a presentation layer are essentially a task-based nature; in addition, the presentation layer has to be device-sensitive and user-friendly, and above all, it has to be faithful to the real-world processes being implemented by the application itself.
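The view model/input model distinction just made can be sketched in a few lines. Here's a minimal, hypothetical example in Java (the course's own samples are C#/.NET): one type carries data to the screen, the other carries data back from the screen when an action fires:

```java
// View model: the data that fills a screen.
record OrderViewModel(String orderId, String statusLabel) {}

// Input model: the data the screen sends back when an action is triggered.
record UpdateStatusInput(String orderId, String newStatus) {}

class OrderScreenFlow {
    // The action consumes the input model and produces the next view model.
    OrderViewModel apply(UpdateStatusInput input) {
        return new OrderViewModel(input.orderId(), "Status: " + input.newStatus());
    }
}
```

In many screens the two shapes coincide, as noted above, and a single type plays both roles; the separation pays off when display formatting and posted data start to diverge.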
The Application Layer
Welcome back to Pluralsight, Dino Esposito speaking. The application layer is an excellent way to separate interfacing layers, such as the presentation layer and the domain layer. In doing so, the application layer gives an incredible contribution of clarity to the entire design of the application. The application layer is just where you orchestrate the implementation of the application's use-cases. In the past, a typical gray area of many architectures was the placement of that part of the business code that, for some reason, needed to be aware of the presentation. So, let's consider for a moment the code that formats data for presentation purposes. Where does it really belong? Many developers think that the answer to this question is fairly obvious: it's clearly a presentation concern. To other developers, instead, it sounds more like a business logic aspect, and yet another group of developers reckons that data is just data, and then it's up to the database to return that data in the required format. In summary, the application layer reports to the presentation and has to serve ready-to-use data in just the required format. The application layer orchestrates the tasks triggered by all the actionable elements we find in the presentation screens, that is, the use-cases of the application's front-end. And finally, the application layer is doubly-linked with the presentation, and it may need to be extended or even rewritten from scratch whenever a new front-end is added to the system. For example, this happens when you add a mobile application to an existing website. Let's see what this means in terms of code. In an ASP.NET MVC scenario, the application layer is made of classes which are directly invoked from controllers, and in ASP.NET MVC, the controller is part of the presentation layer, though running server side.
Application layer classes are then instantiated within the controller and receive data extracted from the context of the request, and they process this data in total isolation from the HTTP context, which, by the way, is a factor that ensures full testability for all the classes in the application layer. The link between the presentation and application layers should be established right when you design the user interface, because each form you display has an underlying data model that becomes the input of the methods invoked in the application layer. In addition, the result of each application layer method is just the content that you use to fill up the next screen displayed to the user. The application layer is absolutely necessary, but it strictly depends on what you actually display to users, and what you display to users has to guarantee a fantastic user experience. This is why a top-down approach to software development is today absolutely crucial.
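To fix ideas, here's a minimal sketch of that controller/application-layer split. The course's scenario is ASP.NET MVC; this hypothetical Java version keeps the same shape: a thin controller extracts request data and delegates, while the application service orchestrates the use-case with no HTTP types in sight, which is what makes it fully testable:

```java
// Input model and view model for the use-case (hypothetical names).
record CheckoutInput(String productId, int quantity, double unitPrice) {}
record OrderSummaryViewModel(String productId, double total) {}

// Application-layer service: orchestrates the use-case, no HTTP context involved.
class CheckoutService {
    OrderSummaryViewModel placeOrder(CheckoutInput input) {
        double total = input.unitPrice() * input.quantity();
        return new OrderSummaryViewModel(input.productId(), total);
    }
}

// Thin presentation-layer controller: extracts request data, delegates,
// and hands the resulting view model to the next screen.
class CheckoutController {
    private final CheckoutService service = new CheckoutService();

    OrderSummaryViewModel post(String productId, int quantity, double unitPrice) {
        return service.placeOrder(new CheckoutInput(productId, quantity, unitPrice));
    }
}
```

Because CheckoutService never touches the request object, a unit test can exercise the whole use-case by constructing a CheckoutInput directly.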
The Business Logic
Hi everybody, and welcome back to Pluralsight and Modern Software Architecture. Dino Esposito speaking here, as always. As we discussed so far in the course, in the design of a software system you initially go through three key phases: getting to know as much as possible about the business domain, splitting the domain into simpler subdomains, and learning the business language that domain experts use in their everyday communication. Yes, great, but what is the next step? The next step is all about implementing all the business rules you know and organizing the business logic in software components. Abstractly speaking, the business logic of a software system is made of two main parts: application logic and domain logic. The application logic is the part of the code that strictly depends on the use-cases being implemented. The domain logic instead is the part of the code which is invariant to use-cases. In general terms, both application and domain logic are made of entities to hold data and workflows to orchestrate behavior. In domain-driven design, the definition of the business logic is only a bit more specific, but only as far as the terminology is concerned. In the application logic, we call data transfer objects the containers of data being moved around to and from presentation screens, and we call application services the components that coordinate tasks and workflows. In the realm of domain logic, we have instead a domain model for the entities that hold data and behavior, and domain services for any remaining behavior and data that for some reason don't fit in the entities. Overall, the role and purpose of the application logic are easy to grasp: it's the orchestration of all the tasks required by the implemented use-cases, and in general, a use-case is any possible way for the user to interact with the system through a user interface or other forms of input. In domain-driven design, the application logic is implemented in the application layer.
But what is the point of domain logic? What is domain logic exactly? Domain logic is all about how you bake business rules into the actual code. A business rule is any statement that explains in detail the implementation of a process or describes a business-related policy to be taken into account. A sentence that begins with "the customer expects that...", when heard, likely indicates a business rule to care about. There are various ways for architects to organize business rules in software. You can implement business rules in workflow or transactional components, or, especially when rules change frequently, you can consider building a distinct business rule engine to be implemented separately from the core of the system, or you can even encapsulate business rules in a set of entity objects in some sort of a domain model. Each option listed here represents quite simply a different pattern for organizing the business logic, and our next point in the class is finding out more about these patterns.
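Just to make "baking a business rule into code" concrete, here's a minimal sketch in Java (the rule itself, its threshold, and the class name are all hypothetical): the statement "the customer expects that orders above 100 get a 10% discount" rendered as a small, testable policy object:

```java
// A hypothetical business rule: "orders above 100 get a 10% discount".
class DiscountPolicy {
    static final double THRESHOLD = 100.0;
    static final double RATE = 0.10;

    // The rule as executable code: the statement above, nothing more.
    double apply(double orderTotal) {
        return orderTotal > THRESHOLD
                ? orderTotal * (1 - RATE)
                : orderTotal;
    }
}
```

Keeping each rule in a named component like this is what makes the later options (rule engines, entity objects in a domain model) practical: the rule has one home, so it can change without a hunt through the codebase.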
Patterns for Organizing the Business Logic
Hi everybody and welcome back to Pluralsight and Modern Software Architecture. As usual, Dino Esposito speaking here. Common patterns for organizing the business logic are three: transaction script, table module, and domain model. There is virtually no logic to implement in a simple archiving system that barely has some visual forms on top of a database. This scenario is commonly referred to as CRUD, and you find it in the folds of too many tutorials on .NET architecture. Conversely, there is quite complex logic to deal with in a financial application, or, more in general, in any application that is modeled after some real-world business process. So the question is: where do you start designing the business logic of a real-world system? The basic decision is just one. Would you go with an object-oriented design? Maybe a functional design? Or just a procedural approach? The transaction script pattern is probably the simplest possible pattern for business logic, and it is entirely procedural. It owes its name to Martin Fowler, as do all of the other patterns I'll be presenting here. The word script indicates that you logically want to associate a sequence of system-carried actions, namely a script, with each user action. The word transaction in this context has very little to do with database transactions, and it generically indicates a business transaction you carry out from start to finish within the boundaries of the same software procedure. As is, transaction script as a pattern has some potential for code duplication. However, this aspect can be easily mitigated by identifying common subtasks and implementing them through reusable routines. In terms of architectural design, the transaction script pattern leads to a design in which actionable UI elements in the presentation layer invoke application layer endpoints, and these endpoints run a transaction script for each task.
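Here's a minimal sketch of the transaction script pattern in Java (hypothetical names; in-memory maps stand in for the database the pattern would normally hit): one procedure per user action, carrying the whole business transaction from start to finish:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Transaction script: one procedure per user action, entirely procedural.
class OrderScripts {
    private final Map<String, Integer> stock = new HashMap<>(Map.of("SKU-1", 5));
    private final List<String> orders = new ArrayList<>();

    // The whole "place order" business transaction as a single script:
    // validate, update stock, record the order, from start to finish.
    boolean placeOrder(String sku, int quantity) {
        Integer available = stock.get(sku);
        if (available == null || available < quantity) return false; // validate
        stock.put(sku, available - quantity);                        // update stock
        orders.add(sku + " x" + quantity);                           // record order
        return true;
    }
}
```

A second user action, say cancel order, would get its own script; shared steps like the stock check would be the reusable routines mentioned above that keep duplication under control.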
As the name suggests, the table module pattern heralds a more database-centric way of organizing the business logic. The core idea here is that the logic of the system is closely related to persistence and databases. So, the table module pattern suggests you have one business component for each primary database table. The component exposes endpoints through which the application layer can execute commands and queries against a table, say orders, and any linked or related tables, say order details. In terms of architectural design, the table module pattern leads to a design in which presentation calls into the application layer, and then the application layer, for each step of the workflow, identifies the table involved, finds the appropriate module component, and works with that. The term domain model is often used in DDD. However, in DDD, domain model is quite a generic term that refers to having a software model for the domain. More or less in the same years in which Eric Evans was using the term domain model in the context of the new DDD approach, Martin Fowler was using the same term, domain model, to indicate a specific pattern for the business logic. Fact is, the domain model pattern is often used in DDD, though it's not strictly part of the DDD theory. The domain model pattern suggests that architects focus on the expected behavior of the system and on the data flows that make it work. When implementing the pattern, at the end of the day you build an object model, but the domain model pattern doesn't simply tell you to code a bunch of C# or Java classes. The whole point of the domain model pattern is getting an object-oriented model that fully represents the behavior and the processes of the business domain. When implementing the pattern, you have classes that represent live entities in the domain. These classes expose properties and methods, and methods refer to the actual behavior and the business rules for the entity.
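The point about classes exposing both properties and behavior can be sketched like this in Java (hypothetical names; the course's own samples are C#). Contrast it with the transaction script above: here the business rules live on the entity itself, not in an external procedure:

```java
import java.util.ArrayList;
import java.util.List;

// Domain model pattern: the entity exposes behavior, not just data.
class Order {
    record Line(String sku, int quantity, double unitPrice) {}

    private final List<Line> lines = new ArrayList<>();

    // Business behavior and its rules live on the entity itself.
    void addLine(String sku, int quantity, double unitPrice) {
        if (quantity <= 0)
            throw new IllegalArgumentException("quantity must be positive");
        lines.add(new Line(sku, quantity, unitPrice));
    }

    // Derived values are computed by the entity, not by callers.
    double total() {
        return lines.stream().mapToDouble(l -> l.quantity() * l.unitPrice()).sum();
    }
}
```

Callers never poke at the line collection directly; they ask the Order to do things, which is exactly the behavior-first focus the pattern asks for.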
Aggregate is a term used in domain-driven design to refer to the core objects of a domain model, and we'll see more about aggregates in the next module. The classes in the domain model should be agnostic to persistence and paired with service classes that contain just the logic to materialize instances of the classes to and from the persistence layer. A graphical schema of a domain model has two elements: a model of aggregated objects, and services to carry out specific workflows that span across multiple aggregates or deal directly with persistence.
The Domain Layer
Hi everybody, welcome back to Pluralsight, Dino Esposito here. In the domain-driven design of a software application, the business logic falls in the segment of the architecture called the domain layer. In the domain layer, an architect places all the logic that is invariant to use-cases. This means a software model for the business domain, just the domain model, and a related and complementary set of domain-specific services. It should be noted that the domain model is not necessarily an implementation of the aforementioned domain model pattern, and it should also be noted that a primary responsibility of domain services is persistence. In the implementation of a domain layer, as far as the model is concerned, we have essentially two possible flavors: an object-oriented model of entities, also known as an entity model, and a functional model where tasks are expressed as functions. An entity model has two main characteristics. In the first place, classes follow strict DDD conventions, which means that for the most part these classes are expected not to have constructors, but factories, to use value types over primitive types, and to avoid public setters on properties. In addition, these classes are expected to expose both data and behavior. Often considered an anti-pattern, the anemic domain model is yet another possibility for coding business logic. An object-oriented domain model is commonly defined anemic if it's only made of data container classes with only data and no behavior. In other words, an anemic class just contains data, and the implementation of workflows and business rules is moved to external components, such as domain services. In this regard, functional models are a relatively new, but quite intriguing, possibility. Domain services complement the domain model and contain those pieces of domain logic that just don't fit into any of the created entities. This covers essentially two scenarios.
One is classes that group logically related behaviors that span over multiple entities. The other is the implementation of processes that require access to the persistence layer for reading and writing and access to external services, including legacy code.
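Both scenarios can be seen in one small sketch. Here's a hypothetical Java example (the course's samples are C#/.NET) of a domain service that spans two entities, Customer and Order, and reaches the persistence layer through a repository, logic that fits neither entity alone:

```java
import java.util.List;

// Two entities of the model (simplified to records for the sketch).
record Customer(String id, boolean gold) {}
record Order(String customerId, double total) {}

// Access to the persistence layer, abstracted behind a repository.
interface OrderRepository {
    List<Order> ordersOf(String customerId);
}

// The domain service: behavior spanning Customer and Order, plus persistence.
class CustomerRankingService {
    private final OrderRepository repository;

    CustomerRankingService(OrderRepository repository) {
        this.repository = repository;
    }

    // Hypothetical rule: a customer qualifies for gold status over 1000 spent.
    boolean qualifiesForGold(Customer customer) {
        double spent = repository.ordersOf(customer.id())
                                 .stream().mapToDouble(Order::total).sum();
        return spent > 1000;
    }
}
```

The entities stay persistence-agnostic; only the service knows a repository exists, which is the split between domain logic and infrastructure described above.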
The Infrastructure Layer
Hi everybody, welcome back to Pluralsight. If we think of, say, an office, then infrastructure is what brings in things like the internet connection and connects the building to external sources of power and water. The building doesn't produce power or water, but it can consume both, as long as it provides any necessary plumbing. In software, in full analogy, infrastructure is defined as the set of those fundamental facilities needed for the operation of the application. What are, then, the fundamental facilities of a software application? First and foremost, databases, and, more in general, persistence. Persistence is crucial, but modern systems need more than just a database, so next in the list of infrastructure facilities comes security, and then logging and tracing, perhaps inversion of control, and for sure, caching and networking. However, the most relevant facility of just any application, the one that dwarfs anything else, is persistence. When it comes to the infrastructure layer, I like to call it the place down where the technologies belong. The infrastructure layer is necessary to fuel the entire system, but it should not bind the system to any specific products. It is where you start dealing with configuration details, things like connection strings, file paths, TCP addresses, or URLs. To keep the application decoupled from specific products, you sometimes want to introduce facades to hide technology details while keeping the system resilient enough to be able to replace technologies at any time in the future with limited effort and costs. So much for the infrastructure layer and the entire layered architecture. What's next? Starting with the next module, we'll begin exploring some supporting architectures for DDD. So we'll first look into an architecture that uses the domain model pattern in the setup of the domain layer. In doing so, we'll be using just one model to serve both commands and queries.
After that, in the coming modules, we'll look into more effective ways to separate command and query stacks and even to add support for domain events. Stay tuned.
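Before moving on, the facade idea mentioned for the infrastructure layer can be sketched in a few lines of Java (hypothetical names; an in-memory map stands in for whatever concrete configuration technology you pick). The rest of the system sees only the interface, so the product behind it can be replaced later with limited effort:

```java
import java.util.Map;

// Facade over a concrete storage/configuration technology: the rest of the
// system codes against this interface, never against the product behind it.
interface SettingsStore {
    String connectionString(String name);
}

// One concrete technology behind the facade (here, an in-memory map
// standing in for a config file, a registry, or a vault service).
class InMemorySettingsStore implements SettingsStore {
    private final Map<String, String> values =
            Map.of("main-db", "Server=.;Database=Store");

    @Override
    public String connectionString(String name) {
        return values.getOrDefault(name, "");
    }
}
```

Swapping the configuration technology later means writing another SettingsStore implementation; no caller changes, which is exactly the resilience the facade is there to buy.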
The "Domain Model" Supporting Architecture
Hi everybody. Welcome to module four of Pluralsight's course on Modern Software Architecture, Dino Esposito speaking. With module four, the course gets into the part that analyzes in detail a few architectural approaches to the effective design and implementation of the domain layer, and the first supporting architecture we consider is the domain model. The domain model supporting architecture was originally introduced in Evans' book about domain-driven design as a sort of prescription for implementing the domain layer, and the prescription revolves around two main elements: an object-oriented model and a set of services. This module discusses the foundation of an object-oriented model for describing the business domain and domain services, and then it moves further to look into more recent developments, such as the use of events, and in general, a more task-focused view of the business domain.
Holistic Model for the Business Domain
In a domain model architecture, the domain layer is made of two main components: the domain model and domain services. In the context of domain-driven design, a domain model is simply a software model that fully represents the business domain. Most of the time a domain model is an object-oriented model, and if so, it is characterized by classes with very specific roles and very specific features: aggregates, entities, value types, and factories, all things that we will discuss in detail in the course of the module. Domain services are instead responsible for things like cross-object communication, data access via repositories, and the use, if required by the business, of external services. Back in 2004, Eric Evans suggested an architecture based on a single, holistic, all-encompassing object model with some special features, but over the years this ended up generating a few misconceptions. First and foremost, it now seems that domain-driven design is nothing more than having an object model with the aforementioned special features. Another misconception is that the object model is expected to be agnostic of data storage: oversimplifying quite a bit, the database is part of the infrastructure and doesn't interfere with the design of the model. Finally, yet another misconception is that the ubiquitous language is simply a guide to naming all the classes in the object model. In the end, things are a little bit different, so let's shed some light on these misconceptions. In domain-driven design, identification and mapping of bounded contexts is really paramount. Modeling the domain through objects is just one of the possible options. So, for example, using a functional approach is not prohibited, and you can do good domain-driven design even if you use a functional approach to coding, or even an anemic object model with stored procedures. For the sake of design, if you aim at building an object model, then persistence should not be your primary concern.
Your primary concern is making sense of the business domain. However, the object model you design should still be persisted at some point, and when it comes to persistence, the database and the API you use to go down to the database are a constraint and cannot always be blissfully ignored. Finally, the ubiquitous language is the tool you use to understand the language of the business rather than a set of guidelines to name classes and methods. In the end, the domain layer is the API of the business domain, and you should make sure that no wrong call to the API is possible that can break the integrity of the domain. It should be clear that in order to stay continuously consistent with the business you should focus your design on behavior much more than on data. To make domain-driven design a success, you must understand how the business domain works and render it with software. That's why it's all about the behavior.
Aspects of a Domain Model
One of the most common supporting architectures for a bounded context is currently the layered architecture we already discussed in module three, in which the domain model pattern is used to organize and express the business logic. The domain layer consists of a model and domain services. We assume here that the model is an object model. Let's see what is underneath the covers of both model and services. Abstractly speaking, the domain model is made of classes that represent entities and values. Concretely, when you get to write such a software artifact, you identify one, or even likely more, modules. In domain-driven design, a module is just the same as a .NET namespace that you use to organize classes in a class library project. Domain services are made of special components, such as repositories that provide access to storage and proxies towards external web services. A domain-driven design module contains value objects, as well as entities. In the end, both are rendered as .NET classes, but entities and value objects represent quite different concepts and lead to different implementation details. In domain-driven design, a value object is fully described by its attributes. The attributes of a value object never change once the instance has been created. All objects have attributes, but not all objects are fully identified by the collection of their attributes. When attributes are not enough to guarantee uniqueness, and when uniqueness is important to the specific object, then you just have domain-driven design entities. As you go through the requirements and work out the domain model of a given bounded context, it is not unusual that you spot a few individual entities being constantly used and referenced together. In a domain model, the aggregation of multiple entities under a single container is called an aggregate, and therefore some entities and value objects may be grouped together and form aggregates.
To indicate a value object in .NET, you use a value type. The most relevant aspect of a value type is that it is a collection of individual values. The type therefore is fully identified by the collection of values, and the instance of the type is immutable. In other words, the attributes of a value object never change once the instance has been created, and, if they do, then the value object becomes another instance of the same type fully identified by the new collection of attributes. Value types are used in lieu of primitive types, such as integers and strings, because they more precisely and accurately render values and quantities found in the business domain. For example, let's suppose you have to model an entity that contains weather information. How would you render the member that indicates the outside temperature? It could be a plain integer property. However, in this case, the property can be assigned just any number found in the full range of values that .NET allows for integers. This likely enables the assignment of values that are patently outside the reasonable range of values for a weather temperature. The type integer, in fact, is a primitive type and too generic to closely render a specific business concept, such as an outside temperature. So here is a better option using a new custom value type. In the new type, you can have things like ad hoc constructors, min/max constants, ad hoc setters and getters, additional properties, for example, whether it's a Celsius or a Fahrenheit temperature, and you can even overload operators. Not all objects are fully identified by their collection of attributes. Sometimes you need an object to have an identity attribute, and when uniqueness is important to the specific object, then you just have entities. Put another way, if the object needs an ID attribute to track it uniquely throughout the context for the entire application life cycle, then the object has an identity and is said to be an entity.
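The outside-temperature value type described above might be sketched as follows. This is a minimal sketch, not the course's actual demo code; names such as Temperature, Scale, and the min/max range are my own assumptions.

```csharp
using System;

// Hypothetical Temperature value object: immutable, range-checked,
// and carrying its own scale instead of being a bare integer.
public enum TemperatureScale { Celsius, Fahrenheit }

public class Temperature
{
    // Assumed reasonable weather range, in Celsius.
    public const int MinCelsius = -90;
    public const int MaxCelsius = 60;

    public int Degrees { get; private set; }
    public TemperatureScale Scale { get; private set; }

    public Temperature(int degrees, TemperatureScale scale)
    {
        if (scale == TemperatureScale.Celsius &&
            (degrees < MinCelsius || degrees > MaxCelsius))
            throw new ArgumentOutOfRangeException("degrees");
        Degrees = degrees;
        Scale = scale;
    }

    // "Changing" a value object means getting a brand new instance.
    public Temperature Warmer(int delta)
    {
        return new Temperature(Degrees + delta, Scale);
    }
}
```

Compared with a plain integer property, the constructor rejects values that make no business sense, and the lack of public setters makes the instance immutable.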
Concretely, an entity is a class with properties and methods, and when it comes to behavior, it's important to distinguish domain logic from persistence logic. Domain logic goes in the domain layer, model or services. Persistence goes in the infrastructure layer, managed by domain services. For example, an order entity is essentially a document that lays out what the customer wants to buy. The logic associated with the order entity itself has to do with the content of the document, things like taxes and discount calculation, order details, or perhaps the estimated date of payment. The entire process of fulfillment, tracking status, or just invoicing for the order is well outside the domain logic of the order class. These are essentially use cases associated with the entity, and services are responsible for their implementation. An aggregate is a collection of logically related entities and value types. More than a flat collection, an aggregate is a cluster of objects, including their relationships, that you find easier to treat as a single entity for the purpose of queries and commands. In the cluster, there's a root object that is the public endpoint that the outside world uses to query or command. Access to other members of the aggregate is always mediated by the aggregate root. Two aggregates are separated by a sort of consistency boundary that suggests how to group entities together. The design of aggregates is closely inspired by the business transactions required by the system. Consistency here means transactional consistency. Objects in the aggregate expose behavior that is fully consistent with the business processes. Imagine now that you have a domain model made of entities and value types. Depending on the business processes to take into account, you may draw boundaries like the following. And now we have identified four aggregates, and direct communication is allowed only between roots, the green blocks you see on the slide.
A notable exception here is the connection between order detail and product. Product, an aggregate root, is not allowed to access order detail, a non-root element, directly. Communication is allowed, but only through the Order root, which acts as a proxy. However, it is acceptable that order detail, a child object, may reference an outside aggregate root, if, of course, the business logic demands it.
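The boundary rule just described could be rendered in code roughly like this. The sketch below is my own illustration of the rule, not the course's demo; class and member names are assumptions.

```csharp
using System.Collections.Generic;

// Order is the aggregate root: the only entry point into the aggregate.
public class Order
{
    private readonly List<OrderDetail> _details = new List<OrderDetail>();

    // Outsiders get only a read-only view; the root mediates all access.
    public IReadOnlyList<OrderDetail> Details
    {
        get { return _details; }
    }

    public void AddDetail(Product product, int quantity)
    {
        _details.Add(new OrderDetail(product, quantity));
    }
}

// Child object: not creatable from outside the aggregate (internal ctor).
public class OrderDetail
{
    internal OrderDetail(Product product, int quantity)
    {
        Product = product;      // referencing an outside root: allowed
        Quantity = quantity;
    }

    public Product Product { get; private set; }
    public int Quantity { get; private set; }
}

public class Product
{
    // An aggregate root in its own right; it never reaches OrderDetail
    // directly, only through the Order root.
    public string Name { get; set; }
}
```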
Database-centric Domain Models
So far in this course we've covered the theory of the domain model; let's see how it may go in practice. And in particular, let's try to figure out the different set of features that an object model can have depending on the focus you give it, whether it's persistence or the business domain. Everybody understands what a data model is. It's the model you use to persist data. You can express a data model looking at a bunch of SQL tables or using objects you manage through the object-relational mapper of choice, for example, Entity Framework. This is essentially a persistence model. It's object-oriented, and it's nearly 1:1 with the data store of choice. Objects in the model may include some small pieces of business logic, mostly for validation purposes, but the focus of the design remains storage. And then, side by side, we have the domain model. It's still an object-oriented model, it's still a model that must be persisted in some way, but it is a model that doesn't contain any clue, or only a very limited awareness, of the storage. The focus of the design here is behavior and business processes as understood from requirements. In the end, it's a matter of choice. On one hand, you have a scenario in which objects are plain containers of data and business logic is hard-coded in distinct components. On the other hand, you have business logic expressed through processes, and actual objects are functional to the implementation of those processes; because of this, data and logic are often combined together in a single class. So, now let's look into an example, an example of a persistence model. Let's start from here. I'm going to use a domain that, although not common in coding examples, is probably familiar to most of you. I believe this will significantly help in grasping the difference between a database-centric way of designing models and a behavior-centric domain model. Let's think of how you would render an entity that represents a sport game, whether basketball, football, or perhaps water polo.
Here is a possible way to describe a Match class for a sport game. There is an Id property that fully and uniquely identifies the match, there are a couple of string properties for the names of the teams or the players, there is an enum MatchState to describe the possible states of the match, and there is another object, Score, to represent and fully describe the CurrentScore of the match. In addition, we can also have, and in this demo we have, just one method, IsInProgress, to return information about the state of the match. MatchState is an enum with three or more values, depending on the business needs, that describe possible states of the match. Score is another class. It's a distinct class and a collection of individual values that altogether fully describe a possible score for the ongoing match. In particular, it has a Boolean value to indicate whether or not the ball is in play, it has a couple of integers to count the goals scored by the two teams, and it has another integer, CurrentPeriod, to indicate the current period of the match. Now, if we go back to the Match class, we see that at the end of the day this class is not bad; it is absolutely effective, really good, as long as all we have to do is persist the state of the match. It's obvious that this class is very, very close to a relational table you can think of using to persist the current state of a given match. So this class is good for persistence, but it says nothing about the behavior that can be associated with the match. It says nothing about the business context in which you may be using such a class. In particular, this class as is can be great if you are building a live scoring application in which all you have to do is capture, format, and display the results or the current score of the match.
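The persistence-oriented classes just described might look like the sketch below; the exact member list is my reconstruction of the demo, not the course's actual code.

```csharp
// Persistence-oriented model: plain properties with public getters and
// setters, nearly 1:1 with a relational table.
public enum MatchState { NotStarted, InProgress, Finished }

public class Score
{
    public bool IsBallInPlay { get; set; }
    public int TotalGoals1 { get; set; }
    public int TotalGoals2 { get; set; }
    public int CurrentPeriod { get; set; }
}

public class Match
{
    public int Id { get; set; }
    public string Team1 { get; set; }
    public string Team2 { get; set; }
    public MatchState State { get; set; }
    public Score CurrentScore { get; set; }

    // The one piece of behavior in the demo.
    public bool IsInProgress()
    {
        return State == MatchState.InProgress;
    }
}
```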
But this class says nothing about the business rules, actions, and processes in case you are building an application that the referee or the referee's assistant should use during the match to track the score and build the final scorecard of the match. And there is yet another point I would like to emphasize that makes this class good for the persistence model, but not necessarily for the domain model. Let's switch back to the Score class again, and let's focus on TotalGoals1. This is an integer property with a public getter and a public setter. This means that, as is, any consumer of this class can legitimately assign to TotalGoals1 any integer value, and it can do that in a totally unfiltered way. So, it is consistent with your implementation, but who knows if it's consistent with the business to assign a negative value to TotalGoals1. In the end, with this class, you can have a score that sounds like -3 to 4 million. It's legal if you look at the implementation, but who knows if it's really legal when you look at the real business needs.
That Crazy Little Thing Called Behavior
What's behavior? According to the dictionary, it is the way in which one acts or conducts oneself, especially towards others. To move towards a behavior-centric design of the domain model, we need to set in stone what we mean by behavior. Behavior is, in particular, methods that validate the state of the object, but it is also methods that invoke business actions to be performed on the object and methods that express business processes that involve the object. In a model that exposes classes designed around database tables, you always need some additional components to perform access to those classes in a way that is consistent with business rules, yet your domain model class library has public classes with full get and set access, direct access, to the properties. In this case, there is no guarantee that consumers of the domain model will not be making direct, unfiltered access to the properties, with the concrete risk of breaking the integrity of data and model. A better scenario is when the only way to access the data is mediated by an API, and the API is exposed by the objects themselves. So the API is part of the model. It's not an external layer of code you may or may not invoke. The containers of business logic are not external components; the business logic, that's the point, is built right into the objects. So you don't set properties; you rather invoke methods and alter the state of objects because of the actions you invoked. This is a more effective way to deal with and model the real world.
Domain Model as a Domain API
Let's see how to rewrite the Match class to make it closer to the real world. The class we have here in the first place has a specific constructor. It no longer has a default parameterless constructor; the constructor now forces any possible consumer of the class to indicate the minimum information that the class requires, the ID and the names of the teams. Without this minimum information, it won't be possible to have a fresh new instance of the Match class. Then we have properties like Id, Team1, Team2, State, and CurrentScore, plus two more properties, IsBallInPlay and CurrentPeriod, that in our previous version of the class were incorporated in the Score class. Now IsBallInPlay and CurrentPeriod have become features, attributes, of the match and not of the score. But there is another aspect that is a lot more interesting to focus on, the qualifier of the setter methods for all these properties. As you can see, all of them are either private or internal. This means that there is no way for external callers of this version of the Match class to directly set or change the state, the score, the period, or whether the ball is in play. No aspect of the internal state of the Match class can be changed through direct access. This is a first great achievement. But the most important part of the story is how we can enable consumers of the class to alter the state. This is strictly related to the business role we expect the Match class to play in the context of the model and in the context of the application we are writing. Now, suppose that we are going to use this class in the context of a scoring application that the referee or the assistant of the referee is using to create dynamically, as the match progresses, the scorecard, the final report of the match.
If you look at the language of the business, if you look at the ubiquitous language for this particular scenario, you know that a match starts, a match ends, periods start and end, and goals are scored by players or, more in general, to keep it simple in this example, just teams. So there is in the Match class a behavior section where I listed all of the public methods that a caller of this class can use in order to describe the things going on in the session of the application. So there is a method called Start, and the implementation of the Start method is very simple. All it does is set internally the state of the match to MatchState.InProgress. Analogously, the Finish method changes the State to finished, and when we call the Goal method all that we do is look at the Id of the team that scored the goal, create a fresh new instance of the Score value type, and assign internally the new score to the CurrentScore property of the match. Analogously, StartPeriod and EndPeriod just increment the current period up to 4 (four is assumed in the demo to be the maximum number of periods for this particular game), and they set the IsBallInPlay Boolean property to true or false to reflect that a new period just started or the current period just ended. Let's switch now to the Score class. The Score class is a lot simpler, and it has been implemented to be a value type, a real value type. It has a constructor that allows you to indicate the number of goals for both teams, and it has, and only has, TotalGoals1 and TotalGoals2, two properties to indicate the goals that each team has scored. There is no way for external callers to change TotalGoals1 or TotalGoals2 directly, and there is no method to change the goals either. The score is an immutable kind of type, and it's only the match that can create a new instance and set the CurrentScore property with this new instance. Because this is .NET, we have overrides for the Equals and GetHashCode methods.
The Equals method overridden in this way just returns true if the two Score objects have the same values in TotalGoals1 and TotalGoals2, so it recognizes two Score objects as the same when their goal counts match. The override of GetHashCode is an aspect specific to the .NET Framework: whenever you override Equals, you should also override GetHashCode. And finally, now that we have a Match class that exposes a business-oriented kind of behavior, let's see what kind of unit tests we can have. A new Match, Home versus Visitors, and then we can have a sequence of actions that are very, very close to the flowchart you can get from domain experts. This is the kind of code that you can even take and discuss face-to-face with domain experts with a reasonable certainty they will understand what you're talking about. So we have: a match Starts, then StartPeriod, the Home team scores two goals, EndPeriod, Start of the second period, the Visitors score, End, Start, End, Start, Goal, End, Finish, and then Assert.AreEqual: the new Score is 3 to 1.
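The behavior-centric version of the two classes might be reconstructed as follows. This is my own reading of the walkthrough, not the course's demo code; details such as the 4-period limit and the team-numbering convention are assumptions.

```csharp
public enum MatchState { NotStarted, InProgress, Finished }

public class Match
{
    public const int MaxPeriods = 4;   // assumed maximum for this game

    public Match(int id, string team1, string team2)
    {
        Id = id;
        Team1 = team1;
        Team2 = team2;
        State = MatchState.NotStarted;
        CurrentScore = new Score(0, 0);
    }

    // Private setters: no external caller can alter the state directly.
    public int Id { get; private set; }
    public string Team1 { get; private set; }
    public string Team2 { get; private set; }
    public MatchState State { get; private set; }
    public Score CurrentScore { get; private set; }
    public bool IsBallInPlay { get; private set; }
    public int CurrentPeriod { get; private set; }

    // The behavior section: the only way to change the internal state.
    public void Start() { State = MatchState.InProgress; }
    public void Finish() { State = MatchState.Finished; }

    public void StartPeriod()
    {
        if (CurrentPeriod < MaxPeriods)
            CurrentPeriod++;
        IsBallInPlay = true;
    }

    public void EndPeriod() { IsBallInPlay = false; }

    public void Goal(int teamId)
    {
        var goals1 = CurrentScore.TotalGoals1 + (teamId == 1 ? 1 : 0);
        var goals2 = CurrentScore.TotalGoals2 + (teamId == 2 ? 1 : 0);
        CurrentScore = new Score(goals1, goals2);   // new immutable instance
    }
}

// Immutable value type: fully identified by its values.
public class Score
{
    public Score(int goals1, int goals2)
    {
        TotalGoals1 = goals1;
        TotalGoals2 = goals2;
    }

    public int TotalGoals1 { get; private set; }
    public int TotalGoals2 { get; private set; }

    public override bool Equals(object obj)
    {
        var other = obj as Score;
        return other != null &&
               other.TotalGoals1 == TotalGoals1 &&
               other.TotalGoals2 == TotalGoals2;
    }

    public override int GetHashCode()
    {
        return (TotalGoals1 * 397) ^ TotalGoals2;
    }
}
```

A unit test then reads almost like the domain expert's flowchart: create a new Match for Home versus Visitors, call Start, StartPeriod, Goal, and so on in order, and finish with Assert.AreEqual(new Score(3, 1), match.CurrentScore).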
Aggregates and Value Types w/ DEMO
In the beginning, when you start going through the list of requirements, whether you are in a classic analysis session or in an event storming session, you just find entities and value types along the way. As you go through, you may realize that some entities often go together, and together they fall under the control of one particular entity. When you get this, you probably have crossed the boundary that defines an aggregate. Among other things, aggregates are useful because once you use them you work with fewer, coarser-grained objects and with fewer relationships, just because several objects may be encapsulated in a larger, bigger container. And well-identified aggregates greatly simplify the implementation of the domain model. An aggregate protects as much as possible the graph of encapsulated entities from outside access and guarantees that contained objects are always in a valid state according to the applicable business rules. For example, if the business has a rule that requires that no detail of the order can be updated once it has shipped, the aggregate must offer a public API that makes this update impossible to achieve in code. But how would you identify which objects form an aggregate? Unfortunately, there are no mathematical rules for this. Actual boundaries of aggregates are determined only and exclusively by analysis of the business rules. So it turns out that two architects may perhaps define different aggregates for the same domain, and it can also happen that the same domain in a different business context may suggest architects use different sets of aggregates. It's essentially about how you intend to design the business domain and how you envision it. For example, a customer would likely have an address, and customer and address are two entity classes for sure. Do they perhaps form an aggregate? It depends on whether the address as an entity exists only to be an attribute of the customer.
If so, they form an aggregate. Otherwise, they are distinct aggregates that work together. An aggregate root object is the root of the cluster of associated objects that form the aggregate. An aggregate root has global visibility throughout the domain model and can be referenced directly. It's the only part of the aggregate that can be referenced directly, and it has a few responsibilities too. In particular, it has to ensure that encapsulated objects are always in a consistent state business-wise. It has to take care of persistence for all of the encapsulated objects, and it has to cascade updates and deletions through the graph. And more than everything else, the root has to guarantee that access to encapsulated objects is always mediated and only happens through navigation from the root. But a moment ago, I mentioned persistence. Most of the code changes required to implement the responsibilities of an aggregate root occur at the level of services and repositories, so well outside the realm of the domain model. Each aggregate root has its own dedicated repository service that implements consistent persistence for all of the objects. But in terms of concrete code, what does it mean to be an aggregate root? Well, let's just find out. An aggregate is essentially a logical concept. When you create a class that behaves as an aggregate, you just create a plain class. I would even say that you just have your entity classes, and then you upgrade one, or a few, of those to the rank of an aggregate, but that mostly depends on how you use them; being an aggregate is an aspect of the overall behavior and usage of the class rather than a feature you code directly. However, especially when you have a very large domain model, it could be a good idea to use something that identifies at the code level that some classes are aggregates, and this can be something very simple and plain, in some cases just a marker interface you can call IAggregate.
The interface can be a plain marker interface, that is, an interface with no members, or you can add some common members you expect to be defined on all aggregates. For example, in this demo I'm using an Id property, the key that makes the aggregate unique. In this demo, I'm using a Guid type to define the key. And then the IAggregate interface is implemented in an abstract class, which you can call Aggregate, that just implements the interface. Now, in most domain models you have to deal with individuals and organizations. This is not the case in every domain model, but it's a very common scenario. When this happens, you might want to look into the party pattern, which means that you have a class, call it Party, which inherits from Aggregate and just defines properties that are common to both individuals and organizations. And then you define, in a typical model, classes like Person, derived from Party, which adds FirstName, LastName, and DateOfBirth, and Company, which may have something like a CompanyName and maybe a list of addresses. It's interesting to notice the way in which classes in a domain model, aggregates, are initialized. Typically in an object-oriented language you use constructors. Constructors work; there's nothing really bad with constructors in terms of functional behavior, but factories are probably better, because of the expressivity that factories allow you to achieve. In particular, I suggest you have a static class within an aggregate, call it Factory, and this static class has a bunch of static public methods, CreateNew, CreateNew with an address, and as many methods as there are ways for you to create new instances of a given aggregate. The big difference between constructors and factories is essentially the fact that on a factory class you can give a name to the method that returns a fresh new instance of the aggregate, and this makes it far easier to read the code.
When you read the code back, the name of the factory method tells you the real reason why you are getting a new instance of that class at that point.
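The marker interface, the abstract base, the party pattern, and the factory idea could be sketched together like this. A minimal sketch under the assumptions described above; the shared Party property and the factory signatures are my own illustrations.

```csharp
using System;

// Marker interface with one common member: the identity key.
public interface IAggregate
{
    Guid Id { get; }
}

public abstract class Aggregate : IAggregate
{
    public Guid Id { get; protected set; }
}

// Party pattern: common base for individuals and organizations.
public abstract class Party : Aggregate
{
    public string Email { get; protected set; }   // hypothetical shared property
}

public class Person : Party
{
    public string FirstName { get; private set; }
    public string LastName { get; private set; }
    public DateTime DateOfBirth { get; private set; }

    protected Person() { }   // not constructed directly from outside

    // The factory gives a name, and hence a reason, to each way of
    // obtaining a new instance.
    public static class Factory
    {
        public static Person CreateNew(string first, string last)
        {
            return new Person
            {
                Id = Guid.NewGuid(),
                FirstName = first,
                LastName = last
            };
        }
    }
}
```

Calling Person.Factory.CreateNew("Dino", "Esposito") reads better than a bare constructor call because the method name states why the instance exists.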
Classes in the domain are expected to be agnostic of persistence, yet the model must be persisted. Domain services are a companion part of the domain layer architecture that coordinates persistence and other dependencies required to successfully implement the business logic. Domain services are classes whose methods implement the domain logic that doesn't belong to a particular aggregate and most likely spans multiple aggregates. Domain services coordinate the activity of the various aggregates and repositories with the purpose of implementing all business actions, and domain services may consume services from the infrastructure, such as when they need to send an email or a text message. Domain services are not arbitrary pieces of code you create because you think you need them. Domain services are not helper classes. All actions implemented by domain services come straight from requirements and are approved by domain experts. But there's more. Even names used in domain services are strictly part of the ubiquitous language. Let's see a couple of examples. Suppose you need to determine whether a given customer at some point reached the status of a gold customer. What do requirements say about gold customers? Let's suppose that a customer earns the status of gold after she exceeds a given threshold of orders on a selected range of products. Nice. This leads to a number of distinct actions and aspects. First, you need to query orders and products to collect the data necessary to evaluate the status. As data access is involved at this stage, this is not the right job for an aggregate to do. The final piece of information, whether or not the customer is gold, is then set as a Boolean value, typically on any new instance of the Customer class. The overall logic necessary to implement this feature, as you can see, spans multiple aggregates and is strictly business oriented. But here is another example. Booking a meeting room.
What do requirements say about booking a room? Booking requires, for example, verifying the availability of the room and processing the payment. We have two options here, different, but equally valid from a strictly functional perspective. One option is using a booking domain service. The service will have something like an Add method that reads the member's credit status, checks the available rooms, and then based on that decides what to do. The other option we have entails having a booking aggregate. In this case, the aggregate may encapsulate entities like room and member objects, which, in other bounded contexts in the same application, for example the admin context, may be aggregate roots themselves. The actual job of saving the booking in a consistent manner is done in this case by the repository of the aggregate. Repositories. What's a repository? In domain-driven design, a repository is just the class that handles persistence on behalf of entities, and ideally aggregate roots. Repositories are the most popular and most used type of domain service. A repository takes care of persisting aggregates, and you have one repository per aggregate. Assemblies with repositories have a direct dependency on data stores. Subsequently, a repository is just the place where you actually deal with things like connection strings and where you use SQL commands. You can implement repositories in a variety of ways, and you can find a million different examples out there, each claiming to be the best and most accurate way of writing repositories. I'd like to take a low-profile position here. There's nearly no wrong way to write a repository class. You typically start from an IRepository generic interface and decide whether you'd like to have a few common methods there. But this, like many other implementations, is just arbitrary; it's a matter of preference and choice.
In domain-driven design, in summary, a repository is just the class that handles persistence on behalf of aggregate roots, period.
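One possible, and deliberately not "the", shape for such a repository contract is sketched below. The member names are my own choices; a real implementation would typically wrap Entity Framework and a connection string rather than a dictionary.

```csharp
using System;
using System.Collections.Generic;

public interface IAggregate
{
    Guid Id { get; }
}

// A generic repository contract: one repository per aggregate root.
public interface IRepository<T> where T : IAggregate
{
    T Find(Guid id);
    void Save(T aggregate);     // add or update the whole graph
    void Delete(T aggregate);   // cascade deletion through the graph
}

// A trivially simple in-memory implementation, just to fix the idea.
public class InMemoryRepository<T> : IRepository<T> where T : IAggregate
{
    private readonly Dictionary<Guid, T> _store = new Dictionary<Guid, T>();

    public T Find(Guid id)
    {
        T item;
        _store.TryGetValue(id, out item);
        return item;
    }

    public void Save(T aggregate)
    {
        _store[aggregate.Id] = aggregate;
    }

    public void Delete(T aggregate)
    {
        _store.Remove(aggregate.Id);
    }
}
```

Whether Find returns null or throws, whether Save splits into Add and Update, and so on are exactly the kinds of arbitrary choices the module mentions.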
Events in the Business Domain
Events in the context of the domain layer are becoming increasingly popular, so the question becomes: should you consider events then? First and foremost, events are optional, but they are sometimes just a more effective and resilient way to express the intricacy of some real-world business domains. Imagine the following scenario. In an online store application, an order is placed and is processed successfully by the system, which means that the payment is okay, the delivery order was passed to and received by the shipping company, and the order was then generated and inserted in the system. Now what? Let's suppose that the business requirements want you to perform some special tasks upon the creation of the order. The question now becomes: where would you implement such tasks? The first option is just concatenating the code that implements the additional tasks to the domain service method that performed the order processing. You essentially proceed through the necessary steps to accomplish the checkout process, and if you succeed you then, at that point, execute any additional tasks. It all happens synchronously and is coded in just one place. What can we say about this code? Well, it's not really expressive. It's essentially monolithic code, and should future changes be required, you would need to touch the code of the service to implement them, with the risk of making the domain service method quite long and even convoluted. But there is more. This fact might even be classified as a subtle violation of the ubiquitous language. The word when you may find in the language typically refers to an event and to actions to take when a given event in the business is observed. What about events then? Events would remove the need to have all the handling code in a single place and also bring a couple of other non-trivial benefits to the table. The action of raising the event is distinct from the action of handling the event, which might be good for testability, for example.
And second, you can easily have multiple handlers that deal with the same event independently. The event can be designed as a plain class with just a marker interface to characterize it, and this could be, as in the demo, IDomainEvent. This is analogous to event classes as we find them in the .NET Framework. To be honest, you could even use EventArgs, the .NET Framework root class for events, as the base class if you wish. Not using .NET native classes is mostly done because of the ubiquitous language and to stay as close as possible to the language of the business. And then, once you have this event class defined, in the checkout domain service method, after you've gone through all of the business steps, you just raise the event with all the necessary information you want to pass along. Yes, but how would you dispatch events? There's a bus. The bus is an external component, typically part of the infrastructure, and it is acceptable in general that a domain model class library has a dependency on the infrastructure layer. The bus allows listeners to register and notifies them transparently to the code. The bus can be a custom class you create, or it can be some sort of professional, commercial class like NServiceBus or maybe Rebus. There is more flexibility in your code when you use events, as you can even record events and log what happened, and you can add and remove handlers for those events quite easily. Events are gaining more and more importance in software design and architecture these days, well beyond the domain model as a way to express the business logic. Events are part of the real world and have to do with business processes much more than they have to do with business entities. Events help significantly to coordinate actions within a workflow and to make use case workflows a lot more resilient to changes.
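A domain event and a minimal in-process bus could be sketched as follows. The names IDomainEvent, OrderCreatedEvent, and Bus follow the demo's description; the bus body is my own stand-in for infrastructure such as NServiceBus or Rebus.

```csharp
using System;
using System.Collections.Generic;

// Marker interface that characterizes domain events.
public interface IDomainEvent { }

// A plain event class carrying the information to pass along.
public class OrderCreatedEvent : IDomainEvent
{
    public OrderCreatedEvent(Guid orderId)
    {
        OrderId = orderId;
    }

    public Guid OrderId { get; private set; }
}

// Minimal in-process bus: listeners register, the raiser stays unaware
// of who handles the event or how many handlers there are.
public static class Bus
{
    private static readonly List<Action<IDomainEvent>> Handlers =
        new List<Action<IDomainEvent>>();

    public static void Register(Action<IDomainEvent> handler)
    {
        Handlers.Add(handler);
    }

    public static void Raise(IDomainEvent theEvent)
    {
        foreach (var handler in Handlers)
            handler(theEvent);   // each handler reacts independently
    }
}
```

In the checkout domain service, after all the business steps succeed, the method would end with something like Bus.Raise(new OrderCreatedEvent(order.Id)) instead of a monolithic tail of extra tasks.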
This is the key fact that is today leaning towards a different supporting architecture, event sourcing, that we will look at in the rest of the course as an alternative to the domain model supporting architecture.
The domain model pattern is often contrasted with another pattern known as the anemic domain model. For some reason, the anemic domain model is also considered a synonym of an anti-pattern. Is it true? Given that software architecture is the triumph of shades of gray and nothing is either black or white, I personally doubt that today the anemic domain model is really an anti-pattern. In an anemic domain model, all objects may still match the conventions of the ubiquitous language and some of the domain-driven design guidelines for object modeling, like value types over primitive types and relationships between objects. The inspiring principle of the anemic domain model is that you have no behavior in entities, just properties, and all of the required logic is placed in a set of service components that altogether contain the domain logic. These services orchestrate the application logic, the use cases, and consume the domain model and access the storage. Again, is this really an anti-pattern today, with the programming tools of today? Let's consider Entity Framework and Code First. When you use Code First, you start by defining an object model. The model you create, is it representative of the domain? I'm not sure. I'd say that it is rather a mix of domain logic and persistence. The model you create with Code First must be used by Entity Framework, so Entity Framework must like it. To me, this sounds more like a persistence model than a domain model. Sure, you can add behavior, that is, methods, to the classes in the Code First model, but there is no obvious guarantee that you will be able to stay coherent with the requirements of the ubiquitous language and still keep Entity Framework happy. And in case of conflicts, because you still need persistence, Entity Framework will win, and compromises will be in order.
The database at the end of the day is part of the infrastructure; it doesn't strictly belong to any model you create, but because of the persistence requirement it is still a significant constraint you cannot ignore. An effective persistence model is always necessary because of the functions to implement and because of performance. Sometimes, when compromises are too expensive, you might want to consider having a second, distinct model in addition to the persistence one, so you distinguish between the domain model and the persistence model and use adapters to switch between the two. As an alternative, you can completely ignore the domain model, keep the persistence model you create with Code First DB-friendly, and then use patterns other than the domain model pattern to organize the business logic of the application. In the end, implementing the domain model pattern is not at all a prerequisite for doing good domain-driven design. And the model you end up with when using Code First and Entity Framework is hardly a domain model. It is a lot more anemic than you may think at first.
Beyond Single All-encompassing Domain Models
This is close to being a Murphy's law: if a developer can use an API the wrong way, he will. How to avoid that? With proper design, I would say. All of the facts explained in Eric Evans' book about domain-driven design, written about a decade ago, carry the word object and implicitly refer to an object-oriented world, the domain model. So far in this course I have tried to push hard the point that the foundation of domain-driven design is agnostic of any development paradigm, in much the same way it is agnostic of the persistence model. I believe that the real direction we're moving in today is pushing the very idea of a domain model, a single, all-encompassing model for the entire business domain, to the corner. I rather see that it is more and more about correctly identifying and rendering business processes with all of their related data and events. You can certainly do that using classes that, like aggregates, retain most of the business logic and events, so you can certainly model business processes using the object-oriented paradigm. But, and that's really interesting, you can also do that today using functional languages. If you search around, you will find a lot of references already about using F# to implement business logic. Many of the references are still vague, but some points emerge clearly. With a functional approach, for example, we don't need to go about coding restrictions into a domain model ourselves, such as using value types; that's the norm in a functional language. And also, composition of tasks via functions often results naturally in code so simple that it can even be read and understood by domain experts. The foundation of domain-driven design, at the very end of the day, is the ubiquitous language as a tool to discover the business needs and come up with a design driven by the domain, in which the presentation places orders, order commands are executed, and somewhere, somehow, data is saved. 
For this way of working, for this model, more effective practices and architectures are emerging. One is CQRS, which I'll discuss in the next module.
The CQRS Supporting Architecture
Hi everybody, welcome to module five of Pluralsight's course on modern software architecture, Dino Esposito speaking. About a decade ago, when Eric Evans first introduced domain-driven design, the world of software was such that an object-oriented model fully expressing the intricacies of the business domain sounded like a really compelling idea. But years of practice have instead proved that the task was more difficult than initially estimated. So, for years as developers we faced a lot of complexity, and we thought that it was all about the inherent complexity of the business model. So with domain-driven design, we focused on practices to facilitate the learning of the business domain, and now we are realizing that there is more to it, and we are seeing a new approach emerging. Today there is the classic way of expressing the business domain through an all-encompassing, object-oriented model, and there is a new way. The new way goes under the name of CQRS, which is the acronym for command query responsibility segregation, and it's all about separating commands from queries using distinct application stacks. This module discusses the foundation of CQRS and identifies three flavors of it that, in growing order of complexity and sophistication, I like to call Regular CQRS, Premium CQRS, and CQRS Deluxe.
CQRS at a Glance
Over the past decade, many embarked on projects following the guidelines of domain-driven design. Some projects have been successful, some not. The truth is that a single, all-encompassing object model to cover just any functional and non-functional aspect of a software system is a wonderful utopia, and sometimes it's even close to being an optical illusion. In module four, I briefly presented two ways to design a class for a sport match: one that incorporates business rules for the internal state and exposes behavior, and one that was a mere data transfer object, trivial to persist, but devoid of business logic and with a public, unfiltered read/write interface. In a single, all-encompassing object model perspective, the former class, the one with the rich behavior and methods like Start, Finish, and Goal, is just perfect, as it faithfully describes the commands that alter the state of the system and the events observed in the real world. But are you sure that the same class would work well in a plain query scenario? When your user interface just requires showing the current score of a given match and that's it, is such a behavior-rich class appropriate? Obviously, another, simpler match class would be desirable to have, especially if it acts as a plain data transfer object. So the question is, should you have both, or should you manage to disable access to methods in query scenarios? In a single domain model scenario, neither of the classes shown is perfect, even though both can be adapted and used in a system that just does what it's supposed to do. The behavior-rich class is great in use cases in which the state of the system is being altered. The other is an excellent choice, instead, when the state of the system is being reported. The behavior-rich class requires fixes to fully support persistence via an O/RM, and if used in a reading scenario, it would expose behavior to the presentation layer, which could be risky. 
The other class has no business rules implemented inside and, because of its public getters and setters, is at risk of generating inconsistent instances. Defining a single class for commands and queries may be challenging, and realistically you can get there only at the cost of compromises; but compromises may be acceptable. Treating commands and queries differently, through different stacks, however, is probably a better idea, and this is CQRS: command and query responsibility segregation. A command is then defined as an action that alters the state of the system and doesn't return data. A query, instead, is an action that reads and returns data and doesn't alter the state of the system in any way. All that CQRS is about, then, is implementing the final system so that the two responsibilities are distinct and each has its own implementation. This is actually nothing new. Back in the 80s, Bertrand Meyer already said that in software any action can either be a command or a query, but never both. In terms of layers and system architecture, with CQRS we move from a classic multilayer architecture with presentation, application, domain, and infrastructure, as discussed in modules three and four, to a slightly different layout. In the new layout, presentation and infrastructure are, for the most part, unchanged. The application layer is crucial to have in the command stack, but can be considered optional in the query stack. The huge difference, however, is in the implementation of the domain layer. In CQRS, you may need to design a domain model only for the command stack, and you can rely on plain DTOs and direct data access in the query stack. Why, then, is CQRS good for architects to consider? Major benefits are that the separation of stacks allows for parallel and independent development, which means reuse of skills and people, and freedom to choose the most appropriate technologies without constraints. 
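To make Meyer's command/query split concrete, here is a minimal C# sketch applied to the match example; the MatchScore class and its members are illustrative, not taken from the course demo.

```csharp
using System;

// Hypothetical score keeper illustrating command/query separation:
// commands alter state and return nothing; queries return data and have no side effects.
public class MatchScore
{
    private int _home;
    private int _visitors;

    // Command: alters the state of the system, returns no data.
    public void Goal(bool homeTeam)
    {
        if (homeTeam) _home++; else _visitors++;
    }

    // Query: reads and returns data without altering state in any way.
    public string CurrentScore() => $"{_home}-{_visitors}";
}
```

Note that `Goal` returns `void` and `CurrentScore` touches no fields: each member is either a command or a query, never both.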
In addition, separation of stacks enables distinct optimization of each stack and lays the ground for scalability. As a pleasant side effect, the overall design of the software becomes simpler, and refactoring and enhancements come easier, because of the limited common ground existing between commands and queries. Quite simply, CQRS is a concrete implementation pattern, and it is probably the most appropriate for nearly any type of software application today, regardless of the expected lifespan and complexity. A point that often goes unnoticed is that there is not just one way of doing CQRS. In particular, for the purposes of this module, I would recognize at least three different flavors of CQRS that I like to nickname after common marketing rules for hotel rooms and drinks: Regular, Premium, and Deluxe.
CQRS is not just for overly complex applications, even though it's there that it shines: concurrent, collaborative, high-traffic, high-scale systems. The principle behind CQRS works well even with plain old CRUD applications: create, read, update, delete applications. When it comes to this, to make an existing CRUD system CQRS-aware, the application architecture changes slightly. Instead of a monolithic block that takes care of database access for reading and writing, you now have two distinct blocks: one for commands and database writes, and one for queries and database reads. If you are using Entity Framework or another object-relational mapper framework to perform data access, all you do is duplicate the context object through which you go down to the database. So you have one class library and model for commands, and one for queries. Having distinct class libraries is only the first step. Let's examine the command and query stack composition in more detail. In the command stack, you just use the pattern that represents the best fit. So, if you have existing code to integrate or reuse, then there is no reason to sacrifice all or part of it in the name of some domain model; you just stick to it. The same holds if your company has bought licenses of some product or made investments in some technology. No need to sacrifice anything to a domain model, not to mention skills. If your developers know how to do something very well, it might not be a wise idea to force them away to use something else, or at least it has to be a very, very careful move. With CQRS, you refresh the architecture, gaining in simplicity and scalability, but without the need of rethinking the way you write actual code. The business logic can be expressed in a number of ways, as we discussed in module three of this class. 
So, even in the simplest CRUD-like application, you can certainly use the domain model pattern in a CQRS design, and in doing so, you put your business rules right into the classes of the model. But you can likewise use other patterns, such as a table module or even a fully procedural, but still effective, transaction script pattern. In the end, a good domain-driven analysis is one thing; having a domain model is quite another. As for the read stack, you essentially do the same and just use the code that does the job, whatever it is. So you can use the O/RM of choice, ADO.NET, ODBC, micro-frameworks, even stored procedures, whatever can bring data back the way you want. LINQ can help, in the sense that it can easily bring IQueryable objects right into the presentation layer for direct data binding. Know that an IQueryable object describes a database query, but won't execute it until you call ToList or another analogous method. This means that you can have IQueryable objects carried from the bottom of the system up to the presentation, and you can resolve the query right into the view model classes expected by the front-end. A nice tip in this context is using, in the read stack of a CQRS solution, a read-only wrapper for the Entity Framework DbContext. In this way, when a query is performed, the presentation and application layers only have IQueryable data, and write actions, like those you can perform through SaveChanges, just cannot be performed. The idea is the following: you use a simple Database class instead of the native Entity Framework DbContext, and wrap, as a private member of this new Database class, the same DbContext object you get from Entity Framework. Next, all you do is return generic DbSet objects as IQueryable, rather than as plain DbSet of T. That's it. It is a simple, but really, really effective trick for you to use.
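The trick can be sketched like this. Since the transcript doesn't include the actual demo code, this is a self-contained approximation in which a small in-memory class stands in for the Entity Framework DbContext; the shape of the wrapper is the same, though: the context is a private member and only IQueryable sequences are exposed, so nothing like SaveChanges is reachable from the outside.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for a persisted entity (hypothetical Match type).
public class Match
{
    public int Id { get; set; }
    public string Score { get; set; }
}

// Stand-in for the EF DbContext: a mutable set plus a SaveChanges method.
public class CommandDbContext
{
    public List<Match> Matches { get; } = new List<Match>
    {
        new Match { Id = 1, Score = "2-1" }
    };
    public void SaveChanges() { /* would persist pending changes */ }
}

// Read-only facade for the query stack: it wraps the context as a private
// member and returns IQueryable<T> instead of the mutable collection, so
// callers can compose queries but cannot reach SaveChanges.
public class ReadOnlyDatabase
{
    private readonly CommandDbContext _context = new CommandDbContext();
    public IQueryable<Match> Matches => _context.Matches.AsQueryable();
}
```

With the real Entity Framework, the private member would be the generated DbContext and the property would return `DbSet<Match>` upcast to `IQueryable<Match>`.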
CQRS Regular in Action
Let's see how to implement a CQRS design in the context of a plain, simple CRUD solution based on ASP.NET MVC. There is nothing really fancy or weird to do; it's all about rethinking a few things about requests and how they are processed within a website. The key thing in ASP.NET MVC is that when a request arrives, it is captured by a controller and then processed. If that request comes directly from the browser, which means the user pushed a button to post a form or clicked a link, then the controller must return something. If it doesn't, a blank view is displayed to users. If the request comes via Ajax, then the controller may even return void without affecting the user interface. From this, it sounds like CQRS works well with single-page applications, which is true, but it also may be adapted to work with plain postback sites. Architecturally speaking, here is the dependencies diagram for a website designed with CQRS in mind. This diagram is not a fake: it has been created in Visual Studio from the references of the sample project that comes with this module of the course. As you can see, the server module on the ASP.NET side calls into the read and command stacks. It also needs to reference external dependencies directly, typically Entity Framework, and the purple arrows show the distinct routes a request can take, being a query or a command. This is the project solution. You see ASP.NET references, and you also see external references, that is, Entity Framework. Everything is orchestrated from the application layer, which in turn responds to requests intercepted by controllers at the presentation level. Let's see the steps involved in a query and a command. For a query, the presentation yields to the application layer, which invokes methods in the read stack simply to get data. Once the data is back, it is stuffed into a view model for the controller to arrange an HTML view for the browser. 
On the command side, the presentation yields to the application layer, which invokes methods in the command stack simply to have things done. If a visual response is needed, then the presentation just redirects to the read stack. In this way, at the cost of an extra HTTP 302, we get two benefits. First, we get a clean design in which operations are separated and involve distinct binaries and possibly tiers. Second, we implement the Post-Redirect-Get pattern, which neutralizes the possibly nefarious effects of the last post action. When a user refreshes the page, in fact, the browser reiterates the last action. If it's a post, then you get the warning dialog box you see now. Sometimes it's really risky to have this, sometimes it is not; it depends on the application. For sure, it is confusing for most users. The Post-Redirect-Get pattern ensures that the last operation tracked by the browser is always a get, and because of that, users will never see this dialog box again. Let's take a look at the concrete implementation of a CQRS CRUD ASP.NET MVC website. When a request comes in and is handled, say, by the AdminController class, you see that the same request can come over an HttpGet or an HttpPost, and this is a simple discriminator for it to be a query or a command. So let's take a look at the DisplayRegister method, associated with the request to read about a registered item in this website. Here we have to perform a query, and adminService is the class that represents the application layer in this context; we call a method on it to just read data. When, instead, we post to register an information item that makes sense for the application, the PostRegister method will call another method on the application layer, Register, and after we have performed the command, we redirect back to the DisplayRegister method to have the view rendered back to the browser. 
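The Post-Redirect-Get shape of that controller can be sketched as follows. To keep the sketch self-contained, hypothetical stand-ins replace the ASP.NET MVC result types; in a real project, DisplayRegister would carry an [HttpGet] attribute, PostRegister an [HttpPost] attribute, and both would return `System.Web.Mvc.ActionResult`.

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for the MVC result types, just to show the controller shape.
public abstract class ActionResult { }
public class ViewResult : ActionResult { public object Model; }
public class RedirectResult : ActionResult { public string Action; }

public class AdminController
{
    // Stand-in for the application layer service described in the transcript.
    private readonly List<string> _registeredItems = new List<string>();

    // Query side ([HttpGet] in real ASP.NET MVC): reads data and returns a view.
    public ActionResult DisplayRegister()
        => new ViewResult { Model = _registeredItems.AsReadOnly() };

    // Command side ([HttpPost]): performs the action, then redirects to the GET
    // endpoint, so the browser's last tracked action is always a GET.
    public ActionResult PostRegister(string item)
    {
        _registeredItems.Add(item);   // execute the command
        return new RedirectResult { Action = "DisplayRegister" };
    }
}
```

The redirect is the extra HTTP 302 mentioned above: the command never renders HTML itself, it always hands off to the query side.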
Let's have a look at how, in the AdminService application layer class, calls to the command stack and the read stack are managed. In GetAdminViewModel, we see that the query is essentially run through the Database class defined in the read stack, and the Database class, as mentioned, is a plain wrapper around the Entity Framework DbContext class that only allows IQueryable objects to be returned, instead of the modifiable objects you would get by default when working directly with the Entity Framework DbContext. The Register method, instead, uses the regular, classic O/RM entry point, Entity Framework's DbContext: the db instance here has full access to collections of data as DbSet objects, and then there is the chance to save changes back to the data store of the application.
All applications are CRUD applications to some extent, but at the same time, not all CRUD applications are the same. In some cases, it may happen that the natural format of the data processed and generated by commands, the data that captures the current state of the system, is significantly different from the ideal way of presenting the same data to users. This is an old problem of software design, and we developers have solved it for decades by using adapters. In CQRS, just having two distinct databases sounds like an obvious thing to do if data manipulation and visualization needs require different formats. The issue becomes how to keep the two databases in sync. The golden rule of CQRS is that you always use the pattern for organizing the business logic that best fits your needs, and this is the golden rule for implementing command stacks. I tend to restrict command choices to using a domain model or transaction script. However, the very separation of commands and queries should inspire you to look at coding the command stack from a more task-oriented perspective. A command is just a statement that triggers a workflow, and as you don't have concerns about the data model, you can just optimize the workflow and use and persist data as comes natural. This means ad-hoc storage, and ad-hoc storage, depending on the scenario, may mean something like a relational schema, or schema-less data, NoSQL, or even, as an alternative or in addition, a data store that logs business events. Use the code that does the job remains the primary rule for coding the read stack of a CQRS Premium solution. That means using the data access technology of choice, whether an O/RM or other. And as it's just about queries, LINQ helps a lot, and so does the feature we discussed earlier in the module: using IQueryable of T objects instead of DbSet of T, if you are, of course, on the .NET Framework and using Entity Framework. 
If you feel that data preferably takes different shapes in the command and query stacks, then it mostly means that, whatever is the most effective way to store the data of the system for command purposes, for query purposes you want to have ready-made relational tables. Even when a NoSQL document database is optimal for commands, queries are still best served by a plain, classic relational set of tables. The issue is that having distinct data stores for commands and queries makes development simpler and optimizes both operations, but at the same time, it raises the problem of keeping the two stores in sync. For the time in which the data is not synced up, your application serves stale data, and the question is: is this acceptable? The dynamics of a CQRS Premium solution when two distinct data stores are used are the following. When a command executes, the command data store is updated. The state of the system is consistently saved within any transactional boundaries you need to have. At some point, in some way, the state of the system must be, so to speak, replicated in a format that makes it suitable for the read stack. There are many possible ways to keep the data stores in sync, and choosing one or the other just depends on what's best for the application. The first approach that comes to mind is making everything synchronous: every command just starts the sync-up operation at the end, as part of its transaction. But if synchronous is too much work and also represents a potential stumbling block for scalability and/or performance, then you can make the whole thing asynchronous and run it outside of the command transaction. If having stale data is acceptable to some extent, you can also schedule synchronization as a periodical job that reads the command store and updates the read store. And finally, the synchronization can be as lazy as possible, and even triggered on demand, for example, when a read comes in that may find stale data and you don't want to serve stale data. 
In terms of staleness of data, the sync option keeps data automatically in sync. The async approach keeps data eventually up-to-date, and does that in a matter of milliseconds. The scheduled approach assumes it is acceptable for the application to work with stale data for a few seconds, or hours, or even days, and the on-demand option is a bit of everything: it's up-to-date, but just when you need it.
CQRS Premium in Action
The dependencies diagram for the sample application that makes a more sophisticated use of the CQRS principles, what I have called CQRS Premium, is, as you can see, a bit more articulated than CQRS Regular. In particular, you notice the central server module, the website, that calls into the command and read stacks. In turn, the command and read stacks reference external libraries, notably Entity Framework, and also a shared helper library and the system infrastructure. The infrastructure in this case serves the purpose of incorporating and wrapping up the data access layer around Entity Framework, and possibly other services and frameworks as well; this is done to favor reuse and encapsulation of code. The Visual Studio project also offers another view of the list of dependencies. As you can see, in the sample application I'm also using ASP.NET SignalR and OWIN middleware. To some extent, those can be considered part of the infrastructure, too. Now let's see how to implement a CQRS Premium design, always in the context of a layered system. The actual demo that comes with the module still uses an ASP.NET MVC website, but there is no loss of generality in that: the architecture of the solution would be the same even with another rich or mobile client, and even on a non-.NET stack. So, the user interface places a request, and the application layer sends the request for action down to the command stack. To see the value of a Premium approach, imagine that the application is essentially an event-based application, for example, a scoring app in which the user clicks buttons whenever some event is observed and leaves the business logic the burden of figuring out what to do with the notified event. The command stack may do some processing and then saves just what has happened. The data store ends up being, then, a sort of log of the actions triggered by the user. This form of data store also makes it very easy to implement undo functionality. 
You just delete the last recorded action from the log, and that's it. The data store, the log of actions, can be relational, or it can also be a NoSQL document database; it's basically up to you and your skills and preferences. Synchronization, whether it happens within or outside the current business transaction, consists of reading the list of recorded actions for a given aggregate entity, in this case the match, and extracting just the information you want to expose to the UI listeners, for example, a live scoring application that just wants to know about the score of a match and doesn't care about who scored the goals and when. At this point, the read data store is essentially one or more snapshot databases for clients to consume as quickly and easily as possible. Let's start our analysis of the CQRS Premium implementation from the command stack, and precisely from what happens when the front-end sends a request for action down to the command stack. Because everything passes through a Controller class, we need to take a look at all the methods in a Controller class that can handle post requests. As you can see, we assume to have an interface in which several submit buttons are available in the form, and whenever the user clicks one of those buttons, a request for action is posted. It is interpreted in the controller as an event that has happened, and we receive from the user interface a value that identifies, as a number, the type of event that occurred, and specifically the button that the user clicked in the user interface. Now, the controller passes the burden of executing the action down to the matchService class, which represents the application layer. 
The ProcessAction method on the matchService class, as you can see, is just a long list of cases within a switch statement, but as we scroll through the implementation of the ProcessAction method, all we see is that, regardless of the event, what we do is essentially log the event using the helper class EventSourceManager. The differences between the various events are only the number of parameters we pass and, of course, the values of those parameters. Let's have a look at the EventSourceManager class. As you can see, the class is made, for the most part, of a list of overloads of the Log method, and if we expand one of these Log methods, we'll see exactly what happens. The first instruction we see is a form of adaptation: the event carries data, and this data is assembled together into a matchEvent class that we can persist as-is in the EventRepository. Now, if we were using a NoSQL database, the same event class, the same bunch of data we receive from the user interface, could be persisted as-is, but because in the demo we are using a SQL Server table to persist events, we have to essentially prepare a record that can be easily persisted in SQL Server, and this is exactly what these two lines of code are doing. After that, we need to sync up synchronously with the read model. We have just updated the command state, so the current state of the system is absolutely up-to-date, and we want to reflect this state for the presentation layer. So, we obtain a domain model class: the match variable represents an instance of a Match class that knows everything about the state of the current match, and we build it essentially by reading all the logged events for the specified matchId, obtaining a consistent instance of the Match class. Then we just call another helper class, the MatchSynchronizer, which essentially extracts information from the snapshot of the state and creates an ad-hoc snapshot for the presentation layer to read.
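The rebuild-from-events step can be sketched as follows; the event type names and the Match properties here are illustrative, not the actual classes from the demo, but the mechanics are the same: replay every logged event for a given matchId, in order, to obtain the current state.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative shape of a logged event; the demo's matchEvent class may differ.
public class MatchEvent
{
    public int MatchId;
    public string Type;   // e.g. "Start", "Goal", "Finish"
    public bool HomeTeam; // meaningful only for "Goal" events
}

public class Match
{
    public int Home { get; private set; }
    public int Visitors { get; private set; }
    public bool InProgress { get; private set; }

    // Rebuild the state by applying every logged event for the match, in order.
    public static Match ReplayFrom(IEnumerable<MatchEvent> log, int matchId)
    {
        var match = new Match();
        foreach (var e in log.Where(x => x.MatchId == matchId))
        {
            switch (e.Type)
            {
                case "Start":  match.InProgress = true;  break;
                case "Goal":   if (e.HomeTeam) match.Home++; else match.Visitors++; break;
                case "Finish": match.InProgress = false; break;
            }
        }
        return match;
    }
}
```

Once the Match instance is rebuilt, a synchronizer component can project just the score out of it into the read store, as the transcript describes.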
Message-based Business Logic
When you think about a system with distinct command and query stacks, inevitably the vision of the system becomes a lot more task-oriented. Tasks are essentially workflows, and workflows are a concatenated set of commands and events. A message-based architecture is beneficial as it greatly simplifies the management of complex, intricate, and frequently changing business workflows. However, such a message-based architecture would be nearly impossible to achieve outside the context of CQRS, which keeps the command and query stacks neatly separated. So, what is the point of messages? Abstractly speaking, a message can either be a command or an event. So in code, you usually start by defining a base Message class that defines a unique ID for the workflow, and possibly a timestamp to denote the time at which the message was received. Next, you derive from this base Message class additional classes denoting a command, which in the typical implementation is a message with a name, or an event, which is essentially just the notification of something that has happened; and to further derived event classes you can add properties that may help in retrieving information associated with that fact. An event carries data and notifies of something that has happened. A command is an action performed against the back-end that the user or some other system component requested. Events and commands follow rather standard naming conventions: a command is imperative and has a name like SubmitOrderCommand; an event instead denotes a thing of the past and is named like OrderCreated. In a message-based architecture, you render any business task as a workflow, except that instead of using an ad-hoc framework to define the workflow, or plain code, you determine the progress of the workflow by sending messages. The application layer sends a message and the command layer processes the message, in much the same way that early Windows operating systems worked for years. 
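The message hierarchy just described can be sketched like this; the order-related classes and their properties are illustrative examples of the naming conventions, not code from the course.

```csharp
using System;

// Base class for every message flowing through the system: a unique ID for
// correlating the workflow, and a timestamp for when the message was received.
public abstract class Message
{
    public Guid SagaId { get; set; } = Guid.NewGuid();
    public DateTime TimeStamp { get; set; } = DateTime.UtcNow;
}

// Commands are imperative: something the user or another component requested.
public class SubmitOrderCommand : Message
{
    public int OrderId { get; set; }
}

// Events denote a thing of the past: a fact that has happened, carrying data
// that helps listeners retrieve information associated with that fact.
public class OrderCreated : Message
{
    public int OrderId { get; set; }
    public DateTime CreatedOn { get; set; }
}
```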
When a message, whether a command or an event, is received, the command stack originates a task. The task can be a long-running, stateful process, as well as a single-action, stateless process. A common name for such a task is saga. Commands usually don't return data back to the application layer, except perhaps for some quick form of feedback, such as whether the operation completed successfully, was refused by the system, or the reason why it failed. The application layer can trigger commands following user actions, incoming data from asynchronous streams, or other events generated by previous commands. For a message-based system to work, some new infrastructure is required: the bus and an associated set of listeners are the main building blocks. The core element of a message-based architecture is the workflow. The workflow is the direct descendant of user-defined flowcharts. Abstracted to a saga instance, the workflow advances through messages: commands and events. The central role played by workflows and flowcharts is the secret of such an architecture: it is simple enough to be easily understood even by domain experts, because it resembles flowcharts, and it can also be understood by developers, because it is task-oriented and thus close to the real business and easy to mirror with software.
CQRS Deluxe is a flavor of command query separation that relies on a message-based implementation of the business tasks. The read stack is not really different from the other CQRS scenarios we have considered so far, but the command stack takes a significantly different layout: a new way of doing old things, but a new way that is hopefully a lot more extensible and resilient to changes. In this CQRS design, the application layer doesn't call out any full-fledged implementation of some workflow; it simply turns any input it receives into a command and pushes that to a new element, the bus. The bus generically refers to a shared communication channel that facilitates communication between software modules. The bus here is just a shared channel, and doesn't necessarily have to be a commercial product or an open source framework. It can also simply be your own class. At startup, the bus is configured with a collection of listeners, that is, components that just know what to do with incoming messages. There are two types of message handlers, called sagas and handlers. A saga is an instance of a process that is optionally stateful, maintains access to the bus, is persistable, and is sometimes long-running. A handler, instead, is a simpler, one-off executor of any code bound to a given message. The flowchart behind the business task is never laid out entirely. It is rather implemented as a sequence of small steps, each calling out the next or raising an event to indicate that it is done. As a result, once a message is pushed to the bus, the resulting sequence of actions is only partially predictable and may be altered at any time by adding and removing listeners to and from the bus. Handlers end immediately, whereas sagas, which are potentially long-running, will end at some point in the future, when the final message is received that ends the task the saga represents. 
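A home-made bus of the kind just mentioned can be surprisingly small. Here is a minimal in-memory sketch, my own illustration rather than the course demo: listeners register per message type, and the bus dispatches each incoming message to every matching listener. Real products add persistence, queuing, saga lifetime management, and much more.

```csharp
using System;
using System.Collections.Generic;

// Minimal in-memory bus: a dictionary from message type to registered listeners.
public class InMemoryBus
{
    private readonly Dictionary<Type, List<Action<object>>> _listeners
        = new Dictionary<Type, List<Action<object>>>();

    // Register a listener (a saga step or a handler) for messages of type T.
    public void Register<T>(Action<T> handler)
    {
        if (!_listeners.TryGetValue(typeof(T), out var list))
            _listeners[typeof(T)] = list = new List<Action<object>>();
        list.Add(m => handler((T)m));
    }

    // Dispatch a message to every listener registered for its type.
    public void Send<T>(T message)
    {
        if (_listeners.TryGetValue(typeof(T), out var list))
            foreach (var handle in list) handle(message);
    }
}
```

Usage is just `bus.Register<SomeCommand>(...)` at startup, then `bus.Send(new SomeCommand(...))` from the application layer.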
Sagas and handlers interact with whatever family of components exists in the command stack to expose business logic algorithms. Most likely, even though not necessarily, you'll have a domain layer here, with a domain model and domain services, according to the architecture we discussed in module four. The domain services, specifically repositories, will then interact with the data store to save the state of the system. The use of the bus also enables another scenario that we'll discuss in more detail in the next module: event sourcing. Event sourcing is a technique that turns detected and recorded business events into a true part of the data source of the application. When the bus receives a command, it just dispatches the message to any registered listeners, whether sagas or handlers. But when the bus receives an event from the presentation or from other sagas, it may first optionally persist the event to the event store, a log database, and then dispatch it to listeners. It should be noted that what I describe here is the typical behavior one expects from a bus when it comes to orchestrating the steps of a business task. As mentioned, CQRS Deluxe is particular just because of the innovative architecture of the command stack. The read stack, instead, just uses any good query code that does the job. That means your O/RM of choice, possibly LINQ, and ad-hoc storage, mostly relational. And the issue of stale data and synchronization we discussed earlier in the module is still present here; in the context of a CQRS Deluxe solution, the code that updates, synchronously or asynchronously, the read database can easily take the form of a handler.
CQRS Deluxe Implementation
At the end of the day, compared to a solution like the one we classified as premium, a CQRS Deluxe solution only has a richer and more sophisticated infrastructure. The infrastructure, in fact, has to provide bus capabilities and support for listeners and handlers of business logic commands and events. The bus, in particular, is a class that maintains internally a list of known saga types, a list of running saga instances, and a list of known handlers. The bus gets messages, and all it does is dispatch messages to sagas and handlers. Each saga and handler, in fact, will declare which messages it is interested in, and in this regard, the overall work of the bus is fairly simple. A saga is characterized by two core aspects. One is the command or event that starts the process. The other is the list of commands and events that the saga can handle. The resulting implementation of a saga class is then not rocket science. The class declares the messages it is interested in through multiple interfaces, and then its body is full of handle methods, one for each type of supported message. Each saga must be uniquely identified by an ID. The ID can be a number of things; it can be a GUID, or more likely it is the ID of the aggregate the saga is all about. In general, a saga is a process that involves some collection of entities relevant in the business context. Any combination of values that uniquely identifies the main actor in the process is anyway a valid identifier for the saga. Also, a saga might be stateful and need persistence. In this case, persistence is typically taken care of by the bus, and it involves saving the state of any aggregates associated with the process the saga implements. And finally, a saga might in some cases be stateless as well. In the end, a saga is what you need it to be. If you need it to be stateless, then the saga is a mere executor of orders brought by commands, or it just reacts to events.
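A minimal in-memory sketch of such a bus might look like the code below. The MiniBus class, the message types, and the dictionary layout are my own illustration of the description above, not the course's actual code; a real bus would also support plain handlers, persistence of saga state, and concurrency.

```csharp
using System;
using System.Collections.Generic;

// Demo: the starting message spawns a saga instance; later messages reach it by ID.
var bus = new MiniBus();
bus.RegisterSaga<StartCounting, CountingSaga>();
bus.Send(new StartCounting { SagaId = "c1" });
bus.Send(new Increment { SagaId = "c1" });
bus.Send(new Increment { SagaId = "c1" });
Console.WriteLine(CountingSaga.Totals["c1"]);   // 2

public class MiniBus
{
    private readonly Dictionary<Type, Type> _sagaStarters = new();   // starting message -> saga type
    private readonly Dictionary<string, object> _running = new();    // saga id -> running instance

    public void RegisterSaga<TStart, TSaga>() where TStart : SagaMessage where TSaga : new()
        => _sagaStarters[typeof(TStart)] = typeof(TSaga);

    public void Send(SagaMessage message)
    {
        // A starting message creates and stores a new saga instance under its ID.
        if (_sagaStarters.TryGetValue(message.GetType(), out var sagaType))
            _running[message.SagaId] = Activator.CreateInstance(sagaType)!;
        // Any message is then routed to the running instance with the matching ID.
        if (_running.TryGetValue(message.SagaId, out var saga))
            ((dynamic)saga).Handle((dynamic)message);
    }
}

public abstract class SagaMessage { public string SagaId = ""; }
public class StartCounting : SagaMessage { }
public class Increment : SagaMessage { }

public class CountingSaga
{
    public static readonly Dictionary<string, int> Totals = new();
    public void Handle(StartCounting m) => Totals[m.SagaId] = 0;
    public void Handle(Increment m) => Totals[m.SagaId]++;
}
```

Note how the saga ID, here a plain string, is what lets the bus route later messages to the right running instance, exactly as discussed above.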
The point behind CQRS Deluxe and sagas, in the end, is that it makes it far easier to extend an existing solution when new business needs and new requests come up. For this extra level of flexibility, you pay the cost of having to implement a bus and a more sophisticated infrastructure for the business logic. This is exactly what I have called so far the message-based approach. So, let's say you've got a new handling scenario for an existing event, or you just got a request for an additional feature. In this case, all you do is write a new saga or a new handler and then just register it with the bus. That's it. More importantly, you don't need to touch the existing workflows and the existing code, as the pieces of the workflow are, for the most part, independent from one another. Finally, a few more words about the bus. You can surely write your own bus class. Whether it is a good choice depends on the real traffic hitting the application, the optimizations, the features, and even the skills of the involved developers. For sure, for example, you might need at some point to plug into the bus some queuing and/or persistence agents. As an alternative to writing your own bus class, you can look into existing products and frameworks. A few examples are NServiceBus from Particular Software, Rebus from Rebus-org, and MassTransit from Pandora.
CQRS Deluxe Code Inspection
In an ASP.NET MVC application designed with the CQRS Deluxe approach, everything always starts from a request that hits the controller. So, for example, we have here a BookingController class. The sample application we are considering now is a booking application that users employ to book something like a sporting court, or perhaps a meeting room. When the request for a booking arrives, the controller maps that to an Add method, and the Add method ends up calling the application layer, passing all possible details, such as the ID of the resource, the length of the expected booking, the starting time, and the name of the user. Let's have a look at the booking service, the entry point in the application layer, because this is where the interesting thing that characterizes a CQRS Deluxe application shows up. There is a simple instruction, a constructor call that creates an instance of a class called RequestBookingCommand, and then the command object is sent to the bus. The RequestBookingCommand is a plain data transfer object and contains the information that needs to be passed to the handler for the command. Let's have a look now at the internal implementation of the bus. We have here in the sample application a class called InMemoryBus. This is a very simple implementation of the bus that doesn't support any form of persistence. Everything, all the sagas, all the handlers, all the running instances, is kept in memory. The InMemoryBus class implements an interface called IBus for the sake of design, and the interface has four methods: RegisterSaga, RegisterHandler, Send, and RaiseEvent. So what we expect to be the public interface of the bus is fairly clear. We use the interface of the bus to register saga types and handler types, and also to push a message and to raise an event. It's interesting to see what happens when we try to register a saga, aside from a bunch of lines of code for essential validation purposes.
All that happens here is that an instance of the saga type is added to the internal dictionary, and even simpler is the code we run to register a handler. And then the Send method and the RaiseEvent method, as you can see, do nearly the same, so they process the message internally, but when RaiseEvent is called, in addition, the event is persisted to the EventStore. The bus is configured at the startup of the application, for example, in this static class, BusConfig, that is called from within the application start event. And the initialization of the bus is fairly trivial, easy to understand: the bus class is instantiated, the EventStore is injected in the constructor, and then, using multiple calls to RegisterSaga and RegisterHandler, we just populate the internal collections of the bus as far as known sagas and known handlers are concerned. Let's have a look now at the internal details of a sample saga class. There is a base class that we consider part of the infrastructure, so it's our own code; you find the code of the Saga class in the source code that comes with this module. BookingSaga is then a saga, and it declares the message that starts the saga with the IStartWithMessage interface, IStartWithMessage of T, and this part of the class definition is exactly what tells the infrastructure that whenever a RequestBookingCommand message is received, a new BookingSaga instance has to be run and stored inside the bus. The implementation of the IStartWithMessage interface consists of a public method called Handle, which just receives the specific command message, and what it does internally is precisely the code that handles the message, the code that you would have expected to find right in the application layer had you coded the workflow of the use case directly with your own code. An interesting thing to notice back in the saga is the sequence of actions that we perform here.
In the first place, we create a BookingRequest object, and BookingRequest is an object in the domain model we have in the command stack. Then we persist the BookingRequest and we obtain a real booking with its own unique ID, and we do that through a repository, or more precisely, through a domain service, which interacts with the command stack database. The response we get here simply contains a Boolean answer, whether or not the operation was successful. If the operation failed for whatever reason, we then create a new event message, BookingRequestRejectedEvent, and we raise the event to the bus. If everything went fine instead, we create a BookingCreatedEvent message and push that to the bus. Because in both cases we called RaiseEvent, both results, whether successful or not, are logged as events. And the final aspect I want to emphasize is the extensibility of any piece of business logic that is created using the CQRS Deluxe approach. Suppose now that users want you to give them a chance to handle in a different way the situation in which a BookingRequest is rejected. So you want to have a handler for this specific event. All you do is go back to BusConfig and just add a new handler, or perhaps a new saga, here in the configuration of the bus, and then you simply create an ad-hoc class, and this is a sample handler class. The handler class inherits from a base class and implements, possibly multiple times with different types, the IHandleMessage interface. IHandleMessage is also an interface you can have on sagas, should the saga be able to respond to multiple commands while it's a running instance. IHandleMessage has nearly the same configuration as the IStartWithMessage interface: it's all about having a Handle method which receives a message or event and does what it's supposed to do, what it's expected to do.
In this particular case, the EmailHandler is interested in receiving a notification of a BookingRequestRejectedEvent, and when this event comes, all that it does is send an email with an error message to the registered user.
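Putting the pieces together, the saga-plus-handler flow just described might be sketched as follows. The type names (BookingSaga, RequestBookingCommand, BookingRequestRejectedEvent, EmailHandler) come from the demo, but the RecordingBus stub and the internal logic are simplified assumptions of mine, with the repository call and the actual email reduced to one-liners.

```csharp
using System;
using System.Collections.Generic;

// Demo: a rejected booking raises an event; the email handler reacts to it.
var bus = new RecordingBus();
var saga = new BookingSaga(bus);
saga.Handle(new RequestBookingCommand { CourtId = 0, User = "dino" });   // CourtId 0 stands in for a failed request
var email = new EmailHandler();
foreach (var raised in bus.Raised)
    if (raised is BookingRequestRejectedEvent rejected) email.Handle(rejected);
Console.WriteLine(email.LastEmailTo);   // dino

public class RequestBookingCommand { public int CourtId; public string User = ""; }
public class BookingRequestRejectedEvent { public string User = ""; }
public class BookingCreatedEvent { public string User = ""; }

// Stand-in for the bus: it just records raised events.
public class RecordingBus
{
    public readonly List<object> Raised = new();
    public void RaiseEvent(object e) => Raised.Add(e);
}

public class BookingSaga
{
    private readonly RecordingBus _bus;
    public BookingSaga(RecordingBus bus) => _bus = bus;

    // The Handle method holds the code you would otherwise have put
    // directly in the application layer.
    public void Handle(RequestBookingCommand message)
    {
        var success = message.CourtId > 0;   // stand-in for the repository/domain service call
        if (success)
            _bus.RaiseEvent(new BookingCreatedEvent { User = message.User });
        else
            _bus.RaiseEvent(new BookingRequestRejectedEvent { User = message.User });
    }
}

public class EmailHandler
{
    public string LastEmailTo = "";
    // In the demo, this is where the error email would be sent.
    public void Handle(BookingRequestRejectedEvent e) => LastEmailTo = e.User;
}
```

Either way the command goes, an event is raised, so both outcomes end up logged.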
Hi everybody. Welcome to module six of Pluralsight's course on Modern Software Architecture, Dino Esposito here. Events are a concept that is empowering software beyond imagination, and even plain CRUD systems and common applications are going to be touched. As soon as simple forms of business intelligence and statistical analysis become the subject of customer requests, your traditional vision of an application starts wobbling, and sometimes even collapses. At a deeper and more insightful look, the real world is mostly made of events. We observe events after the fact, and events actually are notifications of things that just happened. In software, instead, we rarely use events in the business logic. We tend instead to build abstract models that hopefully can provide a logical path for whatever we see in the domain, and sometimes, working this way, we end up with a root object we just call God or Universe. Events may bring some key benefits to software architecture. Primarily this is because events are immutable pieces of information, and by tracking events, you never miss a thing that happens in and around the business domain. Finally, events can be replayed, and by replaying events you can process their content and build on top of that multiple projections of the same core data. In this module, we'll start by giving an architectural perspective of events, and then we'll proceed to show what it means to have events as the primary data source of the application and what it takes to replay events and build ad-hoc projections of data. Events, keep this in mind, are really mind blowing.
From CQRS to Events
In module five, we have seen the benefits of an implementation of the command stack based on messages being exchanged between the presentation layer and business layer via the application layer. I briefly touched in module five on business events that may be optionally saved to an ad-hoc store in order to log business-relevant facts that just happened. In module five, the focus was more on the use of messages and events to implement the business workflows triggered by commands. In that context, events were just cross-handler notifications of things that just happened, for handlers to possibly react to. This is just the first step of a longer evolution aimed at transitioning software architecture from the idea of models to persist to the idea of events to log. As I see things, this is the starting point of a change that will have a deep impact on system architecture. Now let me tell you the story of my personal epiphany with events. You may have heard about the story of the apple that fell on the head of Sir Isaac Newton and led him to formulate the universal law of gravitation. Recently an analogous apple fell on my head, and the message I got was that a snapshot database approach was more than okay for version one of an application I wrote, but with version two, the customer just wanted more. In particular, the customer now wants the ability to know the state of the system at a given date in the past, for statistical analysis and business intelligence. You may or may not be able to extract past information and past statuses of the system with canonical and common storage techniques. Events are a more powerful approach, but, and this is the remarkable thing and the actual apple that fell on my head, events are not just for complex applications and complex scenarios, they are also for the common application. So the point is not that you don't need events; it is that you don't need events yet.
The role and the potential benefits of events in software architecture can be illustrated starting with a very common example: invoices and their sales taxes. Nearly all over the world, any invoice must include some extra VAT tax. Most of the time, the VAT is fixed and can even be treated as configuration data or hard coded. In some cases and in some countries, however, the VAT varies quite often, and often it is not flat but varies per category of goods. In some way, the percentage of VAT for a given invoice must be injected in the invoice itself. But what is going to tell you the right value for a specific good? This is, or seems to be, the perfect job for a domain service, but still a domain service needs to read the details from a persistent store, and the store must be updated as tax laws change. So, let's say that you have a database for this. What is the content? How would you organize the tables? One way to organize the content of such a table is having one record per each category of goods with a column that indicates the current VAT. If the VAT changes, the record is updated. When an invoice is issued, however, the actual VAT value should be stored within the invoice itself so that the invoice can easily be reprinted and displayed correctly, even if the VAT has changed in the meantime. Let's consider, however, another way to store the VAT. You have one record for each known change of VAT for any category of goods, so you keep track that VAT was X in a given timeframe and Y in another timeframe. In both cases, you can create invoices successfully, but in the second case, your database has a lot more information and didn't miss a single event in the domain. This extra information makes the entire system a lot more extensible.
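The second design, one record per known VAT change, might be sketched as follows. The VatChange shape and the RateAt helper are hypothetical, just to show how the effective rate for any invoice date falls out of the richer history.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One record per known change of VAT: category, rate, and when it took effect.
// The categories and rates below are made-up sample data.
var history = new List<VatChange>
{
    new("books", 4m,  new DateTime(2010, 1, 1)),
    new("books", 10m, new DateTime(2014, 1, 1)),
    new("food",  4m,  new DateTime(2010, 1, 1)),
};

// The rate for an invoice is the latest change on or before the invoice date.
decimal RateAt(string category, DateTime date) =>
    history.Where(v => v.Category == category && v.EffectiveFrom <= date)
           .OrderByDescending(v => v.EffectiveFrom)
           .First().Rate;

Console.WriteLine(RateAt("books", new DateTime(2012, 6, 1)));   // 4
Console.WriteLine(RateAt("books", new DateTime(2015, 6, 1)));   // 10

public record VatChange(string Category, decimal Rate, DateTime EffectiveFrom);
```

With the one-record-per-category design, the first query would be impossible to answer: the old rate would have been overwritten.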
Event Sourcing at a Glance
Event sourcing is a design approach based on the assumption that all changes made to the application state during the entire lifetime of the application are stored as a sequence of events. Event sourcing can be summarized by saying that we end up having serialized events as the data building blocks of the application. Serialized events are actually the data source of the application. Look, this is not how the vast majority of today's applications work. Most applications today work by storing the current state of domain entities and using that stored state as the starting point to process business transactions. If you think this is weird, have a look at the following example. Let's say you have a shopping cart. How would you model a shopping cart? Reasonably, you would probably add a list of ordered products, payment information, for example, credit card details, and maybe a shipping address. Easily said, easily done. The shopping cart has now been modeled, and the model is a faithful representation of the internal logic we commonly associate with a shopping cart. But here is another way to represent that same information you can expect to find around the shopping cart. Instead of storing all the pieces of information in the columns of a single record or in the properties of a single object, you can describe the state of the shopping cart through the sequence of events that brought it to contain a given list of items. So, add the first item, add the second item, add payment information, update the second item, for example, to change the quantity or the type of a product, maybe then remove the first item, add the shipping address. All these events relate to the same shopping cart entity, but we don't save anywhere the current state of the shopping cart, just the steps, and related data, that brought it to be what it is actually under our eyes. By going through the list of events, then, you can rebuild at any time the state of the shopping cart.
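The cart just described might be folded back into its current state like this; the event names and the dictionary-based state are illustrative assumptions, not a prescribed model.

```csharp
using System;
using System.Collections.Generic;

// The state of the cart is not stored anywhere; it is rebuilt from events.
var events = new List<CartEvent>
{
    new ItemAdded("kayak", 1),
    new ItemAdded("paddle", 1),
    new QuantityUpdated("paddle", 2),
    new ItemRemoved("kayak"),
};

// Replay: fold the events, in order, into a fresh product -> quantity map.
var cart = new Dictionary<string, int>();
foreach (var e in events)
{
    switch (e)
    {
        case ItemAdded a:       cart[a.Product] = a.Quantity; break;
        case QuantityUpdated u: cart[u.Product] = u.Quantity; break;
        case ItemRemoved r:     cart.Remove(r.Product);       break;
    }
}
Console.WriteLine(string.Join(", ", cart));   // [paddle, 2]

public abstract record CartEvent;
public record ItemAdded(string Product, int Quantity) : CartEvent;
public record QuantityUpdated(string Product, int Quantity) : CartEvent;
public record ItemRemoved(string Product) : CartEvent;
```

Replaying the same four events always yields the same state, which is what makes projections reproducible.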
This is actually an event-based representation of an entity. For most applications, think for example of CRUD applications, structural and event-based representations of entities are today functionally equivalent, and most applications store their state via snapshots, so they just store the current state and ignore events. More in general, the picture you see now onscreen is only the tip of the iceberg. There is a more general and all-encompassing way to look at storage that incorporates events, but this is just under the surface. On top of an event-based representation of data, you can still create, as a snapshot database, the current-state representation that returns the last known good state of a domain entity. This is, at the end of the day, event sourcing, and in my opinion, we are today, for the most part, just here. Right, on the water line. In summary, here are a few key facts of event sourcing. First, an event is something that has happened in the past. Second, events are an expression of the ubiquitous language. Events are named using past-tense verbs and are not imperative: order created, for example, is a valid event name, but not place order. If you do event sourcing, you should store events in some way. You can go with an ad-hoc relational table, or a NoSQL store, or even with a specific event store product. An event store of any kind is an append-only store and doesn't support deletions. Events express altogether the state of a domain entity. To get that state, you need to replay events. To get the full state, you should replay from the beginning of the application timeline. Sometimes this may require you to process too much data. In this case, you can, and actually should, define snapshots along the way and replay events as the delta of the state from the last known snapshot. In the end, a snapshot is the state of the entity at a given time. An event is something that happened in the past.
This is a very important point about events, and should be kept clearly in mind. Once stored, events are immutable. Events can be duplicated and replicated, especially for scalability reasons. Any behavior associated with a given event was performed the moment the event was notified; replaying the event, in other words, doesn't require repeating the behavior. When using events, you don't miss a thing. You track everything that happened at the time it happened, regardless of the effects it produced. With events, any system data is saved at a lower abstraction level than today.
Events as the Data Source
CQRS does justice to too many years of DDD misconceptions. Doing DDD right takes you to not just avoiding what DDD seems to recommend, the domain model, et cetera, but to looking beyond it for alternatives, and CQRS is the first pattern you find when you look over. CQRS is the dawn of a new software design world, and events are what we find when the dawn has actually turned into full day. More than everything else, CQRS is not just for a few types of apps, either. Be honest. At some point, relational databases became the center of the universe as far as software design is concerned. They brought up the idea of the data model and persistence of the data model. It worked. And it still works. But it's about time you reconsider this approach, because chances are that it will prove incapable of letting you achieve some results that are getting more and more important in the coming days. There is no automatic call to action here, just a warm suggestion to look beyond state and in the direction of events. Events are a revolutionary new approach to software design and architecture, except that events are not really new and not really revolutionary. Relational databases themselves don't manage current state internally, even though they expose current state outside; internally, relational databases track the actual actions that modify the initial state. Events, in the end, are not a brilliant new idea of these days. Thirty years ago, apps already used something based on the concept of events; simply, there was no fancy name at the time for the approach they were using. Events may be the tokens stored in the application data source in nearly every business scenario. It's just that sometimes a current-state, classic approach is functional, as well as more comfortable to implement. Events just go to the root of the problem and offer a more general storage solution for just about every system. So, what should you do today? I see two options.
You can insist on a system that primarily saves the current state, but start tracking relevant application facts through a separate log. Second option, you store events, and then, replaying through the event stream, you build any relevant facts you want to present to users as data. So, in other words, from events you create a projection of data in much the same logical way you build a projection of computed database columns when you run certain SQL queries. Let's see what it means to store events. When the user adds a product to a shopping cart, you record an event. A second product, credit card information, you change the quantity of, say, the second product, you then remove the first product, you add shipping information. From these basic facts, you can easily rebuild a projection of data and represent the current state of the shopping cart or the order that you are going to submit to the back end of the system for actual processing. So in the end, it's really about two distinct, but equivalent, representations of the same information. To decide whether or not in your specific scenario you need events, consider the two points you see onscreen now. Is it important to track what was added and then removed? Is it important, business-wise, to track when an item was removed from the cart? If you say yes, then you probably need events. Storing events in a stream works really well, I would even say naturally, with the implementation of commands, and commands submitted by the user form a natural stream of events. At the same time, queries work much better if you have structured data, so in the end you need both. You need events and commands for storing, and classic current-state storage for queries. This is the whole point of CQRS, command and query responsibility segregation.
As an architect, you only have to decide whether you want to have a command stack based on an event stream that works in sync with, say, an orders table, or, more conservatively, you want to start with orders tables and then just record an event stream separately for any relevant fact. It's a sort of middle ground that lets you proceed step by step. Just a final word of wisdom: be aware that moving from a current-state solution to a full event store scenario may require significant refactoring. So, it may or may not be appropriate for updates of an existing system, but when it comes to full rewrites of systems, I suggest you take events into careful account.
In software, persistence is made of four key operations through which developers manipulate the state of the system. They are CREATE, UPDATE, and DELETE, to alter the state, and QUERIES to read the state without altering it. A key principle of software, and especially CQRS-inspired software, is that asking a question should not change the answer. Let's see what happens when a new relevant entity is created and must be persisted in an event-based system. There is a request coming from the presentation layer, or in some other non-UI, asynchronous way; the command stack processes that and appends a new record, say, an order, with all the details, such as products, shipping, payment, customer details, discounts, and the like. The existing data store is extended with a new item that contains all the information it needs to be immutable. If, say, VAT information is required, the current rate is better stored inside the logged information and not read dynamically from elsewhere. In this way, the stored event is really immutable and self-contained. There's also some information of another type you need to have. Each event should be identified uniquely, and in some way you need to give each event an app-specific code to indicate whether you're adding an order or an invoice. This can be achieved in a variety of ways, by type if you use a document store, or through ad-hoc columns if you use a relational store. In addition, you want to have a timestamp so that it's clear when the operation took place, and any relevant ID you need to identify the entity, or better, the aggregate, being stored. Finally, full details of the record must be saved as well. If you are recording the creation of an order, you want to have all order details, including shipping, payment, transaction ID, confirmation ID, order ID, and so forth. If necessary, payment and shipping operations may in turn generate other events being tracked as well.
And finally, when it comes to event storage, the technology you use is transparent, so it can be relational, document-based, graph-based, or whatever works for you. In an event-based persistence scenario, the CREATE operation is not much different from classic persistence. You just add a record. UPDATE instead is different, as you don't override an existing record, but just add another record that contains the delta. You need to store the same information as with CREATE operations, including a unique event ID, a timestamp, a code to recognize the operation, and more; you also need the aggregate ID and the delta. If you only updated the quantity of a given product in a given order, in particular, you only store the new quantity and the product ID. Storage also in this case is transparent and can be relational, document-based, graph-based, or whatever works for you. Note that in some cases, you might want to consider also storing the full state of the entity along with the specific event information. This can be seen as a form of optimization, so that you keep track of individual events but also serialize the aggregate with the most updated state in the same place to speed up first-level queries. I will have more to say about queries in a while. The DELETE operations are analogous to UPDATE operations. Consequently, we can say that the deletion is logical and consists in writing that the entity with a given ID is no longer valid and should not be considered for the purposes of the business. The information in this case is just made of event ID, timestamp, code of the operation, and aggregate ID. There is no strict need for delete-specific information unless, for example, the system requires a reason for the deletion. Storage is transparent and can be, also in this case, whatever works for you. Events lend themselves very well to implementing the UNDO functionality.
The simplest way to have it done is by physically deleting the last record, as if it never happened. In this way, there is a contrast with the philosophy of event sourcing, where you never miss a thing and track just anything that happens. UNDO can also be a logical operation, which, for example, allows you to track how many times a user attempted to undo things, but personally I wouldn't mind a physical deletion in this specific case. What's far more important, instead, is not deleting events in the middle of the stream. That would really be dangerous and lead to inconsistent data. And now we are ready to learn more about queries.
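Summing up the CREATE, UPDATE, and DELETE shapes just discussed, an append-only log might be sketched like this; the record layout, operation codes, and string payloads are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// An append-only log: updates carry only the delta, deletions are logical.
var log = new List<StoredEvent>();
log.Add(new StoredEvent(Guid.NewGuid(), "ORDER_CREATED", "o-42", DateTime.UtcNow,
                        "{ product: 'kayak', qty: 1 }"));   // full details at creation
log.Add(new StoredEvent(Guid.NewGuid(), "ORDER_UPDATED", "o-42", DateTime.UtcNow,
                        "{ qty: 2 }"));                     // only the delta
log.Add(new StoredEvent(Guid.NewGuid(), "ORDER_DELETED", "o-42", DateTime.UtcNow,
                        null));                             // logical deletion, no payload
Console.WriteLine(log.Count);   // 3: nothing is ever overwritten or removed

public record StoredEvent(
    Guid EventId,          // unique identifier of the event
    string Code,           // app-specific code for the operation
    string AggregateId,    // the aggregate the event refers to
    DateTime Timestamp,    // when the operation took place
    string? Payload);      // event-specific data, if any
```

The same five fields serve all three operations; only the payload changes shape.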
Data Projections from Stored Events
The main aspect of event sourcing is the persistence of messages, which enables you to keep track of all changes in the state of the application. Recording individual events doesn't give you an immediate notion, however, of the state of the various aggregates. By reading back the log of messages, you rebuild the state of the system, and at that point you get to know the state of the system. This aspect is what is commonly called the replay of events. Replay is a two-step operation. First, you grab all events stored for a given aggregate, and second, you look through all events in some way, extract information from them, and copy that information to a fresh instance of the aggregate of choice. A key function one expects out of an event-based data store is the ability to return the full or partial stream of events. This function is necessary to rebuild the state of an aggregate out of recorded events. It's all about querying records in some way, using the AggregateId and Timestamp to order or restrict. In RavenDB, for example, getting the stream of events is quite easy, and notice that you can code the same also using a relational database. The structure of a generic event class depends on the application. It may be different, or in some way constrained, if you are using some ad-hoc tools for event sourcing. In general terms, an event class can be anything; the key pieces of information are the EventId, something that allows you to distinguish and easily query the type of the event, the Timestamp, the AggregateId, and, of course, event-specific data. The actual rebuilding of the state consists in going through all events, grabbing information, and then altering the state of a fresh new instance of the aggregate of choice. What you want to do is store in the fresh instance of the aggregate the current state it has in the system, or the state that results from the selected stream of events.
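The two replay steps can be sketched in a few lines: filter the store by AggregateId, order by Timestamp, and fold the effects onto fresh state. The GenericEvent shape and the deposit/withdrawal semantics are hypothetical stand-ins, not the course's sample code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Two aggregates share the same store; replay picks one stream and folds it.
var store = new List<GenericEvent>
{
    new("a1", "DEPOSITED", new DateTime(2016, 1, 1), 100),
    new("a2", "DEPOSITED", new DateTime(2016, 1, 2), 50),
    new("a1", "WITHDRAWN", new DateTime(2016, 1, 3), 30),
};

int Replay(string aggregateId) =>
    store.Where(e => e.AggregateId == aggregateId)   // step one: grab the aggregate's events
         .OrderBy(e => e.Timestamp)                  // in the order they occurred
         .Aggregate(0, (state, e) =>                 // step two: copy effects onto fresh state
             e.Code == "DEPOSITED" ? state + e.Amount : state - e.Amount);

Console.WriteLine(Replay("a1"));   // 70
Console.WriteLine(Replay("a2"));   // 50

public record GenericEvent(string AggregateId, string Code, DateTime Timestamp, int Amount);
```

The same Where/OrderBy query works against RavenDB, a relational table, or a plain list, which is why the storage technology stays transparent.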
The way you update the state of the aggregate depends on the actual interface exposed by the aggregate itself, whether it's a domain class or relies on domain services for manipulation. There are a few things to mention about event replay. First and foremost, replay is not about repeating commands for any generated events. Commands are potentially long-running operations with concrete effects that generate event data. Replay is just about looking into this data and performing logic to extract information from it. Another aspect is that event replay copies the effects of occurred events and applies them to fresh instances of the aggregate. And finally, stored events may be processed in different ways in different applications. There is a great potential here. Events are data rendered at a lower abstraction level than plain state. From events you can rebuild any projection of data you like, including the current state of aggregates, which is just one possible way of projecting data. Ad-hoc projections can address other, more interesting scenarios, like business intelligence, statistical analysis, what-if analysis, and, why not, simulation. More specifically, let's say that you have a stream of events. You can extract a specific subset, whether by date or type or anything else. Once you've got the selected events, you can replay them, apply ad-hoc calculations and business processes, and extract just the custom new information you were looking for. Another great point about events and the replay of events is that streams are constant data, and because of that, they can be replicated easily and effectively, for example, to enhance the scalability potential of the application. This is actually a very, very pleasant side effect of event immutability. Now, is there any performance concern with event replay? What if you get to process too many events for rebuilding the desired projection of data?
Well, to make a long story short, projecting state from logged events, yes, might be heavy and impractical with a very large number of events, and the number of logged events in many applications, be aware, can only grow over time, because an event store is an append-only store. In many cases replay only involves a handful of events, but that's not a general rule. So, what is the workaround when you get to process and replay too many events? An effective workaround consists of taking a snapshot of the aggregate state, or of whatever business entities you use, at some recent point in time. Instead of processing the entire stream of events, you serialize the state of the aggregate at a given point in time and save it as a value. Next, you keep track of the snapshot point and replay events for the aggregate from the latest snapshot to the point of interest.
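A minimal sketch of the snapshot workaround, assuming a trivial account-like aggregate (all names hypothetical): the state is serialized at a known position in the stream, and replay restarts from that position instead of from event zero.

```python
import copy

class Account:
    """Hypothetical aggregate used only to illustrate snapshots."""
    def __init__(self):
        self.balance = 0

    def apply(self, event):
        self.balance += event["delta"]

def take_snapshot(aggregate, position):
    # Serialize the state at a given point and remember the position
    # in the stream the snapshot corresponds to.
    return {"state": copy.deepcopy(aggregate), "position": position}

def replay_from_snapshot(snapshot, all_events):
    # Replay only the events recorded after the snapshot point.
    aggregate = copy.deepcopy(snapshot["state"])
    for e in all_events[snapshot["position"]:]:
        aggregate.apply(e)
    return aggregate
```

Here a real system would persist the snapshot (for example as a serialized blob) rather than keep it in memory, but the replay-from-position idea is the same.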
Event Sourcing in Action
Event-based Data Stores
You can definitely arrange an event sourcing solution all by yourself, but some ad-hoc tools are appearing that let you deal with the storage of events in a more structured way. The main benefit of using an event-aware data store is that the tool, like a database, essentially guarantees business consistency and full respect of the event sourcing approach. Let's briefly look at one of these event-based data stores, in particular Event Store, which you can get from geteventstore.com. Event Store works by offering an API, for plain HTTP and for .NET, and the API is for event streams. In the jargon of Event Store, an aggregate equates to a stream in the store. Have no concerns about the ability of the Event Store database to manage and store potentially millions of events grouped by aggregate; this is not a concern of yours, as the framework underneath the tool is able to work with these numbers. You can do three basic operations on an event stream in Event Store. You can write events; you can read events, in particular the last event, a specific event by ID, and also a slice of events; and you can subscribe to get updates. As for the representation of an event written to the store, the timestamp is managed internally, and you only provide an eventId, an eventType, and data; importantly, there's no way to delete events arbitrarily. Another interesting feature of Event Store is subscriptions. There are three types of subscriptions. One is volatile, which means that a callback function is invoked every time an event is written to a given stream. You get notifications from this subscription until it is stopped. Another subscription is catch-up, which means that you'll get notifications from a given event, specified by position, up to the current end of the stream.
So, give me all events from this moment onward; and once the end of the stream has been reached, the catch-up subscription turns into a volatile one, and you still keep on getting any new event being added to the stream. And finally, the persistent subscription addresses the scenario where multiple consumers are waiting for events to process. The subscription guarantees that events are delivered to consumers at least once, but possibly multiple times, and if this happens, the order is unpredictable. This solution is specifically designed for highly scalable, collaborative systems, and to be effective it requires a software design that supports the notion of idempotency, so that if an operation is performed multiple times, the effect is always the same. In particular, catch-up subscriptions are good for components called denormalizers, which play a key role in CQRS; in CQRS jargon, denormalizers are those components that build projections of data for the query stack.
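The difference between volatile and catch-up subscriptions can be mimicked with a toy in-memory stream. This is not Event Store's actual API, just a hypothetical sketch: a volatile subscriber sees only events appended after it subscribes, while a catch-up subscriber first receives history from a given position and then keeps receiving live events.

```python
class ToyStream:
    """Toy in-memory event stream; illustrative only."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def append(self, event):
        self.events.append(event)
        for callback in self.subscribers:   # notify live subscribers
            callback(event)

    def subscribe_volatile(self, callback):
        # Callback fires only for events written from now on.
        self.subscribers.append(callback)

    def subscribe_catch_up(self, callback, from_position=0):
        # First deliver the history from the given position...
        for event in self.events[from_position:]:
            callback(event)
        # ...then behave exactly like a volatile subscription.
        self.subscribers.append(callback)
```

A persistent subscription would add competing consumers and at-least-once delivery on top of this picture, which is why idempotent processing matters there.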
Designing Software Driven by the Domain
Hi everybody, and welcome to module seven, the final module of Pluralsight's course on Modern Software Architecture. As usual, Dino Esposito is speaking. Domain-driven design, DDD, has been the first serious attempt to systematize the design and development of software systems. In this class, I went through the inspiring principles and common practices of DDD and discussed a few supporting architectures for the actual implementation of systems. What does it mean, design driven by the domain, a longer version of the acronym DDD? What exact meaning should we intend? Abstractly speaking, it indicates the idea of building a piece of software that mirrors the business domain. In the beginning, domain-driven design came with the idea of having an object model shaped like the business domain. In this class, we also discussed how focusing on tasks makes it easier to model the system. Finally, in a time in which users are much less passive and forgiving than they were only a decade ago, the role of the user interface and its impact on the user is critical. So, in this final module I'll offer a short wrap-up of domain-driven design, and also discuss how it applies to legacy code, task-based design, and user experience.
Dealing with Legacy Code
There are many known ways to define legacy code. Often by legacy code we mean existing code that some other people wrote and that is complex and undocumented enough that everybody is scared to touch it. Legacy code is primarily code that just works. It might not be doing things in the most appropriate way for the current needs of the business, so you feel like it has to be properly refactored and possibly rewritten. In other words, you don't like to have it around. But because it works, and because everybody is constantly late on something, there is never time to do it, and in the end you can hardly get rid of it. However, sometimes what we call legacy code is not necessarily badly or poorly written code. Sometimes it's just code that, for the current state of the business, should now be doing additional things, or doing things differently. But for a number of reasons it's risky to touch it, or it just takes time because of the complexity it carries. Let's go through some of the common aspects of legacy code. Any existing and running code probably has its own data model, established and implicit. It may not be obvious how data is moved around, or whether data is packaged in classes or in looser items. Determining what goes in and out of legacy code, and creating a sort of interface for it, may then be hard. On the other hand, legacy code was written to get some work done, not to be reused and showcased around. In addition, it could have been written years ago following good practices of the time that have now just turned obsolete. How do you incorporate legacy code in new applications? All in all, my favorite approach is trying to expose the legacy code as a service, if at all possible. If not, I would seriously consider rewriting. This is just a general guideline I use; in the end, it always depends, case by case. So I'd say you start with the idea of rewriting the system and get all the abstractions you need.
If some features you need to rewrite already exist in the legacy system, I'd consider incorporating the existing assets as services rather than rewriting everything from scratch. Needless to say, this only works if the new system needs exactly the same features and the same behavior. If you can disconnect the legacy code and recompile it as an independent piece of code, then put it in a bounded context, expose it as a service, and create a façade around it to make it work with the rest of the new system. It may not work, but if it works, it's just great. Still, just because you can do something doesn't mean you should be doing that particular thing. Not all legacy assets are equal. Some can be reused as services, some cannot; some are just old and obsolete. Before integrating legacy code, make sure you carefully evaluate the costs of not rewriting from scratch, and also the costs of integrating the legacy code as a service. Then, if the legacy code can be turned into a service, just let it be and focus on other things to do.
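A sketch of the façade idea, under the assumption of a legacy component with awkward conventions (both classes are hypothetical): the façade wraps the legacy asset and exposes it as a service with the interface the new bounded context expects.

```python
class LegacyTaxEngine:
    """Stands in for an old, working asset you'd rather not rewrite."""
    def COMPUTE(self, amount, regime_code):
        # Obsolete naming and conventions, but it works.
        return amount * 0.20 if regime_code == "STD" else 0.0

class TaxService:
    """Façade: exposes the legacy code as a service to the new system,
    hiding the legacy interface behind one the new code finds natural."""
    def __init__(self, engine=None):
        self._engine = engine or LegacyTaxEngine()

    def tax_for(self, order_total: float) -> float:
        return self._engine.COMPUTE(order_total, "STD")
```

The new system talks only to `TaxService`; if you later decide to rewrite the engine, only the façade's internals change.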
Revisiting CRUD Systems
CRUD is a long-time popular acronym indicating a system that focuses on four fundamental operations, in relationship to both the user interface and storage: create, read (or retrieve), update, and delete. In this regard, it's essentially correct to state that all systems are CRUD systems, well, to some extent at least. The CRUD cycle describes the core functions of a persistent database, typically a relational database. The code implements the CREATE, READ, UPDATE, and DELETE operations, and saves the resulting state to the database. While it's probably correct to state that nearly all systems are CRUD to some extent, the devil is in the details of what actually turns out to be the object of those operations. Is it a single individual entity, or is it a more complex and articulated data representation? Let's try to list a few common attributes of a CRUD system. First and foremost, a CRUD system is database-centric. The model of data available to the code is exactly the same being persisted. Deletions and updates directly affect the data stored. This means that at any time you know the current state, but you are not tracking what has really happened. A CRUD system for the most part has relatively basic business logic, is being used by one or very few users, and deals with very elemental collections of data, mostly one-to-one with database tables. In summary, a CRUD system is typically quick and easy to write, and consequently it often looks unrealistic these days. When we hear or pronounce the words "it's basically a CRUD," we are right in that we want the four basic operations to be implemented. But tracking actions or not, the amount of business logic and rules, concurrency, and the dependencies between data entities change significantly the scope of the system and raise significantly the level of complexity and effort. So, it's basically a CRUD system, but it must be devised in quite a different way to be realistic and effective as of today. Sure, a CRUD system is database-centric.
In modern software, however, the persistence model is just one model to think about. Sometimes, to better process relationships between collections of data, you need to build a different model, one that expresses behavior, while saving state, or just actions, through a different model. More often than not, the context of a change must be tracked. An update cannot simply be overriding the current state of a given record; it might be necessary to track each update as a delta, and all the deltas in the life of a data entity. Plenty of business logic, concurrency issues, and interconnected entities, graphs of objects, populate modern software, and systems must take that into account. So, we still have CRUD operations against a database, but CREATE deals with graphs of entities, READ fills out views, and UPDATE and DELETE log the change to the current state of the system. Today, C, U, and D (CREATE, UPDATE, and DELETE) are commands, and reads, the R, are queries, and this is why the CQRS approach is vital and fundamental for today's applications, as we discussed in module five of this course.
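To make the point concrete, here is a toy sketch (all names hypothetical) where C, U, and D are commands that log a change instead of overwriting a record, and R is a query that projects the current state from the log.

```python
change_log = []  # append-only: every command is recorded, nothing overwritten

def create_product(product_id, price):
    change_log.append({"op": "CREATE", "id": product_id, "price": price})

def update_price(product_id, price):
    # An update is a logged delta, not an in-place overwrite.
    change_log.append({"op": "UPDATE", "id": product_id, "price": price})

def delete_product(product_id):
    change_log.append({"op": "DELETE", "id": product_id})

def read_price(product_id):
    """Query: project the current state out of the logged changes."""
    price = None
    for entry in change_log:
        if entry["id"] != product_id:
            continue
        price = None if entry["op"] == "DELETE" else entry["price"]
    return price
```

Current state is still available on demand, but the full history of what happened is preserved alongside it.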
A Gentle Introduction to UX-driven Design
The easiest way to capture an idea has always been a sketch on paper, sometimes on paper napkins. In fact, many great ideas and projects in history were first sketched out on the napkins of some cafeteria. "What you see is what you get" is an old slogan of graphical development tools like Visual Basic, but what you see is what you get is also, today, a philosophy we can apply to software architecture. What users first perceive of each and every application is what they see and what they get as they interact with the application. What they see is commonly referred to as the user interface. What they get, including sentiment and emotions, is the user experience. User experience is a very frequently used expression these days, but what is UX exactly? It can be wrapped up by saying that the user experience is the experience that users go through when they interact with a given application. In light of the user experience, what is the most effective way to architect a system? Start from a database where relevant data will be persisted, arrange some business logic out of requirements and persistence constraints, and then put some UI on top of everything? This is how it mostly works today, and if something gets lost along the way, you serve a UI and a UX that customers just don't like. That is a problem to face, I'm sorry, and you start again from scratch, or nearly so. What if instead you start from what users like and approve, and from there build the system? The system internally may not be perfect when you first deploy it, but you can be sure that the core input and output is just right. To build a software system today, is the classic bottom-up approach still the most effective? Personally, I'm leaning towards a different, top-down approach. The bottom-up approach to software architecture is based on the idea that you understand requirements and start building the foundation of the system from the persistence model.
Your foundation is a layer of data with some endpoints that, like plugs, are looking for an outlet. As long as users were passively accepting any UI enforcements, and this has been the case for several years, let's just say that, you could accommodate any foundation, perhaps at the cost of introducing in the middle a less-than-optimal user experience. But now more and more users are actively dictating user interface and user experience preferences. Therefore, you'd better start designing from some front end they like, they certainly like. Endpoints to connect to the rest of the system are exposed from the presentation layer, like plugs, and the rest of the system is then designed just to match those endpoints, like an outlet designed to match a given plug. And persistence is designed to save the data you need to save, in the shape that data has on its way down the stack. In the end, it's all presentation, and everything else is sort of a big black box.
Highlights of UX-driven Design
The whole point of top-down design is making sure that you are going to produce a set of screens and an overall user experience that fully matches user expectations. Performance and even business logic can be fixed and fine-tuned, but if you miss the user experience, customers may have to alter the way they do business and go through their tasks, and that may not be ideal. So, in the UX-driven design approach, the first step is building up UI forms and screens as users love them. You can use wireframes and wireframing tools for this purpose. At this point, once screens have been approved, you have a complete list of triggers that can start any business process. Hence, you implement workflows and bind workflows to presentation endpoints. For example, in an ASP.NET MVC application, this means invoking the workflow process from a controller class. A workflow represents a business process, and you create all the layers that need to be there, repositories, domain services, whatever else serves the purpose of successfully implementing the business process. For the whole method to work, however, it's key that you hold on and iterate on the UI forms approval process until users sign off and approve explicitly. Only when they say, yes, this is what we want, and this is the way we like to work with this application, do you proceed. Any time you appear to waste at this stage is saved later by likely not having to restructure the code because of a poor or wrong user experience. In summary, sign off on what users want, and use sketches and wireframes to get their approval. A sketch is a freehand drawing, mostly made to capture an idea. A wireframe is a more sophisticated sketch with layout, navigation, and content information. You may not need mockups, as mockups are just wireframes with some CSS and graphics attached. It may be required instead that you produce a prototype, just to show live how the whole thing will work.
A prototype is a simulation of the real system, and its implementation usually has very little to do with the implementation of the final application. It only appears to work like the final app; there is nothing under the covers, behind the surface, that can likely be reused. The point is: avoid doing any seriously billable work until you are certain about the front end of what you're going to create. More in detail, for each screen, UX-driven design suggests you have a basic flowchart: determine what comes in and out of each screen, and create appropriate view model classes. Make application layer endpoints receive and return such view model (data transfer object) classes, and finally make the application layer orchestrate any subsequent tasks through the layers down the stack, so repositories, domain services, external services, and whatever else you may need to set up and work with. In UX-driven design, a second architect role works side by side with the classic solution or software architect role: the UX architect. The responsibilities of the UX architect are defining the information architecture and content layout; defining the ideal interaction, which essentially means storyboards for each screen and each UI trigger; being around the definition of the visual design; and, more importantly, running usability reviews, even filming users if necessary. This is done in order to identify bottlenecks in the storyboards as soon as possible. For the role of the UX architect, a few tools are helpful. Here is a short list.
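The per-screen recipe just described can be sketched as follows, assuming an imaginary checkout screen (all class names hypothetical): a view model class carries what goes in and out of the screen, an application layer endpoint receives it, and a workflow orchestrates the layers down the stack.

```python
from dataclasses import dataclass

@dataclass
class CheckoutViewModel:
    """What comes in and out of the (imaginary) checkout screen."""
    cart_id: str
    total: float

class InMemoryOrderRepository:
    """Stands in for the persistence layer down the stack."""
    def __init__(self):
        self.orders = {}

    def save(self, cart_id, total):
        order_id = f"ord-{len(self.orders) + 1}"
        self.orders[order_id] = (cart_id, total)
        return order_id

class CheckoutWorkflow:
    """The business process bound to a presentation endpoint."""
    def __init__(self, repository):
        self.repository = repository

    def run(self, view_model):
        order_id = self.repository.save(view_model.cart_id, view_model.total)
        return {"order_id": order_id, "status": "placed"}

def place_order_endpoint(view_model, workflow):
    # The application layer endpoint a controller class would invoke:
    # it receives a view model and orchestrates the workflow.
    return workflow.run(view_model)
```

In an ASP.NET MVC application the controller action would play the role of `place_order_endpoint`, passing the posted view model to the workflow.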
There is Balsamiq, which is a wireframing tool; UXPin, which is a Cloud-based platform with the ability to create wireframes and storyboards, test them, and collaboratively gather feedback about the work being done; Infragistics Indigo, an analogous product that is also available standalone; and JustInMind, yet another tool with wireframing and storyboarding capabilities, offered on-premises and through a Cloud subscription model. The UX architect, in the end, is only a role, and the same individual can act at the same time as the UX architect and/or the solution or software architect.
Pillars of Modern Software
Hey, if you're here, it probably means you went through the entire course and hopefully enjoyed it. A few final words about the state of the art of today's software architecture are then in order. First and foremost, the definition of domain-driven design, DDD, should be revisited to add emphasis on the tools for domain analysis, such as the ubiquitous language, bounded contexts, and context maps. Second, in the building of each bounded context, the layered architecture is key to having a best-of-breed design. Today, layers over tiers are preferable, as scalability these days can easily be achieved with a stateless single tier, small, compact, simple servers, implemented perhaps as web roles in some Cloud platform and then scaled according to the dashboard, the rules, and the possibilities of the Cloud platform. The design of systems should be top-down, as it starts from what users really want and expect. This means great user experience by design, built from the ground up. And finally, to build the actual back end of the system, CQRS, Command and Query Responsibility Segregation, and event sourcing are the new, really important hot things. These days, separating the command and query stacks makes everything in the building of the application more natural, convenient, and simple to code, and even simple to optimize, because the stacks are separated and the command part and the query part can be deployed and optimized without affecting each other. Event-based persistence, yet another cool aspect of modern software architecture, lets you not miss a thing, and makes the entire solution easy to extend in the future by adding more features and supporting more notifications, more commands, and subsequently more events. What else? Thank you very much, that's all. Thank you very much for taking the class, and you know that old actor's joke? That's all, folks! Thank you very much for coming.
If you liked the show, please let everybody know. Otherwise, keep it to yourself. Thank you very much, till the next one.