Enter the (testing) pyramid

It is no wonder that automation testing has taken a front seat. We see numerous benefits when we invest time in automation testing (once). The main reasons I personally love automated testing are that it quickly tells us when something is broken (fail fast), keeps the dev team marching forward, and makes the team fearless during refactoring.

Anyone who is involved in automation testing is probably aware of the pyramid below (without mummies), which shows the layers of testing and how, as we climb up the layers, our tests get slower to execute and more expensive.

Testing Pyramid

In the last couple of years I had the privilege of working with multiple teams, and I saw that almost all of them knew the testing pyramid and spent a good amount of time on automation testing. But during every code review we could also see that considerable effort went into correcting earlier test cases. Digging further, we found a few things were not done quite right:

  • Tests that should have been part of one layer were found in a different layer
  • Not all code was covered in the core modules (how to find the core modules?)
  • Redundant tests across layers
  • Test classes were not refactored (will cover more in another article)

“Automation testing is a life saver, ONLY IF IT'S DONE RIGHT. If done wrong, it introduces maintainability issues and false positives, and consumes more time in correcting tests, which hampers the overall productivity of the team.”

Why multiple layers?

While working on enterprise software, not all test cases can be covered in one layer; that is the only reason we need the next layers. So if we can test something in a lower layer, let's not test it again in the layers above.

In automation testing the focus should be on coverage. If we can cover everything in one or more layers, that is sufficient. For example, for a small product bundled as one exe, we could use only the functional testing layer and ignore all the other layers. So the number of layers changes based on the requirements; the intent remains the same.

“Starting from the bottom layer, find what could not be tested in that layer, then introduce a layer above it and have tests around the missing cases.”

Best Practices

Let us look at a few best practices that can be used in each layer to keep things clean.

Unit Testing Layer: The super fast and cheap

  • Be consistent with naming the test cases, so that it is clear what failed when a test fails. I recommend <MethodName>_<Scenario>_<ExpectedResult>.
  • Use the Arrange-Act-Assert syntax in the body (a sketch follows this list).
  • Try to get to 100% coverage (or at least close to it) with unit testing. In fact, use TDD to cover everything.
  • No test cases on private functions. A private function called within public functions should be tested by calling those public functions only.
  • Have a way to check the coverage while developing (a simple command line generating coverage results and showing the uncovered lines).
  • The body of a test case should be small. If test cases are getting bigger, refactor the class under test.
  • The test class should change only if there is a change in the class under test. So, mock everything outside the class.
  • Create wrappers and interfaces for OS calls like DateTime, FileSystem etc. so that edge cases can be written around them.
  • Create wrappers and interfaces to convert asynchronous operations to synchronous ones, to avoid waiting (Thread.Sleep) before assertions.
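To make the naming convention, the Arrange-Act-Assert body and the DateTime wrapper concrete, here is a minimal sketch (assuming xUnit and Moq; IClock, InvoiceService and the discount rule are hypothetical names, not from any real project):

using System;
using Moq;
using Xunit;

// Hypothetical wrapper so the test can control "now" (the DateTime wrapper point above).
public interface IClock
{
    DateTime UtcNow { get; }
}

// Hypothetical class under test: applies a weekend discount to an amount.
public class InvoiceService
{
    private readonly IClock _clock;
    public InvoiceService(IClock clock) { _clock = clock; }

    public decimal CalculateTotal(decimal amount)
    {
        var day = _clock.UtcNow.DayOfWeek;
        bool isWeekend = day == DayOfWeek.Saturday || day == DayOfWeek.Sunday;
        return isWeekend ? amount * 0.9m : amount;
    }
}

public class InvoiceServiceTests
{
    [Fact]
    public void CalculateTotal_OnWeekend_AppliesTenPercentDiscount()
    {
        // Arrange: mock everything outside the class and pin the date to a Saturday.
        var clock = new Mock<IClock>();
        clock.Setup(c => c.UtcNow).Returns(new DateTime(2021, 1, 2)); // a Saturday
        var sut = new InvoiceService(clock.Object);

        // Act: call the public function only.
        var total = sut.CalculateTotal(100m);

        // Assert: verify the return value in detail.
        Assert.Equal(90m, total);
    }
}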

Functional Testing Layer: The Requirements Verifier

  • When stories are groomed with the team, the POs define the functional requirements and the architects define the technical requirements; both should be converted into Given-When-Then syntax, which becomes the test cases in this layer.
  • POs should be able to look at this layer, understand what is being done, and help the team refine the test cases. Hence, use ubiquitous language.
  • The test class should change only if there is a change in the component under test. So, mock everything outside the component.
  • Test negative flows only at the boundaries, with mocks (e.g. the infra is down and the web API gets a request), since negative flows within the component are taken care of in unit testing (a sketch follows this list).
  • If testing web APIs, also assert the response structure to catch breaking changes that need an API version update.
  • Reuse the wrappers created in the unit testing layer.
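Here is a rough sketch of a Given-When-Then test in this layer with the infrastructure mocked out (again assuming xUnit and Moq; OrderService, IPaymentGateway and the result strings are hypothetical):

using System;
using Moq;
using Xunit;

// Hypothetical infrastructure boundary of the component.
public interface IPaymentGateway
{
    bool Charge(string customerId, decimal amount);
}

// Hypothetical component under test.
public class OrderService
{
    private readonly IPaymentGateway _payments;
    public OrderService(IPaymentGateway payments) { _payments = payments; }

    public string PlaceOrder(string customerId, decimal amount)
    {
        try
        {
            return _payments.Charge(customerId, amount) ? "Confirmed" : "PaymentDeclined";
        }
        catch (Exception)
        {
            // Negative flow at the boundary: the infrastructure is down.
            return "PaymentServiceUnavailable";
        }
    }
}

public class OrderServiceFunctionalTests
{
    [Fact]
    public void GivenPaymentInfraIsDown_WhenOrderIsPlaced_ThenUserGetsServiceUnavailable()
    {
        // Given: the payment infrastructure (mocked) is down.
        var gateway = new Mock<IPaymentGateway>();
        gateway.Setup(g => g.Charge(It.IsAny<string>(), It.IsAny<decimal>()))
               .Throws(new TimeoutException());
        var component = new OrderService(gateway.Object);

        // When: the business scenario is triggered.
        var result = component.PlaceOrder("customer-1", 49.99m);

        // Then: the component degrades gracefully instead of crashing.
        Assert.Equal("PaymentServiceUnavailable", result);
    }
}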

Integration Testing Layer: The Infrastructure Verifier

  • Primary test cases should be around “When real infrastructure like containers, DB, message bus, auth etc. is provided, is the component using the infrastructure and performing its job as expected?”
  • Example 1: Passing a correct or a wrong auth token should give 200 or 401 respectively in the case of a web API, which confirms that the auth infrastructure is used correctly.
  • Example 2: For a web API, call POST and assert on the GET results to see whether the data is stored in the database correctly (a sketch follows this list).
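A minimal sketch of Example 2 against an API deployed to the dev environment could look like this (assuming xUnit and HttpClient; the base URL, route and payload are made up for illustration):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class CustomerApiIntegrationTests
{
    // Hypothetical URL of the API deployed to the dev environment.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new System.Uri("https://dev.example.com/")
    };

    [Fact]
    public async Task PostCustomer_ThenGet_ReturnsTheStoredCustomer()
    {
        // Given: a new customer payload.
        var body = new StringContent("{\"id\":\"c-123\",\"name\":\"Ada\"}", Encoding.UTF8, "application/json");

        // When: we POST it and read it back with GET, exercising the real database.
        var post = await Client.PostAsync("api/customers", body);
        var get = await Client.GetAsync("api/customers/c-123");
        var json = await get.Content.ReadAsStringAsync();

        // Then: the infrastructure stored and returned the data as expected.
        Assert.Equal(HttpStatusCode.Created, post.StatusCode);
        Assert.Equal(HttpStatusCode.OK, get.StatusCode);
        Assert.Contains("Ada", json);
    }
}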

System E2E Layer: The End User

  • Primary test cases should be around the end user's scenarios.
  • If the end user will be using a UI, then have UI tests.
  • If the end user will be using an API, then have API tests.
  • In the case of enterprise software, the tests should onboard customers, run the end user scenarios and delete the onboarded customers, which also suggests having onboarding/deboarding processes available through APIs (a rough outline follows this list).
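A rough outline of such an end-to-end test could look like this (assuming xUnit and HttpClient; the URLs, routes and onboarding payload are made up for illustration):

using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class EndToEndTests
{
    // Hypothetical URL of the fully deployed bounded context.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new System.Uri("https://staging.example.com/")
    };

    [Fact]
    public async Task Customer_CanPlaceAnOrder_EndToEnd()
    {
        // Onboard a test customer through the public API.
        var onboard = await Client.PostAsync("api/customers",
            new StringContent("{\"id\":\"e2e-test\",\"name\":\"E2E Test Customer\"}"));
        onboard.EnsureSuccessStatusCode();
        try
        {
            // Run the end user scenario against the running system.
            var order = await Client.PostAsync("api/customers/e2e-test/orders",
                new StringContent("{\"product\":\"book\"}"));
            order.EnsureSuccessStatusCode();
        }
        finally
        {
            // Deboard: clean up whatever the test onboarded.
            await Client.DeleteAsync("api/customers/e2e-test");
        }
    }
}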

Cheat Sheet

Unit Testing (Dev machine & Build machine)
  • Branch: Feature branch
  • Intent of testing: Does the public function work for ALL positive and negative scenarios?
  • Testing scope: Public methods/functions
  • What to mock: Everything external to the class
  • Testing syntax: Arrange – mock the collaborators so the right calls can be asserted; initiate the class and inject the mocks. Act – on public functions. Assert – on every detail: return values, calls going outside the class, and for DateTime tests across days, weeks, months and years.

Functional Testing (Dev machine & Build machine)
  • Branch: Feature branch
  • Intent of testing: Are all the scenarios/business cases met? How does this component behave when any external dependency is down?
  • Testing scope: Component (service, Web API, DLL, JAR etc.)
  • What to mock: Everything external to the component (infrastructure like DB and message bus, other services etc.)
  • Testing syntax: Given – mock infrastructure/other service calls; initiate the component and inject/override with mocks. When – trigger the scenario (call the Web API/invoke the event in case of EDA). Then – assert that the relevant calls are made to repositories for read/write, or to the message client for publishing a new event (EDA).

Integration Testing (Post deploying to the dev environment)
  • Branch: Feature branch
  • Intent of testing: Is the component working when real instances of the infrastructure are provided?
  • Testing scope: Component (service, Web API, DLL, JAR etc.) plus infrastructure – database, message bus, auth
  • What to mock: Nothing
  • Testing syntax: Given – set the prerequisites before the call. When – trigger the component scenarios. Then – assert sync/async responses.

System Testing (After any new deployment in a bounded context)
  • Branch: Master/Trunk
  • Intent of testing: Is the end-to-end business case working when all services are up and running?
  • Testing scope: Bounded context (multiple services, containers and dependent infrastructure)
  • What to mock: Nothing
  • Testing syntax: Given – set the prerequisites before the call; onboard customers if that is part of the business process. When – trigger the end user scenarios. Then – assert the end user expectations; deboard if onboarded.

Happy Learning

Codingsense

Is Duplication really Evil?

From our early days of coding we have always learnt that duplication is BAD, and it is so true. It decreases the quality of the code. Teams have to remember all the places where similar functionality exists and enhance/fix the code at all of them. Since remembering things at multiple places is a manual process, newbies who do not know the code well often fix it in only a few of the places, breaking the system in some scenarios.

But in recent years, with DDD getting popular, we are hearing more often that duplication is not as evil as we thought, and that duplication is necessary in certain scenarios to have clean microservices and build autonomous teams.

In my last article we extracted domain services from requirements and arrived at a completely different set of services, as shown below.

Now the service dealing with payments will have a customer entity holding the customer ID and the customer's payment options, while the shipping service will have another customer entity holding the customer ID and the customer's shipping details. In DDD, such a boundary, within which duplicate entities can exist, is termed a “Bounded Context”.

One of the key practices to follow in DDD is to agree upon a “Ubiquitous Language”, where domain experts, developers and users mean exactly the same thing when talking about a model. In our example, the domain experts, developers and users talking about a customer within the shipping context would primarily mean the delivery or pickup address, the delivery type, etc.
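As a small sketch (the class shapes are illustrative, not a prescribed model), the same customer can look completely different inside each bounded context:

// Inside the Payments bounded context: the customer is about how they pay.
namespace Payments
{
    public class Customer
    {
        public string CustomerId { get; set; }
        public string PreferredPaymentMethod { get; set; } // e.g. card, wallet
        public string CardToken { get; set; }
    }
}

// Inside the Shipping bounded context: the customer is about where and how we deliver.
namespace Shipping
{
    public class Customer
    {
        public string CustomerId { get; set; }
        public string DeliveryAddress { get; set; }
        public string DeliveryType { get; set; } // e.g. standard, express
    }
}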

In DDD the CONTEXT IS THE KING.


Extracting services from requirements

When we start working on new product requirements (every developer's dream), we usually have several discussions with the stakeholders to understand the requirements better, then document them and start designing the system. In my experience, a few years after the start of the product we begin feeling that some things could have been designed a bit differently. This happens with almost 100% of the products out there, but the magnitude of the emotion matters: are we talking about refactoring or revamping? Refactoring is part of every product life cycle and should be done frequently to keep the product healthy. But revamping the system and starting to build anew has a huge impact on the entire business.

So are there any best practices that we can follow from the start that help us avoid revamping the architecture down the line? The good news is there are quite a few.

Let's take an example to get more clarity. Imagine an online retail domain (not really a domain I have worked in so far, but let me give it a try) where customers register themselves, select products, make payments and get the products delivered to their homes. The business process of the company will have key components like CRM, inventory, payments, shipping, etc.

We can design this in different ways

Approach 1 – Entity services based approach

In this approach we start by thinking of the entities that we have in our domain. In this example we can take Customer, Product and Inventory as our major entities and design a DB and services around these entities, which will look something like below. When the time comes to write a business process for this domain, every service requires some aspects of these entities to perform an action, and we eventually end up with all services knowing everything about all the entities.

Is this bad? I guess so. A few concerns with this style of design are:

  • Tight coupling between services creates dependencies between teams for feature development and releases
  • All services know more than they really need to, i.e. do we really need the customer address during payment, or the payment details during shipping?
  • If any model changes, the impact on the overall system is very large
  • Multiple routes serving different services complicate API maintainability
  • Services cannot scale based on business needs; they would scale based on entity

Approach 2 – Domain services based approach

In this approach we try to understand the domain and the business process and then define our services and entities. The best way to get to know the domain is to start with anti-requirements with the domain experts.

In this example we may ask the following questions to learn what the domain experts feel about these properties:

  • How are customer payment details related to the customer address?
  • How is price related to the quantity of the product?

If the domain experts cannot find any relationship between them, then keep them separate. We can see that this model looks quite different from what we got with the entity-based approach.

Domain services give us a lot of benefits compared to entity services.

  • They are the right fit for a microservices architecture, since they create autonomous teams with services that are focused and have a single responsibility.
  • The APIs around these services have clean interfaces and can be understood more clearly.
  • Since every service owns the data it requires, these services can be scaled easily and are usually more performant.
  • They reduce release dependencies and new-feature dependencies between different teams.

Happy Coding

Codingsense

Before writing production code

I love coding, but I have learnt the hard way that there are many things to do before we jump into code, and they help us use our energy and time wisely.

Old Me

Earlier, when I got some requirement, I used to jump into the IDE and start coding for days/weeks/months on the new or changing requirements and complete the releases in the best possible way.

A few things that we used as checklists are:

  • Understanding requirements
  • Good design patterns and practices
  • Good coding practices
  • Good CI/CD practices
  • Automating Tests as per testing pyramid
  • Innovations
  • and the list keeps growing year on year with the introduction of Agile, microservices, DevOps, DevSecOps

Are these practices good? Absolutely yes. But after a few years and releases, as the product grows, maintaining the above disciplines across all the services becomes hard.

For code quality we brought in some automation to keep the coding standard consistent across all the services.

Check out the related articles:

But what about designs? When we retrospect on the designs based on our evolving understanding of the product, a few important services were poorly designed and a few not-so-important services were designed very well.

Over time we refactored the important services with good designs to accommodate new and changing requirements. But obviously we could not recover the time spent on the not-so-important areas. I was very curious whether there was some way to identify, at the beginning of the product itself, which parts of the architecture are important and which are not; that would really help us focus more on the important areas.

The Trigger

While reading an article a few years back, I came across Domain-Driven Design, aka the Blue Book, by Eric Evans, which changed my perspective on how to look at software requirements. The book has things to teach at multiple levels.

A few of the key concepts that stuck with me were the ubiquitous language, strategic design, the core domain, and supporting & generic subdomains. It also introduces the concepts of aggregates/entities/value objects, which are very useful while designing complicated domains (I will cover them in later posts).

Strategic design, being the most important of these, helps not only the architects but also management to know which pieces of the enterprise stack are critical and to take good decisions around those areas. Seriously…? Let me tell you how.

Every enterprise software product, which has multiple services, can be divided into the 3 categories below.

Generic subdomains

The easiest to identify is the generic subdomain. These are the areas that most software will have and that are not domain specific, for example reporting/dashboarding/rule engines, payment gateways, etc. It is advisable to use off-the-shelf software/libraries (buy or open source) and not spend time and energy on these.

In Generic Subdomains,

  • Spend money but not energy and time
  • People who are good at finding things (using Google :)) and analyzing the pros and cons of existing tools are the best fit
  • Create good interfaces so that the vendors can be replaced easily (a small sketch follows this list)
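A tiny sketch of the last point (IReportingService and the vendor name are hypothetical): hide the bought or off-the-shelf tool behind an interface we own, so the vendor can be swapped without touching the core.

using System;

// Interface owned by us; the rest of the system only talks to this.
public interface IReportingService
{
    void GenerateSalesReport(DateTime from, DateTime to);
}

// Thin adapter around a hypothetical off-the-shelf reporting product.
public class VendorXReportingService : IReportingService
{
    public void GenerateSalesReport(DateTime from, DateTime to)
    {
        // Call into the vendor's SDK here (left out); replacing the vendor
        // means writing a new adapter, not changing the rest of the system.
    }
}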

Core Domains

This is the area which every company wants to keep secret, since it gives the company its competitive advantage. For Google it may be how to search efficiently and with great speed, for Netflix it may be how to stream videos faster on multiple devices, for Uber it may be how to find and finish a ride faster.

In Core domains,

  • The companies have to invest the maximum energy and time
  • The best and most experienced people should be working in these areas
  • A high quality of design and code should be maintained (mostly all the DDD concepts)
  • More time should be spent on innovation

Supporting Subdomains

This is an area which is initially hard to find, but over time it shows up. It is the area between the core domain and the generic subdomains.

In Supporting Subdomains,

  • The companies have to invest decent energy and time
  • Less experienced resources or outsourced resources can be working in this area
  • Decent quality should be maintained

New Me

With this, I was able to see that the earlier products had all 3 of them without proper demarcation, and we used to give equal time and energy to all the services. I started using this approach on the next products, and since then time and energy have been spent more wisely.

As the saying goes, “Things grow where the energy flows”, so let's keep more of our energy for the core domains than for the other parts of the software.

Happy Learning

Sensible Code Part III – What’s in your toolkit??


Having seen in the previous articles how software degrades over time and what software quality is, let's now see whether there are any best practices that can be followed to keep the code in better shape and give early warnings before the code rots.

I have come across a couple of tools that help us get early warnings when we stray from the quality path. Start filling your toolkit with the tools below.

Coding guidelines:
All developers think differently. Some follow coding standards, some don't. How do we make sure everyone writes code in a similar style, following all the coding standards that have been set, so that the overall code in different modules, written by different developers, looks the same and others can understand it quickly? Here comes the first tool in your toolkit: ReSharper.

ReSharper is a productivity tool for .NET. It is a plugin which integrates with Visual Studio and is very handy during coding. Apart from its sensible default settings, it has options to customize the settings to cater to our company standards, which helps keep the code up to the company standard. As soon as we break any rule in the rule set, the offending code is underlined with a warning and we can ask ReSharper to fix it for us. All developers can use the same rules to make the code look similar across the different modules.

This has lots of other benefits too that are listed here and here.

Resharper cheat sheet is available here.

While working on a file, ReSharper shows a summary of all the coding-guideline warnings and errors that are violated, with an icon at the top right corner of the editor. The team's responsibility is then to fix all the warnings and errors flagged by ReSharper in the file before check-in. If the top right corner shows a green check icon, you are at the required standard.

Quality Parameters:
As we saw in the previous post about the parameters that define software quality, the questions come popping up:

  • “How the heck can I know where the quality is poor in my project, with hundreds of DLLs and millions of lines of code??”
  • “I am interested, but where do I start??”
  • “I don't want to waste more time; is there a tool that will give me the results quickly??”
  • “How often should I run it?? Can I automate it??”

There are a couple of tools that can help monitor quality parameters; some of them are FxCop, Sonar and NDepend. I have used NDepend extensively for .NET since it is very easy to use and fast, has excellent documentation on its site and a great support team, and has a sister product called JArchitect for Java. So once you have rules configured for .NET products, the same rules can be used for Java products too, which is a wow factor.

When run on a project, this tool lists all the quality parameter violations, like cyclic dependencies, unused code, abstractness, cyclomatic complexity, variables and fields in methods and classes, even code that breaks the company coding standards, performance issues and much more. It also gives you the flexibility to write custom rules in a format similar to SQL, called CQL (Code Query Language) in older versions and CQLinq in newer versions.

For example, a CQLinq query to list all methods that are larger than 30 lines of code:

warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
select m
NDepend also has a feature to compare 2 versions of the code. For example, if you ran the tool in the first week and ran it again in the second week, you can compare the 2 builds and check how the quality changed, which is a good metric to know how we are improving in quality.

This can also be integrated with continuous integration tools, to run automatically during daily builds and monitor the quality every single day. For critical rules we can even break the build and email the team about recently checked-in code that is not up to quality.

Duplicate code:
The days were so good when I used to love copying and pasting code. I was so productive; delivering similar features was a day's or a week's job for me, and my managers used to praise me a lot, LOL.

But now, when I see duplicate code, my blood boils. What the heck happened to me? Is duplicate code good or bad?

Duplicate code is like a cancer: as it grows it starts showing how bad its presence is and how hard it is to cure, and you need to invest huge amounts of money and time to cure it.

But why is duplicate code bad??

  • A fix in one piece of duplicated code forces us to change all the places where we copy-pasted it. If we miss one, god help the customers.
  • The person who copy-pasted is the only one who knows where all the other jewels (clones) are. What if a new guy comes to fix an issue in that section?
  • It breaks core design principles such as DRY, SRP and OCP.
  • It increases the LOC: more code to maintain and bigger assemblies.

Hmm, interesting. Now let's see whether there is any tool to help us get rid of it. I used a couple of tools and found Simian (command line) and Atomiq (GUI) very good at identifying the locations of clones.

Simian is really fast and has a couple of options to include and exclude files and folders, set a minimum duplicate line count based on our own standard, and give the output in different formats like XML, CSV etc. I prefer XML, with which I can write my own tool to read the XML and use the data to show duplicates in a variety of styles for better analysis.

Continuous integration:
As the wiki says: Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies into a shared mainline several times a day.

Why do we have to do it so frequently?
In one of my old projects we used to generate a build only at the end of the iteration or cycle, which was once a month, and that day would be like a festival for us: no work, the whole team in one cubicle surrounding the person building the product (he would be totally freaked out), going home really, really late, all the managers staring at us as if we had committed some crime, and postponing the delivery was a habit.

But what was the problem?

  • Taking the latest copy from the repository with all the changes from all the developers after a month and building it: guess what, hundreds of errors, and fights between the developers about why interfaces were changed without anyone being informed.
  • Many files were referenced locally but not checked into the repository.
  • After fixing the build issues and generating the build, we would find missing functionality at the boundaries, with everyone assuming someone else would handle it.
  • Rework, and nights at the office.

So generating builds frequently and doing integration tests regularly ensures that on the final night we are at home, sleeping peacefully in bed.

There are a couple of tools to help us resolve the issue. CruiseControl.NET and FinalBuilder are at the top of the list. These tools can be scheduled to take the latest copy of your code from the repository automatically, build the entire project/product, and send notifications via email on failure or success of the builds.

Finally, what do we get from all of this?
Using all these tools together we can improve the quality of the overall software and keep monitoring it… Ask me how?
Use the continuous integration tools to build the assemblies, run tools like NDepend/Simian to check the quality parameters, and report any breakages to the team through email.

By having such systems in place we get early notifications and can take corrective actions to resolve issues as and when they are introduced. Imagine our lead coming and telling us, “Yesterday you checked in one class, and in it a method named I_AM_BIG_METHOD is too big; please make it small or move part of it to some other class to make it SRP compliant”, on the very next day after we checked in, when it is fresh in our mind, rather than making us sit and fix it on the final day of the release or after a few months when we have totally forgotten the code we ourselves have written.

As it is said, “An apple a day keeps the doctor away”; for programmers it is “Using these tools every day keeps bad code away.”

Clean and happy coding,
Codingsense 🙂

Sensible code Part II – What is software quality??


Before getting to software quality, let's spend some time to see how software degrades.

Software degradation doesn't happen in a day or a week; it takes a couple of months or even years for us to realize that the software quality has degraded. And it takes even more time after that to bring it back on track. It starts from the day we begin rushing code in, without giving ourselves enough time to think before implementing features or fixing defects; we always tend to think the code that we write is good.

And if we do not concentrate on quality at every step we take in development, we end up as I mentioned in my last post here.

Imagine how it would be if we could spot the degradation at the moment we insert bad code. It really needs good observation and practices to achieve this, but in the real world, since we are so busy writing new code and delivering quickly, we rarely bother to look back at what we have done wrong. And since we are always proud of our code, how can we check whether we are at our best?

There are a few good practices that will really help us avoid straying from the product's quality path. Using these practices at every step will ensure that we get early warnings and keep our product lively.

Let's see which quality parameters really matter for product health.

  1. Cyclic dependencies: This is the most important of all; this parameter targets types and namespaces. If there are any cyclic dependencies, it shows that most of the principles are broken, and once we introduce cyclic dependencies it is really, really hard to maintain that piece of code. To describe a cyclic dependency let's take an example: if a type/namespace A is using a type/namespace B and B in turn is using A, then we call such a dependency a cyclic dependency. In this case A and B are tightly coupled and any change in A or B will impact both of them. Writing UTs for such dependent types is also hard, since they are difficult to mock or stub.
  2. Afferent coupling (Ca): This parameter helps us monitor the single responsibility principle (SRP). It is applicable to methods, types (classes), namespaces (packages) and even assemblies. For example, if 3 other types depend on a type t1, then the afferent coupling of t1 is 3. If many types refer to a type, it shows that the type is taking on more responsibility and should be broken down into multiple types.
  3. Cyclomatic complexity (CC): This parameter is the count of the number of execution paths. For example, if there is no condition in your method, then the CC of that method is 1. If there is one if condition, it is 2. So any condition in your method increases the CC by 1. All of the following keywords increase CC by 1: if, while, for, foreach, case, default, continue, goto, ||, &&, catch, ?:, ?? etc.
  4. Interfaces: Following the dependency inversion principle (DIP) and the interface segregation principle (ISP) gives us a lot of flexibility for future modification.
    DIP says no class should be accessed directly; it should be used through an interface. For example, imagine a type A wants to use B; instead of referring to B directly in A, create an IB interface and use IB in A. Now A depends on the signature, not the behaviour; if a new type B1 comes along we don't have to change anything in A, we simply pass an instance of B1 instead of B.
    ISP says don't clutter your interface with all the methods; segregate it using SRP. For example, if we are writing a Person class, don't create an IPerson interface with the methods walk, run, stop, sit, stand, eat, sleep, meditate, exercise etc. Instead, separate IPerson into behavioural interfaces; for example, IMove can have the walk, run and stop methods instead of IPerson having them directly, and then IPerson, or better the Person class, can inherit from IMove.
  5. The new keyword: To create any object we have to use the new keyword, but why is it considered bad to use it? The new keyword is good if it is in the right places; it should not be scattered throughout the code, and the less it is used the better. We can decrease the usage of the new keyword using a factory method, dependency injection or MEF. But how does that help us? Imagine a logger class; right now the requirement says the logger should log only to a text file, so we implement it and use the new keyword across our code wherever we need logging. Later, if the requirement says we should also provide an option to log to the event log, we start writing if-else inside our logger class. Instead, it is a new responsibility and should be taken care of by a new class. So if we had the creation of the logger in a factory, we could just write a new implementation and return it (a small sketch follows this list).
  6. Comments and names: Comments, as suggested by Uncle Bob in his book “Clean Code”, are ugly. He suggests that if the variables, methods, types, namespaces and assemblies are named in a descriptive way, then there is no need for a comment. Reading a line should feel like reading an English sentence. Always make a class a noun and a method a verb; e.g. a class Cycle can have methods like Ride(), Start(), Stop().
  7. Number of lines: This parameter depends on the programming language being used; for .NET and Java, 15 is an ideal count for a method, and 200 for a type. There are cases where you cannot write a logical flow within 15 lines in a method, but ideally it is good to keep methods and types as short as possible.
There are many more quality parameters that need to be considered, but let's focus on the parameters above since they are the really important ones to take care of. In the next post let's see how to get early warnings on these parameters as soon as we break the law.

Let me know if there are any important parameters that I have missed mentioning.

Happy Learning 🙂

Sensible coding part I – Howz your product???

After a few years of experiencing development in various projects and technologies, I started reflecting on my past, on what went right and what went wrong in terms of development, as a self-appraisal.

While analyzing, I found a couple of common things that happen in the lifetime of a product and to the people working on it.

Kick off a project – Happy birthday to our new project (Year 0):
The product is getting born, very much in a versatile state. Let's name it Mr Perfect. Management blesses Mr Perfect: it will be built with world-class code, it will sustain any change in the requirements, and it will live long in good health and bring a very good name to the company.
The most competent people are selected for the team. Everyone in the team is excited about the new deliverables that are planned; we can see the excitement and energy in the team. All are busy learning a new technology or a new version of a technology, and people come up with different approaches to any given problem. Working with such a team seems wonderful.

First release of Mr Perfect (Year 1):
The product has survived the first release, and the changes and bug fixes have not affected it. People work very positively on new requirements and on changes that come from the field users.

Second release of Mr Perfect (Year 2):
Lots of changes and bug fixes were done, and more and more features are getting pushed into the product. Deadlines are short, managers ask for quicker deliveries, the customer asks for quicker fixes. People start doing hard work and ignore smart work to achieve the goals.

Third release of Mr Perfect (Year 3):
The product has grown big, with more than half a million lines of code. People have formed a mindset about which features are good and which are bad (a few techies call the latter legacy code 🙂 ). The features labelled legacy create panic in the people associated with them. If a bug is raised in one of them, they start to panic and sometimes get frustrated.

Fourth release of Mr Perfect (Year 4):
Our marketing and requirements guys go to the customers to check how they feel about the product and what else they need. They come back with a big list of new features, some features to scrap, and a list of customer complaints about improper support.
The development team is unable to digest the changes; they start looking for workarounds, hiding some features, implementing new features.

BOOOOMMMMMMMM!!!!

What are the probable outcomes?? Any guesses??
Check which of the items listed below would be true.

  • A bug fix in one module starts creating an impact on some other module.
  • The code is hard to understand, very fragile and not versatile.
  • Duplicate code is introduced everywhere; any fix in one place has to be repeated everywhere.
  • Performance of the product is very low.
  • Loads of memory leaks result in slow performance and crash the tool often.
  • Removing a feature is not easy, since some of its classes are used by many others.
  • The development team asks for much higher estimations to achieve even a small change.
  • Bugs reported in critical features (legacy code) are ignored and delayed as much as possible.
  • The quality team raises non-compliance on the features with more bugs.
  • Managers are worried about their competent team.
  • The development team suggests refactoring the features or building a new product.

Blah Blah Blah..

Guess what would happen to such a Mr Perfect product, or to the customers who rely on it?

Has anyone seen these problems in their product?? What went wrong suddenly, when just last year everything was fine?? Is there any solution for this??

Let me know your comments if you have seen such issues, are living with them, or have overcome them.
Hmm, I bet there has to be some solution for this 🙂

Next >> Sensible Code part-ii What is software quality?

Happy Learning 🙂

Steps to improve maintainability of your code

Hi,

I have seen people always saying that maintaining a product is harder. Everyone wants to go to the product or module that is being built from scratch and pass the older code to their juniors. Why?? Is it that we want to learn new things, or that we don't want to waste time in the product where we have put in lots of effort to ruin it, or are we afraid??

Is there an easy way to convert bad code to good code? It is not like waving a magic wand and all the code is straight. First of all we should accept that it is our mistake; unless we accept our mistake we keep giving hundreds of reasons to run away from it. We should have an interest in making it good, plan properly and execute with determination. Here is my story, wherein I have gone through the phases from good to bad and bad to good.

Two years back, in Feb 2008, I started on a product, playing the role of tech lead. There were lots of eyes and concerns on that product: it was to be a game changer for our company. There was a long list of features given by our functional team, a plan to combine the functionality of 2 existing products into one, easy navigation, and the old saying was repeated: “URGENT REQUIREMENT”. We were given 1 year for the first product release. So it began: planning of modules, prioritizing the features, revisiting the coding standards, assigning the right features to the right persons, etc. The full team was motivated to work on the upcoming successful product.

Days passed… somewhere after 6 months an internal release was planned for the functional team and the stakeholders. Great, all were happy; my team and I got many awards and good recognition in the company, and we got some suggestions (also known as changes) for the software. All the changes were incorporated successfully alongside the development of new features.

All was going great until, after some 5 months (1 month left to delivery), some expert gave (very late) a very good suggestion to the management for the future. When we received the document of changes, they were huge; major changes had to be made in many of the key classes. All the teammates were rather disappointed that the features they had built with a lot of hard work and interest would be corrupted. They started blaming the management for such changes. But all cooperated, since we were the experts and it would have been a shame to go against the expected changes.

Now the actual impact started: changes were planned, one change would affect another flow, the dependency between the classes was high, the code was poorly commented, and the testers started stress testing the bug tracking tool by putting more and more bugs into it. The whole team was under stress, and daily pig meetings were started to track the status. Day by day the deadlines came nearer and more features and bugs were poured onto us.

Finally we ended up not delivering the full product by the expected deadline. The days of awards and recognition were gone. For the next week, I and some senior members were very busy answering what went wrong and why we didn't reach the target. Since there was intense pressure from the marketing team, the product was shipped to the customers who were waiting for it. There were still more changes pitching in, and we were very tense about how to incorporate them without creating regression bugs in our product. Many bugs were reported by the customers, and fixing them was a huge task for us.

One evening, not feeling like going home, I stayed till night. I thought that my team and I were responsible for the bad code and that we were the ones who were going to fix it and show our potential. I started recollecting all the mistakes made by me and my team and documenting them; I also found some tips across the web and added those. After analyzing the entire problem I found the angle of deviation from our actual goal and plans and listed all the faults we had made. I made up my mind to speak to the management early the next morning about it and to get some time to refactor and restructure the code. The following steps were planned to be executed with intense care.

  • Put proper comments
  • Fix coding standards and project conventions
  • Remove unnecessary code
  • Identifying and breaking up big classes
  • Breaking up big methods
  • Refactor existing code
  • Remove memory leaks
  • Restructuring
  • Write automated unit tests

After following each step properly, collecting the daily status from the team, sharing the problems and discussing new solutions, it finally took around 7 months to complete the product.
Even though we were late by 8 months, we had reached the goal properly and were ready to accept any further changes. Since last year the product has been on the market, there has been a very good response from our customers, and the days of awards have begun again.
In the next posts I am planning to go in depth into the execution of each of the steps we followed.

For any clarification please revert back.

Happy Coding
Naveen Prabhu 🙂

Try Avoiding Exception

Hi,

Last week I was optimizing a module, and as I was working I found that if exceptions are avoided we can save a lot of CPU cycles. So I made the sample below to see how much time we can save by avoiding an exception.

In the sample below, one method just iterates and another method throws an exception.




using System;
using System.Diagnostics;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Stopwatch sw = new Stopwatch();

            sw.Start();
            LoopProper();
            sw.Stop();
            Console.WriteLine("Looping time : " + sw.ElapsedMilliseconds);

            sw.Reset();
            sw.Start();
            ThrowException();
            sw.Stop();
            Console.WriteLine("Exception handling time : " + sw.ElapsedMilliseconds);

            Console.Read();
        }

        // Just loops 350,000 times without doing any work.
        static void LoopProper()
        {
            for (long index = 0; index < 350000; index++)
            {
            }
        }

        // Forces a divide-by-zero and catches the resulting exception.
        static void ThrowException()
        {
            long temp = 0;
            long zero = 0;

            try
            {
                temp = temp / zero;
            }
            catch (ArithmeticException)
            {
            }
        }
    }
}

After running the above sample I found that the time taken by just one exception is roughly the time taken by 350,000 iterations. So if we can avoid exceptions, we can use the CPU for other work in that time.

I don't mean that exception handling should not be used, but try to avoid relying on it as much as possible. In the above example, a simple if check can avoid executing the line that throws the exception in some scenarios.
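For example, a guard like the one below (a sketch in the same spirit as the sample above) skips the division instead of letting it throw:

// Guard with a simple if instead of relying on try/catch.
static long DivideSafely(long numerator, long denominator)
{
    if (denominator == 0)
    {
        return 0; // or any sensible default for the scenario
    }

    return numerator / denominator;
}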

And about my program: the module had an algorithm that would take 80-85 seconds to complete, and after optimization it takes 2 seconds. Changes in the logic of the algorithm and in the exception handling increased the performance drastically.

Happy learning,
Codingsense 🙂

Clone fields and properties in an abstract class

Hi Friends,

In this post we shall see how we can clone the properties of an abstract class.

If there is a normal class and we need to clone it, then we use the simple method of cloning:
1) Create a new object
2) Copy all the properties of this object to the newly created object
3) Return the new object

But consider a case where we have an abstract class in which a lot of properties are defined and which has a lot of derived classes. If we want all the derived classes to implement clone and to clone all the properties of the base class, we would seemingly need to write a clone method in each derived class, because the base class is abstract and cannot simply be instantiated with new in a shared clone method.

What do we do in such a case? Here is the solution: a simple, generic method that can be used in any class that needs to implement clone. Here it goes.

Clone Properties:

using System;
using System.Reflection;

abstract class CloneProperties : ICloneable
{
    public int fldi = 0;

    int propi = 0;
    public int Propi
    {
        get { return propi; }
        set { propi = value; }
    }

    int propj = 0;
    public int Propj
    {
        get { return propj; }
        set { propj = value; }
    }

    #region ICloneable Members

    public object Clone()
    {
        // Create a new instance of this specific (run-time) type.
        object newInstance = Activator.CreateInstance(this.GetType());

        // Get the array of properties of the new instance's type.
        PropertyInfo[] properties = newInstance.GetType().GetProperties();

        int i = 0;

        foreach (PropertyInfo pi in this.GetType().GetProperties())
        {
            // Copy each property value from this object to the new instance.
            properties[i].SetValue(newInstance, pi.GetValue(this, null), null);
            i++;
        }

        return newInstance;
    }

    #endregion
}

To check the output, we create a dummy class and inherit it from the CloneProperties class:

class Prop : CloneProperties    
{
}

And in the Main function we call Clone and check the result:

class Program
{
    static void Main(string[] args)
    {
        CloneProperties cloneProp1 = new Prop();
        cloneProp1.fldi = 1;
        cloneProp1.Propi = 2;
        cloneProp1.Propj = 3;
        CloneProperties cloneProp2 = (CloneProperties)cloneProp1.Clone();

        Console.WriteLine("Cloning Properties");
        Console.WriteLine("fldi = {0}, Propi = {1}, Propj = {2}", cloneProp2.fldi, cloneProp2.Propi, cloneProp2.Propj);

        Console.Read();
    }
}

Output:

Output after cloning properties

Well, it has worked fine. We can see that only the properties were successfully cloned, so they got the values 2 and 3, while the field was not cloned and still has the value 0.

Clone Fields:

If you need to clone only the fields, then you can replace the above Clone function with the one below:

public object Clone()
{
    // Create a new instance of this specific (run-time) type.
    object newInstance = Activator.CreateInstance(this.GetType());

    // Get the array of public fields of the new instance's type.
    FieldInfo[] fields = newInstance.GetType().GetFields();

    int i = 0;

    foreach (FieldInfo fi in this.GetType().GetFields())
    {
        // Copy each public field value from this object to the new instance.
        fields[i].SetValue(newInstance, fi.GetValue(this));
        i++;
    }

    return newInstance;
}

Output:
Output after cloning fields

Here you can see that only the fields were cloned and the properties are set to their default values.

Happy Learning 🙂