C# - Convert string to alphanumeric only

Task

Today I had a task to strip out characters from a string that are not letters or digits. I implemented it as an extension method so that it is reusable.

Code

public static string ToAlphaNumericString(this string inString)
{
    if (string.IsNullOrEmpty(inString))
        return string.Empty;

    return new string(inString.Where(c => char.IsLetterOrDigit(c)).ToArray());
}

With LINQ, the code is short and simple. There are two interesting points here.

char.IsLetterOrDigit()


The key to this one-liner is char.IsLetterOrDigit(). It checks whether a character is a letter or a digit, and only matching characters pass through the LINQ Where filter.

ToArray()


The LINQ Where clause returns an IEnumerable<char>. Normally we would call ToList() to materialize the result for later use. However, because a string is essentially a sequence of chars in C#, we convert the LINQ result to an array instead and pass the char[] to the string constructor to instantiate a new string.


Thoughts

You have probably noticed by now that what I did was check the characters one by one and instantiate a new string object from them. This has a potential performance cost, since creating objects is relatively expensive for the simple purpose of stripping illegal characters. I have not run my own benchmark yet, but I suspect a StringBuilder would be faster.
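
For comparison, here is a minimal sketch of a StringBuilder-based version of the same extension method; it is untested and only meant to illustrate the alternative I have in mind (requires using System.Text):

public static string ToAlphaNumericStringSb(this string inString)
{
    if (string.IsNullOrEmpty(inString))
        return string.Empty;

    // Pre-size the builder to the input length to avoid internal re-allocations.
    var sb = new StringBuilder(inString.Length);
    foreach (var c in inString)
    {
        if (char.IsLetterOrDigit(c))
            sb.Append(c);
    }
    return sb.ToString();
}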

EDIT: It turns out LINQ actually performs quite well according to some benchmarks.
Ref: http://stackoverflow.com/questions/11395775/clean-the-string-is-there-any-better-way-of-doing-it/11396039#11396039

3 Ways to Write Re-runnable SQL Scripts for Sprocs

Previously I spent a great deal of time talking about re-runnable SQL scripts for DDL and DML.

Our goal is the same: we are trying to create a suite of re-runnable and repeatable scripts to regenerate the database from scratch. Previously, I deliberately left out the sproc part because I wanted to show some more advanced techniques.

1. Drop then Create 

We know for a fact that we cannot create a sproc if it already exists, so the most basic technique is to drop the sproc first, so that the create script will not throw an error.

If exists(select 1 from sys.procedures where Name = 'MyProc')
Begin
 Drop PROCEDURE [MyProc]
End
GO

CREATE PROCEDURE [dbo].[MyProc]

AS
BEGIN 
 Print '1'
END
GO

Drop then create is not a lossless operation. If there are any permissions or dependencies associated with the object, they get dropped along with the object, and the create statement will not recreate them.

2. Create then Alter

A better approach is to create a dummy object beforehand and then use an ALTER statement. Let's look at the code first and the theory later.

If not exists(select 1 from sys.procedures where Name = 'MyProc')
Begin
 Exec('Create PROCEDURE [MyProc] AS SELECT 1')
End
GO

ALTER PROCEDURE [dbo].[MyProc]

AS
BEGIN
 Print '1'
END
GO

So far so good; this is the typical answer that can easily get over 100 votes on Stack Overflow.


While it does not lose object permissions or dependencies, it does lose the history of the object. Let's take a look.


As you can see, this script is not 100% re-runnable. The modify_date is updated every time I run the script, so I lose the history of the record, which is potentially bad for troubleshooting purposes.

3. Create then Alter on demand

Just to recap from the previous post, here is our motto about re-runnable SQL scripts:
A SQL script is re-runnable if and only if it impacts the SQL server exactly once, regardless of how many times you run it.

In order to conform to this, we only alter the sproc if it is dirty. Once again, code first.

If not exists(select 1 from sys.procedures where Name = 'MyProc')
Begin
 Exec('Create PROCEDURE [MyProc] AS SELECT 1')
 Print 'INFO - sproc MyProc created successfully'
End
GO

If not exists(Select 1 
  from sys.sql_modules 
  where object_id = (OBJECT_ID(N'MyProc')) 
  and CheckSum([Definition]) = '412750031')
Begin
Exec('
ALTER PROCEDURE [dbo].[MyProc]

AS
BEGIN
 Print ''1''
END')

 Print 'INFO - sproc MyProc updated successfully'
End
Else
Begin
 Print 'WARNING - sproc MyProc already updated.'
End
GO

Checksum

There are a few hurdles here. Firstly, we need the checksum value of the sproc as part of the SQL script, BEFORE the sproc is even created. That means we need to write the script first, run it, find out the checksum of the sproc, and then put that value back into the script.

I would create the sproc in the database and find out the checksum with this query.

Select CheckSum([Definition]) as [CheckSum]
from sys.sql_modules
where object_id = (OBJECT_ID(N'MyProc'))

Dynamic SQL

Since we cannot execute an ALTER PROCEDURE statement directly inside an if-block, the only way out is dynamic SQL. We need to escape single quotes (by doubling them) inside the dynamic SQL, which is often hard to read and creates a barrier for developers without a strong SQL background.

Conclusion

Many developers would choose the Create then Alter method as it is easier to implement and mostly does the job.

Personally I prefer the more robust Create then Alter on demand method. We receive instant feedback and retain the history of the sproc. This information may sound trivial, but it can be vital for an enterprise with strict policies about live environment changes.

Although the learning curve is steeper, the result is worth it.

Re-runnable SQL Scripts for DDL and DML

At some point in my career, I was in a role where I would prepare scripts to auto-refresh the database by running SQL migration scripts, so that the database would match the latest binaries.

Concept

Here is my definition of re-runnable SQL scripts.
A SQL script is re-runnable if and only if it impacts the SQL server exactly once, regardless of how many times you run it.
This matters because it lets us re-run upgrade scripts without worrying about what to run and what not to run during development or deployment, let alone continuous integration.

Implementation

There is a pattern to re-runnable SQL scripts. I will try to document as much as I can, but if you learn the pattern rather than the code, you can apply it to any DDL (Data Definition Language) or DML (Data Manipulation Language) statement.

Add column to existing table

If not exists(Select 1 from sys.columns where Name = N'PublishDate' and Object_ID = Object_ID(N'BlogPost'))
Begin
 Alter Table dbo.BlogPost 
 Add PublishDate datetime NULL

 Print 'INFO - Column PublishDate is added for table BlogPost successfully.'
End
Else
Begin
 Print 'WARNING - Column PublishDate already exists for table BlogPost.'
End
Go

Pay attention to the if-else logic: we simply put our DDL inside an if clause. Too simple?

If we run the above script twice, the DDL should not execute the second time.

As we can see in the result pane, the DDL will not run the second time. 

Create table

If not exists(Select 1 from sys.tables where Name = N'Fit')
Begin

 CREATE TABLE [dbo].[Fit](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [Code] [varchar](50) NOT NULL,
  [Name] [varchar](255) NOT NULL,
  [SubjectToAcl] [bit] NOT NULL CONSTRAINT [DF_Fit_SubjectToAcl]  DEFAULT ((0)),
  [LimitedToStores] [bit] NOT NULL CONSTRAINT [DF_Fit_LimitedToStores]  DEFAULT ((0))
 ) ON [PRIMARY]

 Print 'INFO - Table Fit created successfully.'
End
Else
Begin
 Print 'WARNING - Table Fit already exists.'
End
Go

Add PK

If not exists(Select 1 from sys.key_constraints where parent_object_id = Object_id('Fit'))
Begin
 ALTER TABLE dbo.Fit 
 ADD CONSTRAINT PK_Fit PRIMARY KEY CLUSTERED (Id) ON [PRIMARY]

 Print 'INFO - PK Fit added successfully.'
End
Else
Begin
 Print 'WARNING - PK Fit already exists.'
End
GO

Alter an existing column NULL to NOT NULL


In this scenario, we are changing a column from nullable to non-nullable. Things get a little tricky here.

The first if statement acts as a precaution to check that the column exists, because we cannot modify a column that does not exist; doing so would throw an error and stop the script.

The second if statement depends on what you are modifying from. If we were changing nvarchar(50) to nvarchar(max), we would have written it differently. Also, depending on your technical requirement, you may want to force the column to be nvarchar(max) regardless, or it may not be an issue at all. It really DEPENDS!!

If exists(Select * from sys.columns where Name = N'Keyword' and Object_ID = Object_ID(N'SearchTerm'))
Begin
 If not exists(Select * from sys.columns where Name = N'Keyword' and Object_ID = Object_ID(N'SearchTerm') and Is_nullable = 0)
 Begin

  Alter Table dbo.SearchTerm 
  Alter column Keyword nvarchar(max) NOT NULL

  Print 'INFO - Column Keyword is altered for table SearchTerm successfully.'
 End
 Else
 Begin
  Print 'WARNING - Column Keyword is already non-nullable for table SearchTerm.'
 End
End
Else
Begin
 Print 'ERROR - Column Keyword does not exists in table SearchTerm.'
End
Go

For argument's sake, should I also check whether the table exists when I am adding a column to it? Yes; it will throw an error if we try to add a column to a non-existent table. In my situation, I know the table is expected to be there, but the check would be a nice precaution. Again, it DEPENDS on your technical setup.

Alter an existing column TEXT to VARCHAR(MAX)


This takes a slightly different approach to the above: I join the system types table to find out the definition of the column.

If exists( Select * from sys.columns C 
   join sys.types T on T.system_type_id = C.system_type_id
   where C.Object_ID = Object_ID(N'MyTable')
   and C.Name = 'Description'
   and T.Name = 'text')
Begin

 Alter Table dbo.MyTable 
 Alter column [Description] varchar(max) NULL

 Print 'INFO - Column Description is altered for table MyTable successfully.'
End
Else
Begin
 Print 'WARNING - Column Description is already not text type for table MyTable.'
End
Go

Insert Data with Identity Insert


Notice you must have the keyword GO after turning IDENTITY_INSERT on/off, but since you cannot put GO inside BEGIN/END, those statements sit outside the if-block.

SET IDENTITY_INSERT [dbo].[TaxCategory] ON 
GO

If not exists (Select 1 from TaxRate where TaxCategoryId = 6 and CountryId = 6)
Begin

 INSERT [dbo].[TaxRate] ([TaxCategoryId], [CountryId], [StateProvinceId], [Zip], [Percentage], [StoreId]) VALUES (6, 6, 0, NULL, CAST(10.0000 AS Decimal(18, 4)), 0)
 
 Print 'INFO - Data for TaxRate.[TaxCategoryId] = 6 and TaxRate.[CountryId] = 6 inserted successfully.'
End
Else
Begin
 Print 'WARNING - Data for TaxRate.[TaxCategoryId] = 6 and TaxRate.[CountryId] = 6 already exists.'
End
GO

SET IDENTITY_INSERT [dbo].[TaxCategory] OFF
GO

Update Data - Exact match

If not exists (Select 1 from Country where Published = 1 and TwoLetterIsoCode = 'AU')
Begin

 Update Country
 Set Published = 1
 where TwoLetterIsoCode = 'AU'

 Print 'INFO - Data for Country.Published = 1 for TwoLetterIsoCode = ''AU'' updated successfully'
End
Else
Begin
 Print 'WARNING - Data for Country.Published = 1 for TwoLetterIsoCode = ''AU'' already updated.'
End
GO

Update Data - Multiple matches


The if clause is slightly different from the one above. It checks "if exists ... not equal to" instead of "if not exists ... equal to". That is because we want to update all the records as long as one or more records are returned by the where clause.

If exists (Select 1 from Country where Published = 1 and TwoLetterIsoCode <> 'AU')
Begin

 Update Country 
 Set Published = 0
 where TwoLetterIsoCode <> 'AU'

 Print 'INFO - Data for Country.Published = 0 for TwoLetterIsoCode <> ''AU'' updated successfully'
End
Else
Begin
 Print 'WARNING - Data for Country.Published = 0 for TwoLetterIsoCode <> ''AU'' already updated.'
End
GO

Delete with conditions


If exists (Select 1 from [ShippingMethod] where [Name] <> 'Australia Post, Regular Parcels')
Begin
 Delete [ShippingMethod]
 where [Name] <> 'Australia Post, Regular Parcels'

 Print 'INFO - Data for ShippingMethod <> ''Australia Post, Regular Parcels'' deleted successfully.'
End
Else
Begin
 Print 'INFO - Data for ShippingMethod <> ''Australia Post, Regular Parcels'' already deleted.'
End
GO

Conclusion

.Net developers might have noticed that I adopted some .NET logging techniques in the print messages. That's right: I look for warnings or errors (nothing fatal here) after running my scripts. The initial investment in detailed logging pays off easily in future troubleshooting. This is something I would not have done 7 years ago, before I was influenced by web development.

I hope I have demonstrated enough as an appetizer of what re-runnable SQL scripts look like.

There is no exact science about what and how many checks to put in the where clause; it is the developer's judgement call.

Testing with CyberSource

I have had some random issues testing with CyberSource in the past and went back and forth with their support over a few questions. These are already documented by CyberSource, but I found it easier to summarize them in one place.

Test Credit Card

I found these numbers in DM_developer_guide_SCMP_API.pdf, which can be downloaded from the CyberSource Test Business Center.

American Express  3782 8224 6310 005
MasterCard 5555 5555 5555 4444
Visa 4111 1111 1111 1111

Test Total Amount

If the grand total amount is between $100 and $200, the system may or may not throw an error, as this range is designed for testing different responses. I learnt this the hard way.

http://www.cybersource.com/developers/getting_started/test_and_manage/simple_order_api/FDI_Australia/soapi_fdiaus_err.html

Test CVV

Similarly, CVV values between 901 and 906 yield different AuthReply responses. By the look of the pattern, they might add more test cases later on, so I would avoid using anything above 900.

http://www.cybersource.com/developers/getting_started/test_and_manage/simple_order_api/FDI_Australia/soapi_fdiaus_cvv.html

Setup Continuous Integration with Visual Studio Online

Scope

I am looking to add a continuous integration service to our process. I have previously set up Jenkins on another project, but I am after something that costs less time to set up. We already use Visual Studio Online as our source repository, and it comes with CI integration that I may be able to utilize.

Steps

Create a Build

  1. In Visual Studio, go to Team Explorer.
  2. Select Builds -> New Build Definition.
  3. Under Trigger, choose Continuous Integration.
  4. In Source Settings, pick the branch we are working on as Active, and mark folders we do not want in the build as Cloaked; this reduces the time the CI server needs to get the files.
  5. After walking through the wizard, save the build definition.
  6. We now have our first CI build for the branch.

Email Notification

  1. In Team Explorer, go to Settings.
  2. Under Team Project, choose Project Alerts.
  3. Choose Advanced Alerts Management Page.
  4. We can choose to receive alerts when a build completes or fails.

NuGet

In our solution, we do not check packages into version control; we use NuGet to pull the binaries into our workspace. This caused a little problem in CI, as the build server does not automatically download the packages from NuGet the way our workspace does.

After some extensive reading, we know that the NuGet.exe command-line tool is available in VSO. In step 3 below, I add a pre-build event that forces MSBuild to restore NuGet packages before building the solution. In steps 1 and 2, I simply download and set up the NuGet.exe command for the local development environment.

  1. Download NuGet.exe to local environment. https://nuget.codeplex.com/releases
  2. Add the local NuGet.exe to the Path in your System Environment Variables.
  3. Add our magic line in the pre-build event of the project.
    nuget.exe restore $(SolutionPath)
  4. Build the solution locally. We need to test and make sure the NuGet restore runs correctly on the local machine.
  5. If you get a 9009 exit code, it means the NuGet.exe path cannot be found. Restart Visual Studio, or run the NuGet restore in a command prompt to check that the exe path is set correctly.

Test the Build

Time to test the build.
  1. Right click on the build and choose Queue New Build
  2. Choose Queue on the next screen. The build is now queued up in the build controller.
  3. Right click on the build and choose View Build. We can verify whether the test build was successful.

Thoughts

I am quite impressed with CI in VSO. Firstly, it is already available to me as a service, so I did not have to set up a machine for it. It integrates nicely with Visual Studio, and I am able to create and run builds from VS easily. The overall experience is more user-friendly than Jenkins.

Ref

What are the 2 types of NuGet Package Restore?
http://docs.nuget.org/docs/reference/package-restore

Package Restore with Team Foundation
http://docs.nuget.org/docs/reference/package-restore-with-team-build

A little more hint about why a CI build might fail with NuGet packages.
http://blogs.msdn.com/b/dotnet/archive/2013/08/12/improved-package-restore.aspx

nopCommerce - Schedule Task Plugin

Scope

Recently we have been having issues with the integration with our payment gateway provider. Occasionally our synchronization call to their server does not get a response back.

We are going to write a scheduled function that runs regularly. It will pull a list of pending orders and re-run the authorize and capture.

We are:-
  • using nopCommerce 3.3
  • utilizing the nopCommerce Schedule Task
  • using plugin approach

Technical Overview

We will use Nop.Services.Tasks.Task and the ScheduleTaskService to create a Nop schedule task that runs periodically.

Implementation

Task

First, create a new plugin. In the plugin, we will create a MyTask class that implements ITask.


The only method in the interface is Execute(). This is where we put our calling code. In this example, I am calling my own method QueryPendingOrders().
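
The original screenshot is not available, but the task class would look roughly like the sketch below. QueryPendingOrders() and the logging call are illustrative; ILogger comes from Nop.Services.Logging and ITask from Nop.Services.Tasks.

public class MyTask : ITask
{
    private readonly ILogger _logger;

    public MyTask(ILogger logger)
    {
        _logger = logger;
    }

    // Called by the nopCommerce task thread on every scheduled run.
    public void Execute()
    {
        _logger.Information("MyTask started"); // simple log entry for demonstration
        QueryPendingOrders();                  // hypothetical method that re-runs authorize and capture
    }

    private void QueryPendingOrders()
    {
        // The pending order / payment gateway logic would go here.
    }
}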

IoC

In my example, I am using a new service class, so I will need to register the service class via IoC in our plugin. A new DependencyRegistrar class will do the trick.
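
The screenshot is missing here; a minimal sketch of the registrar is shown below, assuming the nop 3.3 IDependencyRegistrar signature. IPendingOrderService / PendingOrderService are made-up names standing in for whatever service the plugin actually uses.

public class DependencyRegistrar : IDependencyRegistrar
{
    public void Register(ContainerBuilder builder, ITypeFinder typeFinder)
    {
        // Register the plugin's service so MyTask can take it as a constructor dependency.
        builder.RegisterType<PendingOrderService>()
               .As<IPendingOrderService>()
               .InstancePerLifetimeScope();
    }

    // Registrars with a higher Order run later; 1 is fine for a simple plugin registration.
    public int Order
    {
        get { return 1; }
    }
}

// Illustrative service contract and implementation the task would depend on.
public interface IPendingOrderService
{
    void ReprocessPendingOrders();
}

public class PendingOrderService : IPendingOrderService
{
    public void ReprocessPendingOrders()
    {
        // Authorize / capture retry logic would go here.
    }
}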


Install Schedule Task

Next, we need to create a schedule task record for the task. This can be done in the overrides of Nop.Core.Plugins.BasePlugin: in the Install() of my plugin, we call the following.
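
The screenshot is missing, but the install code would be along these lines, assuming the standard IScheduleTaskService and ScheduleTask entity; the task name, interval and type string are illustrative, and the type string must match the plugin's real namespace and assembly.

public class MyPlugin : BasePlugin
{
    private readonly IScheduleTaskService _scheduleTaskService;

    public MyPlugin(IScheduleTaskService scheduleTaskService)
    {
        _scheduleTaskService = scheduleTaskService;
    }

    public override void Install()
    {
        // Register a schedule task record that nopCommerce's TaskManager will pick up and run.
        _scheduleTaskService.InsertTask(new ScheduleTask
        {
            Name = "Re-run pending order authorizations",
            Seconds = 600, // run every 10 minutes
            Type = "Nop.Plugin.Misc.MyPlugin.MyTask, Nop.Plugin.Misc.MyPlugin", // hypothetical type and assembly name
            Enabled = true,
            StopOnError = false
        });

        base.Install();
    }
}

A matching Uninstall() override would typically look the task up by name and delete it again.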


Calling Method

For the purpose of demonstration, I am just writing to the log.


Schedule

After installing the plugin, a new schedule task is created as follows. (I have changed the run period to 60 seconds for the demo.)


Log

Let it run for a few minutes and check the log. It is quite spot on: Execute() is called every 60 seconds.


Conclusion

There was not much work involved in creating a schedule task. All the magic is already done and made available to us by nopCommerce.

During development, I noticed the TaskManager uses the singleton pattern. It is responsible for instantiating the TaskThread instances. The task threads run continuously and kick off Execute() periodically.

One thing I still want to do is compare the performance of a Nop schedule task versus a SQL Server Agent job. I have a feeling the SQL job may run a little faster, as the nop task thread is not push-based but simply a continuously running thread.


Contribute to Open Source nopCommerce project via Git

Scope

As of nopCommerce 3.4, the project has moved from Mercurial to Git. The following is a little example of how to contribute. In this example, I am going to add a field to the DiscountBoxModel so that I can display a different color depending on whether the discount is applied successfully. The fork is here.

Codeplex

Fork and pull request were previously discussed here.
http://tech.sunnyw.net/2013/11/contribute-to-mercurial-in-simple-steps.html

Git

Clone

After we fork, the clone command gets the latest from the remote repository into a local directory.

$ git clone https://git01.codeplex.com/forks/swon/discountboxisapplied
C:\tfs\Nop34> git clone https://git01.codeplex.com/forks/swon/discountboxisapplied
Cloning into 'discountboxisapplied'...
remote: Counting objects: 111111, done.
remote: Compressing objects: 100% (30586/30586), done.
Receiving objects: 100% (111111/111111), 257.70 MiB | 706.00 KiB/s, done.
remote: Total 111111 (delta 81017), reused 107610 (delta 78287)
Resolving deltas: 100% (81017/81017), done.
Checking connectivity... done
Checking out files: 100% (5332/5332), done.
We can verify with ls that we now have the files locally.

$ ls discountboxisapplied
    Directory: C:\tfs\Nop34\discountboxisapplied


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         1/09/2014   5:22 PM            src
d----         1/09/2014   5:19 PM            upgradescripts
-a---         1/09/2014   5:19 PM       2473 .gitignore
-a---         1/09/2014   5:19 PM        980 README.md

Status

After I make some changes to my local files, the status command shows me the pending changes.

$ git status
# On branch master
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#       modified:   src/Presentation/Nop.Web/Controllers/ShoppingCartController.cs
#       modified:   src/Presentation/Nop.Web/Models/ShoppingCart/ShoppingCartModel.cs
#       modified:   src/Presentation/Nop.Web/Nop.Web.csproj
#       modified:   src/Presentation/Nop.Web/Themes/DefaultClean/Content/styles.css
#       modified:   src/Presentation/Nop.Web/Views/ShoppingCart/_DiscountBox.cshtml
#
no changes added to commit (use "git add" and/or "git commit -a")

Commit

The commit command commits my changes to the local repository; before that happens, it pops up my editor so that I can enter a commit message.
$ git commit -a
[master 6762145] Add an additional IsApplied field to indicate if discount code
is applied successfully.
 5 files changed, 11 insertions(+), 2 deletions(-)

Push

After committing to our local repository, the last thing to do is to synchronize the changes from our local repository to the remote repository. This is done by push.
$ git push
Counting objects: 35, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (18/18), done.
Writing objects: 100% (18/18), 1.68 KiB | 0 bytes/s, done.
Total 18 (delta 14), reused 0 (delta 0)
To https://git01.codeplex.com/forks/swon/discountboxisapplied
   89f0ede..6762145  master -> master

Conclusion

Obviously there are more commands and options in Git. This article only showed the basic operations for contributing code to an open source project.

As a developer who has travelled between VSS, SVN, ClearCase, TFS, Mercurial and Git, I am not too fussed about which tool is used, but rather what can be done and how. The fork and clone, push and pull workflow is certainly innovative for open source platforms. I found the commands simple to use and easy to remember. The experience was quite nice.

Performance Tuning - nopCommerce Category Page by Caching

Scope

In eCommerce, perhaps the most heavily viewed page is the product listing page (category page). It is the page that draws shoppers' attention with a quick glance at the products and prices. The scope of this article is to minimize the load time of that page.

Benchmark

When it comes to performance tuning, it is very important to know where we are and what we want to achieve. During my career, I have seen enough people, myself included, try to optimize a system by modifying a bunch of code so that the system just 'feels' faster, without any scientific numbers on paper.



In our load testing, ignoring the first request (which initializes the app pool and the other bits and pieces IIS needs to prep), the average response time is currently around 8 seconds. Not very impressive!

Another important part of benchmarking is setting a goal for the performance tuning. My goal is to take it down to under 2 seconds. Some of my previous big-name projects did pretty well during Christmas with a local benchmark at around the 1.5 second mark.

nopCommerce 3.4

nopCommerce 3.4 recently received performance tuning that boosted the system. Since it is open source, there is no harm in having a peek at what they are doing.

I merged some of the code for CatalogController.Category(). I am glad to see that the code is now refactored much more nicely than in version 3.1; it feels much more like a platform now with all the protected virtual methods.

Prior to 3.4, they cached whether a category has any subcategories, using the key CATEGORY_HAS_SUBCATEGORIES_KEY.

In 3.4, they have changed it to cache the actual subcategories by using the key CATEGORY_SUBCATEGORIES_KEY.

This is in fact a very nice move, and it answers a question I have been asking for a long time: if we bother to cache whether we have something or not, why don't we just cache what we have?

Solutions

As usual, I like to take things a step further. There are a lot of calculations and database calls (although some are already cached, like the subcategories above) in the CatalogController.Category() action. I am going to cache the entire CategoryModel used by the action. The benefit is that there is a lot of boxing and unboxing in this method, and I am trying to save that as well as the database calls.

I checked in the official forum: they have other concerns about caching the entire model, since they need to cater for a wider audience. I don't have those concerns in my business requirements, so I am ready to go for the big cut.

I first refactored the code that builds the CategoryModel into a method called PrepareCategoryModel(). Then I overrode the method and added the model to the cache.
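
The screenshot is gone, so here is a rough sketch of the override, assuming nopCommerce's ICacheManager extension Get(key, cacheTime, acquire). The method signature mirrors my own refactored PrepareCategoryModel(), and the cache key is hypothetical; a real key would also include paging, store, language and customer roles.

// Inside my controller derived from CatalogController (constructor and fields omitted for brevity):
protected override CategoryModel PrepareCategoryModel(Category category, CatalogPagingFilteringModel command)
{
    // Hypothetical cache key for this category and page.
    var cacheKey = string.Format("my.categorymodel-{0}-{1}", category.Id, command.PageNumber);

    // Return the cached model if present; otherwise build it once via the base logic and cache it for 60 minutes.
    return _cacheManager.Get(cacheKey, 60, () => base.PrepareCategoryModel(category, command));
}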


Conclusion

Looking at the results side by side, I have to say I am pretty happy with the performance. It now runs 8 times faster than before, at around 1 second, which is twice as fast as my 2-second target.


CyberSource - Login Screen Keep Alive Trick

During development, I found it a bit frustrating that my CyberSource session timed out too quickly (due to their PCI compliance), which made me a little unproductive, so I found a little trick to keep my login session alive.

Open the Order Status Notification page, which has a Test URL button. If you know your session is about to time out, click Test URL once before you navigate somewhere else. This little trick generates an "activity" within CyberSource, and you have an active session again.

Good for developers during development, avoiding the hassle of logging in and out, where the test data is not sensitive. Not recommended for the live environment, though.


They may fix this in the future, and out of selfishness I am definitely not reporting this bug to CyberSource. :)

nopCommerce - Multi Tenancy - Round 2

Multi-tenancy in nopCommerce has become an ongoing project for me. Previously (http://tech.sunnyw.net/2014/05/nopcommerce-multi-tenancy.html), I blogged about how I was able to create brand-specific dlls in one shared code base (no splitting / branching in source control, one code base only). I used MSBuild to separate the dlls during compilation, which is neat but clumsy, as most developers would overlook these settings.

Scope

As previously noted:
It is not so easy to deploy, as I need to build the project once per brand (so that it generates different dlls).
The technique relies on the developers building a different BrandedService.dll that the IoC picks up via IDependencyRegistrar. This gives different brands a slightly different set of dlls, which is hard to maintain.

By the end of this session, we want to achieve the following:
  • Only build once and use the same set of dlls across all brands.
  • Easy for the developers to deploy.

Code

  • We store a unique id "BrandId" in our Web.config, and of course we will utilize web config transformation for this matter.
  • In the DependencyRegistrar for each brand project, we only register the classes if the brand id matches.
  • IoC will register different concrete classes during reflection (as usual).
  • Brands without matching ids simply have the registration logic skipped (although the IoC still runs through the class, because they all implement IDependencyRegistrar).
As demonstrated in the code below, this is the DependencyRegistrar for TNF.Services and Speedo.Services.
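
The screenshots are not available here, so the sketch below shows the idea for the TNF registrar; the Speedo one has the same shape with its own id. The "BrandId" appSettings key and the service names are illustrative.

public class DependencyRegistrar : IDependencyRegistrar
{
    public void Register(ContainerBuilder builder, ITypeFinder typeFinder)
    {
        // Only register this brand's concrete classes when the Web.config BrandId matches.
        if (ConfigurationManager.AppSettings["BrandId"] != "TNF")
            return;

        builder.RegisterType<TnfMiscService>() // brand-specific implementation from TNF.Services
               .As<ITAMiscService>()           // shared interface consumed by the controllers
               .InstancePerLifetimeScope();
    }

    public int Order
    {
        get { return 1; }
    }
}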



A snapshot of the bin folder shows the dlls are exactly the same for different tenants.



There will be redundant TNF dlls in the Speedo project and vice versa, but my time is more valuable than a few kB of dlls, so I think this is the better approach. Removing the redundant dlls will not break the site. :)

Nop.Web

In our web project, we need to explicitly reference all our Brand.Services dlls; this forces MSBuild to include all of them during build or publish.


Conclusion

Both of these approaches have their pros and cons, and I am sure there are other ways of doing this too. The key is that the developers or the team must be comfortable with the process, and the people doing it are happy with it.

Further Improvement

nopCommerce uses App_Data\Settings.txt to store the database connection string. This works fine for the staging or live site, as we just skip App_Data during deployment. However, it becomes a hassle to swap connection strings for local development, as web config transformation does not help in this situation. More on this topic next time.


nopCommerce Discount Rules Plugin - Exclude Sale Products

Scope

Create a discount rule plugin so that when a user creates a discount, they have the option to add a requirement controlling whether sale products are included in the discount.

We are currently on the latest nopCommerce 3.3 platform, as of today.

How it works

I went through the source code of the other DiscountRules plugins and concluded the following:
  • The relationship between Discount and DiscountRequirement is just a one-to-many FK relationship. One Discount has many DiscountRequirements.
  • Properties such as which customer role is required, or what dollar value passes the requirement rule, are stored in settings.
  • The setting key is defined within the plugin and is transparent to the calling controllers / services.
  • In XxxDiscountRequirementRule.cs, we define CheckRequirement(), which returns true / false depending on whether the requirement is fulfilled. It reads the relevant settings to check properties such as the dollar value or customer roles for passing the discount rule.
  • The rule class must implement the IDiscountRequirementRule interface.
  • The setting keys generally follow this format: string.Format("DiscountRequirement.xxx-{0}", discountRequirement.Id)
  • There is an ajax call in the view that posts back to XxxController.Configure when saving the DiscountRequirement.

Steps

Create the project

Following my previous post, How to write a nopCommerce plugin, I created a new project called DiscountRules.SaleProduct based on another DiscountRules plugin.

Model

I added a boolean property in the RequirementModel to indicate if we want to include sale products.

public bool IncludeSales { get; set; }

View

The project I copied from was using a DropDownList; I changed it to a CheckBox and modified the Ajax call that posts the property back to the controller. Make sure to follow the pattern of suffixing the requirement id to the setting key, as the screen can have multiple instances of the same rule.


Change the controller name in the ajax post back to match the new plugin name.

Controller

Modify the Configure method to work with the Ajax post-back call. Make sure the parameter names match.

XxxDiscountRequirementRule.cs

CheckRequirement() is the magic method where we put our logic for checking whether there are sale items in the cart, returning true / false accordingly to exclude sale products from the discount. Hopefully the customers are still happy...

Depending on your implementation, the code might look something like the example below and can be very simple.
public bool CheckRequirement(CheckDiscountRequirementRequest request)
{
    ...

    foreach (var product in cart)
    {
        // A product counts as "on sale" when its special price is set and currently active (compare in UTC).
        if (product.SpecialPrice.GetValueOrDefault() > 0
            && product.SpecialPriceStartDateTimeUtc < DateTime.UtcNow
            && product.SpecialPriceEndDateTimeUtc > DateTime.UtcNow)
            return false;
    }

    return true;
}


Troubleshooting

While developing the plugin in debug mode, if you find the plugin is not reset properly even after you rebuild the solution or clear the cache, try clearing the Temporary ASP.NET Files folder. If you get access denied because the files are in use, try killing the w3wp.exe process from Task Manager; that stops IIS from holding onto them.

nopCommerce Plugin - Reflection error about LoaderExceptions

Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

Today I was trying to set up a few build configurations for my multi-tenant projects, so that I can use different web config transformations when working on different projects.

I ran into an error complaining about pluginDescriptor.ReferencedAssembly reflection: System.Reflection.ReflectionTypeLoadException. The error looks like this.





Solutions

Basically, the type loader tries to look up the dlls for reflection and throws an exception because the dlls are not there.

I quickly went through the build configurations in one of my plugin projects, Nop.Plugin.Tax.FixedRate, and found the following.



The new build configuration "DebugSpeedo" that I had just added defaulted its output path to the bin folder. It looks like the only configurations with the proper path (which puts the dlls inside the presentation layer for nop) were "Debug" and "Release".

It seems we have to fix up the path for any new custom build configuration we create. I have changed the output path value for "All Configurations", so hopefully I will not see this issue again when I create more custom build configurations down the track.

Summary

It is a similar error to the one we get when we first create custom plugins and need to fix the output path. I just didn't expect to have to revisit it.

nopCommerce - Painless Upgrade from 3.1 to 3.3

We are upgrading from our current nopCommerce 3.1 platform to the latest 3.3 version. It was a fair amount of work and effort, but not bad at all. I am glad I reduced the pain by utilizing MVC and n-tier techniques in the earlier days.

Walkthrough


  1. Pay attention to 
    1. Use NuGet to fix any packages.config package version issue.
    2. .net framework target
    3. Nop.Admin\sitemap.config
    4. Any RouteProvider, GenericUrlRouteProvider, DependencyRegister
    5. Themes Folder
    6. Custom keys in Web.Config
    7. Custom Controllers
  2. What I noticed
    1. BaseNopController now split into BasePublicController and BaseAdminController
    2. BaseNopPaymentController is renamed to BasePaymentController
    3. My payment plugin that implements IPaymentMethod now needs to implement SkipPaymentInfo. It is used by CheckoutController to decide whether the payment info step should be skipped. (Why don't we call it RequirePaymentInfo? ^^")
    4. CategoryNavigationModel.CategoryModel became CategorySimpleModel
    5. Following the compiler errors, fix the following:
      1. Change from private to protected any methods or properties that were customized in the core.
      2. Recreate any hook
    6. For Kendo UI, replace,
      1. GridCommand -> DataSourceRequest
      2. GridModel -> DataSourceResult (and remove strongly type)
      3. Remove Grid Attribute
    7. Reinstall and fix any 7spike plugin issues. Be warned, the AjaxFilter one is problematic!!
    8. Our own custom plugins worked seamlessly :) We just needed to update the supported version from 3.1 to 3.3, or else the plugin list won't load the plugins.
    9. Copy and replace Web.Config from other nop plugins to our custom plugins.

Thoughts

As you can see, there was a lot of work involved, but I wouldn't call it upgrade resistance. The Kendo UI change was huge, but it is for a good cause; the old Telerik grid was overdue for retirement.

What do I need for upgrading from 3.3 to 3.4?
  • This cheat sheet
  • And, a cup of coffee.




BitCoin Hoax Email - Wrong Target Audience?


I received a hoax email about a successful Bitcoin authorization today. Occasionally you get emails about winning the lotto or inheriting money from long-lost ancestors, but why Bitcoin?

Bitcoin has certainly been a hot topic in recent years, but it is still mostly known among the more IT savvy, who know the digital world inside and out. It seems a bit contradictory for a hoax email that is supposed to target someone less geeky.

To think out of the box, I am going to look at the email from a non-technical perspective.

  • BTC is an acronym for Bitcoin, but not one the general public knows.
  • Most people wouldn't click, since there is no incentive like a big lotto win or money waiting for them.
  • For people who do own or use Bitcoin, there is simply no such thing as an invoice confirmation; Bitcoin is all about the block chain.
It did, however, add some fun to my day while I was doing branching and merging. :)



eCommerce store review - Kmart

Kmart is a large department store in Australia that sells clothes, stationery, electronics, etc.

I was searching for some "Wii U" stuff on the Kmart site and, as we know, that means Nintendo Wii U game console products.

I was a little disappointed by what came back in the search results: I ended up with a bunch of U-shaped pillows and no Wii.

http://www.kmart.com.au/webapp/wcs/stores/servlet/SearchDisplay?searchTerm=wii+u&categoryId=&storeId=10701&catalogId=10102&langId=-1&pageSize=15&beginIndex=0&sType=SimpleSearch&resultCatEntryType=2&showResultsPage=true&searchSource=Q&pageView=&orderBy=5



Technical Insight

How does it work?

The site uses an "or" clause in its search. It tries to match any product that has "Wii" or "U" in its keywords, names or descriptions, so the result set is not relevant to what was intended.

What happened to the Wii? My logic is simple: they didn't have any Wii products online, only in their brick and mortar stores.
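
To make the difference concrete, here is a tiny LINQ sketch with made-up product data, contrasting the "or" match the site appears to use with an "and" match:

using System;
using System.Linq;

class SearchDemo
{
    static void Main()
    {
        var products = new[] { "Wii U Console", "U Shaped Pillow", "Travel Pillow" };
        var terms = new[] { "wii", "u" };

        // "Or" match: a product qualifies if it contains ANY term - this is what brings back the pillows.
        var orMatches = products.Where(p => terms.Any(t =>
            p.IndexOf(t, StringComparison.OrdinalIgnoreCase) >= 0));

        // "And" match: a product must contain EVERY term - only the console survives.
        var andMatches = products.Where(p => terms.All(t =>
            p.IndexOf(t, StringComparison.OrdinalIgnoreCase) >= 0));

        Console.WriteLine(string.Join(", ", orMatches));  // Wii U Console, U Shaped Pillow
        Console.WriteLine(string.Join(", ", andMatches)); // Wii U Console
    }
}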

How would I do better?

  1. One way I can think of is to add a banned word to the pillow products so they do not show up in a search for "Wii anything".
  2. Another way is to give the user advanced options so they can choose "or" or "and" for their keywords. (This is quite commonly seen, as it is easier to implement.)
  3. Take ambiguous or noise words such as "U" out of the keywords.

SQL Server - Transaction log chewing up hard drive space

Scope

On a production server with the Recovery Model set to Full, it is easy to notice the transaction log growing to an unmanageable size in a short period of time, depending on the level of activity in the database.

Steps

There are numerous ways to do this, and they depend on the version of SQL Server. I found this one the easiest, and it works across the board.


Backup

Do a full backup before we begin. This creates a last known good backup, as we are about to break the log chain.

Ideally you want to switch your website off during this operation; otherwise there will be no transaction logging during this time. I personally think that's not a big deal, but I thought I would point it out anyway.

Truncate

Change the Recovery Model to Simple. This truncates the transaction log.


Shrink

Run these commands to shrink the log file.

exec sp_helpdb database_name

DBCC SHRINKFILE (transaction_log_name, 1)

exec sp_helpdb database_name




Change the Recovery Model back to what it was - Full mode.

Backup Again

At this point, I prefer to do another full backup. It is not compulsory, but it clearly defines where the new log chain begins.

Further Reading

nopCommerce - Hello World to Multi Tenancy

Finally, it is about time to do some multi-tenant work. Let's recap my goals and how I am travelling so far.

Scope

  • Build a multi tenant e-commerce platform for 15 or more international brands.
  • They are completely different brands and target consumers are completely different.
  • They will share some similarities in terms of functionalities but there will be customizations for individual brands
  • Administrator of a brand can only see stuff about their own brand.
  • Easy to develop
  • Easy to deploy
  • Easy to maintain
  • There is NOT necessarily a super admin role to oversee ALL the brands.
  • Customers are NOT shared across brands.

Preparation

So far, I have built my first brand with the above scope in mind; the following strategies were implemented.

Specification

Since our first site went live, we are now ready for our next brand, bearing in mind the following:
  • Separate the brand-specific code into separate dlls.
  • The database is to be considered at a later stage.
  • No brand-specific code at the controller level.
  • Views for different brands are handled by Nop themes.
  • We will deploy all the brands from the same Visual Studio solution.

Decision Making

Get Dirty with Code


Project Setup

Notice I am skipping some of the steps and only covering the key points (e.g. adding references, routes, registering dependencies, etc.); they were covered previously.


I created a Misc Service and an interface for our platform.


Then I created brand-specific services in the relevant projects.
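
The screenshots are missing, but the shape of the code would be roughly as follows; ITAMiscService is the interface the controller consumes below, and GetBrandName() plus the class names are illustrative.

// Shared interface in the platform-level services project.
public interface ITAMiscService
{
    string GetBrandName();
}

// Brand-specific implementation in the TNF services project.
public class TnfMiscService : ITAMiscService
{
    public string GetBrandName()
    {
        return "TNF";
    }
}

// Brand-specific implementation in the Speedo services project.
public class SpeedoMiscService : ITAMiscService
{
    public string GetBrandName()
    {
        return "Speedo";
    }
}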







Build Configuration

Starting from the default Debug configuration, we clone a set of new build configs per project. In each build, we only include one brand dll at a time (only the service dll in this example).






For the relevant build configurations of the brand projects only, we set the dll output path to the same location as the Nop.Web project.



Controller

We set up a controller that consumes ITAMiscService and returns the brand name. Notice how the controller has no knowledge of the concrete class, only the ITAMiscService interface.
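
The screenshot is gone; a minimal sketch of such a controller is below (the action name and return style are illustrative):

public class WhoAmIController : Controller
{
    private readonly ITAMiscService _miscService;

    // The concrete brand service is injected by Autofac; the controller only knows the interface.
    public WhoAmIController(ITAMiscService miscService)
    {
        _miscService = miscService;
    }

    public ActionResult Index()
    {
        // Returns "TNF" or "Speedo" depending on which brand dll was deployed.
        return Content(_miscService.GetBrandName());
    }
}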



Hello World to Multi Tenancy


Who am I?

By changing the build configuration, I am able to change my workspace to the corresponding brand.







This is achieved by a MS Build trick.
  • The brand projects are not referenced anywhere (very important)
  • Therefore, when we build Debug, there will not be any brand specific dlls.
  • We use build configurations to trick MSBuild into dumping the brand dll into the web project for the relevant brand.
  • Autofac reflection picks up the brand dlls if they exist, because of the way the Nop dependency registration works.

Caution

  • Use clean solution in between the building of different brand environments (to delete any brand dlls).
  • Pay attention to the dlls in the Nop.Web bin folder. Occasionally Clean Solution does not delete everything in the bin folder, in which case we need to clear the folder manually.
  • Always check Who am I to make sure you are working in the right brand.

Further Improvement

  • It is not so easy to deploy, as I need to build the project once per brand (so it will generate different dlls)
  • Unfortunately, in practice Clean Solution doesn't always work. For anyone familiar with MSBuild, we could create a pre-build event to properly clean up the solution (instead of relying on Visual Studio).
  • Combining MSBuild with PowerShell, we could implement a way to "quick switch" dlls for different brands.

TSQL - Delete Duplicate Rows Basic Techniques

Scope

From time to time, I am asked to delete duplicate rows from the database. This is a very easy job, but if you don't know the trick, it might catch you off guard.

In this exercise, we want to find out whether there are any products with the same SKU in the system and remove all the duplicates, leaving one behind.

Steps

First we check for duplicates using group by and having.

select Sku, count(Sku) as Cnt
from Product
group by Sku
having count(Sku) > 1

We found 245 duplicate products.

We verify the above by checking the product ids.

select P.[Id], Duplicates.Sku
from Product P
join (select Sku 
      from Product
      group by Sku
      having count(Sku) > 1) Duplicates on (Duplicates.Sku = P.Sku)

The same SKUs show up with different product ids.

The trick is to use aggregation to group them together and pick the one we want to keep. The query below shows the duplicated products with only one row per SKU (their max id), leaving the other duplicates out of the result set.

select max(P.[Id]) as Id, Duplicates.Sku
from Product P
join (select Sku 
      from Product
      group by Sku
      having count(Sku) > 1) Duplicates on (Duplicates.Sku = P.Sku)
group by Duplicates.Sku

We picked our 245 products that we will keep.

The last thing to do is to remove the duplicates, except for the ones we picked. This is done with delete ... from. Notice the derived table in the query is taken from the previous checking code.

delete Product
from (select max(P.[Id]) as Id
      from Product P
      join (select Sku 
            from Product
            group by Sku
            having count(Sku) > 1) Duplicates 
                                          on (Duplicates.Sku = P.Sku)
      group by Duplicates.Sku) ProductToDelete
where Product.Id = ProductToDelete.Id

245 rows with duplicate SKUs deleted.

What if?

Sounds too good to be true? What if the following scenarios arise, can we still do it?
  • A legacy SQL table doesn't have ID nor primary key for us to use group by on.
  • Need to find out duplicate values of whole row or multiple fields of a row and not just 1 field (in my case SKU)
The common fallacy is that we need a cursor, multiple SQL scripts or a console app to loop through the data. In fact we don't; it is achievable with T-SQL alone, and these are typical job interview questions that I would ask (as an interviewer) in the SQL area. :)

HttpWebRequest Error. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Scope

In an attempt to use HttpWebRequest to make an https call to our server, I got the error:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

It is pretty self-explanatory: it is complaining about an SSL certificate or security issue, a UTC clock offset, etc. For my purposes, this is a local server I am calling and I am not transferring sensitive data anyway, so I am not worried about SSL security.


Solution

try
{
    ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(delegate { return true; });

    var req = HttpWebRequest.Create(url);
    var res = req.GetResponse();

    using (var stream = res.GetResponseStream())
    {
        using (var sr = new StreamReader(stream))
        {
            return sr.ReadToEnd();
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine("Exception caught: " + ex.Message);

    return string.Empty;
}


All it takes is the first line: a ServerCertificateValidationCallback delegate that returns true. The rest is just a standard HttpWebRequest call. Note that ServicePointManager is static, so this callback applies to the whole application; only use it where skipping certificate validation is acceptable.

Create a Script for Daily SQL Backup Job with Unique Name

Scope

Create a SQL script or job that manages daily SQL backups with various options.

Code

The script below was generated from SQL Server Management Studio, with my own little tweak that appends a timestamp to the file name. It creates a backup with the file name MyDatabase_yyyy-mm-ddThh.mi.ss.mmm.bak.
Declare @SQLPath nvarchar(4000)
Set @SQLPath = N'D:\Backup\MyDatabase_' + Replace(CONVERT(nvarchar(30), GETDATE(), 126), ':', '.') + '.bak' 
BACKUP DATABASE [MyDatabase] TO DISK = @SQLPath WITH NOFORMAT, INIT, NAME = N'MyDatabase-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

GO

PS: According to Microsoft, by default SQL Server backs up with NOINIT (append). Those who do not use the timestamp and always back up to the same filename must make sure the right INIT or NOINIT option is set, depending on the backup strategy of the business.

Example

A simple cursor example that backs up all user databases and overwrites the last backup.
DECLARE @name VARCHAR(50)
DECLARE @path VARCHAR(255)
DECLARE @fileDate VARCHAR(20)
DECLARE @fileName VARCHAR(255)
 
SET @path = 'D:\Backup\'  
SET @fileDate = Replace(CONVERT(nvarchar(30), GETDATE(), 126), ':', '.')

DECLARE db_cursor CURSOR FOR  
SELECT name 
FROM sys.databases
WHERE name NOT IN ('master','model','msdb','tempdb') -- system databases
and [state] = 0 -- online
 
OPEN db_cursor   
FETCH NEXT FROM db_cursor INTO @name   

WHILE @@FETCH_STATUS = 0   
BEGIN   
       SET @fileName = @path + @name + '.bak'  
       BACKUP DATABASE @name TO DISK = @fileName
       WITH NOFORMAT, INIT, 
       SKIP, NOREWIND, NOUNLOAD, STATS = 10

       FETCH NEXT FROM db_cursor INTO @name
END   

CLOSE db_cursor   
DEALLOCATE db_cursor

Xml Deserialization with Namespace Prefix in C#

Scope

  • To deserialize an xml from a 3rd party vendor into a strongly typed C# object.
  • The xml has a namespace attached.
  • No DTD or XSD were provided by vendor.

Steps

XML Schema Definition Tool (Xsd.exe)



Basically, I executed two command lines:
  1. First, I created the xsd file from a sample xml file that I have.
  2. Then, I created the C# class file for the model by using the xsd schema.
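
The screenshot is missing; the two commands would look roughly like this (file names are illustrative):

rem 1. Infer an xsd schema from the sample xml document
xsd.exe sample.xml

rem 2. Generate the C# model class (with the serialization attributes) from the schema
xsd.exe sample.xsd /classes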

Model

It is best to generate the model via the Xsd tool, because it adds the many System.Xml.Serialization attributes the serializer needs, especially around namespaces!

The class file will look something like this. A typical model class with lots of attribute hints.



Serializer

All the hard work went into generating the model and its xml attributes, which leaves this part quite simple in comparison. It is a bit like AutoMapper, where the hard work is done at the entity level and the actual conversion is just one line of ToEntity().

The code looks like this. Some people manage to do it in 2 lines, but I find 4 lines neater for readability and troubleshooting.
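
The screenshot is unavailable, so here is a hedged sketch of those four lines, assuming a generated root type called MyRoot and the xml payload held in a string:

using System.IO;
using System.Xml.Serialization;

public static MyRoot DeserializeMyRoot(string xml)
{
    // The attributes on the generated MyRoot class carry the namespace information the serializer needs.
    var serializer = new XmlSerializer(typeof(MyRoot));
    using (var reader = new StringReader(xml))
    {
        return (MyRoot)serializer.Deserialize(reader);
    }
}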



Troubleshoot


Unbounded Complex Type Bug

One of the errors I ran into looks like this.

Unable to generate a temporary class (result=1). error CS0030: Cannot convert type

It is a confirmed Microsoft bug and won't be fixed.

The workaround is to change the unbounded setting or add a dummy element to the complex type.


I added the element and regenerated the model class file, and the serializer is happy again.

Xsd generated incorrect schema

The xml schema reverse-engineered by xsd.exe from a sample xml file was slightly incorrect. It predicted I might have multiple child nodes, so the generated class had unnecessary array declarations.

Looks like this.

   public MyComplexType[] Notes { get; set; }


I would not blame the xsd tool, but some manual work is required to fix it (or the C# code will end up with a lot of .FirstOrDefault() calls).

XElement.Name.LocalName

As described before, there is a fallacy in trying to compare an element via XElement.Name: this only works when there is no namespace. Out of curiosity I read further; the correct way is to use XElement.Name.LocalName. I wouldn't recommend going down this path, but it is interesting to know how we conventionally access an element name incorrectly when a namespace is present.
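
A tiny illustration of the difference; the namespace and element name are made up:

using System;
using System.Xml.Linq;

class LocalNameDemo
{
    static void Main()
    {
        var element = XElement.Parse("<Order xmlns=\"http://example.com/vendor\" />");

        Console.WriteLine(element.Name);            // {http://example.com/vendor}Order
        Console.WriteLine(element.Name.LocalName);  // Order
        Console.WriteLine(element.Name == "Order"); // False - the namespace is part of the XName
    }
}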

http://msdn.microsoft.com/en-us/library/system.xml.linq.xelement.name%28v=vs.110%29.aspx