NLog to DB with UserId

The Problem

A user (whether external or internal) reports an issue with your ASP.NET web site, which stores the user Id in the ASP.NET session. They describe the problem and the steps to reproduce it, but even with that it would be nice to have more information about what specifically they did, or better yet what the code did on their behalf.

For this, it helps to have data more granular than page-level tracking. We will wade through the pile of application logs to find our smoking gun. If you’re using NLog, odds are that you already have these logs.

Now there is a different problem: the sheer volume of log statements. Even for a relatively small site (~50 concurrent users), plucking out the statements relevant to your problem user becomes a chore.

The Solution

Simply add the user Id, or any other session variable, to each log statement and then you can easily filter based on it. Wait a second though… I don’t want to edit each and every log statement. Fortunately, thanks to NLog, you don’t have to.

Install the NLog, NLog.Config, NLog.Schema and NLog.Web packages using the following commands.

Install-Package NLog

NLog.Config will stand up a shell configuration file with examples.

Install-Package NLog.Config

NLog.Web will add the ability to use ASP.NET session variables and other goodies with NLog.

Install-Package NLog.Web

Update the NLog.Config file like below to include the new value.
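The original config isn’t reproduced here, but a sketch of the relevant parts might look like the following. The table schema, connection string name, and the UserId session key are assumptions for illustration; the ${aspnet-session} layout renderer comes from NLog.Web.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <!-- Registers the aspnet-* layout renderers from NLog.Web -->
    <add assembly="NLog.Web" />
  </extensions>
  <targets>
    <!-- Table name, columns and connection string name are illustrative -->
    <target name="db" xsi:type="Database"
            connectionStringName="LoggingDb"
            commandText="INSERT INTO Log (Logged, Level, Logger, Message, UserId)
                         VALUES (@logged, @level, @logger, @message, @userId)">
      <parameter name="@logged"  layout="${date}" />
      <parameter name="@level"   layout="${level}" />
      <parameter name="@logger"  layout="${logger}" />
      <parameter name="@message" layout="${message}" />
      <!-- The session variable name (UserId) is an assumption -->
      <parameter name="@userId"  layout="${aspnet-session:Variable=UserId}" />
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="db" />
  </rules>
</nlog>
```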

There you have it. Now you can easily filter log entries by user. You can find my code here.

References

  1. AspNetSession layout renderer
  2. NLog Database target
  3. AspNetSession layout renderer not working
  4. NLogUserIdToDB code

South Florida .NET Code Camp 2017

Thank you to all the volunteers, speakers and sponsors that came together to make South Florida .NET Code Camp 2017 happen. Thank you for providing Code for Fort Lauderdale with a community table to tell people about how we’re trying to improve our city and county. I enjoyed meeting and talking with all the attendees, and I learned a lot from those conversations. I’ve recorded some notes from one of the sessions that I was able to attend below.


Your Application: Understanding What Is and What Should Never Be
by David V. Corbin

Here is the PowerPoint for this talk. When testing your application, it’s important to have a narrow focus. Take the simple example of calculating the slope of a line.

y = mx + b

m = delta y / delta x = rise / run

What happens when it is a vertical line? The run is 0. How does the program handle divide by 0?

The testing taxonomy contains top-level “Kingdoms”.

Transitory Testing

  • Thinking about the problem
  • Ad hoc execution
  • Local helper programs
  • No long term record
  • How can you possibly know what someone did in their head?

Durable Testing

  • Why do we skimp on durable testing? The perceived cost is high.
    • We’re not being effectively lazy: “Maximize the amount of work not done” (Agile Manifesto).
    • Once you get through the mind shift, it is easier for most things; some things you have to pay to implement.
  • Tests exist with expected results
  • Audit trail showing the test was done
  • Manual tests
  • Unit tests
    • Unit Tests and System Tests are the endpoints of a spectrum
  • Automated tests
  • System Testing

UI/Browser -> Logic -> code -> Logic -> DAL -> S.P. TSQL -> Data

Component Tests

  • API Tests
  • Integration Tests
  • Sub-system Tests

We never tried that set of inputs. Never did those two things at the same time. It worked in the last version! Get rid of regression errors permanently! “I hate the same pain twice.”

It’s important to understand the current state of the application and the constraints of the future state. For example, this action should not take longer than a given time period. Have some artifact that captures the constraints so they can be tested automatically. Testing should be a game in the mathematical sense: a set of decisions with a desired outcome, i.e., game theory.

Where do we get value in our organization and in our situation?

How are we measuring our testing?

  • Code coverage
    • Low numbers indicate large amounts of untested code
    • High numbers are often meaningless
  • Cyclomatic complexity
    • The absolute minimum number of test paths that you need to run
    • Does not detect data-driven scenarios

Data Specific Considerations

  • Reveals many errors in logic/calculation
  • Can be hard to identify

Time specific considerations

  • Discovers problems that are often not found pre-production
  • Virtually impossible without special design considerations

IO rewriting

  • Multi-threaded and async operations
    • Often the most difficult to understand, categorize and test
    • Careful design is your best defense
    • Using the latest await/async
  • How to test if a collection is modified? You can with unit tests.

Negative Testing

  • The art of testing what is not there
  • Common problems
    • Unexpected field changes
    • Unexpected events
    • Unexpected allocations
    • Unexpected reference retention
  • Nobody achieves perfection.
    • Forget about always and never.
    • Exploratory testing is your best defense for catching the gaps.

Multiple Views with Long Term Focus

  • Deep understanding encompasses a range:
    • A wide view
    • A deep view
  • It is impossible to get to the point of Understanding Everything
  • One will never be Done
  • It is a continuing quest for understanding

What is Software Quality?

  • Grady Booch (UML)
  • Look at what is not quality
    • Surprise
    • If things happen according to expectation, then you have your desired level of software quality
    • Understanding reduces surprises
    • There will always be bugs/defects/surprises
    • Increase in known issues is a good thing
  • One cannot test everything!
    • Don’t attempt to.
    • Create a simple prioritization matrix.
    • Identify a small target for your next sprint.
    • Strive for continual improvement.
    • Add a robust definition of done.
    • Experiment and try to make each time a little bit better.

Jenkins Create TFS Label

Why?

Who needs this if there is already a TFS plugin for Jenkins and the feature has been completed? I couldn’t find a graphical guide on how to do it, and there are a lot of configuration pages in Jenkins. This guide assumes you already have Jenkins and TFS playing together. You can follow the steps below to have Jenkins create a label in TFS.

How?

1. Open Jenkins

2. Open your project

3. Click “Configure”

4. Click “Post-build Actions”

5. Click “Add post-build action”
6. Select “Create a label in TFVC” (TFVC = Team Foundation Version Control)

7. Set the label as you see fit

8. Click “Always” or “If the build is successful” depending upon when a label should be created

9. Click “Save”

10. Go back to your project

11. Click “Build Now”

12. Open your Build

13. Click “Console Output”

14. Go to TFS to see the newly created label

That’s it. Now you can trace your Jenkins builds back to a specific version in your TFS source control.

Making Chutzpah and TFS 2013 get along

If you’re like me, you got your tests running in Visual Studio (VS) and then moved on to other priorities. When you returned, some of the JavaScript unit tests had started failing, and the problem is that you don’t know when they broke. You thought to yourself, “this should be part of the application’s build process”.

First, head to a great article called “JavaScript Unit Tests on Team Foundation Service with Chutzpah”. I completed most of the preliminary work and forged ahead. My JavaScript tests (using Jasmine in my case) still failed.

I turned on tracing, because there’s got to be loads of useful information in those logs. I added the .runsettings file according to the Chutzpah wiki. The .runsettings file looked like the one below.

After doing this, I received the following error.

An exception occurred while invoking executor ‘executor://chutzpah-js/’: An error occurred while initializing the settings provider named ‘ChutzpahAdapterSettings’. Error: There is an error in XML document (4, 5).There is an error in XML document (4, 5).The string ‘True’ is not a valid Boolean value.

In an effort to rule out the obvious, I changed the “True” to “true” so it looked like this.
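For reference, the corrected file might look like the sketch below. The EnabledTracing element name is from my reading of the Chutzpah wiki, so check it against your version; the point is only that the boolean must be lowercase.

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <ChutzpahAdapterSettings>
    <!-- "True" fails XML boolean parsing; it must be lowercase "true" -->
    <EnabledTracing>true</EnabledTracing>
  </ChutzpahAdapterSettings>
</RunSettings>
```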

To my surprise, it worked and took me to the next error. Chutzpah generates a trace file at

c:\users\{build_user}\AppData\Local\Temp

on the build agent machine. I opened up the trace file to find the following error.

vstest.executionengine.x86.exe Information: 0 : Time:11:57:10.4535821; Thread:10; Message:Chutzpah.json file not found given starting directory \content\js

I added the Chutzpah.json file using the wiki page as a reference. All JavaScript tests and the chutzpah.json file must be set to Copy Always.
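A minimal chutzpah.json for Jasmine might look like the sketch below. The property names are from my memory of the Chutzpah wiki, and the test path is a placeholder for this project’s layout; consult the wiki for your version.

```json
{
  "Framework": "jasmine",
  "Tests": [
    { "Path": "content/js" }
  ]
}
```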

ReferenceError: Can’t find variable: {ClassName} in file:///D:/Builds/4/CI.Test/bin/content/js/file.test.js (line 24)
attemptSync@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1886:28
run@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1874:20
execute@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1859:13
queueRunnerFactory@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:697:42
execute@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:359:28

I added references at the top of the file that took the TFS build agent’s src folder structure into account, following the same pattern as this Stack Overflow question. Once I did that, all my JavaScript unit tests hummed along.

Build Failed: TFS out of space

Problem

As I looked forward to my weekend, Team Foundation Server(TFS) 2013 devised other plans. I chugged along making changes in the code, and then hit issues when I tried to queue new builds.

The error included the following message.

TF30063: You are not authorized to access Microsoft-IIS/8.5

Solution

This took me on a wild goose chase to here, here and here. None of these touched the issue in any meaningful way. A coworker of mine mentioned that it might be a space issue on the TFS server machine. I logged in to find the infamous blue pie of disk space.

TFS 2013 leverages IIS to communicate with the client, check in code and queue builds. With developers whacking at TFS day and night, it didn’t take much time for the IIS logs to grow to ~17 GB.

The size allocated for the drives never took this into account. In this case, the machine had a whopping 50 GB C drive. The IIS logs gobbled up the extra space until no one could write to the C drive.

To put out the fire, I deleted the IIS logs and watched TFS spring back to life. I came up with the following preventive measures going forward.

  1. Increase the drive size to 60 GB
  2. Configure IIS to put the logs on a different drive
  3. Disable IIS logging for the time being
  4. Create alerts to track the space on all drives for that machine

PowerShell: Backup & Copy

Automating deployments can save you a great deal of headaches, and even trauma in some cases. However, I found it disappointing how hard it was when I actually tried to put this best practice into practice.

I needed to make a backup copy of the production folder and then overwrite the live copy with the tested copy from the QA environment. Much to my dismay, I couldn’t find any publicly available and publicly licensed PowerShell scripts to do this job.

I created the below module to do just that.
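The module itself isn’t reproduced here, but a minimal sketch of the approach looks roughly like this. Parameter names, the drive name and the retention window are placeholders matching the notes below, not the exact original.

```powershell
# Sketch of a backup-and-deploy module; names and paths are placeholders.
function Invoke-BackupAndDeploy {
    param(
        [Parameter(Mandatory)] [string] $SourcePath,   # tested copy from QA
        [Parameter(Mandatory)] [string] $LivePath,     # production folder
        [Parameter(Mandatory)] [string] $BackupRoot,   # where backups live
        [pscredential] $Credential                     # for cross-domain shares
    )

    # Mapping a PSDrive with credentials authenticates this session to the
    # remote share; it still works when both machines share a domain.
    if ($Credential) {
        New-PSDrive -Name Deploy -PSProvider FileSystem -Root $LivePath `
                    -Credential $Credential | Out-Null
    }

    # 1. Datetime-stamped backup folder, stamped in a consistent manner
    $stamp  = Get-Date -Format 'yyyy-MM-dd_HH-mm-ss'
    $backup = Join-Path $BackupRoot "backup_$stamp"
    Copy-Item -Path $LivePath -Destination $backup -Recurse

    # 2. Overwrite live with the tested copy, preserving web.config
    Copy-Item -Path (Join-Path $SourcePath '*') -Destination $LivePath `
              -Recurse -Force -Exclude 'web.config'

    # 3. Prune backups older than 7 days
    Get-ChildItem $BackupRoot -Directory |
        Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) } |
        Remove-Item -Recurse -Force
}
```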

Noteworthy Items

  1. The backup folder is datetime stamped in a consistent manner
    1. Makes troubleshooting simpler
    2. Makes rollbacks more straightforward
  2. Different domains necessitated the use of New-PSDrive. It still works on the same domain, though.
  3. The web.config is excluded to preserve environmental configuration differences.
  4. Backups older than 7 days are deleted so your infrastructure team doesn’t have to ask you why the drive is full.

 

TFS Team Build with .NET 4.0 and 4.5

Disclaimer: I don’t recommend a solution that contains projects targeting different versions of the .NET Framework. However, there are legitimate cases (e.g. inherited code) that demand solutions rather than a rewrite. You may decide to use a later version of the .NET Framework than some of the existing code. If you mix these projects in the same solution and use the same NuGet packages in different projects, you will get the following error.

The type or namespace name could not be found (are you missing a using directive or an assembly reference?)

MSBuild builds your solution in an order determined by dependencies but leaves the rest to chance. When NuGet package references are resolved, they include the .NET version. By default, MSBuild builds the solution into a single output directory. This means that if project A references package X and targets .NET 4.0 while project B references package X and targets .NET 4.5, the last one to be built will overwrite the other’s package X DLL. This will cause the build to fail.

But fear not, there is a solution.

Update the Output Location

  1. Edit the build definition
  2. Navigate to 2. Build
  3. Change 4. Output location from SingleFolder to AsConfigured

Yay! Our build works again, but our unit tests won’t be run as part of the build.

Update our Test Run Settings

  1. Edit the build definition
  2. Navigate to 3. Test
  3. Go to 1. Automated Tests
  4. Expand 1. Test Source
  5. Click the ellipsis on Run Settings
  6. Add ..\src\*\bin\*\*test*.dll to the default Test assembly file specification which is **\*test*.dll;**\*test*.appx

This important step instructs MSBuild to look for unit tests in each source directory’s bin folder in addition to the standard single bin folder for the solution.

Now your tests will actually get run as part of your build.

References

  1. StackOverflow: Using AsConfigured and still be able to get UnitTest results in TFS
  2. Override the TFS Team Build OutDir property in TFS 2013

PowerShelling your Deployment Emails

Team members from the QA and Development departments coordinate at many points along the software development life cycle (SDLC). Part of this communication revolves around deployments, which often takes the form of an e-mail that includes information from Team Foundation Server (TFS). To achieve success, multiple points need to be communicated.

  1. When – It needs to be clear when a deployment is finished and the environment is ready for testing
  2. What – The stories/Product Backlog Items (PBIs)/Bugs that are included in the build
  3. Where – The environments that will be changed and/or updated as a result of the release

The below script will look up the provided items in TFS, update them, and send the relevant information in an e-mail. I highly recommend using distribution lists for e-mails like this; it lets people subscribe and unsubscribe without updating the script.

In this example, the person executing the script enters the PBIs that are going to be deployed. The script then queries TFS to retrieve the relevant information to include in the e-mail (Id, Title, Type, Iteration, Assigned To). In our case, part of the deployment included assigning the stories/PBIs/Bugs to a QA user for clarity and historical record. To prevent mismatches between the e-mail and the assignment, the script updates the Assigned To field for each provided item.

The output of the query comes in tab-separated values (TSV) format. The TsvToHtmlTable function converts the output from TSV to something (namely HTML) that looks decent within an e-mail. The endpoints listed in the e-mail remove guesswork about where to find the environment and perform testing.
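The script itself isn’t reproduced here, but its shape looks roughly like the sketch below. Get-TfsWorkItem and Set-TfsWorkItemField are hypothetical stand-ins for however you talk to TFS (client API, tf.exe, or REST), ConvertTo-Html stands in for TsvToHtmlTable, and the addresses are placeholders.

```powershell
# Sketch only: helper cmdlet names and addresses are placeholders.
function Send-DeploymentEmail {
    param(
        [Parameter(Mandatory)] [int[]]  $WorkItemIds,  # PBIs/Bugs being deployed
        [Parameter(Mandatory)] [string] $AssignTo,     # QA user for the handoff
        [Parameter(Mandatory)] [string] $To,           # distribution list
        [string] $SmtpServer = 'smtp.example.local'    # assumption
    )

    $rows = foreach ($id in $WorkItemIds) {
        $wi = Get-TfsWorkItem -Id $id                  # hypothetical helper
        # Keep the e-mail and the work item assignment in sync
        Set-TfsWorkItemField -Id $id -Field 'Assigned To' -Value $AssignTo
        [pscustomobject]@{
            Id = $id; Title = $wi.Title; Type = $wi.Type
            Iteration = $wi.Iteration; AssignedTo = $AssignTo
        }
    }

    # ConvertTo-Html plays the role of the TsvToHtmlTable function
    $table = $rows | ConvertTo-Html -Fragment | Out-String
    $body  = "<p>Deployment complete. The environment is ready for testing.</p>$table"

    Send-MailMessage -To $To -From 'builds@example.local' -SmtpServer $SmtpServer `
                     -Subject 'Deployment complete' -Body $body -BodyAsHtml
}
```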

For me, investing the time and energy into these communications helped foster trust and collaboration between Dev and QA. I hope it helps you achieve the same, if not more.

Gated Check-In and Nuget Blues

Gated Check-In

You might have heard that it’s a best practice to use gated check-ins. It seems like a no-brainer to make sure that your code at least builds and passes the unit tests. TFS 2012 has the ability to set up gated check-ins; what luck. You set it up, and the sun is shining and the birds are singing.

Oh happy days. Then one day, you see this.

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): The primary reference "\bin\Library.dll" could not be resolved because it has an indirect dependency on the assembly "NLog, Version=4.0.0.0, Culture=neutral, PublicKeyToken=5120e14c03d0593c" which was built against the ".NETFramework,Version=v4.5" framework. This is a higher version than the currently targeted framework ".NETFramework,Version=v4.0".

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): The primary reference "\bin\Efile1Library.dll" could not be resolved because it has an indirect dependency on the framework assembly "System.IO.Compression, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which could not be resolved in the currently targeted framework ".NETFramework,Version=v4.0". To resolve this problem, either remove the reference "\bin\Efile1Library.dll" or retarget your application to a framework version which contains "System.IO.Compression, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".

Short Term Fix Alert

More than likely your Product Owner (PO) didn’t select anything for the sprint that included debugging gated check-in issues. However, you committed to finishing work that you now can’t check in. If your build fails with a similar error, do yourself a favor and try cleaning out the local (on the build agent) NuGet cache.
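For example, something like the following on the build agent; the path is the conventional per-user cache location for NuGet tooling of that era, so verify it on your agent before deleting anything.

```powershell
# Assumed cache location; confirm on your build agent first
Remove-Item "$env:LocalAppData\NuGet\Cache\*" -Recurse -Force -ErrorAction SilentlyContinue
```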

This will actually solve the issue….for a bit. The birds sing but appear kind of nervous.

Swing and a Miss

Then the issue returns. You think, “Maybe there’s something to all that stuff about different versions.” Then you look at your solution and notice different versions of the same NuGet reference. For me, it was Newtonsoft.Json. All the projects were referencing the same version of the .NET Framework (note: Client Profile vs. regular is a significant difference).

Project A had Newtonsoft.Json 6.0.8 and Project B had Newtonsoft.Json 7.0.1. I upgraded both to reference Newtonsoft.Json 8.0.2. While I recommend consolidating your NuGet package versions, it did nothing to actually solve the problem that I was having.
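Consolidating can be done from the Visual Studio Package Manager Console, for example:

```powershell
# Upgrades the package in every project of the solution that references it
Update-Package Newtonsoft.Json -Version 8.0.2
```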

Dealing with the Real Issue

This zombie error just won’t stay down. When you really need to commit a changeset, you will be revisited by this error.

The root cause of the error was the Library.Test project that referenced the Library project. I inspected their references and noticed that Library had a reference to NLog (which I highly recommend) but the test project did not. This gave me an idea. The solution was to add NLog to the Library.Test project.

Install-Package NLog

It would have been greatly appreciated if that were mentioned anywhere in the error, or better yet if a compiler error were generated before even running the unit tests. Now the birds can sing carefree once again.

 

SQL Saturday 2015 – South Florida #379

pass_logo

Thank you to Nova Southeastern University (NSU) and all the sponsors, speakers and volunteers for putting on a great SQL Saturday #379. I met lots of passionate people who showed how much of a community we have in Broward County. I attended the following sessions and talked to lots of people about Code for Fort Lauderdale.


DH2i & DxEnterprise Stand-Alone to Clustered in Minutes

Carl Berglund

This was a vendor session for DH2i’s DxEnterprise, which works across multiple SQL Server versions and OSes. It was a very interesting session, particularly for a developer less familiar with virtual environments (at least standing them up, configuring, updating, feeding, watering, etc.). For me, it introduced the following new and/or less familiar terms.

  • Quorum – the minimum number of members of a deliberative assembly. In the context of clustering and high availability, it is the minimum number of nodes that must be in a cluster for it to be viable.
  • Instance Mobility – the ability to move an instance of a virtual machine across physical machine boundaries
  • SAN – consolidated block-level storage (as opposed to file-level storage). Only block-level operations are supported.
  • Internet Small Computer System Interface (iSCSI) SAN – allows for the emulation of a SAN over IP networks

DxEnterprise works across multiple SQL Server versions and OSes, while Microsoft Clustering is not designed for Quality of Service (QoS). I was impressed by the demo showing how to build out a 2-node cluster. You can see the steps below.

  1. Create individual nodes
  2. Add the disks
  3. Create a vhost
  4. Set the two nodes to be active on this vhost

In the demonstration, one node was running Windows Server 2008 R2 and the other node was running Windows Server 2012. He explained how this simplifies implementing a non-traditional cluster (different OS/SQL versions). As an added benefit, it provides a way to perform a consistent install on each node.

References

 

  1. Wikipedia: Storage area network
  2. Wikipedia: iSCSI

 


PowerShell and Python – The Clash Part Deux

Jorge Besadaz (jbesada@carnival.com, jbesada@yahoo.com)

Below is a comparison between Python and PowerShell.

PowerShell

  • PowerShell is installed by default.
  • It is politically correct to use PowerShell.
  • It can instantiate .NET classes.
  • “Steal from the Best, And Create the Rest.” Chad Miller is better than you; see for yourself on CodePlex.
  • PowerGUI Script Editor

Python

  • Easy to learn
  • Very good support for Windows
  • Has modules for everything
  • Has a “batteries included” philosophy

JetBrains makes an outstanding Python editor called PyCharm. In Python, indentation takes the place of braces. You can use Python to call sqlcmd.

There is a great video from Jeffrey Snover, the creator of PowerShell, that you can watch. You can find the presentation here.

References

  1. http://www.maxtblog.com/2015/06/powershell-in-south-sql-saturday-379-was-a-great-success/
  2. http://bost.ocks.org/mike/bar/

PowerShell with Visual Studio SQL Data Tools

Max Trinidad
maxtblog.com
Florida PowerShell User Group

Max demonstrated how to debug PowerShell with Visual Studio and use many of the available data tools. There were big changes between versions of PowerShell (2 through 5).

New/Enhanced Features

Rival Editors

  1. SAPIEN is the editor of choice.
  2. PowerShell Studio 2012
  3. PrimalXML
  4. SharpDevelop

Takeaway

“I can do it. You can do it!” -Max