Jenkins Run MSTest Unit Tests

Problem

You’ve worked hard creating unit tests using MSTest. That’s a great start, but it doesn’t mean you’re finished. How do you know the tests are actually being run?

Solution

Running the tests

Jenkins makes this part easy. There are two different types of builds that you can choose from. I’ll share how to run unit tests with MSTest for both types.

You can add this command within a build step of a Freestyle Project or a Pipeline Project. I am assuming the build will run on a Windows node or the master.
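For a Freestyle Project, add an “Execute Windows batch command” build step with something like the following sketch. The MSTest.exe path here is an assumption (it varies, as noted below); the two parameters are the ones discussed next.

    "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\MSTest.exe" /testcontainer:"%WORKSPACE%\MyTests.dll" /resultsfile:"%WORKSPACE%\Results.trx"

For a Pipeline Project, the same command runs inside a bat step (the 'windows' node label is an assumption):

    pipeline {
        agent { label 'windows' }
        stages {
            stage('Test') {
                steps {
                    // Same MSTest command, with backslashes escaped for a Groovy single-quoted string
                    bat '"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\Common7\\IDE\\MSTest.exe" /testcontainer:"%WORKSPACE%\\MyTests.dll" /resultsfile:"%WORKSPACE%\\Results.trx"'
                }
            }
        }
    }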

 

The location of MSTest.exe will vary based on the install location and version of Visual Studio. The /resultsfile:"%WORKSPACE%\Results.trx" parameter writes the test output to the specified file. You can repeat the /testcontainer:"%WORKSPACE%\MyTests.dll" parameter to test multiple assemblies and collect all of the results in a single results file.

 


No IFrame For You

Security isn’t easy, but it’s becoming more important, and there’s plenty of evidence explaining the dangers of missing a flaw. One of the items that got flagged on a project was that it allowed IFrames from any other site. The findings referenced the X-Frame-Options header. In my particular case, the business wanted to allow IFraming across domains. This ruled out using DENY or SAMEORIGIN. ALLOW-FROM would’ve fit the bill if it were widely supported. For MVC, you can leverage built-in web.config values or ActionFilter attributes. I was supporting a WebForms site though.

For my case, I had to write some custom code: an HttpModule that IIS can load. Values from the web.config allow only specific sites to IFrame the site the web.config belongs to; multiple sites are semicolon delimited.
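A minimal sketch of the approach, assuming a hypothetical AllowedFramingSites appSetting name (the original module’s names may differ):

    using System;
    using System.Configuration;
    using System.Linq;
    using System.Web;

    // Sketch: emit X-Frame-Options based on a semicolon-delimited allow-list
    // read from web.config. Requires the IIS integrated pipeline.
    public class XFrameOptionsModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += OnPreSendRequestHeaders;
        }

        private static void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;
            var allowed = (ConfigurationManager.AppSettings["AllowedFramingSites"] ?? string.Empty)
                .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

            // The Referer host is a heuristic for who is framing us. If it is
            // on the allow-list, permit it with ALLOW-FROM; otherwise fall
            // back to SAMEORIGIN.
            var referrer = app.Context.Request.UrlReferrer;
            if (referrer != null &&
                allowed.Contains(referrer.Host, StringComparer.OrdinalIgnoreCase))
            {
                app.Context.Response.Headers["X-Frame-Options"] =
                    "ALLOW-FROM " + referrer.GetLeftPart(UriPartial.Authority);
            }
            else
            {
                app.Context.Response.Headers["X-Frame-Options"] = "SAMEORIGIN";
            }
        }

        public void Dispose() { }
    }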

Your web.config file could look like below for example.
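Illustrative values only; the key name must match whatever the module reads:

    <configuration>
      <appSettings>
        <add key="AllowedFramingSites" value="partner-one.example.com;partner-two.example.com" />
      </appSettings>
      <system.webServer>
        <modules>
          <add name="XFrameOptionsModule" type="XFrameOptionsModule" />
        </modules>
      </system.webServer>
    </configuration>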

 

AD your Splunk

Problem

Splunk Enterprise offers a great solution for anyone with legal or compliance reasons requiring an on-premises setup. It’s also very useful for developers who would like to do testing in a locally destructive fashion. One of the keys to creating an easy-to-maintain environment is getting authentication and authorization right. In my case, the vast majority of users belong to a shared Active Directory (AD) domain.

Who Cares?

Splunk Enterprise does offer its own store of users. The reason for managing them with AD instead is that when people’s access needs change (leaving or joining teams or the company), having everything in one place makes updates much simpler, particularly if other systems already use AD as the point of reference.

Okay, How?

There are many sites that reference configuring Splunk Enterprise for AD authentication/authorization. I haven’t found any that go into enough detail to make it simple. I’ve attempted to do that below.

Solution

LDAP Configuration for User Roles

  1. Click Settings
  2. Under USERS AND AUTHENTICATION, click Access controls
  3. Click Authentication Method
  4. Under External, select LDAP
  5. Click LDAP Settings
  6. Fill in the LDAP connection settings for your domain
  7. Pro tip: make sure the Group base DN points to an OU containing all the groups rather than the root
  8. Make sure the User base DN is the root of AD

AD Mapping

  1. Under Actions, click Map Groups
  2. Click any group from the Group base DN you provided
  3. Select the roles that the given AD group should have

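Under the covers, these UI steps write to authentication.conf in $SPLUNK_HOME/etc/system/local. A sketch of the end result, with made-up hosts, DNs, and group names (the setting names themselves are Splunk’s own):

    [authentication]
    authType = LDAP
    authSettings = corp_ad

    [corp_ad]
    host = dc01.corp.example.com
    port = 389
    bindDN = CN=svc_splunk,OU=Service Accounts,DC=corp,DC=example,DC=com
    bindDNpassword = <set via the UI>
    userBaseDN = DC=corp,DC=example,DC=com
    userNameAttribute = samaccountname
    groupBaseDN = OU=Splunk Groups,DC=corp,DC=example,DC=com
    groupNameAttribute = cn
    groupMemberAttribute = member

    [roleMap_corp_ad]
    admin = Splunk_Admins
    user = Splunk_Users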

References

  1. Manage Splunk user roles with LDAP

Bitbucket Server Webhook to Jenkins

Problem

You have two great tools that you’d like to integrate: in this case, BitBucket and Jenkins. You can configure Jenkins to check BitBucket for changes (aka polling) to trigger a build, but this is clunky and repetitive.

Solution

There is a better way. BitBucket offers a plugin called “Webhook to Jenkins for Bitbucket”. This plugin calls Jenkins for each new commit to a repository. This way Jenkins doesn’t call BitBucket; BitBucket calls Jenkins. It’s the Hollywood Principle: “Don’t call us, we’ll call you.”

New Problem

Now, like so many times in programming, your solution to one problem has created another. In debugging, this is progress. You need to know how to stitch this together. You can configure the plugin by clicking Edit (the pencil icon) to bring up its configuration screen. Once you enter all the information, click Trigger Jenkins to test the connection. You may see the following error.

Temporary failure in name resolution


New Solution

You may need to provide the fully qualified domain name for the Jenkins instance; the machine name alone (e.g. awesome_machine) will not work. Assume the fully qualified machine name is awesome_machine.awesome.domain and the port is 3456. This would make the Jenkins URL http://awesome_machine.awesome.domain:3456. Once you do that, you’ll get a new error.

New Problem (Again!)

Once you click Trigger Jenkins, you may get an error stating No Git jobs using the repository.

New Solution (Again!)

To work around this, you can configure the trigger for the job to poll the SCM without a schedule: click Poll SCM and leave the Schedule text area blank. You can see an example below.

 

[Screenshot: Jenkins job configuration with Poll SCM checked and the Schedule text area left blank]

It’s important to note that despite the above setting, Jenkins will never poll Git.
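For a Pipeline job, the equivalent trick is a pollSCM trigger with an empty schedule; a sketch, assuming a reasonably recent Jenkins:

    pipeline {
        agent any
        triggers {
            // Empty schedule: Jenkins never polls on its own, but the job
            // still reacts to SCM notifications such as the Bitbucket webhook.
            pollSCM('')
        }
        stages {
            stage('Build') {
                steps {
                    echo 'Triggered by the webhook'
                }
            }
        }
    }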

References

  1. GitHub: Debugging “Error: Jenkins Response: No git jobs using repository” #147
  2. Webhook to Jenkins for Bitbucket

Updating Jenkins Plugins with PowerShell

Problem

Jenkins provides an outstanding open source continuous integration platform for a multitude of languages and technologies. It accomplishes this by allowing the community and other interested parties to contribute plugins. Many of these plugins are frequently updated, which is amazing! Even though Jenkins has a pretty nice user interface (UI) for updating plugins, doing so gets tedious: on a system of any scale, there could be updates daily.


Solution

Fortunately for me, Jenkins provides a really straightforward command line interface (CLI). This allowed me to create a PowerShell script that updates all of the installed plugins to their latest versions. I configured this to run weekly and it’s been a huge time saver. The added benefit is that you get all the latest features for your plugins without doing anything.

I also configured it to send an email out listing the plugins that were updated. I had to copy the PowerShell script into a gist to make it display correctly here, but the proper repository is available in case you are interested.
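A condensed sketch of the idea (the server URL, credentials, paths, and email addresses are placeholders; list-plugins, install-plugin, and safe-restart are the Jenkins CLI’s own commands):

    # Placeholders: point these at your Jenkins instance and CLI jar.
    $jenkinsUrl = 'http://jenkins.example.com:8080'
    $cli        = 'C:\tools\jenkins-cli.jar'
    $auth       = 'user:apitoken'

    # list-plugins prints the available new version in parentheses at the
    # end of the line for any plugin with a pending update.
    $updatable = java -jar $cli -s $jenkinsUrl -auth $auth list-plugins |
        Where-Object { $_ -match '\)\s*$' } |
        ForEach-Object { ($_ -split '\s+')[0] }

    if ($updatable) {
        # install-plugin pulls the latest version of each named plugin.
        java -jar $cli -s $jenkinsUrl -auth $auth install-plugin $updatable

        # Restart once running jobs finish so the new versions load.
        java -jar $cli -s $jenkinsUrl -auth $auth safe-restart

        # Report what changed (SMTP details are placeholders too).
        Send-MailMessage -To 'team@example.com' -From 'jenkins@example.com' `
            -Subject 'Jenkins plugins updated' -Body ($updatable -join "`n") `
            -SmtpServer 'smtp.example.com'
    }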

 

Release Management Gotcha: Quotes

Here is a gotcha I encountered in Release Management 2013 that led to a cryptic error. Why use Release Management 2013 when there is a newer Team Foundation Server (TFS) 2017 version? Because releasing a new version doesn’t upgrade all the applications still using the previous one. This will help anyone still using it, or at least provide historical context.

The Problem

Release Management 2013 allows you to create components. These components can be re-used across builds, which helps keep your custom builds DRY. I was running a batch file and thought that I should include quotes to account for spaces in the batch filename. I configured it like the image below.

[Screenshot: component configuration with quotes around the batch file path]

The problem only surfaced when I tried to run the build. I got the below errors.

[Screenshot: Release Management errors]

The error only says Illegal characters in path and that the step Failed. This doesn’t provide much information to start troubleshooting with.

The Solution

The solution was to remove the quotes in the component configuration. Once I did that, the build worked as designed. You wouldn’t know that was the issue from the message provided, though. Release Management 2013 must do the proper quoting and escaping internally.

Final Thoughts

If you’re doing new development, I wouldn’t recommend using Release Management 2013. I’d recommend using Jenkins followed by TFS. I am a strong proponent of Jenkins for the following reasons.

  1. Transparency
    • Errors can be reproduced by running the command line
    • No secrecy in what’s getting executed. All output shown plainly.
  2. Local Instance
    • This gave people a playground to become familiar with the platform.
    • This allowed people room to play and experiment without interfering with the shared instance.
  3. Open Source
    • You can literally open up and examine the source code.
    • There is a vibrant community of people building plugins.
  4. It’s Old
    • I’ve yet to come across an issue that isn’t well documented on Stack Overflow or a blog.
    • This is less exciting, but really awesome for production work.

Web API PUT Gotcha

I learned the below lesson about the default ASP.NET Web API routing. My specific example had to do with implementing an endpoint that supported the PUT HTTP verb. For the sake of clarity, I chose to name the variable companyId rather than id. This would make the URL look like root/{companyId}/. You can see this in the example DoesntWork.cs below.
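A reconstruction of the shape of DoesntWork.cs (the controller and types here are illustrative, not the original file):

    using System.Web.Http;

    // With the default route template api/{controller}/{id}, this action is
    // never selected for PUT api/companies/42: the route supplies "id", but
    // the action's parameter is "companyId", so the result is 404 Not Found.
    public class CompaniesController : ApiController
    {
        [HttpPut]
        public IHttpActionResult Put(int companyId)
        {
            // ... update the company here ...
            return Ok();
        }
    }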

These kinds of routing errors are not always obvious. As a consumer of the API, you may receive a response of 404 Not Found. This doesn’t provide a ton of information and can be frustrating.

The default ASP.NET Web API routing looks for a method whose parameter name matches the route template’s variable, which is id by default. This is why the method was not found. Alternatively, you can specify the route in an attribute, like below.
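A sketch with attribute routing (this requires config.MapHttpAttributeRoutes() during Web API configuration):

    using System.Web.Http;

    // Naming the route parameter companyId explicitly lets it bind as intended.
    public class CompaniesController : ApiController
    {
        [HttpPut]
        [Route("api/companies/{companyId}")]
        public IHttpActionResult Put(int companyId)
        {
            return Ok();
        }
    }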

Another lesson that I had to learn the hard way.

In ASP.NET Core, they’ve fixed this issue. There is no implicit [FromUri] naming convention; it supports [FromRoute], but the route template must call out the variable name. You can see the idea in the dotnetcoreway.cs snippet below, and you can find a complete working sample in this repository on GitHub.
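A sketch in the spirit of dotnetcoreway.cs:

    using Microsoft.AspNetCore.Mvc;

    // The route template calls out companyId, and [FromRoute] binds it.
    [Route("api/companies")]
    public class CompaniesController : Controller
    {
        [HttpPut("{companyId}")]
        public IActionResult Put([FromRoute] int companyId)
        {
            return Ok();
        }
    }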

 

NLog to DB with UserId


The Problem

A user (whether external or internal) allegedly has an issue with your ASP.NET web site, which stores the user id in the ASP.NET session. They describe the problem and the steps to reproduce it, but even with that, it would be nice to have more information about what specifically they did, or better yet, what the code did on their behalf.

For this, it helps to have data more granular than usage tracking provides. We will wade through the pile of application logs to find our smoking gun. If you’re using NLog, odds are that you already have this.

Now there is a different problem: the sheer volume of log statements. Even for a relatively small site (~50 concurrent users), plucking out the statements relevant to your problem user becomes a chore.

The Solution

Simply add the user id or any other session variable to each log statement, and then you can easily filter on it. Wait a second though… I don’t want to have to edit each and every log statement. Fortunately, thanks to NLog, you don’t have to.

Install the NLog, NLog.Config, NLog.Schema, and NLog.Web packages using the following commands.

Install-Package NLog
Install-Package NLog.Schema

NLog.Config will stand up a shell configuration file with examples.

Install-Package NLog.Config

NLog.Web adds the ability to use ASP.NET session variables and other goodies with NLog.

Install-Package NLog.Web

Update the NLog.Config file like below to include the new value.
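A sketch of the relevant pieces; the table, columns, and connection string name are illustrative, while ${aspnet-session} is the NLog.Web layout renderer doing the real work:

    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <!-- NLog.Web supplies the aspnet-session layout renderer -->
      <extensions>
        <add assembly="NLog.Web" />
      </extensions>
      <targets>
        <target name="db" xsi:type="Database"
                connectionStringName="LoggingDb"
                commandText="INSERT INTO Log (Logged, Level, Message, UserId)
                             VALUES (@logged, @level, @message, @userId)">
          <parameter name="@logged"  layout="${date}" />
          <parameter name="@level"   layout="${level}" />
          <parameter name="@message" layout="${message}" />
          <!-- Pulls the UserId the site already stores in session -->
          <parameter name="@userId"  layout="${aspnet-session:Variable=UserId}" />
        </target>
      </targets>
      <rules>
        <logger name="*" minlevel="Info" writeTo="db" />
      </rules>
    </nlog>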

There you have it. Now you can easily filter log entries by user. You can find my code here.


References

  1. AspNetSession layout renderer
  2. NLog Database target
  3. AspNetSession layout renderer not working
  4. NLogUserIdToDB code

South Florida .NET Code Camp 2017

Thank you to all the volunteers, speakers, and sponsors that came together to make South Florida .NET Code Camp 2017 happen. Thank you for providing Code for Fort Lauderdale with a community table to tell people about how we’re trying to improve our city and county. I enjoyed meeting and talking with the attendees, and I learned a lot from those conversations. I’ve recorded some notes from one of the sessions I was able to attend below.


Your Application: Understanding What Is and What Should Never Be
by David V. Corbin

Here is the PowerPoint for this talk. When testing your application, it’s important to have a narrow focus. Take the simple example of calculating the slope of a line.

y = mx + b

m = delta y / delta x = rise / run

What happens when it is a vertical line? The run is 0. How does the program handle division by zero?
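To make the point concrete, here is a sketch of how that edge case might look as an MSTest case (the Line class and names are mine, not the speaker’s):

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public static class Line
    {
        public static double Slope(double x1, double y1, double x2, double y2)
        {
            // Double division by zero silently yields Infinity, so the
            // vertical-line case must be guarded explicitly.
            if (x2 - x1 == 0)
                throw new DivideByZeroException("Vertical line: the run is 0.");
            return (y2 - y1) / (x2 - x1); // m = rise / run
        }
    }

    [TestClass]
    public class LineTests
    {
        [TestMethod]
        [ExpectedException(typeof(DivideByZeroException))]
        public void Slope_VerticalLine_Throws()
        {
            Line.Slope(2, 0, 2, 5); // same x coordinate: the run is 0
        }
    }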

The testing taxonomy contains top-level “Kingdoms”.

Transitory Testing

  • Thinking about the problem
  • Ad hoc execution
  • Local helper programs
  • No long term record
  • How can you possibly know what the tester did in their head?

Durable Testing

  • Why do we skimp on durable testing? The perceived cost is high. We’re not being effectively lazy: “Maximize the amount of work not done” (from the Agile Manifesto). Once you get through the mind shift, it is easier for most things; some things you have to pay to implement.
  • Tests exist with expected results
  • Audit trail showing the test was done
  • Manual tests
  • Unit tests
    • Unit Tests and System Tests are the endpoints of a spectrum
  • Automated tests
  • System Testing

UI/Browser -> Logic -> Code -> Logic -> DAL -> stored procedure T-SQL -> Data

Component Tests

  • API Tests
  • Integration Tests
  • Sub-system Tests

“We never tried that set of inputs.” “We never did those two things at the same time.” “It worked in the last version!” Get rid of regression errors permanently: “I hate the same pain twice.”

It’s important to understand the current state of the application and the constraints on the future state. For example: this action should not take longer than a given time period. Have some artifact capturing the constraints so they can be tested automatically. Testing should be a game in the mathematical sense: a set of decisions with a desired outcome, i.e. Game Theory.
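One way to turn a timing constraint like that into a durable, automatically tested artifact is MSTest’s Timeout attribute; a sketch (the two-second threshold is made up):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PerformanceConstraints
    {
        [TestMethod]
        [Timeout(2000)] // milliseconds: fail if the action takes longer
        public void Action_CompletesWithinTwoSeconds()
        {
            // ... invoke the action the constraint covers ...
        }
    }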

Where do we get value in our organization and in our situation?

How are we measuring our testing?

  • Code coverage
    • Low numbers indicate large amounts of untested code
    • High numbers are often meaningless
  • Cyclomatic complexity
    • The absolute minimum number of test paths you need to run
    • Does not detect data-driven scenarios

Data Specific Considerations

  • Reveals many errors in logic/calculation
  • Can be hard to identify

Time specific considerations

  • Discovers problems that are often not found pre-production
  • Virtually impossible without special design considerations

IO rewriting

  • Multi-threaded and async operations
    • Often the most difficult to understand, categorize and test
    • Careful design is your best defense
    • Using the latest await/async
  • How to test if a collection is modified? You can with unit tests.

Negative Testing

  • The art of testing what is not there
  • Common problems
    • Unexpected field changes
    • Unexpected events
    • Unexpected allocations
    • Unexpected reference retention
  • Nobody achieves perfection.
    • Forget about always and never.
    • Exploratory testing is your best defense for catching the gaps.

Multiple Views with Long Term Focus

  • Deep understanding encompasses a range:
    • A wide view
    • A deep view
  • It is impossible to get to the point of Understanding Everything
  • One will never be Done
  • It is a continuing quest for understanding

What is Software Quality?

  • Grady Booch (UML)
  • Look at what is not quality
    • Surprise
    • If things happen according to expectation, then you have your desired level of software quality
    • Understanding reduces surprises
    • There will always be bugs/defects/surprises
    • Increase in known issues is a good thing
  • One cannot test everything!
    • Don’t attempt to.
    • Create a simple prioritization Matrix.
    • Identify a small target for your next sprint.
    • Strive for continual improvement.
    • Add a robust definition of done.
    • Experiment and try to make each time a little bit better.

Jenkins Create TFS Label

Why?

Who needs this guide if there is already a TFS plugin for Jenkins and the feature has been completed? Well, I couldn’t find a graphical guide on how to do it, and there are a lot of configuration pages in Jenkins. I assume you already have Jenkins and TFS playing together for this guide. You can follow the steps below to have Jenkins create a label in TFS.

How?

1. Open Jenkins

2. Open your project

3. Click “Configure”

4. Click “Post-build Actions”

5. Click “Add post-build action”

6. Select “Create a label in TFVC” (TFVC = Team Foundation Version Control)

7. Set the label as you see fit

8. Click “Always” or “If the build is successful”, depending upon when a label should be created

9. Click “Save”

10. Go back to your project

11. Click “Build Now”

12. Open your build

13. Click “Console Output” to verify the label step ran

14. Go to TFS to see the newly created label

That’s it. Now you can trace your Jenkins builds back to a specific version in your TFS source control.