Updating Jenkins Plugins with PowerShell

Problem

Jenkins provides an outstanding open source continuous integration platform for a multitude of languages and technologies. It accomplishes this by allowing the community and other interested parties to contribute plugins. Many of these plugins are frequently updated, which is amazing! Even though Jenkins has a pretty nice user interface (UI) for updating plugins, it gets tedious, because on a system of any scale there could be updates daily.

(Screenshot: the Jenkins plugin updater UI)

Solution

Fortunately for me, Jenkins provides a really straightforward command line interface (CLI). This allowed me to create a PowerShell script that updates all of the installed plugins to their latest versions. I configured this to run weekly and it’s been a huge time saver. The added benefit is that you get all the latest features for your plugins without doing anything.

I configured it to send an email out with the plugins that have been updated. I had to copy the PowerShell script into a gist to make it display correctly here, but here is the proper repository in case you are interested.
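The script itself lives in that repository; what follows is just a rough sketch of the approach, assuming jenkins-cli.jar sits next to the script and that the server URL, credentials, and mail settings are placeholders you would swap for your own.

# Rough sketch: update every Jenkins plugin that has a newer version available.
# jenkins-cli.jar, the server URL, the API token, and the mail settings below are
# placeholders; the real script in the repository handles errors and scheduling.
$jenkinsUrl = 'http://jenkins.example.com:8080'
$auth       = 'svc_user:apitoken'

# list-plugins marks plugins with an available update by appending "(new-version)".
$updatable = java -jar jenkins-cli.jar -s $jenkinsUrl -auth $auth list-plugins |
    Where-Object { $_ -match '\)\s*$' } |
    ForEach-Object { ($_ -split '\s+')[0] }

foreach ($plugin in $updatable) {
    java -jar jenkins-cli.jar -s $jenkinsUrl -auth $auth install-plugin $plugin
}

if ($updatable) {
    # Restart so the new versions load, then report what changed.
    java -jar jenkins-cli.jar -s $jenkinsUrl -auth $auth safe-restart
    Send-MailMessage -From 'jenkins@example.com' -To 'team@example.com' `
        -SmtpServer 'smtp.example.com' -Subject 'Jenkins plugins updated' `
        -Body ($updatable -join [Environment]::NewLine)
}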

 


Release Management Gotcha: Quotes

Here is a gotcha I encountered in Release Management 2013 that led to a cryptic error. Why cover Release Management 2013 when there is a newer Team Foundation Server (TFS) 2017 version? Because releasing a new version doesn’t upgrade all the applications still using the previous one. This will help anyone still using it, or at least provide historical context.

The Problem

Release Management 2013 allows you to create components. These components can be re-used across builds. They are helpful for making your custom builds DRY. I was running a batch file and thought that I should include quotes to account for spaces in the batch filename. I configured it like the below image.

(Screenshot: Release Management component configuration with quotes)

The problem only surfaced when I tried to run the build. I got the below errors.

(Screenshot: the Release Management errors)

The error only says “Illegal characters in path.” and that it Failed. This doesn’t provide much information to start troubleshooting with.

The Solution

The solution was to remove the quotes in the component configuration. Once I did that, the build worked as designed. You wouldn’t know that was the issue from the message provided, though. Release Management 2013 must do the proper quoting and escaping internally.

Final Thoughts

If you’re doing new development, I wouldn’t recommend using Release Management 2013. I’d recommend using Jenkins followed by TFS. I am a strong proponent of Jenkins for the following reasons.

  1. Transparency
    • Errors can be reproduced by running the command line
    • No secrecy in what’s getting executed. All output shown plainly.
  2. Local Instance
    • This gave people a playground to become familiar with the platform.
    • This allowed people room to play and experiment without interfering with the shared instance.
  3. Open Source
    • You can literally open up and examine the source code.
    • There is a vibrant community of people building plugins.
  4. It’s Old
    • I’ve yet to come across an issue that isn’t well documented on Stack Overflow or a blog.
    • This is less exciting, but really awesome for production work.

Web API PUT Gotcha

I learned the below lesson about the default ASP.NET Web API routing. My specific example had to do with implementing an endpoint that supported the PUT HTTP verb. For the sake of clarity, I chose to name the variable companyId rather than id. This would make the URL look like root/{companyId}/. You can see this in the below example, DoesntWork.cs.
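DoesntWork.cs itself is embedded from the repository; the sketch below mirrors its shape with illustrative names (CompaniesController and Company are stand-ins) to show why the request falls over.

using System.Web.Http;

public class Company { public string Name { get; set; } }

// With the default route template api/{controller}/{id}, nothing supplies a value
// for a parameter named companyId, so action selection fails and the caller
// gets a 404 Not Found.
public class CompaniesController : ApiController
{
    [HttpPut]
    public IHttpActionResult Put(int companyId, [FromBody] Company company)
    {
        // Illustrative body only.
        return Ok();
    }
}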

These kinds of routing errors are not always obvious. As a consumer of the API, you may receive a response of 404 Not Found. This doesn’t provide a ton of information and can be frustrating.

The default ASP.NET Web API route template binds a parameter named id, which is why the method was not found. You can specify the route in an attribute, like the below, as well.
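Here is a sketch of the attribute-routed version, again with illustrative names; attribute routing has to be enabled via config.MapHttpAttributeRoutes().

using System.Web.Http;

public class CompaniesControllerWithRoute : ApiController
{
    // The route template calls out companyId, so the default "id" convention
    // no longer matters and the PUT binds correctly.
    [HttpPut]
    [Route("api/companies/{companyId}")]
    public IHttpActionResult Put(int companyId, [FromBody] Company company)
    {
        return Ok();
    }
}

(The Company type is the same stand-in as in the previous sketch.)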

Another lesson that I had to learn the hard way.

In ASP.NET Core, they’ve fixed this issue. They don’t provide an implicit [FromUri] naming convention. It supports [FromRoute], but the route template must call out the variable name. You can see the snippet in dotnetcoreway.cs, and a complete working sample, in this repository on GitHub.
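A sketch of the same endpoint in the ASP.NET Core style; the names are again stand-ins rather than the contents of dotnetcoreway.cs.

using Microsoft.AspNetCore.Mvc;

public class Company { public string Name { get; set; } }

[Route("api/companies")]
public class CompaniesController : ControllerBase
{
    // The route template names companyId and [FromRoute] binds it; there is no
    // implicit "id" convention to trip over.
    [HttpPut("{companyId}")]
    public IActionResult Put([FromRoute] int companyId, [FromBody] Company company)
    {
        return NoContent();
    }
}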

 

NLog to DB with UserId


The Problem

A user (whether external or internal) allegedly has an issue with your ASP.NET web site, which stores the user ID in the ASP.NET session. They describe the problem and the steps to reproduce it, but even with that it would be nice to have more information about what specifically they did, or better yet, what the code did on their behalf.

For this, it helps to have data more granular than what tracking gives you. We will wade through the pile of application logs to find our smoking gun. If you’re using NLog, odds are that you already have this.

Now there is a different problem: the sheer volume of log statements. Even for a relatively small site (~50 concurrent users), plucking out the statements relevant to your problem user becomes a chore.

The Solution

Simply add the user ID, or any other session variable, to each log statement and then you can easily filter based on it. Wait a second though…I don’t want to have to edit each and every log statement. Fortunately, thanks to NLog, you don’t have to.

Install the NLog, NLog.Config, NLog.Schema, and NLog.Web packages from the NuGet Package Manager Console using the commands below.

NLog.Config will stand up a shell configuration file with examples.

Install-Package NLog.Config

NLog.Web will add the ability to use ASP.NET session variables and other goodies with NLog.

Install-Package NLog.Web

Update the NLog.config file, like below, to include the new value.
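The full configuration is in the code linked at the end of this post; here is a minimal sketch of the idea, assuming the session key is called UserId and the database target writes to a LogEntries table (both names are placeholders to adapt).

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <!-- NLog.Web supplies the aspnet-* layout renderers used below. -->
  <extensions>
    <add assembly="NLog.Web" />
  </extensions>

  <targets>
    <!-- Table, column, and session variable names are placeholders. -->
    <target name="db" xsi:type="Database" connectionStringName="LoggingDb"
            commandText="INSERT INTO LogEntries (Logged, Level, Logger, Message, UserId)
                         VALUES (@logged, @level, @logger, @message, @userId)">
      <parameter name="@logged"  layout="${date}" />
      <parameter name="@level"   layout="${level}" />
      <parameter name="@logger"  layout="${logger}" />
      <parameter name="@message" layout="${message}" />
      <parameter name="@userId"  layout="${aspnet-session:variable=UserId}" />
    </target>
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="db" />
  </rules>
</nlog>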

There you have it. Now you can easily filter log entries by user. You can find my code here.

(Screenshot: log entries showing authentication and session info)

References

  1. AspNetSession layout renderer
  2. NLog Database target
  3. AspNetSession layout renderer not working
  4. NLogUserIdToDB code

South Florida .NET Code Camp 2017

Thank you to all the volunteers, speakers and sponsors that came together to make South Florida .NET Code Camp 2017 happen. Thank you for providing Code for Fort Lauderdale with a community table to tell people about how we’re trying to improve our city and county. I enjoy meeting and talking with all the attendees. I learn a lot from those conversations. I’ve recorded some notes from one of the sessions that I was able to attend below.


Your Application: Understanding What Is and What Should Never Be
by David V. Corbin

Here is the PowerPoint for this talk. When testing your application, it’s important to have a narrow focus. Take the simple example of calculating the slope of a line.

y = mx + b

m = delta y / delta x = rise / run

What happens when it is a vertical line? The run is 0. How does the program handle divide by 0?
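As a concrete illustration (mine, not from the talk), a narrow, durable test that pins down the vertical-line case might look like this:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class LineMath
{
    public static double Slope(double x1, double y1, double x2, double y2)
    {
        // A vertical line has run = 0; decide the behaviour explicitly instead
        // of letting floating-point division hand back Infinity or NaN.
        if (x1 == x2)
            throw new DivideByZeroException("Vertical line: slope is undefined.");
        return (y2 - y1) / (x2 - x1);
    }
}

[TestClass]
public class LineMathTests
{
    [TestMethod]
    public void Slope_OfVerticalLine_Throws()
    {
        Assert.ThrowsException<DivideByZeroException>(() => LineMath.Slope(2, 0, 2, 5));
    }
}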

The testing taxonomy contains top-level “kingdoms”.

Transitory Testing

  • Thinking about the problem
  • Ad hoc execution
  • Local helper programs
  • No long term record
  • How can you possibly know what he did in his head?

Durable Testing

  • Why do we skimp on durable testing? The perceived cost is high; we’re not being “effectively lazy.” “Maximize the amount of work not done,” from the Agile Manifesto. Once you get through the mindshift, it is easier for most things, though some things you have to pay to implement.
  • Tests exist with expected results
  • Audit trail showing the test was done
  • Manual tests
  • Unit tests
    • Unit Tests and System Tests are the endpoints of a spectrum
  • Automated tests
  • System Testing

UI/Browser -> Logic -> Code -> Logic -> DAL -> Stored Procedures (T-SQL) -> Data

Component Tests

  • API Tests
  • Integration Tests
  • Sub-system Tests

“We never tried that set of inputs.” “We never did those two things at the same time.” “It worked in the last version!” Get rid of regression errors permanently. “I hate the same pain twice.”

It’s important to understand the current state of the application and the constraints on its future state. For example, this action should not take longer than a given time period. Have some artifact capturing the constraints so that they can be tested automatically. Testing should be a game in the mathematical sense: a set of decisions with a desired outcome (game theory).

Where do we get value in our organization and in our situation?

How are we measuring our testing?

  • Code coverage
    • Low numbers indicate large amounts of untested code
    • High numbers are often meaningless
  • Cyclomatic complexity
    • Absolute minimum test paths that you need to run
    • Does not detect data driven scenarios

Data Specific Considerations

  • Reveals many errors in logic/calculation
  • Can be hard to identify

Time specific considerations

  • Discovers problems that are often not found pre-production
  • Virtually impossible without special design considerations

IO rewriting

  • Multi-threaded and async operations
    • Often the most difficult to understand, categorize and test
    • Careful design is your best defense
    • Using the latest await/async
  • How to test if a collection is modified? You can with unit tests.

Negative Testing

  • The art of testing what is not there
  • Common problems
    • Unexpected field changes
    • Unexpected events
    • Unexpected allocations
    • Unexpected reference retention
  • Nobody achieves perfection.
    • Forget about always and never.
    • Exploratory testing is your best defense for catching the gaps.

Multiple Views with Long Term Focus

  • Deep understanding encompasses a range:
    • A wide view
    • A deep view
  • It is impossible to get to the point of Understanding Everything
  • One will never be Done
  • It is a continuing quest for understanding

What is Software Quality?

  • Grady Booch (UML)
  • Look at what is not quality
    • Surprise
    • If things happen according to expectation, then you have your desired level of software quality
    • Understanding reduces surprises
    • There will always be bugs/defects/surprises
    • Increase in known issues is a good thing
  • One cannot test everything!
    • Don’t attempt to.
    • Create a simple prioritization Matrix.
    • Identify a small target for your next sprint.
    • Strive for continual improvement.
    • Add a robust definition of done.
    • Experiment and try to make each time a little bit better.

Jenkins Create TFS Label

Why?

Who needs this if there is already a TFS plugin for Jenkins and the feature has been completed? I couldn’t find a graphical guide on how to do it. There are a lot of configuration pages in Jenkins. I assume you have Jenkins and TFS playing together for this guide. You can follow the steps below to have Jenkins create a label in TFS.

How?

1. Open Jenkins

(Screenshot: the Jenkins home page)
2. Open your project

(Screenshot: the project page in Jenkins)
3. Click “Configure”

4. Click “Post-build Actions”

(Screenshot: the project configuration page)

5. Click “Add post-build action”
6. Select “Create a label in TFVC” (TFVC = Team Foundation Version Control)

(Screenshot: the post-build action list)
7. Set the label as you see fit

(Screenshot: the “Create a label in TFVC” settings)
8. Click “Always” or “If the build is successful” depending upon when a label should be created

9. Click “Save”

10. Go back to your project

(Screenshot: the project page in Jenkins)

11. Click “Build Now”

12. Open your Build

13. Click “Console Output”

(Screenshot: the console output)

14. Go to TFS to see the newly created label

(Screenshot: the new label in TFS)

That’s it. Now you can trace your Jenkins builds back to a specific version in your TFS source control.

Making Chutzpah and TFS 2013 get along

If you’re like me, you got your tests running in Visual Studio (VS). Then you moved on to other priorities. When you returned, some of the JavaScript unit tests had started failing. The problem is that you didn’t know when. You thought to yourself, “this should be part of the application’s build process”.

First, head to a great article called “Javascript Unit Tests on Team Foundation Service with Chutzpah”. I completed most of the preliminary work and forged ahead. My JavaScript tests (using Jasmine in my case) still failed.

I turned on tracing, because there’s got to be loads of useful information in those logs. I added the .runsettings file according to the Chutzpah wiki. The .runsettings file looked like the below.

After doing this, I received the following error.

An exception occurred while invoking executor ‘executor://chutzpah-js/’: An error occurred while initializing the settings provider named ‘ChutzpahAdapterSettings’. Error: There is an error in XML document (4, 5).There is an error in XML document (4, 5).The string ‘True’ is not a valid Boolean value.

In an effort to rule out the obvious, I changed the “True” to “true” so it looked like this.
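A minimal sketch of the corrected file: the ChutzpahAdapterSettings element name comes straight from the error message, the EnabledTracing element name is taken from the Chutzpah wiki example (verify it against the version you’re running), and the important part is the lowercase true.

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <!-- Sketch only: the boolean must be lowercase, otherwise the XML deserializer
       rejects it with "The string 'True' is not a valid Boolean value." -->
  <ChutzpahAdapterSettings>
    <EnabledTracing>true</EnabledTracing>
  </ChutzpahAdapterSettings>
</RunSettings>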

To my surprise, it worked and took me to the next error. Chutzpah generates a trace file at

c:\users\{build_user}\AppData\Local\Temp

on the build agent machine. I opened up the trace file to find the following error.

vstest.executionengine.x86.exe Information: 0 : Time:11:57:10.4535821; Thread:10; Message:Chutzpah.json file not found given starting directory \content\js

I added the chutzpah.json file using the wiki page as a reference. All JavaScript tests and the chutzpah.json file must be set to Copy Always.
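A minimal chutzpah.json sketch, with the paths as placeholders for this project’s content/js layout:

{
  "Framework": "jasmine",
  "References": [
    { "Path": "content/js/file.js" }
  ],
  "Tests": [
    { "Path": "content/js/file.test.js" }
  ]
}

With that in place, the tests ran but surfaced the next error: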

ReferenceError: Can’t find variable: {ClassName} in file:///D:/Builds/4/CI.Test/bin/content/js/file.test.js (line 24)
attemptSync@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1886:28
run@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1874:20
execute@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:1859:13
queueRunnerFactory@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:697:42
execute@file:///C:/Users/svc_tfsbuild/AppData/Local/Temp/BuildAgent/4/Assemblies/TestFiles/jasmine/v2/jasmine.js:359:28

I added references at the top of the test file that took the TFS build agent’s src folder structure into account, following the same pattern as this Stack Overflow question. A sketch of that pattern is below. Once I did that, all my JavaScript unit tests hummed along.
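The relative path and names here are placeholders; the real path depends on how the build agent lays the sources out.

/// <reference path="../../src/content/js/file.js" />
// Sketch only: the reference comment tells Chutzpah which implementation file to
// load before the spec runs, using a path that resolves on the build agent too.
describe('file', function () {
    it('defines ClassName', function () {
        expect(typeof ClassName).not.toBe('undefined');
    });
});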

Build Failed: TFS out of space

Problem

As I looked forward to my weekend, Team Foundation Server (TFS) 2013 devised other plans. I chugged along making changes in the code, and then hit issues when I tried to queue new builds.

The error included the following message. You can view a screenshot below.

(Screenshot: the TF30063 “You are not authorized to access” error, served by Microsoft IIS 8.5)

Solution

This took me on a wild goose chase to here, here and here. None of these touched the issue in any meaningful way. A coworker of mine mentioned that it might be a space issue on the TFS server machine. I logged in to find the infamous blue pie of disk space.

TFS 2013 leverages IIS to communicate with the client, check in code and queue builds. With developers whacking at TFS day and night, it didn’t take much time for the IIS logs to grow to ~17 GB.

The size allocated for the drives never took this into account. In this case, the machine had a whopping 50 GB C drive. The IIS logs gobbled up the extra space until no one could write to the C drive.

To put out the fire, I deleted the IIS logs and watched TFS spring back to life. I came up with the following preventive measures going forward (a sketch of a scripted cleanup follows the list).

  1. Increase the drive size to 60 GB
  2. Configure IIS to put the logs on a different drive
  3. Disable IIS logging for the time being
  4. Create alerts to track the space on all drives for that machine
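Here is a sketch of the log cleanup as something you could schedule; C:\inetpub\logs\LogFiles is the IIS default log location, and the 30-day retention is an arbitrary placeholder.

# Sketch: prune IIS logs older than 30 days so they can't fill the C drive again.
# Adjust the path and retention before scheduling this on the TFS server.
$logRoot = 'C:\inetpub\logs\LogFiles'
$cutoff  = (Get-Date).AddDays(-30)

Get-ChildItem -Path $logRoot -Recurse -File -Filter '*.log' |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item -Force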

PowerShell: Backup & Copy

Automating deployments can save you a great deal of headaches, and even trauma in some cases. However, I found it disappointing when I actually tried to put the best practice of automating deployments into practice.

I needed to make a backup copy of the production folder and then overwrite the live copy with the tested copy from the QA environment. Much to my dismay, I couldn’t find any publicly available and publicly licensed PowerShell scripts to do this job.

I created a PowerShell module to do just that.
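Here is a simplified sketch of that module; the function name, parameter values, share paths, and credentials are placeholders, and error handling is trimmed for brevity.

# Simplified sketch of the backup-and-copy deployment. Paths and credentials are
# placeholders; the web.config exclusion assumes the file sits at the folder root.
function Invoke-BackupAndDeploy {
    param(
        [Parameter(Mandatory)] [string] $SourcePath,       # tested QA copy
        [Parameter(Mandatory)] [string] $DestinationPath,  # live production folder
        [Parameter(Mandatory)] [string] $BackupRoot,       # where backups accumulate
        [pscredential] $Credential                         # needed across domains
    )

    # Establish an authenticated connection when production sits in another domain.
    if ($Credential) {
        New-PSDrive -Name ProdShare -PSProvider FileSystem -Root $DestinationPath `
                    -Credential $Credential | Out-Null
    }

    # 1. Datetime-stamped backup folder for consistent troubleshooting and rollback.
    $stamp = Get-Date -Format 'yyyy-MM-dd_HH-mm-ss'
    Copy-Item -Path $DestinationPath -Destination (Join-Path $BackupRoot $stamp) -Recurse

    # 2. Overwrite the live copy with the tested QA copy, leaving web.config alone
    #    so environment-specific configuration survives the deployment.
    Copy-Item -Path (Join-Path $SourcePath '*') -Destination $DestinationPath `
              -Recurse -Force -Exclude 'web.config'

    # 3. Delete backups older than 7 days.
    Get-ChildItem -Path $BackupRoot -Directory |
        Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) } |
        Remove-Item -Recurse -Force
}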

Noteworthy Items

  1. The backup folder is datetime stamped in a consistent manner
    1. Makes troubleshooting simpler
    2. Makes rollbacks more straightforward
  2. Different domains necessitated the use of New-PSDrive. It still works on the same domain though.
  3. web.config is excluded to preserve environment-specific configuration.
  4. Backups older than 7 days are deleted so your infrastructure team doesn’t have to ask you why the drive is full.

 

TFS Team Build with .NET 4.0 and 4.5

Disclaimer: I don’t recommend a solution that contains projects targeting different versions of the .NET Framework. However, there are legitimate cases (e.g. inherited code) that demand solutions rather than a rewrite. You may decide to use a later version of the .NET Framework than some of the existing code. If you mix these projects in the same solution and use the same NuGet packages in different projects, you will get the following error.

The type or namespace name could not be found (are you missing a using directive or an assembly reference?)

MSBuild builds your solution in an order driven by project dependencies, but leaves the rest to chance. When NuGet package references are resolved, they include the .NET version. By default, Team Build puts the whole solution into a single output directory. This means that if project A references package X and targets .NET 4.0, and project B references package X and targets .NET 4.5, then the last one to be built will overwrite the other’s package X DLL. This will cause the build to fail.

But fear not, there is a solution.

Update the Output Location

  1. Edit the build definition
  2. Navigate to 2. Build
  3. Change 4. Output Location to AsConfigured

Yay! Our build works again, but our unit tests won’t be run as part of the build.


Update our Test Run Settings

  1. Edit the build definition
  2. Navigate to 3. Test
  3. Go to 1. Automated Tests
  4. Expand 1. Test Source
  5. Click the ellipsis on Run Settings
  6. Add ..\src\*\bin\*\*test*.dll to the default Test assembly file specification which is **\*test*.dll;**\*test*.appx

(Screenshot: editing the test run settings)

This important step instructs the test runner to look for unit tests in each source directory’s bin folder, in addition to the standard single output folder for the solution.

Now your tests will actually get run as part of your build.


References

  1. StackOverflow: Using AsConfigured and still be able to get UnitTest results in TFS
  2. Override the TFS Team Build OutDir property in TFS 2013