SSIS Load Dll without GAC


SSIS isn’t always the easiest tool to debug or troubleshoot. It can be especially challenging to use script tasks and components due to their limited C# support. By limited support, I mean:

  1. Not being able to leverage NuGet
  2. Script projects being generated and torn down on the fly
  3. Limited to using the version of Visual Studio(VS) that aligns with the SQL Server version running the SSIS package

With this in mind, it can be very helpful to relegate all the additional work to a DLL and leverage that instead of a blend of SSIS and script tasks/components.

The standard recommendation is to load DLLs into the GAC. I personally don’t like this because the GAC is shared at the machine level. This means your deployment could ruin my packages by upgrading/downgrading one of my DLLs. I found a way online to load assemblies that aren’t in the GAC.

I have a few contentions with that solution:

  1. Every dll must be explicitly added
  2. There is no logging
    • file not found
    • dll not included
    • permissions
    • etc.

I’d like to propose my own solution below.
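The core of the approach is hooking the AppDomain’s AssemblyResolve event inside the script task. The sketch below is illustrative, not the original gist: the shared folder path, the idea of feeding it in via a read-only package parameter, and the method names are my assumptions.

```csharp
// Hedged sketch: probe a common folder for any assembly the script task
// can't resolve, and surface both hits and misses for logging.
using System;
using System.IO;
using System.Reflection;

public partial class ScriptMain
{
    static ScriptMain()
    {
        AppDomain.CurrentDomain.AssemblyResolve += ResolveFromCommonFolder;
    }

    private static Assembly ResolveFromCommonFolder(object sender, ResolveEventArgs args)
    {
        // Illustrative path; in practice this could come from a read-only
        // package parameter (e.g. Dts.Variables["$Package::DllFolder"])
        string folder = @"\\shared\common\dlls";
        string path = Path.Combine(folder, new AssemblyName(args.Name).Name + ".dll");

        if (File.Exists(path))
        {
            // Log the successful load here (e.g. Dts.Events.FireInformation)
            return Assembly.LoadFrom(path);
        }

        // Log the miss here (e.g. Dts.Events.FireWarning) so a file-not-found,
        // missing DLL, or permissions problem is visible in the package log
        return null;
    }
}
```

Because the handler fires for every unresolved assembly, nothing needs to be listed explicitly; any DLL dropped into the common folder is picked up.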


This solution provides some notable benefits.

  1. You don’t need to explicitly list every DLL
  2. You can set up a common location for the DLLs
  3. You can pass the DLL location in as a read-only parameter
  4. Any loaded or missing DLL gets logged

Unfortunately, this only works for script tasks and not script components.


PCF Provision from AD Group


Provisioning environments and access isn’t something I enjoy, but it’s really important. Without their environments, people may not be able to work or be as productive as they could be. And people join and leave teams all the time.


This script pulls all the users from a given AD group and grants them OrgAuditor access to the PCF org. It requires:


  1. PCF CLI
  2. PowerShell Active Directory module
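A minimal sketch of the script is below. The group name, org name, and the assumption that SamAccountName matches the CF username are all placeholders; the cf CLI must already be logged in with sufficient privileges.

```powershell
# Hedged sketch: grant OrgAuditor to every user in an AD group.
Import-Module ActiveDirectory

$groupName = "My-PCF-Team"   # hypothetical AD group
$org       = "my-org"        # hypothetical PCF org

Get-ADGroupMember -Identity $groupName -Recursive |
    Where-Object { $_.objectClass -eq "user" } |
    ForEach-Object {
        # Assumes the CF username matches the AD SamAccountName
        cf set-org-role $_.SamAccountName $org OrgAuditor
    }
```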

Splunk HEC Gotcha


You’ve got Splunk installed and configured. You set up an HTTP Event Collector (HEC) and its appropriate index(es). You try to post to it, but get an error response.
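The post and the failure looked something like the following; the host, port, and token are placeholders, and the error body shown is the one HEC returns when acknowledgement is enabled but no channel header is sent.

```shell
# Placeholder host/token; HEC listens on port 8088 by default
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "hello world", "index": "main"}'

# With "Enable indexer acknowledgement" on and no X-Splunk-Request-Channel
# header, HEC rejects the request with an error like:
# {"text":"Data channel is missing","code":10}
```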


I was able to correct the issue by switching off the Enable indexer acknowledgement setting for that HTTP Event Collector token.

  1. Go to Settings -> Data -> Data Inputs
  2. Click HTTP Event Collector
  3. Find the correct HTTP Event Collector token and click Edit
  4. Uncheck Enable indexer acknowledgement
  5. Click Save
  6. Posting again then returns a successful response, e.g. {"text":"Success","code":0}

You can read more about this issue on Splunk Answers.

Configuring NuGet.Server

I wrote in a previous article about how to setup NuGet.Server for yourself. Here are some additions to the web.config file that I would recommend.

  1. requireApiKey makes sure that only trusted uploaders can contribute packages. While your private instance may be self-hosted and only internal, it’s still a best practice to restrict who can contribute packages.
  2. apiKey simply allows you to define the desired key that will be shared with NuGet package contributors.
  3. Setting allowOverrideExistingPackageOnPush to false prevents an updated package from overwriting an existing one. I am very much against allowing overwrites, because it breaks the contract with the consumer. As the consumer, I should be able to count on the fact that once a NuGet package is published as a particular version, it won’t change.
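The three settings above live in the appSettings section of the web.config. A sketch of the recommended values is below; the key value itself is a placeholder you’d share with your contributors.

```xml
<!-- Hedged example: the apiKey value is a placeholder -->
<appSettings>
  <add key="requireApiKey" value="true" />
  <add key="apiKey" value="your-shared-key-here" />
  <add key="allowOverrideExistingPackageOnPush" value="false" />
</appSettings>
```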

Setting up Nuget.Server


As a software development company grows, there’s a natural need to re-use code. This has been true for a long time and will remain so. There are many reasons:

  1. Reducing maintenance costs by reducing lines of code
  2. Maintaining consistency across projects
  3. Fixing bugs in a single place

Open source libraries handle the lion’s share of this, but there’s often domain- or company-specific code that ends up getting duplicated. The way we solve this problem has changed.

Today for .NET, I recommend using NuGet to package and version your re-usable code. Quite a few companies will be a little squeamish (with good reason) about posting any proprietary code publicly.


Fortunately, you can stand up your own private NuGet server. I found the original instructions on Scott Hanselman’s blog, but have added Elmah for help with troubleshooting.

  1. Open Visual Studio 2017
  2. Click File -> New Project
  3. Select .NET Framework 4.6
  4. Click OK
  5. Select Empty
  6. Click OK
  7. Open the Package Manager Console
  8. Install the NuGet.Server package with the command Install-Package NuGet.Server
  9. Install the elmah package with the command Install-Package elmah
  10. Right click on the project
  11. Click Manage NuGet Packages...
  12. Click Updates
  13. Check Select All
  14. Click Update
  15. Click I Accept to accept all the license terms
  16. You may have to restart Visual Studio 2017 as part of this process
  17. Run your project
  18. You should see a screen like below


Gotcha #1

Grant the account running the site read and write permissions on the folder used to store the packages. By default, the application pool identity did not have sufficient permissions. The folder location is configured in your web.config.
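One way to grant those rights from an elevated prompt; the pool name and folder below are placeholders, so match them to your app pool and the packagesPath value in web.config.

```bat
REM "NuGetServer" and C:\NuGetPackages are placeholders
icacls "C:\NuGetPackages" /grant "IIS AppPool\NuGetServer":(OI)(CI)M
```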


  • Fun Fact: NuGet must hash the packages, because just changing the filename and pushing again will cause an error. You’ll get the below response.

```
Microsoft Windows [Version 10.0.15063]
(c) 2017 Microsoft Corporation. All rights reserved.

Clink v0.4.9 [git:2fd2c2]
Copyright (c) 2012-2016 Martin Ridgers

C:\WINDOWS\system32>cd C:\Users\user\Downloads\testing nuget

C:\Users\user\Downloads\testing nuget>nuget push nuget.server.3.1.2.nupkg 6628896b-d6f9-48ac-a160-c32f9eb2cd78 -Source http://machine.awesome.domain:port/nuget
Pushing nuget.server.3.1.2.nupkg to 'http://machine.awesome.domain:port/nuget'...
  PUT http://machine.awesome.domain:port/nuget/
  NotAcceptable http://machine.awesome.domain:port/nuget/ 85ms
Response status code does not indicate success: 406 (Not Acceptable).

C:\Users\user\Downloads\testing nuget>
```

Jenkins Run MSTest Unit Tests


You’ve worked hard creating unit tests using MSTest. That’s a great start. This doesn’t mean you’re finished though.

  1. How do you know that the tests are being run?


Running the tests

Jenkins makes this part easy. There are two different types of builds that you can choose from. I’ll share how to run unit tests with MSTest for both types.

You can add this command within a build step of a Freestyle Project or a Pipeline Project. I am assuming the build will run on a Windows node or master.


The location of MSTest will vary based on the install location and version of Visual Studio. The /resultsfile:"%WORKSPACE%\Results.trx" parameter stores the test output to the specified file. You can use the /testcontainer:"%WORKSPACE%\MyTests.dll" parameter to test many assemblies and add all results into a single results file.
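For a Freestyle Project, an Execute Windows batch command build step along these lines works; the MSTest path below is just one common Visual Studio location, and MyTests.dll is a placeholder.

```bat
REM Path varies by Visual Studio install location and version;
REM MyTests.dll is a placeholder for your test assembly
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe" ^
  /testcontainer:"%WORKSPACE%\MyTests.dll" ^
  /resultsfile:"%WORKSPACE%\Results.trx"
```

In a Pipeline Project, the same command can be wrapped in a bat step.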


No IFrame For You

Security isn’t easy, but it’s becoming more important. There’s lots of evidence explaining the dangers of missing any flaws. One of the items that got flagged on a project was allowing IFrames from any other site. The findings referenced the X-Frame-Options header. In my particular case, the business wanted to allow IFraming across domains. This ruled out using DENY or SAMEORIGIN. ALLOW-FROM would’ve fit the bill if it were widely supported. For MVC, you can leverage built-in web.config values or ActionFilter attributes. I was supporting a WebForms site though.

For my case, I had to write some custom code. It is below. IIS can load the resulting HttpModule, and values from the web.config allow only specific sites to IFrame the site the web.config belongs to. Multiple sites are semicolon-delimited.
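A sketch of such a module is below. This is not the original code: the appSetting name, the referrer-based check, and the fallback to SAMEORIGIN are my assumptions about one reasonable implementation.

```csharp
// Hedged sketch: set X-Frame-Options based on a semicolon-delimited
// whitelist of hosts read from web.config.
using System;
using System.Configuration;
using System.Linq;
using System.Web;

public class XFrameOptionsModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += (sender, e) =>
        {
            var app = (HttpApplication)sender;

            // Hypothetical appSetting holding the allowed framing sites
            var allowed = (ConfigurationManager.AppSettings["AllowedFrameSites"] ?? string.Empty)
                .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

            var referrer = app.Context.Request.UrlReferrer;
            if (referrer != null &&
                allowed.Any(site => referrer.Host.Equals(site, StringComparison.OrdinalIgnoreCase)))
            {
                // ALLOW-FROM takes a single origin in browsers that honor it
                app.Context.Response.Headers["X-Frame-Options"] =
                    "ALLOW-FROM " + referrer.GetLeftPart(UriPartial.Authority);
            }
            else
            {
                app.Context.Response.Headers["X-Frame-Options"] = "SAMEORIGIN";
            }
        };
    }

    public void Dispose() { }
}
```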

Your web.config file could look like below for example.
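A hedged example of the corresponding configuration; the appSetting name, module type, and site hosts are all placeholders.

```xml
<configuration>
  <appSettings>
    <!-- Semicolon-delimited list of hosts allowed to IFrame this site -->
    <add key="AllowedFrameSites" value="partner-one.example.com;partner-two.example.com" />
  </appSettings>
  <system.webServer>
    <modules>
      <add name="XFrameOptionsModule" type="MyApp.XFrameOptionsModule" />
    </modules>
  </system.webServer>
</configuration>
```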


AD your Splunk


Splunk Enterprise offers a great solution for anyone with legal or compliance reasons requiring an on-premises setup. It’s also very useful for developers who’d like to test destructively on a local instance. One of the keys to creating an easy-to-maintain environment is getting authentication and authorization right. In my case, the vast majority of users belong to a shared Active Directory (AD) domain.

Who Cares?

Splunk Enterprise does offer its own user store. The reason for managing users with AD instead is that when people’s access changes (joining or leaving teams or the company), having it all in one place makes things much simpler, particularly if other systems already use AD as the point of reference.

Okay, How?

There are many sites that reference configuring Splunk Enterprise for AD authentication/authorization. I haven’t found any that go into enough detail to make it simple. I’ve attempted to do that below.


LDAP Configuration for User Roles

  1. Click Settings
  2. Under USERS AND AUTHENTICATION, click Access controls
  3. Click Authentication method
  4. Under External, select LDAP
  5. Click LDAP Settings
  6. Pro Tip: Make sure the Group base DN points to an OU containing all the groups rather than the root
  7. Make sure the User base DN is the root of AD
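For reference, the UI settings above map to an LDAP strategy in authentication.conf. A hedged sketch is below; the host, DNs, and strategy name are placeholders for illustration.

```
# etc/system/local/authentication.conf (illustrative values only)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ad.example.com
port = 389
bindDN = CN=svc-splunk,OU=Service Accounts,DC=example,DC=com
userBaseDN = DC=example,DC=com
groupBaseDN = OU=Splunk Groups,DC=example,DC=com
userNameAttribute = sAMAccountName
realNameAttribute = displayName
groupMemberAttribute = member
groupNameAttribute = cn
```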

AD Mapping

  1. Under Actions, click Map Groups
  2. Click any group from the Group base DN you provided
  3. Select the roles that the given AD group should have



  1. Manage Splunk user roles with LDAP

Bitbucket Server Webhook to Jenkins


You have two great tools that you’d like to integrate. In this case, you’re using Bitbucket and Jenkins. You can configure Jenkins to check Bitbucket for changes (aka polling) to trigger a build. But this is clunky and repetitive.


There is a better way. Bitbucket offers a plugin called “Webhook to Jenkins for Bitbucket”. This plugin calls Jenkins for each new commit to a repository. This way Jenkins doesn’t call Bitbucket; Bitbucket calls Jenkins. It’s the Hollywood Principle: “Don’t call us, we’ll call you”.

New Problem

Now like so many times in programming, your solution to one problem has created another. In debugging, this is progress. You need to know how to stitch this together. You’ll be able to configure it by clicking Edit (the pencil icon) to bring up the below screen. Once you enter all the information, click Trigger Jenkins to test the connection. You may see the following error.

Temporary failure in name resolution


New Solution

You may need to provide the fully qualified domain name for the Jenkins instance; the machine name alone (e.g. awesome_machine) will not work. Assume the fully qualified machine name is awesome_machine.awesome.domain. This would make your URL http://awesome_machine.awesome.domain:3456 (assuming the port is 3456). Once you do that, you’ll get a new error.

New Problem (Again!)

Once you click Trigger Jenkins, you may get an error stating No Git jobs using the repository.

New Solution (Again!)

To work around this, you can configure the trigger for the job to poll the scm without a schedule. You can do this by clicking Poll SCM and leaving the Schedule text area blank. You can see an example below.



It’s important to note that despite the above setting, Jenkins will never poll Git.


  1. Github: Debugging “Error: Jenkins Response: No git jobs using repository” #147
  2. Webhook to Jenkins for Bitbucket

Updating Jenkins Plugins with Powershell


Jenkins provides an outstanding open source continuous integration platform for a multitude of languages and technologies. It accomplishes this by allowing the community and other interested parties to contribute plugins. Many of these plugins are frequently updated, which is amazing! Even though Jenkins has a pretty nice user interface (UI) for updating plugins, it gets tedious, since at scale there could be updates daily.



Fortunately for me, Jenkins provides a really straightforward command line interface (CLI). This allowed me to create a PowerShell script that updates all of the installed plugins to their latest versions. I configured this to run weekly and it’s been a huge time saver. The added benefit is that you get all the latest features for your plugins without doing anything.

I configured it to send an email listing the plugins that were updated. I had to copy the PowerShell script into a gist to make it display correctly here, but here is the proper repository in case you are interested.
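The heart of the script can be sketched as follows. This is not the repository’s code: the jar path, URL, and credentials are placeholders, and the email step is omitted.

```powershell
# Hedged sketch: update all installed plugins via the Jenkins CLI.
$jenkinsCli = "C:\jenkins\jenkins-cli.jar"           # placeholder path
$jenkinsUrl = "http://jenkins.example.com:8080"      # placeholder URL
$auth       = "admin:api-token"                      # placeholder credentials

# list-plugins prints one plugin per line; the first column is the short name
$plugins = java -jar $jenkinsCli -s $jenkinsUrl -auth $auth list-plugins |
    ForEach-Object { ($_ -split '\s+')[0] }

if ($plugins) {
    # install-plugin with existing names pulls the latest versions
    java -jar $jenkinsCli -s $jenkinsUrl -auth $auth install-plugin $plugins
    # restart once no builds are running so the updates take effect
    java -jar $jenkinsCli -s $jenkinsUrl -auth $auth safe-restart
}
```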