Thursday, September 14, 2017

How to find all SCCM packages that have "Allow this package to be transferred via multicast" enabled using Powershell

This post is based on a question I came across online. I thought it would be simple to answer, but it wasn't as straightforward as I had hoped once I looked at the Powershell output from Get-CMPackage.

What the person was trying to do was list all packages that were enabled for transfer via multicast. I took a test package from my lab environment and listed all of its properties using Powershell:

Get-CMPackage -Id $ID | Select-Object -Property *

Although there is no obvious property for multicast transfer, I could see that every time I checked or unchecked the box for multicast transfer, the value for the PkgFlags property was changed.

I took a look at the MSDN documentation for the SMS_PackageBaseClass and found that while there are some values listed for PkgFlags, there was no value listed for handling multicast transfer:

I stumbled across an old post on MyITForum that explained that the multicast value (27) was undocumented.
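For anyone curious how an undocumented bit position maps to the value you see in PkgFlags: bit 27 corresponds to 2^27, which is 0x8000000 in hex, and that is what gets OR'd into the flags when the checkbox is enabled. Here is a quick sketch of the bit math (the sample flag values are made up purely for illustration):

```powershell
# Bit 27 as a hex mask: 2^27 = 134217728 = 0x8000000
$multicastMask = 0x8000000

# Hypothetical PkgFlags values, one with bit 27 set and one without
$flagsWithMulticast    = 0x8000140
$flagsWithoutMulticast = 0x0000140

# -band isolates the bit; a non-zero result means multicast is enabled
($flagsWithMulticast -band $multicastMask) -ne 0     # True
($flagsWithoutMulticast -band $multicastMask) -ne 0  # False

# The -bor comparison used in the snippet below is equivalent: OR-ing the
# mask into a value that already has the bit set changes nothing
$flagsWithMulticast -eq ($flagsWithMulticast -bor $multicastMask)  # True
```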

I was able to take that information and combine it with a post Greg Ramsey had made on checking package properties with Powershell and put together this short snippet of code:

Get-CMPackage | ForEach-Object {
  if ($_.PkgFlags -eq ($_.PkgFlags -bor 0x8000000)) {
    $_.Name
  }
}

If you run that bit of Powershell, it will list the names of each package that is configured for multicast.

Monday, July 3, 2017

Digging in to the new System Center Updates Publisher Preview

Lo and behold, Microsoft has finally released a new version of the System Center Updates Publisher (AKA SCUP), which was last updated in 2011. It is now known simply as System Center Updates Publisher and includes a release month indicator for each new version (currently June Preview). I don't see any info indicating that this is unsupported by Microsoft despite the Preview designation.

You can read the official blog post from Microsoft announcing the new version here.

The main focus of the update, according to Microsoft, was enabling the use of SCUP on Windows 10 and Server 2016. I wanted to dig into the tool a bit and see what else was new under the hood, and I did in fact find that they fixed, or at least improved, some of the more annoying quirks that existed in the 2011 version.

The first fix I noticed is that when you launch the installer, it now properly prompts for elevation with a UAC prompt. The 2011 version simply threw an error that it needed administrator permissions to install and required you to launch the installation from an administrator command prompt. The new installer also performs a prerequisite check for .NET 4.5.2 and will direct you to install it if it is missing.

The installation process itself is completely silent and I had to check the Start menu to see the new icon and program group.

One other change that I noticed is that the application is now 64-bit and installs in C:\Program Files\Microsoft\UpdatesPublisher by default, where the 2011 version was a 32-bit app.

Upon first launch, SCUP now performs an update check to see if there are any newer versions, lending weight to the theory that they will be updating SCUP more frequently than they have previously. The log file for SCUP is still located in the user's temp folder (C:\Users\$username\AppData\Local\Temp\UpdatesPublisher.log), and here we can see the update check as it happens:

Once you are in the main window for the application, you can see that most of the user interface remains largely the same with the exception that the main panes are now known as workspaces. Here is the old UI (top) and the new UI (bottom)

They did add some helpful new sections to the options menu. First up, here is the new advanced options page, which now gives you a button to change the database location. In the 2011 version this was a read-only file path that could only be changed by modifying the config files directly:

Next, here are the new logging options, which let you configure a maximum log file size, as well as a slider with 6 (!) levels to set the amount of log detail you want. Why they went with a slider that gives no context as to what each position gets you, I have no idea, but I'd wager this will be improved in a future update. In my brief testing, it seemed to function more as an on/off switch; anything other than all the way to the right didn't seem to generate any log entries at all.

Finally, there is an updates section in the options menu that allows you to opt out of checking for updates, or out of preview builds if you desire:

Another common issue that folks run into when using SCUP in a large environment is that it can only be run by one user at a time on a given machine; if a second user tries to launch the application, it simply fails to launch with no indication that there was an error. I tested this scenario with the new release of SCUP and was pleasantly surprised to find that they have added an error dialog that tells you what the problem is:

I also wanted to take a look at the format of the SCUP.exe.config file (now known as UpdatesPublisher.exe.config) to see if any changes were made to it, and was surprised to see that the structure of the file has changed significantly. You can see the embedded version of each file below, with the old version on top and the new version on the bottom. The new config file (while still written in XML) is significantly shorter and contains some new references to an "Entity Framework" that didn't exist before. It seems that most of the configuration data has been moved into the database itself.

Those are all of the changes in the new version of SCUP that I have noticed so far. I plan to spend some additional time investigating it in my lab environment over the next week or so, so check back for any updates to this post. As always, please reach out and connect with me using the Twitter or LinkedIn links on the right side of this page; I'd love to hear some feedback on my posts if they are helpful to you. Thanks! Matt

Thursday, June 29, 2017

Managing Configuration Manager BITS jobs with Powershell

I wanted to share some of the Powershell functions that I've created for managing the BITS jobs that are created when an SCCM client initiates a content download. I've added all of these to my Powershell profile so that I always have them loaded in my Powershell session. I should also mention that I don't consider myself a Powershell expert, so there are likely things done in these functions that aren't considered best practice (using Write-Host, for example), but they definitely get the job done.

All of these functions leverage PsExec, since the BITS Powershell cmdlets don't support running against remote computers. They each expect to find a copy of PsExec.exe at the root of the C: drive, so make sure you have a copy placed there, or modify the functions to fit your use case. They all use the mandatory parameter -Computername, so make sure you specify it when using the functions.

First up is my function Get-BITSJobs, which simply gets a list of all of the SCCM BITS jobs on a remote computer and returns a list of them, including file count and size information:

Function Get-BITSJobs {
    Param(
        [Parameter(Mandatory = $true)]
        [string]$Computername
    )
    & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"
}


Next is Set-BITSJobsForeground, which will take all of the SCCM jobs on a remote computer and set them to foreground priority. This has often come in handy when a machine has been configured to throttle BITS download speeds, but for some reason I need that computer to finish its downloads as fast as possible:

Function Set-BITSJobsForeground {
    Param(
        [Parameter(Mandatory = $true)]
        [string]$Computername
    )
    [string]$jobs = & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"
    If ($jobs -ne "") {
        $arrjobs = $jobs.Split("`{*`}") | Select-String -Pattern "-"
        Foreach ($job in $arrjobs) {
            & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /setpriority "`{$job`}" foreground
        }
    }
    Else { Write-Host "No Jobs" }
}
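To see what the Split/Select-String combination in these functions is actually doing, here is the parsing step in isolation, run against a made-up line of bitsadmin output (the GUID and surrounding text are assumptions for illustration, not real output):

```powershell
# A fabricated example of the text bitsadmin emits for a CCMDTS job
$jobs = "{1A2B3C4D-1111-2222-3333-444455556666} 'CCMDTS Job' SUSPENDED 0 / 1 0 / 8112"

# Splitting on the characters {, * and } leaves the bare GUID as one element;
# Select-String "-" then keeps only the hyphenated, GUID-looking pieces
$arrjobs = $jobs.Split("`{*`}") | Select-String -Pattern "-"

foreach ($job in $arrjobs) {
    # Re-wrap in braces, as bitsadmin /setpriority and /complete expect
    "`{$job`}"   # → {1A2B3C4D-1111-2222-3333-444455556666}
}
```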

Next is Set-BITSJobsComplete, which will mark all SCCM jobs as completed, including any that are in an error status. It will then restart the SCCM client if you use the parameter -RestartSCCMService, after which the client will restart the downloads where they left off. I have generally used this when a machine has its downloads stuck in an error state and I need to get the jobs to start again:

Function Set-BITSJobsComplete {
    Param(
        [Parameter(Mandatory = $true)]
        [string]$Computername,
        [switch]$RestartSCCMService
    )
    [string]$jobs = & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"
    If ($jobs -ne "") {
        $arrjobs = $jobs.Split("`{*`}") | Select-String -Pattern "-"
        Foreach ($job in $arrjobs) {
            & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /complete "`{$job`}"
        }
        If ($RestartSCCMService) {
            Get-Service -ComputerName $Computername -Name CcmExec | Restart-Service
        }
    }
    Else { Write-Host "No Jobs" }
}

Hopefully you will find these functions useful; I was using them often enough that it was worth the effort to turn them into functions. Try adding them to your Powershell profile so that they will always be accessible whenever you have an open Powershell window.
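If you haven't set up a Powershell profile before, the gist is that $PROFILE points at a script that runs every time you open a session, so anything you paste into it (like the functions above) is always available. A minimal sketch:

```powershell
# $PROFILE is the per-user, per-host profile script path.
# Create the file (and parent folder) if it doesn't exist yet.
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}

# Paste the Get-BITSJobs / Set-BITSJobsForeground / Set-BITSJobsComplete
# function definitions into that file, then restart Powershell, or
# dot-source the profile to load it into the current session immediately:
. $PROFILE
```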

Thursday, June 22, 2017

Cloud Management Gateway - Finally connected

Hi All, back with an update to my previous blog post regarding issues we experienced when setting up our cloud management gateway. I was finally able to work through my remaining problems with Microsoft Support, so I figured it would be helpful to share my findings. Here are a few of the things we wound up doing. First, we removed and rebuilt the CMG using a new SSL certificate. The previous certificate, while it was able to build the instance, was built with a CNG (Cryptography Next Generation) template, which is not supported by Configuration Manager. After rebuilding the CMG, we began seeing this error in the SMS_CloudConnector.log file:

ERROR: Failed to build Http connection bc3945e0-708c-403d-881a-03469c4cc4a8 with server xxx.CLOUDAPP.NET:443. Exception: System.Net.WebException: The remote server returned an error: (990) BGB Session Ended.~~ at System.Net.HttpWebRequest.GetResponse()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.HttpConnection.Send(Boolean isProxyData, Byte[] data, Int32& statusCode, Byte[]& payload)~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

We then tackled the errors in SMS_CloudConnector.log indicating the connector role was unable to connect on port 10140, even though according to the documentation that port (and the rest of the range, 10124-10156) is only required if running more than one VM instance for the CMG. This required a firewall change to allow the connection to be built. We also removed and re-added the Cloud Proxy Connector role, and then we finally began seeing a connection being created and maintained in the log files. After that, our test client was still showing errors in locationservices.log; here are a few examples (included for easier Googling):

Failed to refresh Site Signing Certificate over HTTP with error 0x87d0027e.
Failed to send management point list Location Request Message to XXX.CLOUDAPP.NET/CCM_Proxy_MutualAuth/72057594037928290
LSUpdateInternetManagementPoints: Failed to retrieve internet MPs from MP XXX.CLOUDAPP.NET/CCM_Proxy_MutualAuth/72057594037928290 with error 0x87d00231, retaining previous list.

We restarted the WWW Publishing Service on the CMG server (we likely could have just restarted the Cloud Management Gateway via the SCCM console as well), and after that our client was able to connect. I was able to deploy an application that I had previously distributed to our Cloud Distribution Point, then refresh policy on the client to begin the installation over the internet. I am still seeing some issues with performing software update scans against our Software Update Point/WSUS server; I'll make a new blog post with the solution to that once I have it figured out. Hope this helps someone out there struggling to get their CMG up and running. -Matt
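For reference, that service restart can also be done remotely from Powershell; W3SVC is the service name behind "World Wide Web Publishing Service". This is a sketch, and the server name in the usage example is a placeholder:

```powershell
# Restart IIS's WWW Publishing Service (W3SVC) on a remote server.
function Restart-WWWService {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Computername
    )
    Get-Service -ComputerName $Computername -Name W3SVC | Restart-Service
}

# Example usage ('CMGSERVER' is a placeholder name):
# Restart-WWWService -Computername 'CMGSERVER'
```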

Thursday, June 8, 2017

SCCM Cloud Management Gateway Deployment Notes

Hi All,

I've been working this week on deploying the new Cloud Management Gateway that was introduced in Configuration Manager 1610. I ran into a few issues during the deployment that I figured were worth writing a blog post about; maybe they will help someone else out there who encounters the same problems.

The first problem we had during the setup was manifested by the following errors in Cloudmgr.log:

ERROR: TaskManager: Task [CreateDeployment for service ORGCMG] has failed. Exception System.TypeLoadException, Could not load type 'System.Runtime.Diagnostics.ITraceSourceStringProvider' from assembly 'System.ServiceModel.Internals, Version=, Culture=neutral, PublicKeyToken=...

ERROR: Exception occured for service ORGCMG : System.TypeLoadException: Could not load type 'System.Runtime.Diagnostics.ITraceSourceStringProvider' from assembly 'System.ServiceModel.Internals, Version=, Culture=neutral, PublicKeyToken=.'.~~   at System.ServiceModel.Channels.TextMessageEncoderFactory..ctor(MessageVersion version, Encoding writeEncoding, Int32 maxReadPoolSize, Int32 maxWritePoolSize, XmlDictionaryReaderQuotas quotas)~~   at System.ServiceModel.Channels.HttpTransportDefaults.GetDefaultMessageEncoderFactory()~~   at System.ServiceModel.Channels.HttpChannelFactory`1..ctor(HttpTransportBindingElement bindingElement, BindingContext context)~~   at System.ServiceModel.Channels.HttpsChannelFactory`1..ctor(HttpsTransportBindingElement httpsBindingElement, BindingContext context)~~   at System.ServiceModel.Channels.HttpsTransportBindingElement.BuildChannelFactory[TChannel](BindingContext context)~~   at System.ServiceModel.Channels.Binding.BuildChannelFactory[TChannel](BindingParameterCollection parameters)~~   at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint, Boolean useActiveAutoClose)~~   at System.ServiceModel.ChannelFactory.CreateFactory()~~   at System.ServiceModel.ChannelFactory.OnOpening()~~   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)~~   at System.ServiceModel.ChannelFactory.EnsureOpened()~~   at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)~~   at Microsoft.ConfigurationManager.AzureManagement.ServiceManagementHelper.CreateServiceManagementChannel(ServiceEndpoint endpoint, X509Certificate2 cert)~~   at Microsoft.ConfigurationManager.AzureManagement.ManagementOperation.InitializeChannel(X509Certificate2 certificate)~~   at Microsoft.ConfigurationManager.CloudServicesManager.CreateDeploymentTask.CheckAzureAccess()~~   at Microsoft.ConfigurationManager.CloudServicesManager.CreateDeploymentTask.Start(Object taskState).

The strange thing here was that the true error message wasn't being displayed, just some error text that looked like it was coming from .NET. The resolution wound up being that we were missing the .NET 4.5.2 prerequisite on our site server, which was why we couldn't see the true error. I'm not sure how we managed to install the current branch version of SCCM with a missing prerequisite, but it was definitely not there.

Once we installed .NET 4.5.2, we began getting meaningful errors reported in CloudMgr.log:
ERROR: Communication exception occured. Http Status Code: BadRequest, Error Message: The private key for the remote desktop certificate cannot be accessed. This may happen for CNG certificates that are not supported for Remote Desktop., Exception Message: The remote server returned an unexpected response: (400) Bad Request..

This problem was found to be caused by the private key in our CMG certificate not being marked as exportable, even though the template we generated it with was configured with the option to export the private key. We confirmed the private key issue by running "certutil -store my" from a command prompt after importing the certificate.
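If you'd rather check the same thing from Powershell instead of certutil, the certificate provider exposes the private key. This is a sketch; the thumbprint in the usage example is a placeholder you'd replace with your CMG certificate's thumbprint, and the exportability check shown applies to CAPI (non-CNG) keys:

```powershell
# Sketch: report whether a cert in the local machine store has an
# exportable private key (CAPI keys expose this via CspKeyContainerInfo).
function Test-PrivateKeyExportable {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Thumbprint
    )
    $cert = Get-Item "Cert:\LocalMachine\My\$Thumbprint"
    if (-not $cert.HasPrivateKey) { return $false }
    $cert.PrivateKey.CspKeyContainerInfo.Exportable
}

# Example usage (placeholder thumbprint):
# Test-PrivateKeyExportable -Thumbprint 'AABBCCDDEEFF00112233445566778899AABBCCDD'
```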

We took this issue to our folks that manage our PKI and they were able to correct the problem with the certificate utility OpenSSL to export and then re-attach the private key. Once we then ran the CMG setup wizard with the corrected certificate it was able to communicate properly to Azure and spawn the instances for the service.

The next step was to add the Cloud Proxy Connector role to a site system. I have typically heard recommendations that this service be added to a management point server, so that is what we elected to do. Once we started checking SMS_Cloud_ProxyConnector.log, we saw a constant stream of errors about failing to communicate with the Azure instances:

Here is the error text:
ERROR: Failed to build Tcp connection 41320a1b-5250-4f4f-b95a-0fccac4ef817 with server .CLOUDAPP.NET:10141. Exception: System.Net.WebException: TCP CONNECTION: Failed to authenticate with proxy server ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond~~   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)~~   --- End of inner exception stack trace ---~~   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)~~   at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)~~   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.TcpConnection.Connect()~~   --- End of inner exception stack trace ---~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.TcpConnection.Connect()~~   
at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

After looking at the errors being generated, we noticed the connection attempts were all happening on ports 10140/10141/10125/10126. Looking over the CMG documentation for ports in use, it calls out that these ports are used when you run the CMG with multiple instances; if you deploy just a single instance, it uses only port 443. We were able to confirm with our security team that the ports for multiple instances were being blocked by our firewall. We removed our multiple-instance CMG and set it up again, this time specifying a single instance to build out in Azure, and once the management point picked up the change in DNS resolution to the new IP address of the rebuilt instance, it was able to connect successfully.
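Before involving the firewall team, you can check from the connector server whether those instance ports are even reachable using Test-NetConnection. This is a sketch; the hostname is a placeholder, and the port list simply mirrors the ones we saw in our logs:

```powershell
# Sketch: probe the CMG ports seen in our connector logs.
# 'xxx.cloudapp.net' is a placeholder for your CMG service name.
function Test-CMGPorts {
    param(
        [string]$CMGHost = 'xxx.cloudapp.net'
    )
    $ports = 443, 10125, 10126, 10140, 10141
    foreach ($port in $ports) {
        $result = Test-NetConnection -ComputerName $CMGHost -Port $port -WarningAction SilentlyContinue
        '{0}:{1} reachable: {2}' -f $CMGHost, $port, $result.TcpTestSucceeded
    }
}

# Example usage:
# Test-CMGPorts -CMGHost 'xxx.cloudapp.net'
```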

Now that I appeared to have a fully functional CMG, I configured a test client to use internet communications only and tried to perform a policy check. This failed due to what I believe is an issue with the SSL certificate that we have installed on the management point. Once I have a solution to that issue I will update this post with more information, but I thought it would be helpful to others to get these first few errors I encountered out on the web and indexed by Google so they can be found.

Friday, May 26, 2017

Monitoring potentially dangerous deployments using System Center Configuration Manager + Powershell

I've always been concerned about the potential for applications or task sequences to be deployed accidentally to the wrong collection, or inadvertently made mandatory when they should only be available as an optional install. Config Manager has no built-in method to alert you about potentially dangerous deployments that have been created, so it's up to the community to build our own tools to add this functionality.

I had previously devised a method for generating email alerts utilizing WMI event subscriptions to query for every time a deployment was created and run a Powershell script to gather some data about the deployment and send an email alert if the deployment met some specific criteria. This method, while it works, generates additional overhead and resource consumption on the SCCM site server even when no deployments have been created due to the constant WMI queries.

While researching alternate methods to accomplish the same goal with less resource utilization, a fellow administrator recommended I check out a feature that has existed in Config Manager for some time but that I had personally never used: Status Filter Rules. I did not find much information online about other people using this feature, but there was enough documentation to piece together a functional alerting system for deployments. Although this setup only sends an email when there is a deployment to a collection with more than a certain number of computers, the sky is the limit on what you can do; you could just as easily have the script automatically delete the deployment. In this instance I just wanted a chance to catch these deployments and investigate whether or not they were done in error, with all remediation still done manually. Depending on the size of your environment and the client policy check interval, it's still possible that clients could perform a policy update and install an application that was deployed in error before you react.

There are 2 pieces to the alerting system, the Status Filter Rules themselves, and the Powershell scripts that are triggered by the rules. Since the rules are based upon Configuration Manager status messages, I created 2 separate rules, one for Application deployments, and one for Packages and Task Sequence deployments.

You access the Status Filter Rules from the Administration pane of the admin console by going to Overview > Site Configuration > Sites. Right-click on the primary site where you want to set up the rule and select Status Filter Rules from the context menu:

Once you are in the Status Filter Rules, click the Create button to set up a new rule. The wizard has 2 tabs: the first defines the trigger conditions you want to check, and the second defines the actions to take when the trigger occurs. Here are some screenshots from my rule that alerts me on an application deployment:

You can see we are triggering this rule anytime the message ID 30226 is generated. 30226 means that an Application deployment has been created.

Here is the second tab, it is essentially just the "Run Program" field, with the following command to run a Powershell script that exists on the Site Server in the D:\Scripts\DeploymentMonitor directory:

c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass D:\Scripts\DeploymentMonitor\ApplicationDeploymentMonitor.ps1 -AssignmentID %msgis02 -creator %msgis01

The %msgis02 parameter is the second field that is inserted in the status message, and in this instance it is equal to the deployment ID, %msgis01 is the username of the person who created the deployment. More info about the various parameters that can be passed to your program are available here:

Here is the content of ApplicationDeploymentMonitor.ps1. Something to note: I do not claim to be a Powershell expert; this code works perfectly for me, but I'm sure there is a lot of room for improvement:

You need to configure the script values including site code, mail servers, and warning threshold to match what is appropriate for your environment. In my script, I want to get an email for any deployment to more than 500 computers, but you may want to adjust that lower or higher as needed.
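For readers viewing this where the embedded script doesn't render, here is a rough sketch of the shape such a script can take. The WMI class and property names are from memory and should be treated as assumptions, and the site code, server names, and email addresses are placeholders you'd change for your environment:

```powershell
# Sketch of an ApplicationDeploymentMonitor.ps1-style script.
# WMI class/property names are assumptions; all names/addresses are placeholders.
function Invoke-ApplicationDeploymentMonitor {
    param(
        [Parameter(Mandatory = $true)][string]$AssignmentID,
        [Parameter(Mandatory = $true)][string]$Creator,
        [string]$SiteCode   = 'ABC',              # placeholder site code
        [string]$SiteServer = 'SITESERVER',       # placeholder site server
        [string]$SmtpServer = 'mail.example.com', # placeholder mail server
        [int]$Threshold     = 500                 # alert above this many targets
    )
    # Look up the deployment, then the size of its target collection
    $assignment = Get-WmiObject -ComputerName $SiteServer -Namespace "root\sms\site_$SiteCode" `
        -Class SMS_ApplicationAssignment -Filter "AssignmentID = '$AssignmentID'"
    $collection = Get-WmiObject -ComputerName $SiteServer -Namespace "root\sms\site_$SiteCode" `
        -Class SMS_Collection -Filter "CollectionID = '$($assignment.TargetCollectionID)'"

    if (Test-DeploymentThreshold -TargetCount $collection.MemberCount -Threshold $Threshold) {
        Send-MailMessage -SmtpServer $SmtpServer -From 'sccm@example.com' -To 'admin@example.com' `
            -Subject "Deployment created by $Creator targets $($collection.MemberCount) assets" `
            -Body "$($assignment.ApplicationName) is being deployed to $($collection.MemberCount) assets."
    }
}

# The threshold comparison, split out so it is easy to test on its own
function Test-DeploymentThreshold {
    param([int]$TargetCount, [int]$Threshold)
    return $TargetCount -gt $Threshold
}
```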

Since package and task sequence deployments generate a different message ID, I created a second rule to alert me when those are deployed. Here are the screenshots of that rule, which uses message ID 30006:

Here is my command that I am running in order to pass the parameter to my Powershell script:

c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass D:\Scripts\DeploymentMonitor\PackageDeploymentMonitor.ps1 -assignmentID %msgis02 -creator %msgis01

And here is the content of the PackageDeploymentMonitor.ps1 script:

Here is the text of an email alert that was generated during the testing of this script, so that you have an idea of what to expect:

TEST FileZilla FTPClient 3.5.2 is being Installed on 1 assets.

The deployment type is Available and will become available at 6/22/2016 3:27:00 PM.

The Assignment ID is {B7DC5AA8-C14C-4E54-AB3A-97776054645B}.

Hopefully this will be helpful to other administrators out there. If you've got feedback or questions about the setup of the alerting rule, please feel free to send me a message.

Welcome to my new blog!

Hi Everyone!

My name is Matt Atkinson and I'm a Systems Engineer for a large health care organization in Portland, Oregon.

I've created this blog to give myself a space to write about systems management technologies that I use in my day to day work, as well as a way to share solutions to problems that I encounter.

I spend the majority of my time working with Microsoft System Center Configuration Manager (ConfigMgr) to manage our environment that consists of over 85,000 devices spread over several states. You can find me frequently generating reports, managing client health, as well as implementing new features and maintaining the ConfigMgr infrastructure. I also work closely with my teammates who are responsible for application packaging, patching, and operating system deployment.