Tuesday, June 12, 2018

0x800706d9 - There are no more endpoints available from the endpoint mapper

Just adding a quick post about a recent issue we ran into on some of our endpoint devices. When trying to download SCCM client policy, we were seeing the error message "0x800706d9 - There are no more endpoints available from the endpoint mapper" in the datatransferservice.log.

Coincidentally, these devices had been recently upgraded to Windows 10.

The root cause for the issue turned out to be that the Windows Firewall service was disabled. At some point a technician must have decided that was a necessary change, but (at least in Windows 10) BITS downloads will fail unless the Windows Firewall service is running.
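If you want to check for this condition quickly, here is a minimal sketch that verifies the Windows Firewall service (service name MpsSvc) is running on the local machine and starts it if not. This is my own sketch, not part of the SCCM client:

```powershell
# Sketch: BITS downloads can fail with 0x800706d9 when the
# Windows Firewall service (MpsSvc) is stopped, so check and start it.
$svc = Get-Service -Name MpsSvc
if ($svc.Status -ne 'Running') {
    Set-Service -Name MpsSvc -StartupType Automatic
    Start-Service -Name MpsSvc
}
```

Run this in an elevated session; changing a service's startup type requires administrator rights.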

Friday, March 9, 2018

Resolving issues with user policy downloads failing due to large Kerberos token sizes

Hi All!
First post in a long time, but we just solved an issue in our production environment that others may run into, so I figured I would share the solution.

We were having issues with some users not receiving an application that was being deployed to a user collection. We could tell that the users were not downloading the policy object that would install the application, because we could see the following errors in the policyagent.log file when attempting to perform a user policy refresh:
 Synchronous policy assignment request with correlation guid {109B537A-194F-4171-A803-5022A6C7D27F} for User $UserGUIDHere completed with status 80070005

We could correlate those errors with the following messages in the IIS log on the management point:
  CCM_POST /ccm_system_windowsauth/request - 80 - $IPAddressHere ccmhttp - 401 2 5 0

We checked in with our Microsoft PFE, and he said it looked like we were hitting issues caused by the large Kerberos tokens that some of our users have because of their large number of group memberships. We have a GPO configured in our environment that increases the max token size, but he pointed us to a link that matched up with what we were seeing:
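As a rough way to spot affected users, the classic token-size estimate from Microsoft's guidance (KB327825) can be applied to a user's group counts. This is a sketch; the function name is my own:

```powershell
# Rough Kerberos token size estimate (KB327825 formula):
#   TokenSize = 1200 + 40d + 8s, where
#   d = domain-local groups + universal groups outside the user's domain + SID history entries
#   s = global groups + universal groups in the user's domain
function Get-TokenSizeEstimate {
    param(
        [int]$DomainLocalGroups,
        [int]$GlobalGroups
    )
    1200 + (40 * $DomainLocalGroups) + (8 * $GlobalGroups)
}

Get-TokenSizeEstimate -DomainLocalGroups 100 -GlobalGroups 400   # 8400 bytes
```

Anything approaching the 12000-byte MaxTokenSize default of older Windows clients (48000 on Windows 8/Server 2012 and later) is a candidate for the 401 errors above.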

The full path of the registry keys is HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters. The values have to be added as DWORDs. The documentation describes them as follows:

MaxFieldLength – Sets an upper limit for each header. See MaxRequestBytes. This limit translates to approximately 32k characters for a URL. Default Value – 16384, Range 64 - 65534 (64k - 2) bytes

MaxRequestBytes – Determines the upper limit for the total size of the Request line and the headers. Its default setting is 16KB. If this value is lower than MaxFieldLength, the MaxFieldLength value is adjusted. Default Value – 16384, Range 256 - 16777216 (16MB) bytes

We configured the registry keys with the following values:
MaxFieldLength: 65534
MaxRequestBytes: 16777216

We also had to reboot the server before the changes would take effect; simply restarting IIS was not enough to see a change in the client behavior.
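For reference, here is a sketch of making the registry change with PowerShell, run elevated on the management point (value names and data are the ones described above):

```powershell
# Raise the http.sys header limits on the management point.
# Both values are DWORDs under the HTTP\Parameters key.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters'
New-ItemProperty -Path $path -Name MaxFieldLength  -PropertyType DWord -Value 65534    -Force
New-ItemProperty -Path $path -Name MaxRequestBytes -PropertyType DWord -Value 16777216 -Force
# In our case a full reboot was required; restarting IIS was not enough.
```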

After the reboot we tried another user policy refresh from the client; it succeeded, and we no longer saw the 80070005 errors in the policyagent.log.

Thursday, September 14, 2017

How to find all SCCM packages that have "Allow this package to be transferred via multicast" enabled using Powershell

This post is based on a question I came across online. I thought it might be simple, but it wasn't as straightforward as I had hoped once I looked at the Powershell output for Get-CMPackage.

What the person was trying to do was list all packages that were enabled for transfer via multicast. I took a test package from my lab environment and listed all of its properties using Powershell:

Get-CMPackage -Id $ID | Select-Object -Property *

Although there is no obvious property for multicast transfer, I could see that every time I checked or unchecked the box for multicast transfer, the value for the PkgFlags property was changed.

I took a look at the MSDN documentation for the SMS_PackageBaseClass and found that while there are some values listed for PkgFlags, there was no value listed for handling multicast transfer:

I stumbled across an old post on MyITForum that explained that the multicast value (27) was undocumented.
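Bit 27 corresponds to a hex mask of 0x8000000, and a flags value has that bit set exactly when OR-ing the mask into it leaves it unchanged. A small demonstration (the sample flags value is made up):

```powershell
# Bit 27 of PkgFlags is the undocumented multicast flag; as a mask that is 0x8000000
$multicastMask = 1 -shl 27
'0x{0:X}' -f $multicastMask                        # 0x8000000

# A flags value has the bit set when OR-ing the mask leaves it unchanged
$pkgFlags = 0x1000010 -bor $multicastMask          # sample value with the bit set
$pkgFlags -eq ($pkgFlags -bor $multicastMask)      # True
```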

I was able to take that information and combine it with a post Greg Ramsey had made on checking package properties with Powershell and put together this short snippet of code:

Get-CMPackage | ForEach-Object {
  # Bit 27 (mask 0x8000000) of PkgFlags marks the package as multicast-enabled
  if ($_.PkgFlags -eq ($_.PkgFlags -bor 0x8000000)) {
    $_.Name
  }
}

If you run that bit of Powershell it will list the names of each package that is configured for Multicast.

Monday, July 3, 2017

Digging in to the new System Center Updates Publisher Preview

Lo and behold Microsoft has finally released a new version of the System Center Updates Publisher (AKA SCUP) that was last updated in 2011. It is now known simply as System Center Updates Publisher, and includes a release month indicator for each new version (currently June Preview). I don't see any info that indicates this is unsupported by Microsoft despite the Preview designation.

You can read the official blog post from Microsoft announcing the new version here.

According to Microsoft, the main focus of the update was enabling the use of SCUP on Windows 10 and Server 2016, but I wanted to dig into the tool a little bit and see what else was new under the hood. I did in fact find that they fixed, or at least improved, some of the more annoying quirks that existed in the 2011 version.

The first fix I noticed is that the installer now properly prompts for elevation via UAC, where the 2011 version simply threw an error that it needed administrator permissions and required you to launch the installation from an administrator command prompt. It also performs a prerequisite check for .NET 4.5.2 and will direct you to install it if it is missing.

The installation process itself is completely silent and I had to check the Start menu to see the new icon and program group.

One other change that I noticed is that the application is now 64-bit and installs in C:\Program Files\Microsoft\UpdatesPublisher by default, where the 2011 version was a 32-bit app.

Upon first launch, SCUP now performs an update check to see if there are any newer versions, lending weight to the theory that they will be updating SCUP more frequently than they have previously done. The log file for SCUP is still located in the user's temp folder (C:\Users\$username\AppData\Local\Temp\UpdatesPublisher.log) and here we can see the update check as it is happening:

Once you are in the main window for the application, you can see that the user interface remains largely the same, with the exception that the main panes are now known as workspaces. Here is the old UI (top) and the new UI (bottom):

They did add some helpful new sections to the options menu. First up, here is the new advanced options page, which now gives you a button to change the database location; in the 2011 version this simply showed a read-only file path that could only be changed by modifying the config files directly:

Next, here are the new logging options, which let you configure a maximum log file size, as well as a slider with 6 (!) levels to set the amount of log detail you want. Why they went with a slider that gives no context about what each level does, I have no idea, but I'd wager this will be improved in a future update. In my brief testing, it seemed to function more as an on/off switch: anything other than all the way to the right didn't seem to generate any log entries at all.

Finally, there is an updates section in the options menu that allows you to opt out of checking for updates, or opt out of preview builds if you desire:

Another common issue that folks run into when using SCUP in a large environment is that it can only be run by one user at a time on a given machine; if a second user tries to launch the application, it simply fails to launch with no indication that there was an error. I tested this scenario with the new release of SCUP and was pleasantly surprised to find that they have added an error dialog to give you an indication of what the problem is:

I also wanted to take a look at the format of the SCUP.exe.config file (now known as UpdatesPublisher.exe.config) to see if there were any changes made to it and was surprised to see that the structure of the file has changed significantly. You can see the embedded version of each file below with the old version on top and the new version on the bottom. The new config file (while still being written in XML) is significantly shorter and contains some new references to an "Entity Framework" that didn't exist before. It seems that most of the configuration data must have been moved in to the database itself.

While that is all of the changes in the new version of SCUP that I have noticed so far, I plan to spend some additional time investigating it in my lab environment over the next week or so, so check back for any updates to this post. As always, please reach out and connect with me using the Twitter or LinkedIn links on the right side of this page, I'd love to hear some feedback on my posts if they are helpful to you. Thanks! Matt

Thursday, June 29, 2017

Managing Configuration Manager BITS jobs with Powershell

I wanted to share some of the Powershell functions that I've created for managing the BITS jobs that are created when an SCCM client initiates a content download. I've added all of these to my Powershell profile so that I always have them loaded in my Powershell session. I should also mention that I don't consider myself a Powershell expert, so there are likely things being done in these functions which aren't considered best practice (using write-host for example) but they definitely get the job done.

All of these functions leverage PsExec since the BITS Powershell cmdlets don't support remote computer usage. They each expect to find a copy of PsExec.exe at the root of the C drive so make sure you have a copy placed there, or modify the functions to fit your use case. They all use the mandatory parameter -computername, so make sure you specify that when using the function.

First up is my function Get-BITSJobs, which simply gets a list of all of the SCCM BITS jobs on a remote computer and returns a list of them, including file count and size information:

Function Get-BITSJobs {
    param([Parameter(Mandatory)][string]$Computername)

    & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"
}


Next is Set-BITSJobsForeground, which will take all of the SCCM jobs on a remote computer and set them to foreground priority. This has often come in handy when a machine has been configured to throttle BITS download speeds, but for some reason I need that computer to finish its downloads as fast as possible:

Function Set-BITSJobsForeground {
    param([Parameter(Mandatory)][string]$Computername)

    [string]$jobs = & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"

    If ($jobs -ne "") {
        # Pull the job GUIDs out of the bitsadmin output
        $arrjobs = $jobs.Split("`{*`}") | Select-String -Pattern "-"
        Foreach ($job in $arrjobs) {
            & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /setpriority "`{$job`}" foreground
        }
    }
    Else { Write-Host "No Jobs" }
}

Next is Set-BITSJobsComplete, which will mark all SCCM jobs as completed, including any that are in an error state. If you use the parameter -RestartSCCMService, it will then restart the SCCM client, after which the client will restart the downloads where they left off. I have generally used this when a machine has its downloads stuck in an error state and I need to get the jobs to start again:

Function Set-BITSJobsComplete {
    param(
        [Parameter(Mandatory)][string]$Computername,
        [switch]$RestartSCCMService
    )

    [string]$jobs = & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /list /allusers | Select-String -Pattern "CCMDTS Job"

    If ($jobs -ne "") {
        $arrjobs = $jobs.Split("`{*`}") | Select-String -Pattern "-"
        Foreach ($job in $arrjobs) {
            & 'C:\PsExec.exe' \\$Computername -s bitsadmin.exe /complete "`{$job`}"
        }

        If ($RestartSCCMService) {
            Get-Service -ComputerName $Computername -Name CcmExec | Restart-Service
        }
    }
    Else { Write-Host "No Jobs" }
}

Hopefully you will find these functions useful; I was using them often enough that it was worth the effort to turn them into functions. Try adding them to your Powershell profile so that they will always be accessible whenever you have an open Powershell window.
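Loading the functions from your profile can be sketched like this (the script path C:\Scripts\BITSFunctions.ps1 is a hypothetical location where you have saved the functions):

```powershell
# Dot-source the BITS functions file from the PowerShell profile
# so they are available in every new session.
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
Add-Content -Path $PROFILE -Value '. C:\Scripts\BITSFunctions.ps1'
```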

Thursday, June 22, 2017

Cloud Management Gateway - Finally connected

Hi All, back with an update to my previous blog post regarding issues we experienced when setting up our Cloud Management Gateway. I was finally able to work through my remaining problems with Microsoft Support, so I figured it would be helpful to share my findings. Here are a few things that we wound up doing: We removed and rebuilt the CMG using a new SSL certificate. The previous certificate, while it was able to build the instance, had been created from a CNG (Cryptography Next Generation) template, which is not supported by Configuration Manager. After rebuilding the CMG, we began seeing this error in the SMS_CloudConnector.log file:

ERROR: Failed to build Http connection bc3945e0-708c-403d-881a-03469c4cc4a8 with server xxx.CLOUDAPP.NET:443. Exception: System.Net.WebException: The remote server returned an error: (990) BGB Session Ended.~~ at System.Net.HttpWebRequest.GetResponse()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.HttpConnection.Send(Boolean isProxyData, Byte[] data, Int32& statusCode, Byte[]& payload)~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

We decided to tackle the errors in SMS_CloudConnector.log indicating that the connector role was unable to connect on port 10140, even though according to the documentation that port (and the rest of the range, 10124-10156) was only required when running more than one VM instance for the CMG. This required a firewall change to allow the connection to be built. We also removed and re-added the Cloud Proxy Connector role, and then we finally began seeing a connection being created and maintained in the log files. After that, we were still seeing errors in the locationservices.log on our test client; here are a few examples (and for easier Googling):

Failed to refresh Site Signing Certificate over HTTP with error 0x87d0027e. Failed to send management point list Location Request Message to XXX.CLOUDAPP.NET/CCM_Proxy_MutualAuth/72057594037928290 LSUpdateInternetManagementPoints: Failed to retrieve internet MPs from MP XXX.CLOUDAPP.NET/CCM_Proxy_MutualAuth/72057594037928290 with error 0x87d00231, retaining previous list.

We restarted the WWW Publishing Service on the CMG server (we likely could have just restarted the Cloud Management Gateway via the SCCM console as well) and after that our client was able to connect. I was able to deploy an application that I had previously distributed to our Cloud Distribution Point, then refresh policy on the client to begin the installation over the internet. I am still seeing some issues with performing software update scans against our Software Update Point/WSUS server. I'll make sure to make a new blog post with the solution to that once I have it figured out. Hope this helps someone out there struggling to get their CMG up and running. -Matt

Thursday, June 8, 2017

SCCM Cloud Management Gateway Deployment Notes

Hi All,

I've been working this week on getting the new Cloud Management Gateway that was introduced in Configuration Manager 1610 deployed. I ran into a few issues during the deployment that I figured would be worth writing a blog post about; maybe they will help someone else out there who encounters the same issue.

The first problem we had during the setup was manifested by the following errors in Cloudmgr.log:

ERROR: TaskManager: Task [CreateDeployment for service ORGCMG] has failed. Exception System.TypeLoadException, Could not load type 'System.Runtime.Diagnostics.ITraceSourceStringProvider' from assembly 'System.ServiceModel.Internals, Version=, Culture=neutral, PublicKeyToken=...

ERROR: Exception occured for service ORGCMG : System.TypeLoadException: Could not load type 'System.Runtime.Diagnostics.ITraceSourceStringProvider' from assembly 'System.ServiceModel.Internals, Version=, Culture=neutral, PublicKeyToken=.'.~~   at System.ServiceModel.Channels.TextMessageEncoderFactory..ctor(MessageVersion version, Encoding writeEncoding, Int32 maxReadPoolSize, Int32 maxWritePoolSize, XmlDictionaryReaderQuotas quotas)~~   at System.ServiceModel.Channels.HttpTransportDefaults.GetDefaultMessageEncoderFactory()~~   at System.ServiceModel.Channels.HttpChannelFactory`1..ctor(HttpTransportBindingElement bindingElement, BindingContext context)~~   at System.ServiceModel.Channels.HttpsChannelFactory`1..ctor(HttpsTransportBindingElement httpsBindingElement, BindingContext context)~~   at System.ServiceModel.Channels.HttpsTransportBindingElement.BuildChannelFactory[TChannel](BindingContext context)~~   at System.ServiceModel.Channels.Binding.BuildChannelFactory[TChannel](BindingParameterCollection parameters)~~   at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint, Boolean useActiveAutoClose)~~   at System.ServiceModel.ChannelFactory.CreateFactory()~~   at System.ServiceModel.ChannelFactory.OnOpening()~~   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)~~   at System.ServiceModel.ChannelFactory.EnsureOpened()~~   at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)~~   at Microsoft.ConfigurationManager.AzureManagement.ServiceManagementHelper.CreateServiceManagementChannel(ServiceEndpoint endpoint, X509Certificate2 cert)~~   at Microsoft.ConfigurationManager.AzureManagement.ManagementOperation.InitializeChannel(X509Certificate2 certificate)~~   at Microsoft.ConfigurationManager.CloudServicesManager.CreateDeploymentTask.CheckAzureAccess()~~   at Microsoft.ConfigurationManager.CloudServicesManager.CreateDeploymentTask.Start(Object taskState).

The strange thing here was that the true error message wasn't being displayed; all we saw was error text that looked like it was coming from .NET. The resolution wound up being that we were missing the .NET 4.5.2 prerequisite on our site server, which is why we couldn't see the true error. I'm not sure how we managed to install the current branch version of SCCM with a missing prerequisite, but it was definitely not there.

Once we installed .NET 4.5.2, we started getting some meaningful errors reported in CloudMgr.log:
ERROR: Communication exception occured. Http Status Code: BadRequest, Error Message: The private key for the remote desktop certificate cannot be accessed. This may happen for CNG certificates that are not supported for Remote Desktop., Exception Message: The remote server returned an unexpected response: (400) Bad Request..

This problem was found to be caused by the private key in our CMG certificate not being marked as exportable, even though the template we generated it with was configured with the option to export the private key. We confirmed the private key issue by running "certutil -store my" from a command prompt after importing the certificate.
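Beyond certutil, here is a hedged sketch of checking the certificate from PowerShell (the subject filter is a placeholder you would adjust for your own certificate):

```powershell
# Inspect the imported CMG certificate in the local machine store.
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*cloudapp.net*' } |   # placeholder filter
    Select-Object -First 1

$cert.HasPrivateKey                              # should be True

# For a legacy (CSP) key this reports whether the key is exportable.
# For a CNG key, the .PrivateKey property is typically $null,
# which is itself a warning sign for CMG use.
$cert.PrivateKey.CspKeyContainerInfo.Exportable
```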

We took this issue to the folks that manage our PKI, and they were able to correct the problem by using the OpenSSL certificate utility to export and then re-attach the private key. Once we ran the CMG setup wizard with the corrected certificate, it was able to communicate properly with Azure and spawn the instances for the service.

The next step was to add the Cloud Proxy Connector role to a site system. I have typically heard recommendations that this service be added to a management point server, so that is what we elected to do. Once we started checking SMS_Cloud_ProxyConnector.log, we saw a constant stream of errors about failing to communicate with the Azure instances:

Here is the error text:
ERROR: Failed to build Tcp connection 41320a1b-5250-4f4f-b95a-0fccac4ef817 with server .CLOUDAPP.NET:10141. Exception: System.Net.WebException: TCP CONNECTION: Failed to authenticate with proxy server ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond~~   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)~~   --- End of inner exception stack trace ---~~   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)~~   at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)~~   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)~~   at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.TcpConnection.Connect()~~   --- End of inner exception stack trace ---~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.TcpConnection.Connect()~~   
at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~   at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

After looking at the errors being generated, we noticed the connection attempts were all happening on ports 10140/10141/10125/10126. Looking over the CMG documentation for ports in use, it calls out that these ports are used when you run the CMG on multiple instances; if you deploy just a single instance, it will use only port 443. We were able to confirm with our security team that the ports for multiple instances were being blocked by our firewall. We removed our multiple-instance CMG and set it up again, this time specifying a single instance to build out in Azure, and once the management point picked up the DNS change to the new IP address of the rebuilt instance, it was able to connect successfully.
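A quick way to confirm whether the multi-instance port range is reachable from the connector server is a sketch like this (Test-NetConnection ships with Windows 8.1/Server 2012 R2 and later; 'XXX.CLOUDAPP.NET' is the placeholder service name from the log entries above):

```powershell
# Check whether the CMG multi-instance TCP ports are reachable.
$cmg = 'XXX.CLOUDAPP.NET'   # placeholder from the logs above
foreach ($port in 10140, 10141, 10125, 10126) {
    $r = Test-NetConnection -ComputerName $cmg -Port $port -WarningAction SilentlyContinue
    '{0}:{1} reachable: {2}' -f $cmg, $port, $r.TcpTestSucceeded
}
```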

Now that I appeared to have a fully functional CMG, I configured a test client to use internet communications only and tried to perform a policy check. This failed due to what I believe is an issue with the SSL certificate that we have installed on the management point. Once I have a solution to that issue I will update this post with more information, but I thought it would be helpful to others to get these first few errors I encountered out on the web and indexed by Google so they can be found.