PowerShell is now available in Azure Function Apps (still in preview).

You can create a PowerShell Function App like this:
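In the preview runtime, an HTTP-triggered PowerShell function is just a run.ps1 script; the `$req` and `$res` variables hold file paths provided by the host. A minimal sketch might look like this:

```powershell
# run.ps1 - minimal HTTP-triggered PowerShell function (preview model).
# $req and $res are file paths injected by the Functions runtime.
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$name = $requestBody.name

# write the response body to the path in $res
Out-File -Encoding Ascii -FilePath $res -InputObject "Hello $name"
```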

Using PowerShell modules

Currently, you can't install a PowerShell module inside an Azure Function App, but you can upload your module to the script directory and access it from there. To do this, use Kudu:

Browse to site/wwwroot:

Then you can drag and drop your module ZIP file onto the right-hand pane of the explorer.

The file will be uploaded and unpacked automatically.

Now you can reference this module directly from your function script.
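Referencing the unpacked module is a one-liner; a sketch, assuming your function is named "MyFunction" and the module "MyModule" (both placeholders, your paths will differ):

```powershell
# Load the unpacked module from the function's directory under wwwroot.
# "MyFunction" and "MyModule" are placeholder names.
Import-Module "D:\home\site\wwwroot\MyFunction\MyModule\MyModule.psm1"

# verify the module's commands are available
Get-Command -Module MyModule
```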

Getting info about users' password expiration

I work in a multi-domain environment. Each domain has different password expiration rules. Unfortunately, there is no notification system for password expiration, so I have to check manually how long my passwords are valid.

For this, I wrote this PowerShell function, which works without any additional modules:
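A minimal sketch of the idea (not necessarily the original script): query the constructed AD attribute msDS-UserPasswordExpiryTimeComputed via a plain DirectorySearcher, so no ActiveDirectory module is needed and fine-grained password policies are honored.

```powershell
function Get-PasswordExpiration {
    param(
        [Parameter(Mandatory)][string]$UserName,
        [string]$Domain = $env:USERDNSDOMAIN
    )
    # plain LDAP search - no ActiveDirectory module required
    $root = [ADSI]"LDAP://$Domain"
    $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
    $searcher.Filter = "(&(objectClass=user)(sAMAccountName=$UserName))"
    [void]$searcher.PropertiesToLoad.Add('msDS-UserPasswordExpiryTimeComputed')

    $result = $searcher.FindOne()
    if ($result) {
        $raw = $result.Properties['msds-userpasswordexpirytimecomputed'][0]
        # note: if the password never expires, $raw is Int64.MaxValue and
        # FromFileTime() will throw - handle that case in production use
        $expiry = [datetime]::FromFileTime($raw)
        [pscustomobject]@{
            UserName   = $UserName
            Domain     = $Domain
            ExpiryDate = $expiry
            DaysLeft   = [int]($expiry - (Get-Date)).TotalDays
        }
    }
}

# Example (placeholder names):
# Get-PasswordExpiration -UserName jdoe -Domain contoso.com
```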

The result of this script looks like this:


Some time ago, I had the problem that my 3PAR storage was getting full. I had removed a lot of test VMs from this storage, but nothing happened; the storage was still full. The reason was the way VMware deletes files from a datastore, combined with the zero-detection feature enabled on the storage. If you delete a virtual disk file on a VMware datastore, it is only marked as deleted; the data is still there in the same format. To make the storage's zero detection work, we have to zero out the deleted parts of the datastore manually.

You can do this with this script:
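One way to sketch this with PowerCLI (an assumption, not necessarily the original script): temporarily fill the datastore's free space with an eager-zeroed disk, which writes zeroes to every block at creation time, then delete it so the array's zero detection can reclaim the space. All names are placeholders.

```powershell
# Requires VMware PowerCLI; server, datastore, and VM names are placeholders.
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com'

$datastore = Get-Datastore -Name 'Datastore01'
$vm        = Get-VM -Name 'ZeroHelperVM'   # any small VM on that datastore

# leave some headroom (here: 10 GB) so the datastore doesn't run completely full
$sizeGB = [math]::Floor($datastore.FreeSpaceGB) - 10

# EagerZeroedThick zeroes every block of the new disk at creation time
$disk = New-HardDisk -VM $vm -CapacityGB $sizeGB -Datastore $datastore `
    -StorageFormat EagerZeroedThick

# the free blocks are now zeroed - remove the helper disk again
Remove-HardDisk -HardDisk $disk -DeletePermanently -Confirm:$false
```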


How to get the correct virtual disk for a VMware VM

Sometimes I need to resize or delete a VMware virtual disk, but I only know the guest's drive letter. In VMs with only one virtual disk, or where each virtual disk has a different size, this isn't a problem. But if you have a VM with multiple virtual disks of exactly the same size, you can't match them up between the guest's Disk Manager and the virtual disk sizes. If your VM has more than one SCSI controller, the problem gets even worse.

Windows Disk Manager VMware VM settings

I searched a long time for a solution to this problem, but I couldn't find an easy one. So I wrote this PowerShell script:
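The core of the matching logic can be sketched like this (a simplified assumption of the approach, not the full script): correlate the guest's SCSI target IDs from WMI with the virtual disk's controller and unit numbers from the VM's hardware config. Server, VM name, and the single-controller simplification are assumptions.

```powershell
# Requires VMware PowerCLI; placeholders: vCenter server and VM name.
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com'

$vm   = Get-VM -Name 'MyVM'
$cred = Get-Credential   # guest OS credentials for the WMI query

# guest view: each physical disk exposes its SCSI port and target ID
$guestDisks = Get-WmiObject -Class Win32_DiskDrive `
    -ComputerName $vm.Guest.HostName -Credential $cred

foreach ($hd in Get-HardDisk -VM $vm) {
    # find the SCSI controller this virtual disk is attached to
    $ctrl = $vm.ExtensionData.Config.Hardware.Device |
        Where-Object { $_.Key -eq $hd.ExtensionData.ControllerKey }

    # simplification: with a single controller, matching on the unit/target
    # number is enough; with multiple controllers you also have to map the
    # controller's BusNumber to the guest's SCSIPort
    $match = $guestDisks |
        Where-Object { $_.SCSITargetId -eq $hd.ExtensionData.UnitNumber }

    [pscustomobject]@{
        VMDiskName = $hd.Name
        CapacityGB = [math]::Round($hd.CapacityGB, 1)
        SCSIBus    = $ctrl.BusNumber
        SCSIUnit   = $hd.ExtensionData.UnitNumber
        GuestDisk  = ($match.DeviceID -join ', ')
    }
}
```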

When you run the script, it will ask you for credentials and then show you information about both your virtual disks and Windows drives:

Azure Automation Hybrid Worker behind a Firewall / Proxy

One nice feature of Azure Automation is the Hybrid Worker. With the Hybrid Worker, you can execute runbooks inside your on-premises infrastructure. According to the official documentation and John Hennen's post, you have to open your firewall for outbound traffic to *.cloudapp.net on ports 443, 9354, and 30000-30199.

Azure Automation Hybrid Worker Traffic

When I told our security team about this requirement, they weren't very enthusiastic about the wildcard rule for *.cloudapp.net, so I had to find another solution.

To configure the Microsoft Monitoring Agent to use your proxy server, go to Control Panel → System and Security → Microsoft Monitoring Agent, then open the Proxy Settings tab:


After configuring the proxy settings, you can configure the Workspace ID and Key:

Now you have to run one job on the new Hybrid Worker. It will fail, because additional firewall exceptions are still needed.

On the Hybrid Worker server, in the directory %AllUsersProfile%\Microsoft\System Center\Orchestrator\7.2\SMA\Sandboxes\ you will find a subdirectory for each runbook job (for example 5hrotqyb.mz5). Inside this directory there is one file with the extension *.SandboxID. Open this file and you will find the value "sandboxHubEndpointDetail", which contains the server URL. In my case: net.tcp://oaas-prod-wes1.cloudapp.net:30016/AzureRunbookWorker/16/SandboxManager/12345678-1234-1234-1234-123456789012
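Instead of opening the files by hand, a small sketch can pull the host and port out of all *.SandboxID files at once:

```powershell
# Extract the sandbox hub endpoint (host + port) from every *.SandboxID
# file under the SMA sandboxes directory.
$sandboxRoot = Join-Path $env:AllUsersProfile `
    'Microsoft\System Center\Orchestrator\7.2\SMA\Sandboxes'

Get-ChildItem -Path $sandboxRoot -Filter '*.SandboxID' -Recurse |
    ForEach-Object {
        $content = Get-Content -Path $_.FullName -Raw
        # match URLs of the form net.tcp://<host>:<port>/...
        if ($content -match 'net\.tcp://([^:/]+):(\d+)') {
            [pscustomobject]@{
                File = $_.FullName
                Host = $Matches[1]
                Port = $Matches[2]
            }
        }
    }
```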

Now you should create an outbound firewall rule like this:

  • Source: your internal server IP
  • Destination URL: oaas-prod-wes1.cloudapp.net
  • Protocol: TCP
  • Destination ports: 9354 and 30000-30199


Daily backup report for SC DPM

I like System Center Data Protection Manager, especially for its ability to back up remote Windows servers and do online backups to Azure. But in the past, I lost the overview of succeeded and failed backup jobs. The included reporting didn't help me enough, and the alert notifications felt like spam. I needed a daily report with one view of all jobs, disks, agents, and other states. To reach this goal, I wrote my own PowerShell script, which I want to share here.

The report, sent by mail, will look like this:

SC DPM daily report

The code for this is here:
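The skeleton of such a report can be sketched like this (an assumption of the general approach, not the full script): collect the last day's DPM jobs and mail them as an HTML table. Server and mail settings are placeholders.

```powershell
# Requires the DPM PowerShell module on the DPM server.
Import-Module DataProtectionManager

$dpmServer = 'dpm01.example.com'       # placeholder
$since     = (Get-Date).AddDays(-1)

# all jobs since yesterday - extend this with disk, agent, and alert state
# for a complete report
$jobs = Get-DPMJob -DPMServerName $dpmServer -From $since

$body = $jobs | ConvertTo-Html -Title 'DPM daily report' | Out-String

Send-MailMessage -To 'admin@example.com' -From 'dpm@example.com' `
    -Subject "DPM daily report $((Get-Date).ToShortDateString())" `
    -Body $body -BodyAsHtml -SmtpServer 'smtp.example.com'
```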



Document your SCOM 2012 environment

This is quite a big deal for me, because I've been working on this script for the last two months and finally decided to release it to the public, although I know it's still far from complete.

Document your Operations Manager 2012 environment

You should know your environment and what happens in it, and you should be able to show people exactly what has been configured. This is quite important for consultants, but also for admins. Consultants need to create documentation after implementing Operations Manager 2012 at a customer's site, and admins should know what their environment looks like at any given time.

Usually people 'only' document the most important stuff, or what they think is the most important stuff. Every documentation looks different, and my belief is that it shouldn't be that way. I'd like all my documentation to look the same, no matter where I create it.

As with automation, I like standardization. That's why I created a PowerShell script to generate my "SCOM 2012 configuration report".

Here is a preview of the report:

SCOM configuration report

PowerShell cmdlets for Operations Manager 2012

Before I started writing this script, I asked myself whether I wanted to use only WMI, or also the native OM12 PowerShell cmdlets. Both technologies have their own advantages and disadvantages.

This script will ONLY read information; it won't change anything in your environment. That's why I decided to use the PowerShell cmdlets wherever possible and to avoid WMI.

Version 0.1

I rate this script at version 0.1 because there are still some things missing. They will be added over time.

Maybe you will find some problems. Tell me, and I will try to correct them.


In order to run this script the following requirements have to be met:

  • Microsoft System Center 2012 Operations Manager SP1 (at least)
  • PowerShell 3.0
  • Operations Manager Admin GUI installed
  • read access to the RMS
  • read access to all involved SCOM servers
  • Microsoft Word (best with an English installation)

How to execute the script

I added comment-based help to the script, so you shouldn't have any problems running it, but here's one way to execute it:
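Since the script ships with comment-based help, the safest start is to read it; the parameter names below are illustrative assumptions, not the script's documented interface:

```powershell
# Read the script's own comment-based help first:
Get-Help .\SCOM-configuration-report.ps1 -Full

# A possible invocation - '-ManagementServer' and '-Path' are placeholder
# parameter names; check the help above for the real ones.
.\SCOM-configuration-report.ps1 -ManagementServer 'scom01.example.com' `
    -Path 'C:\Reports'
```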


Thanks also to David, who gave me the idea for this script and who also has a wonderful inventory script for SCCM here.

I’m hoping for lots of feedback on this script, because I can’t possibly test everything in my demo lab. So if you find an error or have issues with this script, please tell me!

Download the script here:

SCOM-configuration-report-v0.1.0.zip size: 22.8 kB, counter: 1007, updated: 11. April 2014

If you demote a domain controller, SCOM will generate a lot of alerts. By design, there is no automatic undiscovery of the rules and monitors for the Active Directory roles.

Solution 1

This solution will remove all disabled class instances from an existing object. It will not change any other properties of the object.

  • Open the Operations Manager Shell
  • Type in this command:

  • Stop the System Center Management service
  • Delete the folder C:\Program Files\System Center Operations Manager\Agent\Health Service State
  • Start the System Center Management service
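The command in step 2 is presumably the built-in cmdlet for exactly this cleanup, which deletes all instances whose discoveries have been disabled via override:

```powershell
# Removes all class instances whose discovery has been disabled by an
# override; run this in the Operations Manager Shell.
Remove-SCOMDisabledClassInstance
```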

Solution 2

This solution clears only the agent cache. Sometimes this is sufficient, if the server discovery/undiscovery has already completed correctly:

  • Stop the System Center Management service
  • Delete the folder C:\Program Files\System Center Operations Manager\Agent\Health Service State
  • Start the System Center Management service
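The three steps above can be scripted; "HealthService" is the service name behind the "System Center Management" display name. Run this elevated on the agent:

```powershell
# Clear the SCOM agent cache: stop the agent service, delete the
# Health Service State folder, start the service again.
Stop-Service -Name HealthService
Remove-Item -Path 'C:\Program Files\System Center Operations Manager\Agent\Health Service State' `
    -Recurse -Force
Start-Service -Name HealthService
```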

Solution 3

This solution will remove the entire object and then recreate it via discovery. The new object won't be discovered as a domain controller. The new object will have a new GUID, and any overrides on the old object will be lost.

  • Stop the System Center Management service
  • Delete the folder C:\Program Files\System Center Operations Manager\Agent\Health Service State
  • Open the SCOM console
    • go to Administration → Agent Managed
    • delete the affected server
    • use the Discovery Wizard to redeploy the agent to the affected server

Extend the Active Directory User class for SCSM

    Today I needed some additional fields in the Active Directory User class for an SCSM service offering. For example, I needed the PrimarySmtpAddress, which exists in AD as mail but not in the SCSM class. In this post, I will describe how I did it:

    We will need:

    • System Center Service Manager 2012 SP1
    • System Center Orchestrator 2012 SP1
    • Service Manager Authoring Tool
    • Strong name key file

    Open the Service Manager Authoring Tool and click File → New to create a new management pack. Define a unique name for your management pack's file name, in this example Josh.Test.Library.xml ("Library" means that we will extend a library class).

    In the class browser, select All Management Packs and search for the Domain User or Group class. If you expand this class, you will see the mail field, but it won't be accessible (for example, in your service requests). Right-click it and select View:


    Now you will see the Domain User and Group class in the Management Pack Explorer. Right-click it and select Extend class:


    Select your new management pack and click OK:


    Now you will see the new class; give it a unique name. Normally I don't use special characters, spaces, or dots in the new name:


    In some versions of the Authoring Tool, a new property such as Property_26 is created automatically. Select the row and then click the delete icon.


    Now you can add new properties. Each new property should have an internal name that is unique across your complete Service Manager environment! Normally I use the management pack name as a prefix for the property's internal name.


    After creating the property, you can change some property details, such as the (display) name, data type, etc.:


    Now you can save the management pack and build a sealed one. To seal your management pack, you will need a strong name key file. How to create a strong name key file is described here (thank you, Marcel). Right-click the management pack and select Seal Management Pack:


    Choose the output directory and the key file to use:


    Now you can import the new management pack as usual.

    After importing, you will see the new field on the Extensions tab of your Active Directory User CI:


    In the next step, you should import data from Active Directory into the newly created extensions using an Orchestrator runbook. Your runbook should have these activities:

    • Get User activity from Active Directory
    • Get Object activity from SC 2012 Service Manager
      • select the newly created class
      • filter by UPN equals {User Principal Name from "Get User"}
    • Update Object activity from SC 2012 Service Manager
      • select the newly created class
      • Object GUID = {SC Object Guid from "Get Object"}
      • add a field: PrimarySmtpAddress → {Email from "Get User"}
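If you don't have Orchestrator at hand, the same update can be sketched in PowerShell with the community SMLets module instead of a runbook. The class name below is a placeholder; it must match the extension class you created above.

```powershell
# Alternative to the Orchestrator runbook, using the SMLets community
# module and the ActiveDirectory module. Class/property names are
# placeholders matching the example management pack.
Import-Module SMLets
Import-Module ActiveDirectory

# '$' anchors the regex match on the class name
$class = Get-SCSMClass -Name 'Josh.Test.Library.ADUserExtension$'

foreach ($obj in Get-SCSMObject -Class $class) {
    # look up the AD user by UPN and read the mail attribute
    $adUser = Get-ADUser -Filter "UserPrincipalName -eq '$($obj.UPN)'" `
        -Properties mail
    if ($adUser -and $adUser.mail) {
        Set-SCSMObject -SMObject $obj `
            -Property PrimarySmtpAddress -Value $adUser.mail
    }
}
```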

    After renaming an Active Directory computer, the computer was automatically detected in SCOM with the correct new name. But after a short time, I noticed that the original agent was still there, just not reachable. So I deleted the old computer under Device Management → Agent Managed.

    Normally, after at most 3 days, the agent is no longer visible in the computer view. This delay of 3 days is by design, so don't delete the computer manually too early.

    If you still see the computer showing up, even after 3 days, then in most cases there is still a discovery associated with it. To find the discovery, use this query:

    In my case, I didn't find any discovery for this computer, and it was still visible in the Windows Computer view one week after the agent was deleted.

    So I found a solution.


    We need to check the database for the orphaned entities with this query:

    Now we will mark these old entries as deleted:
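Both queries can be sketched like this, following the commonly documented approach of flagging the orphaned rows in BaseManagedEntity. This touches the OperationsManager database directly and is unsupported, so take a database backup first. Server name and computer name are placeholders.

```powershell
# Unsupported direct database edit - back up OperationsManager first!
# 'SCOMSQL01' and 'OldComputer.contoso.com' are placeholders.
$sql = 'SCOMSQL01'
$db  = 'OperationsManager'

# 1) check which entities would be affected:
Invoke-Sqlcmd -ServerInstance $sql -Database $db -Query @"
SELECT BaseManagedEntityId, FullName, IsDeleted
FROM BaseManagedEntity
WHERE FullName LIKE '%OldComputer.contoso.com%' AND IsDeleted = 0
"@

# 2) mark exactly those rows as deleted:
Invoke-Sqlcmd -ServerInstance $sql -Database $db -Query @"
UPDATE BaseManagedEntity
SET IsDeleted = 1
WHERE FullName LIKE '%OldComputer.contoso.com%' AND IsDeleted = 0
"@
```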

    After this, refresh your SCOM console. The orphaned Windows computer shouldn't be there anymore.