Send SMS with Gammu + Ubuntu

After reading this, you should be able to send SMS from the command line using Gammu on top of Ubuntu 16.04.

I have tried this with newer versions of Ubuntu with some minor bumps in the road, but with this exact setup, it should work flawlessly.

Hardware:
I run this as a VM in Proxmox. The VM is assigned one CPU core, 2048 MB of memory, and a small 10 GB drive.
My modem is a Wavecom Fastrack Supreme 10. This is a serial (COM) device, but I have connected it to my VM host with a serial-to-USB dongle.
In Proxmox I just passed the USB device through to the VM. More on this further down.
I also have a SIM card from my local carrier. It is PIN-protected.

Software:
Bone stock Ubuntu 16.04 server
Gammu SMS daemon
wvdial
… and that's it, really.

First off, you need to make sure the system is up to date:

sudo apt-get update && sudo apt-get upgrade

Then we can check whether the USB device is detected by the server.

frank@smsgw01:~$ lsusb
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

As you can see, the USB device is not present there. That's because I haven't assigned the device to the VM in Proxmox yet.
Shut down the VM, assign the USB device, and boot it up again. Now it should look like this:

frank@smsgw01:~$ lsusb
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

So, the USB device is attached and recognized.
The next step is to see if our OS can see this device as a GSM modem.
Install wvdial:

sudo apt-get install wvdial

When the installation is finished, run wvdialconf to check for applicable modems:

frank@smsgw01:~$ sudo wvdialconf
Editing `/etc/wvdial.conf'.

Scanning your serial ports for a modem.

Modem Port Scan<*1>: S0   S1   S2   S3   S4   S5   S6   S7
Modem Port Scan<*1>: S8   S9   S10  S11  S12  S13  S14  S15
Modem Port Scan<*1>: S16  S17  S18  S19  S20  S21  S22  S23
Modem Port Scan<*1>: S24  S25  S26  S27  S28  S29  S30  S31
ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- �ۓ��
ttyUSB0<*1>: failed with 9600 baud, next try: 115200 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB0<*1>: Modem Identifier: ATI -- WAVECOM MODEM
ttyUSB0<*1>: Speed 230400: AT --
ttyUSB0<*1>: Speed 230400: AT --
ttyUSB0<*1>: Speed 230400: AT --
ttyUSB0<*1>: Max speed is 115200; that should be safe.
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK

Found a modem on /dev/ttyUSB0.
Modem configuration written to /etc/wvdial.conf.
ttyUSB0: Speed 115200; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

Voilà – we have a working modem on /dev/ttyUSB0.

Next, we’ll install Gammu.

sudo apt-get install gammu

When the installation is complete, use the gammu-config command to configure communication with the GSM modem.

gammu-config

You should now see the gammu-config menu (the original screenshots are not reproduced here). Set the port to /dev/ttyUSB0 and the connection to at115200, matching what wvdialconf found above, and hit Save.
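
For reference, here is roughly what the resulting ~/.gammurc should end up looking like – a minimal sketch, assuming the modem sits on /dev/ttyUSB0 at 115200 baud, as wvdialconf reported:

[gammu]
port = /dev/ttyUSB0
connection = at115200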

Now, if your SIM card is not PIN-protected, you can try to send an SMS with the following command:

frank@smsgw01:~$ sudo gammu --sendsms TEXT <country code and phone number> -text "lol"
If you want break, press Ctrl+C...
Sending SMS 1/1....waiting for network answer..OK, message reference=11

If your SIM card is PIN-locked, check the status and give the modem the PIN with the following commands:

# sudo gammu getsecuritystatus
Waiting for Pin.
# sudo gammu entersecuritycode PIN -
Enter PIN code:
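
If you want to script the unlock, you can also pass the PIN directly on the command line instead of typing it interactively – a sketch, where 1234 is a placeholder for your own PIN:

sudo gammu entersecuritycode PIN 1234
sudo gammu getsecuritystatus

After a successful unlock, getsecuritystatus should no longer report that it is waiting for a PIN.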

Congrats – you can now bug your friends not only by texting from your phone, but also from the command line.

 

Having trouble?
I have had some issues with my gateway, and this manifests itself in Gammu with this error message:

Can not access SIM card.

In this case, try the following:
1. Make sure your SIM card is properly inserted into the SIM tray.
2. Make sure the SIM card is activated. Test whether you can send and receive messages in a normal phone.
3. Power-cycle your device. Remember that the SIM card must be inserted while the device is SWITCHED OFF.

In my case, power-cycling the device has helped.

Linux: Logfiles, timestamps and datematching

I recently came across something at work that had me bothered for a while. A customer wanted to monitor some logfiles, where one specific line is written every so often. If the line is NOT written in a timely manner, they wanted to be notified.

Seems simple, right? Well, for me it wasn't that easy. I rarely write scripts in bash – and when I do, I usually spend a lot of time on something that would have taken me five minutes to write in PowerShell.

Nevertheless, here is what I came up with. Note that this can most definitely be done in simpler and better ways, but it works for me.

So. The logfile is huge, but the guys that "own" it only wanted to be notified when a certain string wasn't being written in a timely manner.
The string in question is written every minute – meaning a threshold of the string not being written in the last 10 minutes would do the trick here.

Example of the string we are interested in:

10:24:00,002 DEBUG [scheduled.jobs.ScheduledTasksJob] (EJB default - 4) Starting Scheduled Tasks Job

To manage this, we will be using cat, grep, tail, awk, and date.

cat to read the logfile
grep to grab the string we are looking for
tail to fetch the last occurrence of this string
awk to fetch the timestamp at the beginning of the line, splitting on the comma to get only the actual time
date to do some date-format juggling

#!/bin/bash

# variables
logname="/tmp/file.log"
# timestamp (HH:MM:SS) of the last occurrence of the string we are watching
logcheck=$( cat "$logname" | grep 'Starting Scheduled Tasks Job' | tail -1 | awk -F ',' '{print $1}' )
logdate=$( date -d "$logcheck" +%H:%M:%S )
nowminus10=$( date -d "-10 minutes" +%H:%M:%S )
nowplus10=$( date -d "+10 minutes" +%H:%M:%S )
now=$( date +%H:%M:%S )

# construct: the last log timestamp must fall within +/- 10 minutes of now
if [[ "$logdate" > "$nowminus10" && "$logdate" < "$nowplus10" ]]
then
	echo "HEALTHY: The logfile is being written to in a timely manner."
else
	echo "ERROR: The logfile is not being written to in a timely manner. The time now: $now. Last timestamp in log: $logdate."
fi

This just echoes out the result – it is up to you to do something useful with the script.
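
For example, you could let cron run the check every ten minutes and only alert on errors. A sketch, assuming the script lives at /usr/local/bin/logcheck.sh and that outgoing mail is configured – both of which are assumptions on my part:

# m h dom mon dow  command
*/10 * * * * /usr/local/bin/logcheck.sh | grep -q '^ERROR' && echo "Logfile is stale" | mail -s "Log alert" you@example.com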

In my case, I use this with SCOM – more on this in the next article.

-F

Powershell: Create Event with parameters

This function lets you write events to the Windows Event Log, and feed the event with filterable parameter data (which, in cases where you use SCOM to sniff events, is pretty awesome).

I use this all the time when a script should dump some kind of result to the event log, with SCOM then fetching the event and triggering on specific texts or values in the returned parameter data.

function createParamEvent ()
  {
    <#
  .SYNOPSIS
  Function for creating events, just like Write-EventLog or eventcreate.exe - but with the added functionality of up to 5 filterable parameters.
  .DESCRIPTION
  The function writes events to the Windows Eventlog of your choice, and can be fed up to 5 different parameters, where param1 contains the basic Event Description, and param2 to param5 contain data of your choosing.
  .EXAMPLE
  For Information events, use eventID 220:
  CreateParamEvent -source "TestSource" -evtID 220 -param1 "The server $hostname shut down at $timestamp" -param2 $hostname -param3 $timestamp -param4 "Some generic text"
  .EXAMPLE
  For Warning events:
  CreateParamEvent -source "TestSource" -evtID 221 -param1 "The server $hostname shut down at $timestamp" -param2 $hostname -param3 $timestamp -param4 "some generic text"
  .EXAMPLE
  For Error events, with the mandatory param1 set. The parameters param2 to param5 are optional:
  CreateParamEvent -source "TestSource" -evtID 222 -param1 "The server $hostname shut down at $timestamp"
  .PARAMETER evtID
  Mandatory: The logic in this function is based on the sample eventIDs (222, 221, 220) mapping to the corresponding Event Type (error, warning, information). Other EventIDs can be used, but will then be logged as Information events.
  .PARAMETER param1
  Mandatory: The full description in the event.
  .PARAMETER param2-5
  Use these parameters to add additional useful information to the mix, for example additional information that can be pulled from the Event in SCOM.
  .Link
  https://vetasen.no
  .Notes
  - Param1 = Full description in the event
  - Source is mandatory
  - EventID is mandatory
  - EventID 222 = Error event
  - EventID 221 = Warning event
  - EventID 220 = Information event
  - Param2 to 5 are optional.
  #>
  [CmdletBinding()]
  Param
  (
    [parameter(Mandatory=$true)][string]$evtID,
    [parameter(Mandatory=$true)][string]$param1,
    [parameter(Mandatory=$true)][string]$source,
    [string]$param2,
    [string]$param3,
    [string]$param4,
    [string]$param5
  )
    #Define the event log
    $evtlog = "Application"

    #Load the event source to the log if not already loaded.  This will fail if the event source is already assigned to a different log.
    if ([System.Diagnostics.EventLog]::SourceExists($source) -eq $false) {
        [System.Diagnostics.EventLog]::CreateEventSource($source, $evtlog)
    }

    if ($evtID -eq 222) {
        $id = New-Object System.Diagnostics.EventInstance($evtID,1,1) #ERROR EVENT
    }
    elseif ($evtID -eq 221) {
        $id = New-Object System.Diagnostics.EventInstance($evtID,1,2) #WARNING EVENT
    }
    else {
        $id = New-Object System.Diagnostics.EventInstance($evtID,1) #INFORMATION EVENT
    }

    $evtObject = New-Object System.Diagnostics.EventLog;
    $evtObject.Log = $evtlog;
    $evtObject.Source = $source;
    $evtObject.WriteEvent($id, @($param1,$param2,$param3,$param4,$param5))
  }
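
To quickly test the function, write a test event and read it back. Note that creating a new event source requires an elevated session; the source name here is just the one from the examples above:

createParamEvent -source "TestSource" -evtID 220 -param1 "Test event from createParamEvent" -param2 "extra data"
Get-EventLog -LogName Application -Source "TestSource" -Newest 1 | Format-List EventID, EntryType, Message, ReplacementStrings

The ReplacementStrings property is where the parameter data ends up – this is the data SCOM can filter on.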

 

SCCM Lab: Part 2 – DHCP and DNS Server roles on our AD controller

This part of the SCCM Lab series covers installing and configuring the DHCP and DNS Server roles on the headless Windows Server 2019 server we configured in Part 1. Here we go.


The DNS server was created when the AD DS role set up the root forest. We can see that the DNS role is installed using the Get-WindowsFeature command:

Get-WindowsFeature -Name "DNS*"

As you can see, the DNS Server feature is installed. But if your DNS server is not installed, you can install it with this command:

Install-WindowsFeature DNS -IncludeManagementTools

The DNS primary zone was created when the forest was generated. Next, create the reverse lookup zone for the 10.0.1.0/24 network:

Add-DnsServerPrimaryZone -NetworkID 10.0.1.0/24 -ZoneFile "1.0.10.in-addr.arpa.dns"

Next, the forwarder is added:

Add-DnsServerForwarder -IPAddress 8.8.8.8 -PassThru

You should now be able to test your DNS server:

Test-DnsServer -IPAddress 10.0.1.2 -ZoneName "sccmlab.net"
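
As an extra sanity check (my own addition, not part of the original lab notes), Resolve-DnsName can confirm that the new server answers both for the local zone and, through the 8.8.8.8 forwarder, for external names:

Resolve-DnsName sccm-ad1.sccmlab.net -Server 10.0.1.2
Resolve-DnsName google.com -Server 10.0.1.2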

That's it for configuring DNS – now let's look at the DHCP Server feature.


To do this, you have to set a static IP address on your server. We covered this in Part 1, but if you totally forgot, don't worry. Set a static IP address like this:

New-NetIPAddress -InterfaceIndex 2 -IPAddress 10.0.1.2 -PrefixLength 24 -DefaultGateway 10.0.1.3

Next, install the DHCP Server Feature.

Install-WindowsFeature DHCP -IncludeManagementTools

Next, the DHCP security groups are created using the netsh command, and the service is restarted. Running the following commands creates the DHCP Administrators and DHCP Users security groups in Local Users and Groups on the DHCP server:

netsh dhcp add securitygroups
Restart-Service DhcpServer

Now that the DHCP role and security groups are installed, we need to configure the subnets, scope, and exclusions. Configure the DHCP scope for the domain. These will be the addresses that are handed out to the network by DHCP.

Add-DhcpServerv4Scope -Name "Lab Servers Scope" -StartRange 10.0.1.10 -EndRange 10.0.1.30 -SubnetMask 255.255.255.0 -State Active

Next, set the DHCP options for the scope (DNS domain, DNS server, and default gateway):

Set-DhcpServerv4OptionValue -ScopeId 10.0.1.0 -DnsDomain sccmlab.net -DnsServer 10.0.1.2 -Router 10.0.1.3
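
The intro above also mentions exclusions, which this lab never gets around to. If you hand out some of the scope addresses statically, you could carve them out like this – a sketch with a hypothetical range:

Add-DhcpServerv4ExclusionRange -ScopeId 10.0.1.0 -StartRange 10.0.1.10 -EndRange 10.0.1.15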

Finally, authorize the DHCP server in Active Directory:

Add-DhcpServerInDC -DnsName sccm-ad1.sccmlab.net -IpAddress 10.0.1.2

We can verify the DHCP scope settings using this command:

Get-DhcpServerv4Scope

We can verify that this DHCP server is authorized in the domain with the following command:

Get-DhcpServerInDC

Restart the DHCP service:

Restart-Service DhcpServer

 

SCCM Lab: Part 1 – AD Controller

I am currently setting up a new SCCM test environment in my home lab – this will be one of (possibly) many quick-n-dirty how-tos for setting up a functioning SCCM lab.

First things first – the lab will consist of Windows Server 2019 and Windows 10 for the servers and clients.
The SCCM version used is SCCM current-branch 1902.

All servers will be headless.


I have already installed a box-fresh Windows 2019 Standard server, without GUI. Continuing on from that, we will do the following:

  • Configure network settings
  • Configure local date/time
  • Configure the firewall to allow ping replies

Using sconfig, give your server a name. In this case, I am calling it sccm-ad1.
Next up, edit your network card settings – configure your adapter with a static IP address, and set the DNS server to 127.0.0.1.

Return to the main menu, and configure your local Date and Time settings to your correct timezone.

Restart your server – and jump into a PowerShell session.

I like my lab computers to reply to ping requests, so to allow this, type in the following command (applies to IPv4):

New-NetFirewallRule -DisplayName "Allow inbound ICMPv4" -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -RemoteAddress <your subnet> -Action Allow
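
If your lab is dual-stack, you will probably want the IPv6 counterpart as well – a sketch on my part; note that the ICMPv6 echo request type is 128 rather than 8:

New-NetFirewallRule -DisplayName "Allow inbound ICMPv6" -Direction Inbound -Protocol ICMPv6 -IcmpType 128 -RemoteAddress <your subnet> -Action Allow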

Restart your server.


Next up, we will install the AD Controller role.
Jump into a PowerShell session, and enter the following:

Get-WindowsFeature AD-Domain-Services | Install-WindowsFeature

Let the installation finish, then enter the following:

Import-Module ADDSDeployment

Then, to install the new AD controller in our new forest, enter the following:

Install-ADDSForest

Continue by entering your domain name of choice, and a SafeMode password. After the installation is finished, the server will restart and finish its configuration.

Next, I'll create a new AD user in this domain for administering the environment. Do this by entering a PowerShell session and typing:

New-ADUser -name "Awesome" -Givenname "Awesome" -Surname "Sauce" -SamAccountName "Awesome" -UserPrincipalName "awesome@your.domain"

Test that you have successfully created the user by entering:

Get-AdUser Awesome

You will find that the user is not active yet. Before enabling the user, set the password for that user.

Command to set password:

Set-ADAccountPassword 'CN=Awesome,CN=Users,DC=sccmlab,DC=net' -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "YourPassword" -Force)
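
With the password set, the account still has to be enabled before it can log on:

Enable-ADAccount -Identity Awesome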

Add the user to the Domain Admins group:

Add-AdGroupMember 'Domain Admins' Awesome

And there you have it – your very own Headless Windows Server 2019 AD Controller.

-F

OpenHab Automation: Turn on and off lights based on sunset and sunrise

So, for quite some time now I have been wanting to automate the exterior lights of our house. OpenHAB2 (OH2) lets you do this quite smoothly using the Astro binding.

Start off by fetching the Astro binding via the OH2 control panel: Configuration > Bindings.
Click the plus icon, and select Bindings in the top tab – searching for Astro will reveal the binding you want (I am currently on binding-astro 2.4.0). Select Install, and let it finish. We will now continue in the configuration files.

In your openHAB conf folder, under things, create an empty file and call it astro.things.
In this file, at a bare minimum, you should put this line:

astro:sun:home [ geolocation="10.12345678,5.12345678", interval=300]

The first three segments call the Astro binding, the sun thing, for the home location. The two settings in the square brackets are geolocation and polling interval – here you need to find your latitude and longitude on a map site (Google Maps is fine), and replace the digits in the example above.
A polling interval of 300 seconds is sufficient for my own environment – I rarely need to know where the sun is as often as every 5 minutes – but feel free to adjust this to your own needs.

Save the file, and check that you don't receive any errors in your logs.

Next up, head over to your items folder. Here we will add the sun items, based on the astro thing you just created.

Create an empty file, and call it astro.items. Here you will add three lines:

DateTime    Current_DateTime     "Today [%1$tA, %1$td.%1$tm.%1$tY]"                <clock>  (Astro) {channel="ntp:ntp:local:dateTime"}
DateTime    Sunset_Time          "Sunset [%1$tH:%1$tM]"                            <sun>    (Astro) {channel="astro:sun:home:set#start"}
DateTime    Sunrise_Time         "Sunrise [%1$tH:%1$tM]"                           <sun>    (Astro) {channel="astro:sun:home:rise#start"}

The first item gives you the current date and time (note that this channel comes from the NTP binding, which must be installed separately).
The second item tells you when the sun sets.
The third item tells you when the sun rises.

Based on the second and third items, we can create rules that, in this example, turn our porch and street lights on and off based on sunset and sunrise.
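
The rules below toggle a switch item for the lights. That item is not defined in this article, so here is a hypothetical placeholder (named Outside_Lights_Toggle to match the rules) you could drop into an items file and adapt to your own lights:

Switch    Outside_Lights_Toggle    "Outside lights"    <light>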

Head over to your rules folder, and create an empty file called outside_lights.rules.

The first rule will turn your lights on. It looks something like this:

rule "outside lights ON"

when
    Channel 'astro:sun:home:set#event' triggered START
then
    if (outside_lights.toggle.state == OFF)){
        logInfo("Outside_lights", "All lights are off, turning them on.")
        outside_lights.toggle.sendCommand(ON)
    }
    else {
        logInfo("Outside_lights", "All lights are on - doing nothing.")
    }
end

And for the rule that turns the lights off, we get something like this:

rule "outside lights OFF"

when
    Channel 'astro:sun:home:rise#event' triggered START
then
    if (outside_lights.toggle.state == ON)){
        logInfo("Outside_lights", "All lights are oN, turning them OFF.")
        outside_lights.toggle.sendCommand(OFF)
    }
    else {
        logInfo("Outside_lights", "All lights are off - doing nothing.")
    }
end

Put both rules in the same outside_lights.rules file, and save it. Check that you don't receive any errors in the log.

Remember to change the rules to fit your own light items – but this should be obvious 🙂

Now, if all is configured correctly, you will see that your lights will turn off and on, based on the sunrise and sunset in your area.

Happy automating!

-F

Install Grafana on Debian

Here I will show you how to install Grafana on a headless Debian server, using the latest stable release at the time of writing (5.4.3).

First, make sure your Debian server is up to date:

sudo apt-get update && sudo apt-get upgrade

Download the Grafana stable release. I put these packages in the /tmp folder:

cd /tmp/
sudo wget https://dl.grafana.com/oss/release/grafana_5.4.3_amd64.deb

Install adduser and libfontconfig:

sudo apt-get install -y adduser libfontconfig

Install the Grafana package:

sudo dpkg -i grafana_5.4.3_amd64.deb

Start the Grafana server:

sudo systemctl unmask grafana-server.service
sudo systemctl start grafana-server
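
If you also want Grafana to come back up after a reboot, enable the unit (standard systemd, nothing Grafana-specific):

sudo systemctl enable grafana-server.service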

You should now be able to reach your server on the standard web port for Grafana:

http://your.ip.address.here:3000

Install InfluxDB on Debian

I use InfluxDB paired with Home Assistant to easily and effortlessly access data by query.
It is also an ideal way to make data available to Grafana.

First of all, make sure that your Debian server is up to date:

sudo apt-get update && sudo apt-get upgrade

Install Curl:

sudo apt-get install curl

Add the InfluxData repository:

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test "$VERSION_ID" = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test "$VERSION_ID" = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test "$VERSION_ID" = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

Install the InfluxDB service:

sudo apt-get update && sudo apt-get install influxdb

Start the InfluxDB service (the first command applies to Debian 7; on Debian 8 and later, use the systemctl commands):

sudo service influxdb start
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb
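
To verify that the service answers, you can hit InfluxDB's /ping endpoint, which returns HTTP 204 when the daemon is healthy:

curl -sI http://localhost:8086/ping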

And then you are done.
You should now be able to reach your installation on the default port 8086 (in my case, it's 8083 because of other hosted sites – more information here: https://docs.influxdata.com/influxdb/v1.7/administration/ports/).

http://your.ip.address.here:8086/

Datawarehouse Database Cleanup SQL query

IMPORTANT: Always perform a FULL backup of the database before doing anything to it!

This article applies to SCOM 2007 and 2012 as well as 2016 (I haven't tested 1807 yet).

Sometimes you may have event storms that leave you with old entries in the Data Warehouse database, i.e. data that is older than the grooming threshold. This can happen because the grooming processes can't keep up: they run on a regular interval, but only delete a fixed number of rows per run.

The following SQL query may also be valuable if you run into SQL timeouts from the Data Warehouse database when the StandardDataSetMaintenance stored procedure is executed by the RMS.

More on that issue here: http://blogs.technet.com/b/kevinholman/archive/2010/08/30/the-31552-event-or-why-is-my-data-warehouse-server-consuming-so-much-cpu.aspx

To check if this is the case for you, run this SQL Query on the Data Warehouse database:

DECLARE @MaxDataAgeDays INT, @DataSetName NVARCHAR(150)

SET @DataSetName = 'Event'

SELECT @MaxDataAgeDays = MAX(MaxDataAgeDays)
FROM StandardDatasetAggregation
WHERE DatasetId = (
    SELECT DatasetId
    FROM StandardDataset
    WHERE SchemaName = @DataSetName )

SELECT COUNT(*) FROM EventCategory
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM EventChannel
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM EventLoggingComputer
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM EventPublisher
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM EventUserName
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM ManagedEntityProperty
WHERE ToDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

SELECT COUNT(*) FROM RelationshipProperty
WHERE ToDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())

If you get any results here, it means that you are experiencing the issue, and you might want to clean these entries up manually to help SCOM out.

Execute this SQL query on the Data Warehouse database to clean out the old entries:

DECLARE @MaxDataAgeDays INT, @DataSetName NVARCHAR(150)

SET @DataSetName = 'Event'

SELECT @MaxDataAgeDays = MAX(MaxDataAgeDays)
FROM StandardDatasetAggregation
WHERE DatasetId = (
    SELECT DatasetId
    FROM StandardDataset
    WHERE SchemaName = @DataSetName )

DELETE EventCategory
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE EventChannel
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE EventLoggingComputer
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE EventPublisher
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE EventUserName
WHERE LastReceivedDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE ManagedEntityProperty
WHERE ToDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

DELETE RelationshipProperty
WHERE ToDateTime < DATEADD(DAY, -@MaxDataAgeDays, GETUTCDATE())
OPTION (RECOMPILE)

After running this query you will hopefully experience better performance.

Thanks to  https://scompanion.wordpress.com/ for initially writing this article.

Update: SCOM: October 2016 patch makes your Console crash

Awesome, Microsoft – way to go with QA when you make your own core products crash…


It seems that the bundled October patches for Windows Server 2008 and 2012 make the SCOM Console crash when viewing various state views.
The patches in question:

Server 2008 – https://support.microsoft.com/en-us/kb/3192391
Server 2012 – https://support.microsoft.com/en-us/kb/3192392

After uninstalling these in my environment, the Console started working again.

This has now been officially acknowledged, and MS is working on a solution:
https://blogs.technet.microsoft.com/germanageability/2016/10/13/october-2016-windows-patch-kb3192392-might-cause-scom-2012r2-console-to-crash/

Follow the MOMTeam blog for more info on when the fix will arrive:
https://blogs.technet.microsoft.com/momteam/

Update!

The product group released a hotfix for this issue: https://support.microsoft.com/en-us/kb/3200006

 

– F