
Hybrid MIM reporting now available in Azure Active Directory


Howdy folks,

Today, I’m happy to announce the general availability of Microsoft Identity Manager hybrid reporting, which brings MIM activity into Azure Active Directory’s (Azure AD) audit activity reports.

I’ve invited one of our Engineering Managers, David Steadman, to tell you more about this feature and share details on how to enable it. Read on for additional information from him, and be sure to tell us what you think!
Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

———————

As Azure Active Directory continues to make key Identity and Access Management scenarios easier to enable in hybrid environments, our customers increasingly want more visibility into, and control over, who has access to what across those environments. For many organizations today, user access to resources both on-premises and in the cloud is controlled by security group memberships. Customers rely upon Microsoft Identity Manager (MIM) to manage these security groups, and the reporting around this management is something we’ve been working to improve.


We’re happy to let you know that, as of today, Microsoft Identity Manager’s hybrid reporting solution enables this view within the Azure AD audit activity reports. With this feature, you can now monitor self-service group management and self-service password reset activity occurring either on-premises with MIM or in the cloud.

These reports can be consumed within the Azure portal or in Power BI, and can also be exported to generate custom views.


The reports should be visible within one hour of enabling the reporting. Get started today by following the steps listed in our documentation on hybrid identity management audit reporting.

We value your feedback, so please let us know of your experience and suggestions for how we could make this better!

Thanks

David Steadman (Twitter: @TheMIMGuy)

Senior Engineering Manager

Microsoft Identity Division


The Case of Multiple DCs Logging Event 1168 Internal Error: An Active Directory Domain Services Error Has Occurred


Hello Everyone, my name is Zoheb Shaikh and I’m a Premier Field Engineer out of Malaysia. Today, for my first post on AskPFEPlat, I wanted to share something interesting I came across recently, caused by the deletion of a KRBTGT_RODC account.

Before I talk more about the issue, I would like to share a bit of background about the KRBTGT account and its use. Rather than explain the krbtgt account from scratch, here is a short article on the KDC and the krbtgt to take a look at:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa378170(v=vs.85).aspx

“All instances of the KDC within a domain use the domain account for the security principal “krbtgt”. Clients address messages to a domain’s KDC by including both the service’s principal name, “krbtgt”, and the name of the domain. Both items of information are also used in tickets to identify the issuing authority. For information about name forms and addressing conventions, see RFC 4120.”

Likewise, a snip for the RODC krbtgt_##### account:

http://technet.microsoft.com/en-us/library/cc753223(v=WS.10).aspx

“The RODC is advertised as the Key Distribution Center (KDC) for the branch office. The RODC uses a different krbtgt account and password than the KDC on a writable domain controller uses when it signs or encrypts ticket-granting ticket (TGT) requests. This provides cryptographic isolation between KDCs in different branches, which prevents a compromised RODC from issuing service tickets to resources in other branches or a hub site.”

The krbtgt_##### account is unique to each RODC and minimizes impact if the RODC is compromised. The RODC does not have the krbtgt secret. It only has its own krbtgt_##### secret (and other accounts you have allowed). Thus, when removing a compromised RODC, the domain krbtgt account is not lost.
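
If you want to see how this looks in your own domain, the snippet below is a minimal sketch (assuming the RSAT ActiveDirectory PowerShell module is available) that lists the per-RODC krbtgt accounts and shows which one each RODC is linked to via the msDS-KrbTgtLink attribute:

# List the per-RODC krbtgt accounts in the domain
Import-Module ActiveDirectory
Get-ADObject -LDAPFilter '(samAccountName=krbtgt_*)' | Select-Object Name, DistinguishedName

# Show which krbtgt_##### account each RODC is linked to
Get-ADDomainController -Filter { IsReadOnly -eq $true } | ForEach-Object {
    Get-ADComputer $_.Name -Properties 'msDS-KrbTgtLink' |
        Select-Object Name, @{ n = 'KrbTgtLink'; e = { $_.'msDS-KrbTgtLink' } }
}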

Getting back to the scenario: the customer had multiple DCs running Windows Server 2012 R2 and three Read-Only Domain Controllers (RODCs). We observed that the writable DCs were flooded with Event ID 1168, stating “Internal error: An Active Directory Domain Services error has occurred”. They were not experiencing any functional loss because of this, but were worried about the health of the Domain Controllers.

Log Name: Directory Service
Source: Microsoft-Windows-ActiveDirectory_DomainService
Date: 6/2/2017 3:18:01 AM
Event ID: 1168
Task Category: Internal Processing
Level: Error
Keywords: Classic
User: Contoso\contosoRODC$
Computer: ContosoDC.contoso.local
Description:
Internal error: An Active Directory Domain Services error has occurred.
Additional Data
Error value (decimal):
8995
Error value (hex):
2323
Internal ID:
124013b
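
To gauge how widespread these events are, you can query the Directory Service log on each writable DC. A hedged sketch using Get-WinEvent (ContosoDC is the placeholder DC name from the event above):

# Pull the most recent 1168 errors from the Directory Service log
Get-WinEvent -ComputerName ContosoDC -FilterHashtable @{
    LogName = 'Directory Service'
    Id      = 1168
} -MaxEvents 10 | Select-Object TimeCreated, Message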

So we asked, what changes have been made recently?

In this case, the customer was unsure about what exactly happened, and these events seemed to have started out of nowhere. They reported no major AD changes in the past two months and suspected the problem had been lurking for a long time.

So, we investigated the events, and when we looked at them closely we found that Event 1168 was coming from an RODC:

Keywords: Classic

User: Contoso\contosoRODC$

Computer: ContosoDC.contoso.local

Then we checked one of the RODCs and could not see any reference to these. So, we turned Active Directory Diagnostic logging up to 5 and saw Event ID 1084. (Refer to this article for enabling Active Directory Diagnostic logging: https://technet.microsoft.com/en-us/library/cc961809.aspx.)
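
For reference, the diagnostic levels live under the NTDS\Diagnostics registry key. A minimal PowerShell sketch of turning a category up and back down (we’re assuming the Replication Events category is the relevant one here; pick whichever applies in your case, and remember that level 5 is very noisy):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics'

# Raise logging to the maximum level (5) while reproducing the issue
Set-ItemProperty -Path $key -Name '5 Replication Events' -Value 5 -Type DWord

# ...review the Directory Service log, then set it back to the default (0)
Set-ItemProperty -Path $key -Name '5 Replication Events' -Value 0 -Type DWord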

Event ID: 1084

Internal event: Active Directory Domain Services could not update the following object with changes received from the following source directory service. This is because an error occurred during the application of the changes to Active Directory Domain Services on the directory service.

Object:

CN=krbtgt_37540\0ADEL:1gc5th4-88yy-4194-th65-avf12a8621324,CN=Deleted Objects,DC=contoso,DC=local

Object GUID:

0e8478c5-3605-4e8c-8497-1e730c959516

Source directory service:

b137e78d-e45f-4e88-aaee-379dd9b7e66f._msdcs.contoso.local

From this error, it was clear that this was caused by the deletion of the krbtgt_RODC account, and the customer said they may have run a script to delete disabled accounts.

So, we proposed the following options to resolve this issue (a sample recovery command for option 1 follows the list):

  1. Restore the KRBTGT_RODC account from the Active Directory Recycle Bin, if it was enabled.
  2. Restore the KRBTGT_RODC account from a System State backup.
  3. Demote and re-promote the RODC, as the KRBTGT_RODC account is unique to each RODC.
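
For option 1, a hedged PowerShell sketch of locating and restoring the deleted account (this only works if the AD Recycle Bin was enabled before the deletion; verify the match before dropping -WhatIf):

# Find the deleted krbtgt_##### object and restore it from the Recycle Bin
Get-ADObject -LDAPFilter '(&(isDeleted=TRUE)(samAccountName=krbtgt_*))' -IncludeDeletedObjects |
    Restore-ADObject -WhatIf   # remove -WhatIf once you have verified the object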

To reproduce this error in a lab, we followed these steps (a scripted version of steps 2 and 4 follows the list):

  1. Promoted a RODC in the environment
  2. Cleared the ms-DS-KrbTgt-Link attribute on the RODC computer object, removing the reference to the krbtgt account (cn=krbtgt_37540,dc=contoso,dc=com) by setting the value to <not set>.

  3. Once done, checked the event logs and saw the same event (1168).

  4. Added the ms-ds-krbTGT-Link value back to the KRBTGT_RODC account, and the event stopped appearing.
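
A lab-only PowerShell sketch of steps 2 and 4 (RODC01 is a placeholder name; do not run this in production):

# Save the current link so we can put it back
$rodc = Get-ADComputer 'RODC01' -Properties 'msDS-KrbTgtLink'
$link = $rodc.'msDS-KrbTgtLink'

Set-ADObject $rodc -Clear 'msDS-KrbTgtLink'            # step 2: triggers Event 1168
Set-ADObject $rodc -Add @{ 'msDS-KrbTgtLink' = $link } # step 4: events stop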

If you have a RODC in your environment, do keep this in mind. Thanks for reading, and hope this helps!

 

Zoheb

 

Protect multiple cloud app instances using Microsoft Cloud App Security


This post is authored by Arbel Zinger, Program Manager, Microsoft Cloud App Security.

Many organizations use multiple instances of the same cloud application for different business reasons. As a security professional, you need visibility into each of these instances and the option to control each one. We’re happy to announce that Microsoft Cloud App Security can now support and control multiple instances of the same cloud app.

Create multi-instance support policies

Let’s start with a common scenario: the marketing team and the sales team in an organization use the same CRM cloud application, but with two different instances. Why?

  • Marketing data might be shared with many people, including public relations teams, partners, or customers, while sales data (the pipeline, the number of leads, etc.) is mostly classified and should be kept internal.
  • Also, there may be different CRM instances for different geographies, where one region may have stricter information protection rules.

With Microsoft Cloud App Security you can create a policy enforcing that any file from the European CRM instance cannot be shared publicly and you can govern this data automatically through this policy. Or you can set a policy to automatically label each file that is copied from the US CRM instance to the Europe CRM instance as sensitive, using Azure Information Protection labels.

Figure 1. Creating a policy

Another common use case scenario is when a development team is working on a test environment vs. a production environment. With multi-instance support policies in Microsoft Cloud App Security, you can provide even more granular and stricter controls for your production environment.

Connecting multiple user accounts to one identity

Since users may connect to different instances of the same app with different user names, Microsoft Cloud App Security can map each account back to a specific user, a person, to help you investigate alerts in a user-focused way.

Figure 2. Example of multiple accounts for a single user

If you have Microsoft Cloud App Security or Office 365 Cloud App Security deployed, you will see these features already enabled in your tenant. If not, you can try the service to see how it helps provide visibility, data control, and threat protection for your cloud apps.

Learn more and provide feedback

If you would like to learn more, visit the technical documentation for Microsoft Cloud App Security and Office 365 Cloud App Security.

We love hearing your feedback. Let us know what you think at Microsoft Cloud App Security Tech Community.

Faster VMware Backups with SC 1801 DPM


Given the diversity of your workloads running in on-premises data centres and in the Azure public cloud, Azure Backup gives you an array of options to meet your data protection needs. With DPM and Azure Backup, you can back up your

  • VMs running in Azure
  • on-premises System State and files and folders directly to the cloud, and
  • applications and VMs to on-premises disks, and to Azure.

With the System Center 1801 release, DPM has added support for backing up VMs running on VMware. With support for protecting VMware VMs, DPM now provides a complete solution to protect your private cloud running on Hyper-V and VMware. As with DPM 2016, you can back up applications such as SQL Server, SharePoint, and Exchange, as well as files and folders. VMware VM backups are supported using Modern Backup Storage (MBS) and thus can leverage its backup efficiencies: up to 50% storage savings, faster backups, and workload-aware storage. You can use DPM to back up VMware VMs to disks for short-term retention and fast RTO, and to Azure to meet your long-term retention and offsite copy needs.

Key Benefits

  1. Agentless VMware Backup

DPM uses VMware’s VADP API  to protect VMware VMs remotely without installing agents on vCenter or ESXi servers. This frees admins from the hassle of managing agents for VMware VM backup.

  2. Backups agnostic of where VMs are running

First class integration with VMware allows customers to backup VMs stored in different storage targets like NFS and cluster storage seamlessly without any extra manual steps.

  3. Folder level auto-protection

vCenter’s capability to organize VMs in folders helps customers in managing large environments with ease. DPM can discover and protect at folder level, which will protect all current VMs in the folder, and any new VMs that are added to it in the future.

  4. Item Level Recovery

When backing up Windows VMs running on VMware, you can choose during recovery which files to recover and where to recover them to. By eliminating the need to restore the whole VM, you can perform restores faster. Network and storage bandwidth requirements are also minimized.

  5. High storage efficiencies with no over-allocation

With Modern Backup Storage (MBS), DPM can grow and shrink backup storage consumption in line with production servers by leveraging VHDX as backup storage. Thus, MBS helps reduce overall storage consumption by up to 50%.

  6. Backup storage optimization with Workload Aware Storage

Workload-aware storage gives you the flexibility to choose appropriate storage for a given data source type. For example, SQL databases can be stored on highly performant hardware, while VMware VMs that are backed up once a day can be stored on low-cost JBODs.

Learn more about backing up your VMWare VMs using Modern Backup Storage.

You can get SC 1801 DPM up and running in ten minutes by downloading the evaluation VHD. Questions? Reach out to us at AskAzureBackupTeam@microsoft.com.


EMS and Pradeo integration ensures risk-free devices access company resources


This post is authored by Mayunk Jain, Senior Product Marketing Manager, Microsoft 365 Security.

Microsoft and Pradeo are delighted to announce the integration between Microsoft Enterprise Mobility + Security (EMS), available as part of the Microsoft 365 modern workplace, and the Pradeo Security solution. Pradeo’s mobile security expertise provides organizations with advanced, automated, and adaptive management of mobile security for both iOS and Android devices. This partnership allows organizations to further ensure that only trusted devices are allowed to access company resources.

Pradeo and Microsoft EMS team up to deliver a comprehensive, intelligent mobile security solution

Pradeo Security for Mobile Threat Defense integrates with Microsoft Intune to protect devices from leaky and malicious applications, device manipulation, and network exploits before they become a problem. This new integration makes it easy to apply Pradeo’s threat defense technology as an additional input into Intune’s device compliance settings for the EMS Conditional Access evaluation. When a threat is detected, Pradeo immediately applies on-device protections and notifies Intune to mark the device as non-compliant and trigger the appropriate conditional access controls, ensuring that company data stays protected. Once the threat is mitigated, the device compliance status is updated and access is reinstated.

Pradeo’s unique 360° real-time threat detection technology, based on a patented artificial intelligence (AI) process, combines multiple layers of real-time analysis and machine learning to take on-device actions and dynamically leverage Microsoft Intune to update device status and conditional access controls.

This integration will be generally available later this quarter.

Find more on Pradeo Security solution at www.pradeo.com.

Please note, any necessary licenses for Pradeo products must be purchased separately from EMS licenses.

 

Infrastructure + Security: Noteworthy News (February, 2018)


Hi there! Stanislav Belov is back to bring you the next issue of the Infrastructure + Security: Noteworthy News series! As a reminder, the Noteworthy News series covers interesting news, announcements, links, and tips and tricks from the Windows, Azure, and Security worlds on a monthly basis. Enjoy!

Microsoft Azure
Protect machines using managed disks between Azure regions using Azure Site Recovery
We are happy to announce that Azure Site Recovery (ASR) now provides the ability to set up Disaster Recovery (DR) for IaaS VMs using managed disks. With this feature, ASR fulfills an important requirement to become an all-encompassing DR solution for all of your production applications hosted on IaaS VMs in Azure, including applications hosted on VMs with managed disks.
Public preview: “What If” tool for Azure AD Conditional Access policies
We’ve received a lot of feedback about the user impact of Conditional Access. Specifically, with this much power at your fingertips, you need a way to see how CA policies will impact a user under various sign-in conditions.
We heard you and released the public preview of the “What If” tool for Conditional Access. The What If tool helps you understand the impact of the policies on a user sign-in, under conditions you specify. Rather than waiting to hear from your user about what happened, you can simply use the What If tool.
Windows Server
Windows Defender Antivirus in Windows 10 and Windows Server 2016

Windows Defender Antivirus is a built-in antimalware solution that provides security and antimalware management for desktops, portable computers, and servers. This documentation library is aimed at enterprise security administrators who are either considering deployment, or have already deployed, and want to manage and configure Windows Defender AV on PC endpoints in their network.

Windows Client
New OneDrive for Business feature: Files Restore
Files Restore is a complete self-service recovery solution that allows administrators and end users to restore files from any point in time during the last 30 days. If a user suspects their files have been compromised, they can investigate file changes and allow content owners to go back in time to any second in the last 30 days. Now your users and your administrators can rewind changes using activity data to find the exact moment to revert to.
Control the health of Windows 10-based devices
This article details an end-to-end solution that helps you protect high-value assets by enforcing, controlling, and reporting the health of Windows 10-based devices.
Security
Windows Defender ATP support for Windows 7 and Windows 8.1
Starting this summer, customers moving to Windows 10 can add Windows Defender ATP Endpoint Detection & Response (EDR) functionality to their Windows 7 and Windows 8.1 devices, and get a holistic view across their endpoints.
How artificial intelligence stopped an Emotet outbreak
At 12:46 a.m. local time on February 3, a Windows 7 Pro customer in North Carolina became the first would-be victim of a new malware attack campaign for Trojan:Win32/Emotet. In the next 30 minutes, the campaign tried to attack over a thousand potential victims, all of whom were instantly and automatically protected by Windows Defender AV.
Cyber resilience for the modern enterprise
Many organizations are undergoing a digital transformation that leverages a mix of cloud and on-premises assets to increase business efficiency and growth. While increased dependence on technology is necessary for this transformation, and to position the business for success, it does pose risks from security threats. An organization cannot afford to wait until after users and systems have been compromised; it must be proactive. Microsoft helps multiple global enterprises mitigate business impact by offering prescriptive guidance, as well as partnering with them to build a cyber resiliency plan and roadmap.
Retire Those Old Legacy Protocols
Enterprises have done a lot of work to protect their infrastructure with patching and server hardening, but one area that is often overlooked when it comes to credential theft is legacy protocol retirement. These legacy protocols were built before the security requirements of our modern enterprises were understood. Attack surface reduction can be achieved by disabling support for insecure legacy protocols: TLS 1.0 and 1.1, SMBv1, LM/NTLMv1, Digest, etc.
Overview of Petya, a rapid cyberattack
In the first blog post of this 3-part series, we introduced what rapid cyberattacks are and illustrated how they are different in terms of execution and outcome. Next, we will go into some more details on the Petya (aka NotPetya) attack.
Vulnerabilities and Updates
Update 1802 for Configuration Manager Technical Preview Branch – Available Now!
We are excited to let you know that update 1802 for the Technical Preview Branch of System Center Configuration Manager has been released. Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available.
Inside the MSRC – The Monthly Security Update Releases
So how do we decide what goes into a monthly security release? That decision largely rides on required customer action and risk. Required customer action is realized through products where customers need to take action to protect themselves against a vulnerability. For consumers, protection is accomplished through automatic updates.
Support Lifecycle
Changes to Office and Windows servicing and support
On Thursday, February 1, 2018, Microsoft made an announcement that includes, among other things, information regarding support End of Life for the Windows 7 Operating System.
Microsoft Premier Support News
A new service, Security: Cloud App Security – Fundamentals leverages Microsoft Services experience to help customers quickly and efficiently begin productive use of Microsoft Cloud App Security (MCAS). The MCAS service helps you gain visibility and control over cloud apps in use, and detect and limit data leaving the organization uncontrolled. This offering provides you with education and assistance with MCAS setup, features and capabilities, and recommended practices.
Three new Onboarding services have been released – On-Demand Assessment – Windows Client: Remote Engineer, On-Demand Assessment – Windows Client: Onsite Engineer, and On-Demand Assessment – Exchange Server: Onsite Engineer.
On-Demand Assessments are the latest generation of assessments hosted on the Operations Management Suite (OMS) platform. Getting help from Microsoft when you need it just got easier than ever before. By sharing a workspace with your Microsoft Engineer using OMS, you will have a secure and efficient way of sharing data to resolve your issues faster. OMS automatically collects and provides the answers that Microsoft Support needs to get you back to your business as quickly as possible, whether you are in the cloud or on-premises. With OMS, tasks can run in the background to provide Microsoft Support with the information they need to get you back up and running faster.

Sneak Peek: Taking a Spin with Enhanced Linux VMs


Whether you’re a developer or an IT admin, virtual machines are familiar tools that allow users to run entirely separate operating system instances on a host. And despite being a separate OS, we feel there’s a great importance in having a VM experience that feels tightly integrated with the host. We invested in making the Windows client VM experience first-class, and users really liked it. Our users asked us to go further: they wanted that same first-class experience on Linux VMs as well.

As we thought about how we could deliver a better-quality experience–one that achieved closer parity with Windows clients–we found an opportunity to collaborate with the open source folks at XRDP, who have implemented Microsoft’s RDP protocol on Linux.

We’re partnering with Canonical on the upcoming Ubuntu 18.04 release to make this experience a reality, and we’re working to provide a solution that works out of the box. Hyper-V’s Quick Create VM gallery is the perfect vehicle to deliver such an experience. With only three mouse clicks, users will be able to get an Ubuntu VM running that offers clipboard functionality, drive redirection, and much more.

But you don’t have to wait until the release of Ubuntu 18.04 to try out the improved Linux VM experience. Read on to learn how you can get a sneak peek!

Disclaimer: This feature is under development. This tutorial outlines steps to get an enhanced Ubuntu experience on 16.04. Our target experience will be with 18.04. There may be some bugs you discover in 16.04, and that’s okay! We want to gather this data so we can make the 18.04 experience great.

A Call for Testing

We’ve chosen Canonical’s next LTS release, Bionic Beaver, to be the focal point of our investments. In the lead up to the official release of 18.04, we’d like to begin getting feedback on how satisfied users are with the general experience. The experience we’re working towards in Ubuntu 18.04 can be set up in Ubuntu 16.04 (with a few extra steps). We will walk through how to set up an Ubuntu 16.04 VM running in Hyper-V with Enhanced Session Mode.

In the future, you can expect to find an Ubuntu 18.04 image sitting in the Hyper-V Quick Create gallery 😊

NOTE: In order to participate in this tutorial, you need to be on Insider Builds, running at minimum Insider Build No. 17063

Tutorial

Grab the Ubuntu 16.04 ISO from Canonical’s website, found at releases.ubuntu.com. Provision the VM as you normally would and step through the installation process. We created a set of scripts to perform all the heavy lifting to set up your environment appropriately. Once your VM is fully operational, we’ll be executing the following commands inside of it.

#Get the scripts from GitHub
$ sudo apt-get update
$ sudo apt install git
$ git clone https://github.com/jterry75/xrdp-init.git ~/xrdp-init
$ cd ~/xrdp-init/ubuntu/16.04/

#Make the scripts executable and run them...
$ sudo chmod +x install.sh
$ sudo chmod +x config-user.sh
$ sudo ./install.sh

Install.sh will need to be run twice for the script to execute fully (it must perform a reboot mid-script). That is, once your VM reboots, you’ll need to change directory back into the script’s location and run it again. Once you’ve finished running the install.sh script, you’ll need to run config-user.sh:

$ sudo ./config-user.sh

After you’ve run your scripts, shut down your VM. On your host machine in a powershell prompt, execute this command:

Set-VM -VMName <your_vm_name> -EnhancedSessionTransportType HvSocket

Now, when you boot your VM, you will be greeted with an option to connect and adjust your display size. This is an indication that you’re running in enhanced session mode. Click “Connect” and you’re done.
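
If you want to double-check that the transport type change took effect, a quick sketch (we’re assuming the property is exposed via Get-VM on builds that support this feature; the VM name is a placeholder):

Get-VM -VMName <your_vm_name> | Select-Object Name, State, EnhancedSessionTransportType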

What are the Benefits?

These are the features that you get with the new enhanced session mode:

  • Better mouse experience
  • Integrated clipboard
  • Window Resizing
  • Drive Redirection

We encourage you to log any issues you discover to GitHub. This will also give you an idea of already identified issues.

How does this work?

The technology behind this mode is actually the same as how we achieve an enhanced session mode in Windows. It relies on the RDP protocol, implemented on Linux by the open source folks at XRDP, over Hyper-V sockets to light up all the great features that give the VM an integrated feel. Hyper-V sockets, or hv_sock, supply a byte-stream based communication mechanism between the host partition and the guest VM. Think of it as similar to TCP, except it’s going over an optimized transport layer called VMBus. We contributed changes which would allow XRDP to utilize hv_sock.

The scripts we executed did the following:

  • Installs the “linux-azure” kernel to the VM. This carries the hv_sock bits that we need.
  • Downloads the XRDP source code and compiles it with the hv_sock feature turned on (the published XRDP package in 16.04 doesn’t have this set, so we must compile from source).
  • Builds and installs xorgxrdp.
  • Configures the user session for RDP.
  • Launches the XRDP service.

As we mentioned earlier, the steps described above are for Ubuntu 16.04 and will look a little different from 18.04. In fact, with Ubuntu 18.04 shipping with the 4.15 Linux kernel (which already carries the hv_sock bits), we won’t need to apply the linux-azure kernel. The version of XRDP available in 18.04 is already compiled with the hv_sock feature turned on, so there’s no need to build xrdp/xorgxrdp; a simple “apt install” will bring in all the feature goodness!

If you’re not flighting insider builds, you can look forward to having this enhanced VM experience via the VM gallery when Ubuntu 18.04 is released at the end of April. Leave a comment below on your experience or tweet me with your thoughts!

Cheers,

Craig Wilhite (@CraigWilhite)

Introducing Azure Advanced Threat Protection


The nature and requirements of security have changed as the frequency and severity of cyber attacks have grown dramatically. With the increase in sophistication and velocity of these attacks, current IT security tools provide limited protection when user credentials, either on-premises or in the cloud, are compromised. And when there is an incident, responding to it in real-time is almost impossible.

Many of you have deployed Advanced Threat Analytics (ATA), our on-premises solution to help detect suspicious activity. Today Microsoft is excited to announce that Azure Advanced Threat Protection (ATP) is now generally available. Azure ATP is a cloud-based security solution that helps you detect and investigate security incidents across your networks. It supports the most demanding workloads of security analytics for the modern enterprise.

What is Azure ATP?

For security operators, analysts, and professionals who are struggling to detect advanced attacks in a hybrid environment, Azure ATP is a threat protection solution that helps:

  • Detect and identify suspicious user and device activity with learning-based analytics
  • Leverage threat intelligence across the cloud and on-premises environments
  • Protect user identities and credentials stored in Active Directory
  • Provide clear attack information on a simple timeline for fast triaging
  • Monitor multiple entry points through integration with Windows Defender Advanced Threat Protection

Azure ATP detects advanced malicious attacks by leveraging both cloud and on-premises signals, reduces false positives, and provides an end-to-end investigation experience across endpoint and identity through Windows Defender ATP integration.

Detecting attacks

Azure ATP monitors entity (user, device, resources) behavior to create a baseline and then detects anomalies with the adaptive built-in intelligence, giving you insights into your identity and network traffic so you can quickly respond.

As shown in the diagram below, a typical attack is launched against an entity such as a user or their device; the attacker then quickly looks to move laterally until gaining access to valuable assets.

To help combat this, Azure ATP is shipped with a set of deterministic models that identify both common and newly discovered implementations of attacker techniques such as Pass-the-Hash, Overpass-the-Hash, Golden Ticket, and others.

Investigation

Azure ATP is designed to reduce the noise from alerts and provides only relevant and important suspicious activities with a simple, real-time view of the attack timeline. This allows you to focus on what matters, leveraging the intelligence provided by our analytics.

Additionally, seamless integration with the powerful features of Windows Defender Advanced Threat Protection provides yet another layer of security by detecting and protecting against advanced persistent threats on the operating system itself. Azure ATP’s attack timeline is functional, clear, and convenient.

Cloud-based intelligence

Leveraging the scale and intelligence of Azure, when we detect a new possible threat or attack method, we can automatically update all active tenants. This means that your threat detection capabilities are always up to date.

Azure ATP is part of Microsoft 365’s Enterprise Mobility + Security E5 suite. You can learn more about Azure ATP here, and when you are ready, start a trial!

Adam Hall (on behalf of the entire Azure ATP team)


Latest SAML Vulnerability: Not present in Azure AD and ADFS

$
0
0

Howdy folks,

Recently, a security vulnerability was discovered in a number of SAML SSO implementations which makes it possible for a signed SAML token to be manipulated to impersonate another user or to change the scope of a user’s authorization in some circumstances. The vulnerability is described in the finder’s blog, here. Many of you have been asking whether this affects Microsoft identity servers and services.

We can confirm that Microsoft Azure Active Directory, Azure Active Directory B2C, and Microsoft Windows Server Active Directory Federation Services (ADFS) are NOT affected by this vulnerability. The Microsoft account system is also NOT affected. Additionally, we can confirm that neither the Windows Identity Foundation (WIF) nor the ASP.NET WS-Federation middleware have this vulnerability.

While Azure Active Directory and ADFS aren’t affected by this for incoming SAML tokens, you should ensure that any applications you use that consume SAML tokens aren’t affected. We recommend you contact the providers of your SAML-based applications.

Best Regards,
Alex

PKI Basics: How to Manage the Certificate Store


Hello all! Nathan Penn and Jason McClure here to cover some PKI basics, techniques to effectively manage certificate stores, and a script we developed to deal with a common certificate store issue we have encountered in several enterprise environments (certificate truncation due to too many installed certificate authorities).

PKI Basics

To get started we need to review some core concepts of how PKI works. As you browse secure sites on the Internet and/or within your organization, your computer leverages certificates to build trust with the remote site it is communicating with. Some of these certificates are local and installed on your computer, while some are installed on the remote site. If we were to browse to https://support.microsoft.com we would notice:

The lock lets us know that the communication between our computer and the remote site is encrypted. But why, and how do we establish that trust? When we typed https://support.microsoft.com, the site on the other end sent its certificate that looks like this:

Certificate Chain

We won’t go into the process the owner of the site went through to get the certificate, as the process varies for certificates used inside an organization versus certificates used for sites exposed to the Internet. Regardless of the process used by the site to get the certificate, the Certificate Chain, also called the Certification Path, is what establishes the trust relationship between the computer and the remote site and is shown below.

As you can see, the certificate chain is a hierarchical collection of certificates that leads from the certificate the site is using (support.microsoft.com) back to a root of trust, the Trusted Root Certification Authority (CA). In the above example, DigiCert Baltimore Root is the Trusted Root CA. All certificates in between the site’s certificate and the Trusted Root CA certificate are Intermediate Certificate Authority certificates. To establish the trust relationship between a computer and the remote site, the computer must have the entirety of the certificate chain installed within what is referred to as the local Certificate Store. When this happens, trust can be established and you get the lock icon shown above. But if we are missing certs, or they are in the incorrect location, we start to see this error:

Certificate Store

The certificate store is separated into two primary components: a Computer store and a User store. The primary difference is that certificates loaded into the Computer store are global to all users on the computer, while certificates loaded into the User store are only accessible to the logged-on user. To keep things simple, we will focus solely on the Computer store in this post. Leveraging the Certificates MMC (certmgr.msc), we have a convenient interface to quickly and visually identify the certificates currently loaded into the local Certificate Store. This tool also provides the capability to efficiently review which certificates have been loaded, and whether they have been loaded into the correct location. This means we have the ability to view the certificates that have been loaded as Trusted Root CAs, Intermediate CAs, and/or both (hmmm… that doesn’t sound right).

Identifying a Trusted Root CA from an Intermediate CA

Identifying a Root CA versus an Intermediate CA is a fairly simple concept once explained. Trusted Root CAs are the certificate authorities that establish the top level of the hierarchy of trust. By definition, this means that any certificate belonging to a Trusted Root CA is generated, or issued, by itself. This makes a Trusted Root CA certificate exceptionally easy to identify, as the “Issued To” and “Issued By” attributes will always match.

Alternatively, an Intermediate CA is a Certificate Authority that builds upon the trust of some other CA, which can be either another Intermediate CA or a Trusted Root CA. This makes identifying an Intermediate CA certificate just as easy, as the “Issued To” and “Issued By” attributes must be different.

To sum up: a Trusted Root CA is issued by itself, while an Intermediate CA is issued by something else. Simple stuff, right?
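
You can turn that rule into a quick audit with PowerShell. A minimal sketch that flags certificates sitting in the wrong store (self-issued certs have matching Subject and Issuer):

# Intermediates mistakenly placed in the Trusted Root store
Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object { $_.Subject -ne $_.Issuer } |
    Select-Object Subject, Issuer, Thumbprint

# Self-issued (root) certs mistakenly placed in the Intermediate store
Get-ChildItem Cert:\LocalMachine\CA |
    Where-Object { $_.Subject -eq $_.Issuer } |
    Select-Object Subject, Thumbprint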

Managing the Certificate Store

We know about remote site certificates, the certificate chain they rely on, the local certificate store, and the difference between Root CAs and Intermediate CAs now. But what about managing it all? On individual systems that are not domain joined, managing certificates can be easily accomplished through the same local Certificates MMC shown previously. In addition to being able to view the certificates currently loaded, the console provides the capability to import new, and delete existing certificates that are located within.

On domain-joined systems, it is recommended to manage PKI at the enterprise level (which may explain why we named one of the MMC snap-ins Enterprise PKI). This is done through the Group Policy MMC (gpmc.msc), and we would typically make the changes to a single policy linked at the domain level. Using this approach, we can ensure that all systems in the domain have the same certificates loaded and in the appropriate store. It also provides the ability to add new certificates and remove unnecessary certificates as needed.

Too Many Certs

On several occasions, both of us have gone into enterprise environments experiencing authentication oddities and, after a little analysis, traced the issue to Schannel event 36885.

This event is caused by the number of certificates loaded into the computer’s Trusted Root Certificate Authorities (TRCA) and Intermediate Certificate Authorities (ICA) stores. The most important part of the above warning is the following: “Currently, this server trusts so many certificate authorities that the list has grown too long. This list has thus been truncated.” Unfortunately, here is what we don’t know: Where was the list truncated, which certificate authorities did it grab, which certificate authorities did it NOT grab, and do I have all the certs that will be needed to build any of the given certificate chains for the requests that will be made?

At this point many of you are asking, “How many is too many?” The answer is: it depends. The limitation is based on the size of the store, which is limited to 16 kilobytes, not on the number of certificates.
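
As a rough health check, you can count the certificates in each store and sum their raw sizes. A sketch (the sum of raw certificate bytes is only a proxy for the list the server actually sends, but it gives a feel for how close you are):

foreach ($store in 'Root', 'CA', 'AuthRoot') {
    $certs = Get-ChildItem "Cert:\LocalMachine\$store"
    $bytes = ($certs | ForEach-Object { $_.RawData.Length } | Measure-Object -Sum).Sum
    '{0,-8} {1,4} certs, ~{2:N0} bytes' -f $store, $certs.Count, $bytes
}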

In December 2012, KB931125 was released, intended only for client SKUs. However, it was also offered for server SKUs for a short time on Windows Update and WSUS. This package installed all TRCAs enrolled in the Microsoft Trusted Root Program (more than 330). While we offer a Fix it tool for individual systems here (https://support.microsoft.com/en-us/help/2801679/ssl-tls-communication-problems-after-you-install-kb-931125), this wasn’t identified as an issue in several environments.

On a small scale, customers that experience certificate bloat issues can leverage the Certificate MMC to deal with the issue on individual systems. Unfortunately, the ability to clear the certificate store on clients and servers on a targeted and massive scale with minimal effort does not exist. On a larger scale, customers would be required to leverage the Microsoft built-in “Certutil” application via a script. This technique requires the scripter to identify and code in the thumbprint of every certificate that is to be purged on each system (also very labor intensive).

Introducing CertPurge

Overview of Script

CertPurge will remove all locally installed certificates from the Trusted Root Certification Authorities, Intermediate Certification Authorities, and Third-Party Root Certification Authorities stores on the local machine. Only certificates that are being deployed to the machine from Group Policy will remain.

What it solves

The ability to clear the certificate store on clients and servers on a targeted and massive scale with minimal effort. This is needed to handle certificate bloat issues that can ultimately result in authentication issues. On a small scale, customers that experience certificate bloat issues can leverage the built-in Certificates MMC to deal with the issue on a system-by-system basis as a manual process. On a larger scale, customers would be required to leverage the Microsoft built-in “Certutil” application via a script. This technique requires the scripter to identify and code in the thumbprint of every certificate that is to be purged on each system (also very labor intensive).

How it works

CertPurge scans the following registry locations (“HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates” and “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EnterpriseCertificates”) and builds an array of all entries found under the Trusted Root Certification Authorities, Intermediate Certification Authorities, and Third-Party Root Certification Authorities paths. CertPurge then leverages the array to delete every subkey.
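
This isn’t the published CertPurge.ps1 itself, but a read-only PowerShell sketch of the same idea: back up the two hives first, then enumerate the registry-backed store entries (thumbprints) that a purge would remove:

# Back up both hives first (restore later by importing the .reg files)
reg.exe export 'HKLM\SOFTWARE\Microsoft\SystemCertificates' "$env:windir\SystemCertificates.reg" /y
reg.exe export 'HKLM\SOFTWARE\Microsoft\EnterpriseCertificates' "$env:windir\EnterpriseCertificates.reg" /y

# List (without deleting) the entries a purge would target
foreach ($root in 'HKLM:\SOFTWARE\Microsoft\SystemCertificates',
                  'HKLM:\SOFTWARE\Microsoft\EnterpriseCertificates') {
    foreach ($store in 'ROOT', 'CA', 'AuthRoot') {
        $path = "$root\$store\Certificates"
        if (Test-Path $path) {
            Get-ChildItem $path | Select-Object -ExpandProperty PSChildName  # thumbprints
        }
    }
}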

Backout Mechanisms

Prior to performing any operations (i.e., building the array, purging certificates), CertPurge generates a backup of the “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates” and “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EnterpriseCertificates” paths in their entirety into a .reg file stored in the C:\Windows directory. In the event that required certificates are purged, an administrator can import the backup files and restore all purged certificates. (NOTE: This is a manual process, so testing prior to implementation on a mass scale is highly recommended.)

Why certificates pushed via GPO are not affected

Certificates pushed via GPO are stored in the “HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates” path. As CertPurge does not target this location, all certificates deployed via GPO are unaffected.

What to do if not all required certificates are being published via GPO

KB 293781 details the certificates that are required for the operating system to operate correctly. Removal of the certificates identified in the article may limit functionality of the operating system or may cause the computer to fail. Ensure at a minimum that these certificates are published via a GPO prior to implementing the CertPurge application/script. If a required certificate (either one from the KB, or one specific to the customer environment) that is not being deployed via GPO is purged, the recommended approach is as follows:

1. Restore certificates to the individual machine using the backup registry file.

2. Leveraging the Certificates MMC, export the required certificates to file.

3. Update the GPO that is deploying certificates by importing the required certificates.

4. Rerun CertPurge on the machine identified in step 1 to re-purge all certificates.

5. Execute a GPUpdate on the machine identified in step 1 to receive the updated GPO certificate deployment.

6. TEST!!!

7. Did we mention Test?

The Goods

CloseOut and Additional Resources

At this point, hopefully we all understand some of the basics: what a certificate chain is, the difference between a Root certificate and an Intermediate/Issuing certificate, and where those certificates should be located on our systems. We also now have a method for cleaning things up in bulk should they get out of control and you need to re-baseline systems en masse. Let us know what you think, and if there is another area you want us to expand on next.

Additional Resources:

https://support.microsoft.com/en-us/help/2464556/failed-tls-connection-between-unified-communications-peers-generates-a

https://support.microsoft.com/en-us/help/2801679/ssl-tls-communication-problems-after-you-install-kb-931125

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn265983(v=ws.11)

Disclaimer

The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Download CertPurge.ps1 here

On-demand webinar: Identity-driven unified endpoint management with EMS


This post is authored by Vladimir Petrosyan, Sr. Product Marketing Manager.

Microsoft Enterprise Mobility + Security (EMS) provides an identity-driven unified endpoint management (UEM) solution that offers a holistic approach to solve mobility and security challenges as organizations go through their digital transformation.

Watch this free webinar to see how EMS can help you manage and secure company data across your iOS, macOS, Android, and Windows 10 devices.

This one-hour session includes:

  • Securing access to company email, files, and apps stored in the cloud and on-premises
  • Protecting company data on all endpoints
  • Modernizing Windows 10 management

Azure AD and ADFS best practices: Defending against password spray attacks


Howdy folks,

As long as we’ve had passwords, people have tried to guess them. In this blog, we’re going to talk about a common attack which has become MUCH more frequent recently and some best practices for defending against it. This attack is commonly called password spray.

In a password spray attack, the bad guys try the most common passwords across many different accounts and services to gain access to any password-protected assets they can find. Usually these span many different organizations and identity providers. For example, an attacker will use a commonly available toolkit like Mailsniper to enumerate all of the users in several organizations and then try “P@$$w0rd” and “Password1” against all of those accounts. To give you an idea, an attack might look like this:

Target User          Target Password
User1@org1.com       Password1
User2@org1.com       Password1
User1@org2.com       Password1
User2@org2.com       Password1
User1@org1.com       P@$$w0rd
User2@org1.com       P@$$w0rd
User1@org2.com       P@$$w0rd
User2@org2.com       P@$$w0rd

This attack pattern evades most detection techniques because from the vantage point of an individual user or company, the attack just looks like an isolated failed login.

For attackers, it’s a numbers game: they know that there are some passwords out there that are very common. Even though these most common passwords account for only 0.5-1.0% of accounts, the attacker will get a few successes for every thousand accounts attacked, and that’s enough to be effective.

They use the accounts to get data from emails, harvest contact info, and send phishing links, or just expand the password spray target group. The attackers don’t care much about who those initial targets are, just that they have some success that they can leverage.

The good news is that Microsoft has many tools already implemented and available to blunt these attacks, and more are coming soon. Read on to see what you can do now and in the coming months to stop password spray attacks.

Four easy steps to disrupt password spray attacks

Step 1: Use cloud authentication

In the cloud, we see billions of sign-ins to Microsoft systems every day. Our security detection algorithms allow us to detect and block attacks as they’re happening. Because these are real time detection and protection systems driven from the cloud, they are available only when doing Azure AD authentication in the cloud (including Pass-Through Authentication).

Smart Lockout

In the cloud, we use Smart Lockout to differentiate between sign-in attempts that look like they’re from the valid user and sign-ins from what may be an attacker. We can lock out the attacker while letting the valid user continue using the account. This prevents denial-of-service on the user and stops overzealous password spray attacks. This applies to all Azure AD sign-ins regardless of license level and to all Microsoft account sign-ins.

Tenants using Active Directory Federation Services (ADFS) will be able to use Smart Lockout natively in ADFS in Windows Server 2016 starting in March 2018; look for this ability to come via Windows Update.

IP Lockout

IP lockout works by analyzing those billions of sign-ins to assess the quality of traffic from each IP address hitting Microsoft’s systems. With that analysis, IP lockout finds IP addresses acting maliciously and blocks those sign-ins in real-time.

Attack Simulations

Now available in public preview, Attack Simulator, part of Office 365 Threat Intelligence, enables customers to launch simulated attacks on their own end users, determine how their users behave in the event of an attack, and update policies to ensure that appropriate security tools are in place to protect the organization from threats like password spray attacks.

Things we recommend you do ASAP:

  1. If you’re using cloud authentication, you’re covered
  2. If you’re using ADFS or another hybrid scenario, look for an ADFS upgrade in March 2018 for Smart Lockout
  3. Use Attack Simulator to proactively evaluate your security posture and make adjustments

Step 2: Use multi-factor authentication

A password is the key to accessing an account, but in a successful password spray attack, the attacker has guessed the correct password. To stop them, we need to use something more than just a password to distinguish between the account owner and the attacker. The three ways to do this are below.

Risk-based multi-factor authentication

Azure AD Identity Protection uses the sign-in data mentioned above and adds on advanced machine learning and algorithmic detection to risk score every sign-in that comes in to the system. This enables enterprise customers to create policies in Identity Protection that prompt a user to authenticate with a second factor if and only if there’s risk detected for the user or for the session. This lessens the burden on your users and puts blocks in the way of the bad guys. Learn more about Azure AD Identity Protection here.

Always-on multi-factor authentication

For even more security, you can use Azure MFA to require multi-factor authentication for your users all the time, both in cloud authentication and ADFS. While this requires end users to always have their devices and to more frequently perform multi-factor authentication, it provides the most security for your enterprise. This should be enabled for every admin in an organization. Learn more about Azure Multi-Factor Authentication here, and how to configure Azure MFA for ADFS.

Azure MFA as primary authentication

In ADFS 2016, you have the ability to use Azure MFA as primary authentication for passwordless authentication. This is a great tool to guard against password spray and password theft attacks: if there’s no password, it can’t be guessed. This works great for all types of devices with various form factors. Additionally, you can now use password as the second factor only after your OTP has been validated with Azure MFA. Learn more about using password as the second factor here.

Things we recommend you do ASAP:

  1. We strongly recommend enabling always-on multi-factor authentication for all admins in your organization, especially subscription owners and tenant admins. Seriously, go do this right now.
  2. For the best experience for the rest of your users, we recommend risk-based multi-factor authentication, which is available with Azure AD Premium P2 licenses.
  3. Otherwise, use Azure MFA for cloud authentication and ADFS.
  4. In ADFS, upgrade to ADFS on Windows Server 2016 to use Azure MFA as primary authentication, especially for all your extranet access.

Step 3: Better passwords for everyone

Even with all the above, a key component of password spray defense is for all users to have passwords that are hard to guess. It’s often difficult for users to know how to create hard-to-guess passwords. Microsoft helps you make this happen with these tools.

Banned passwords

In Azure AD, every password change and reset runs through a banned password checker. When a new password is submitted, it’s fuzzy-matched against a list of words that no one, ever, should have in their password (and l33t-sp3@k spelling doesn’t help). If it matches, it’s rejected, and the user is asked to choose a password that’s harder to guess. We build the list of the most commonly attacked passwords and update it frequently.
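
To illustrate the idea (and emphatically not Microsoft’s actual algorithm), here is a toy PowerShell sketch of fuzzy matching: normalize common substitutions, then check against a banned list:

# Toy banned-password check - illustration only
function Test-BannedPassword {
    param([string]$Password, [string[]]$BannedList = @('password', 'letmein', 'qwerty'))

    # Undo common l33t-sp3@k substitutions before comparing
    $normalized = $Password.ToLowerInvariant() -replace '@', 'a' -replace '\$', 's' -replace '0', 'o' -replace '3', 'e' -replace '!', 'i'

    foreach ($banned in $BannedList) {
        if ($normalized -like "*$banned*") { return $true }  # reject this password
    }
    return $false
}

Test-BannedPassword 'P@$$w0rd2018'   # True - normalizes to a string containing "password"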

Custom banned passwords

To make banned passwords even better, we’re going to allow tenants to customize their banned password lists. Admins can choose words common to their organization (famous employees and founders, products, locations, regional icons, etc.) and prevent them from being used in their users’ passwords. This list will be enforced in addition to the global list, so you don’t have to choose one or the other. It’s in limited preview now and will be rolling out this year.

Banned passwords for on-premises changes

This spring, we’re launching a tool to let enterprise admins ban passwords in hybrid Azure AD-Active Directory environments. Banned password lists will be synchronized from the cloud to your on-premises environments and enforced on every domain controller with the agent. This helps admins ensure users’ passwords are harder to guess no matter where, cloud or on-premises, the user changes her password. This launched to limited private preview in February 2018 and will go to GA this year.

Change how you think about passwords

A lot of common conceptions about what makes a good password are wrong. Usually something that should help mathematically actually results in predictable user behavior: for example, requiring certain character types and periodic password changes both result in specific password patterns. Read our password guidance whitepaper for way more detail. If you’re using Active Directory with PTA or ADFS, update your password policies. If you’re using cloud managed accounts, consider setting your passwords to never expire.

Things we recommend you do ASAP:

  1. When it’s released, install the Microsoft banned password tool on-premises to help your users create better passwords.
  2. Review your password policies and consider setting them to never expire so your users don’t use seasonal patterns to create their passwords.

Step 4: More awesome features in ADFS and Active Directory

If you’re using hybrid authentication with ADFS and Active Directory, there are more steps you can take to secure your environment against password spray attacks.

The first step: for organizations running ADFS 2.0 or Windows Server 2012, plan to move to ADFS in Windows Server 2016 as soon as possible. The latest version will be updated more quickly with a richer set of capabilities, such as extranet lockout. And remember: we’ve made it really easy to upgrade from Windows Server 2012 R2 to 2016.

Block legacy authentication from the Extranet

Legacy authentication protocols don’t have the ability to enforce MFA, so the best approach is to block them from the extranet. This will prevent password spray attackers from exploiting the lack of MFA on those protocols.

Enable ADFS Web Application Proxy Extranet Lockout

If you do not have extranet lockout in place at the ADFS Web Application proxy, you should enable it as soon as possible to protect your users from potential password brute force compromise.
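
A sketch of enabling it with the ADFS PowerShell module (the threshold and window below are example values; tune the threshold to sit below your AD account lockout policy):

# Enable extranet (soft) lockout on the ADFS farm
Set-AdfsProperties -EnableExtranetLockout $true `
                   -ExtranetLockoutThreshold 10 `
                   -ExtranetObservationWindow (New-TimeSpan -Minutes 30)

# Verify the settings
Get-AdfsProperties | Select-Object EnableExtranetLockout, ExtranetLockoutThreshold, ExtranetObservationWindow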

Deploy Azure AD Connect Health for ADFS

Azure AD Connect Health captures IP addresses recorded in the ADFS logs for bad username/password requests, gives you additional reporting on an array of scenarios, and provides additional insight to support engineers when opening assisted support cases.

To deploy, download the latest version of the Azure AD Connect Health Agent for ADFS on all ADFS Servers (2.6.491.0). ADFS servers must run Windows Server 2012 R2 with KB 3134222 installed or Windows Server 2016.

Use non-password-based access methods

Without a password, a password can’t be guessed. These non-password-based authentication methods are available for ADFS and the Web Application Proxy:

  1. Certificate based authentication allows username/password endpoints to be blocked completely at the firewall. Learn more about certificate based authentication in ADFS
  2. Azure MFA, as mentioned above, can be used as a second factor in cloud authentication and ADFS 2012 R2 and 2016. But, it also can be used as a primary factor in ADFS 2016 to completely stop the possibility of password spray. Learn how to configure Azure MFA with ADFS here
  3. Windows Hello for Business, available in Windows 10 and supported by ADFS in Windows Server 2016, enables completely password-free access, including from the extranet, based on strong cryptographic keys tied to both the user and the device. This is available for corporate-managed devices that are Azure AD joined or Hybrid Azure AD joined as well as personal devices via “Add Work or School Account” from the Settings app. Get more information about Hello for Business.

Things we recommend you do ASAP:

  1. Upgrade to ADFS 2016 for faster updates
  2. Block legacy authentication from the extranet.
  3. Deploy Azure AD Connect Health agents for ADFS on all your ADFS servers.
  4. Consider using a password-less primary authentication method such as Azure MFA, certificates, or Windows Hello for Business.

Bonus: Protecting your Microsoft accounts

If you’re a Microsoft account user:

  • Great news, you’re protected already! Microsoft accounts also have Smart Lockout, IP lockout, risk-based two-step verification, banned passwords, and more.
  • But, take two minutes to go to the Microsoft account Security page and choose “Update your security info” to review your security info used for risk-based two-step verification
  • Consider turning on always-on two-step verification here to give your account the most security possible.

The best defense is following the recommendations in this blog

Password spray is a serious threat to every service on the Internet that uses passwords, but taking the steps in this blog will give you maximum protection against this attack vector. And, because many kinds of attacks share similar traits, these are just good protection suggestions, period. Your security is always our utmost priority, and we’re continually working hard to develop new, advanced protections against password spray and every other type of attack out there. Use the ones above today and check back frequently for new tools to defend against the bad guys out there on the Internet.

I hope you’ll find this information useful. As always, we’d love to hear any feedback or suggestions you have.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

EMS news roundup: February 2018


Here’s a quick recap of news and announcements for EMS last month:

Decentralized digital identities and blockchain: the future as we see it

Microsoft is committed to working closely with our customers, partners, and the community to unlock the next generation of digital identity-based experiences. People need a secure, encrypted digital hub where they can store their identity data and easily control access to it. We’re excited to partner with so many people in the industry who are making incredible contributions to this space. Read the full post.

Protect multiple cloud-app instances using Microsoft Cloud App Security

Different teams within your organization may be using multiple instances of the same cloud apps. Now Cloud App Security can support and control multiple instances of cloud apps. Read the full post.

Hybrid MIM reporting now available in Azure Active Directory

You can now view hybrid reporting within the Azure AD audit activity reports. This feature will help you monitor activity across your hybrid environment. Read the full post.

EMS and Pradeo integration coming soon

Soon you’ll be able to inform your device compliance settings in Intune with unique real-time threat detection technology from Pradeo Security for Mobile Threat Defense, strengthening your Conditional Access policies. Read the full post.

Coming April 2 for App Devs: user consent experience improvements

On April 2, 2018, we’ll turn on a set of improvements in the user consent experience for applications that work with Azure AD and the Microsoft account service. Soon, links to applications’ own terms of use and privacy statements will be included in the user experience. To help you prepare for these changes, read the full post.

Microsoft Cloud App Security threat protection just got better

New policies in the Cloud App Security admin experience can help you address different detection and use case scenarios with innovation in visibility, control, and protection of your cloud apps. Read the full post.

Bridge Windows modernization through co-management

Move workloads from your traditional, domain-joined and ConfigMgr-managed solutions to modern management at your own pace with co-management. Avanade shares its co-management experience. Read the full post.

Simplified application management using wildcards in Azure AD Application Proxy

Now you can use wildcards to publish many apps at once, reducing administrative work and opportunities for error with an improved management experience. Read the full post.

Print to corporate printers from Azure AD-joined Windows 10 devices

With the release of Hybrid Cloud Print, a solution specifically built for your Azure AD-joined and Intune-managed devices, your users can print from anywhere with an internet connection. This is a key step to going cloud-only for managing Windows 10 laptops. Read the full post.

Azure AD naming policy for Office 365 groups now in public preview

New features in Azure Active Directory make it easier for IT to manage Office 365 groups created by employees, including Groups Naming Policy, which enforces consistent naming conventions and lets you block certain words. Read the full post.

New information protection capabilities across devices, apps, on-premises, and cloud

New releases and enhancements to our broad offering of information protection capabilities can help you prepare for GDPR or continue strengthening your security strategy. Check out the list of new releases and previews; read the full post.

Powershell release speeds application deployment in Azure AD Application Proxy

Use PowerShell to deploy your on-premises applications faster and manage them more easily. This is great for our customers who deploy multiple Application Proxy apps; you can now automate that process. Read the full post.

The What If tool for Azure AD Conditional Access is now in public preview

To help you predict how conditional access policies will affect your users under various sign-in conditions, we’ve created the What If tool. Now you can simulate user sign-ins to test drive your policies under conditions you specify and generate simulation reports to help you troubleshoot. Read the full post.

February Azure AD B2C feature updates

New updates to the Azure AD B2C service include new options for the end user experience. Check out the list of updates for February; read the full post.

Update 1802 for Configuration Manager Technical Preview Branch is now available

This month’s ConfigMgr preview features include improvements in OSD, improvements in Cloud Management Gateway, features for Windows 10 and Office 365, improvements in Software Center, site server high availability, and other improvements. Read the full post.

Improvements to the protection stack in Azure Information Protection


We’re constantly striving to make the process of protecting information easier and simpler for both users and admins. To help with the initial step in protecting your information, we’re happy to announce that starting February 2018, all Azure Information Protection eligible tenants will have Azure Information Protection on by default. Any organization which has Office E3 and above or EMS E3 and above service plans can now get a head start in protecting information through Azure Information Protection.

The new version of Office 365 Message Encryption, which was announced at Microsoft Ignite 2017, leverages the encryption and protection capabilities of Azure Information Protection. We have continued to make significant improvements in the product since its initial launch and are excited to announce new capabilities in both Office 365 Message Encryption and Azure Information Protection.

Protection on by default

Starting February 2018, Microsoft will enable the protection capability in Azure Information Protection automatically for new Office 365 E3 or above subscriptions. Tenant administrators can check the protection status in the Office 365 administrator portal.

EMS E3/E5 subscriptions and the Azure Information Protection P1 and P2 plans offer standardized and approachable labels and a classification taxonomy. The default global policy will now configure Azure Information Protection based encryption and rights management for the following sublabels:

  • Confidential \ All Employees
  • Confidential \ Recipients Only
  • Highly Confidential \ All Employees
  • Highly Confidential \ Recipients Only

Please refer to our documentation for more details.

Office Message Encryption on by default

Along with enabling the protection service, Microsoft has now enabled the Office 365 Message Encryption capabilities by default for any new Office E3 or above subscription.

Richer collaboration specifically for email scenarios

Azure Information Protection’s powerful classification and labeling capabilities have enabled organizations to easily collaborate within and across organizational boundaries. Administrators could create labels backed by protection policies that promoted group collaboration (e.g., finance@contoso.com) and cross-company collaboration (e.g., fabrikam.com). However, until now, the groups and users specified in the label definitions (e.g., fabrikam.com, finance@contoso.com) needed to be part of the AAD identity fabric.

Since Microsoft Ignite 2017, Office 365 Message Encryption has enabled organizations to send Azure Information Protection encrypted and rights-managed emails to anyone with any email address. However, administrators expressed frustration at their inability to create effective Azure Information Protection labels, backed with protection, that could include non-AAD users and groups. With this month’s update of the Azure Information Protection service, administrators can now include non-AAD domains in the template definition, which specifically assists in the cross-company or non-AAD collaboration scenarios of Office 365 Message Encryption. In the snip below, Contoso’s administrator has defined a custom protection permission for recipients who have a gmail.com, hotmail.com, or onpremcompany.com address.

New policy Encrypt-Only

Do Not Forward has been the only out-of-box and default policy available to our customers. While Do Not Forward is very useful in securing content (recipients cannot forward, print, edit, or copy it), customers have indicated that it is far too restrictive and does not help in today’s collaborative environment.

We are releasing a new out-of-the-box policy called Encrypt-only. With this policy, users can send encrypted email to any recipient, whether they are inside or outside the organization, and the protection follows the lifecycle of the email. However, unlike Do Not Forward, recipients can copy, print and forward the email. Encryption will follow the forwarded mail and no one other than the original sender can remove the protection of the email. This new policy provides more flexibility in the type of protection that can be applied to your sensitive emails. You can learn more about the Encrypt-Only policy here.

A few questions you might have:

How does this announcement for enabling Azure Information Protection by default affect existing Office 365 tenants?

There is no impact to existing Office 365 tenants. They would still need to enable Azure Information Protection manually through Office 365 or through PowerShell cmdlets.
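For the PowerShell route, a minimal sketch using the AADRM module (run from an admin workstation with the module installed):

# Connect to the Azure Rights Management service, then activate protection
Connect-AadrmService
Enable-Aadrm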

However, for tenants who have enabled Azure Information Protection, Office 365 Message Encryption will be enabled by default.

How does it affect tenants who wish to migrate from AD RMS to Azure Information Protection?

Going forward, if you are creating a cloud subscription for migrating from AD RMS to Azure RMS, please manually disable the Rights Management service before starting the migration.

Will SharePoint Online IRM feature also be configured automatically?

No, that still needs to be done manually.

We feel these updates will reduce the work admins need to do to secure emails within organizations. Let us know if you have any feedback and we’ll try our best to improve your experiences. Engage with us on Yammer or Twitter and let us know what’s important to you by voting on UserVoice!

Update Rollup 2 for Configuration Manager Current Branch 1710 is now available


A second update rollup for System Center Configuration Manager current branch, version 1710, is now available. This update is available for installation in the Updates and Servicing node of the Configuration Manager console. Please note that if the Service Connection Point is in offline mode, you must re-import the update so that it is listed in the Configuration Manager console. Refer to the Install in-console Updates for System Center Configuration Manager topic for details.

For complete information about this update rollup for Configuration Manager current branch v1710, including the list of issues that are fixed, please see the following:

4086143 – Update rollup 2 for System Center Configuration Manager current branch, version 1710 (https://support.microsoft.com/help/4086143)

This update replaces the following previously released updates. If KB4086143 is installed, the previously released updates will no longer appear as applicable in the Updates and Servicing node of the Configuration Manager console:

  • 4057517 Update rollup for System Center Configuration Manager current branch, version 1710
  • 4088970 Automatic enrollment for co-managed device fails in System Center Configuration Manager current branch, version 1710

Default Code Integrity policy for Windows Server


After Windows Defender Application Control (WDAC, formerly known as Code Integrity) was released in Windows Server 2016, I wrote a blog post on it; it’s a very effective way to do application whitelisting and get secure!

When engaging with customers to get their feedback and help deploy WDAC, the consistent feedback has been “it’s great, but it’s too hard to deploy it.” We listened, and created a few default policies, which balance the security and operational management effort.

Those policies are stored under “C:\Windows\schemas\CodeIntegrity\ExamplePolicies” on any Windows OS post the 1709 release. I recommend two policies for Windows Server:

AllowMicrosoft: this CI policy allows all the files signed by Microsoft. If you are running server applications such as SQL or Exchange, or the server is monitored by agents published by Microsoft, you should start with this policy.

DefaultWindows: this policy only allows the files which are shipped in Windows and doesn’t permit other applications released by Microsoft (such as Office). This is a good policy to use if the server is dedicated to inbox server roles/features, such as Hyper-V.
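To start from one of the example policies, copy it to a working folder before editing; here’s a quick sketch (the C:\CI folder name is just my illustration):

# Create a working folder and copy the example policy into it
New-Item -ItemType Directory -Path C:\CI -Force | Out-Null
Copy-Item "C:\Windows\schemas\CodeIntegrity\ExamplePolicies\AllowMicrosoft.xml" "C:\CI\AllowMicrosoft.xml"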

There are known applications in Windows that might be used to bypass WDAC; the full list is published on this page. I recommend adding them to the deny list in the Code Integrity policy. Please note, the list is updated periodically with newly identified applications, so you should review it and add new entries to your CI policy as you see fit.

To make it easy for you, I created two copies of the default CI policies that you can download (the following CI policies are designed for the next release of Windows Server; you can also modify them to remove the new policy rule options for Windows Server 2016):

AllowMicrosoft_DenyBypassApps_Audit.xml

DefaultWindows_DenyBypassApps_Audit.xml

One of the new rule options in the above policy files is “Update policy no reboot”. On generic servers, users often need to add software; when they do, the CI policy might require a change to cover the new application. This rule option allows the CI policy to be updated without a machine reboot. The option was added after Windows Server 2016, so if you want to use the policies on Windows Server 2016, you will need to remove it.

Adding additional publishers

Most customers have their own applications, developed internally or acquired externally, to manage their servers, so the above default policies are not enough. To allow these apps (or even drivers) to load, you will need to modify the CI policy to cover them. There are a couple of approaches:

  1. Adding Publisher: If you want to trust all the applications created by the vendor, you can add their publisher to the CI policy. Run the following cmdlet to extract the publishers to an xml file:
New-CIPolicy -FilePath <appCI.xml> -Level Publisher -ScanPath <the path where the app is installed> -UserPEs
  2. Adding FilePublisher: If you want to trust only the installed application (not all the applications from this vendor), you can add a FilePublisher rule to the CI policy. Run the following cmdlet to extract the file list and its publishers to an xml file:
New-CIPolicy -FilePath <appCI.xml> -Level FilePublisher -ScanPath <the path where the app is installed> -UserPEs

You can mix and match different applications (from different vendors) with the two approaches described above.

For example, if you have an active internal development team, you can add your enterprise signer/publisher to the CI policy so that all enterprise-signed applications can run in the environment; you can also enable a device driver by adding a FilePublisher rule for that driver.

For the same publisher, you need to choose between trusting the Publisher and trusting the FilePublisher. For example, if you contracted a vendor company to develop an application and decided to trust the vendor by adding their publisher to the CI policy, but later want to change to trust only that application, you should remove the vendor publisher rule from the CI policy first, then add the application’s FilePublisher rule.

Merge CI policies

With all the CI files generated to cover the additional file/publisher information, you will then merge them with the default CI policies by running:

Merge-CIPolicy -OutputFilePath '.\Serverdefault-audit.xml' -PolicyPaths '.\AllowMicrosoft_DenyBypassApps_Audit.xml','.\additionalCI1.xml','.\CI2.xml'

Deploy CI policy

CI deployment and ongoing monitoring is covered in this blog post. You can also reference this page for group policy deployment. Below is a quick reference for your convenience:

The created CI policy is in audit mode; you can change it to enforced mode by deleting the audit mode rule option (option 3):

Set-RuleOption -FilePath C:\CI\FilePublisher.xml -Option 3 -Delete

The XML file created by New-CIPolicy can’t be consumed by the system yet. To deploy the policy, it needs to be converted to binary format and copied to the CodeIntegrity folder under System32.

Run the following cmdlet to convert the xml file:

ConvertFrom-CIPolicy C:\CI\FilePublisher.xml C:\CI\FilePublisher.bin

Deploy CI policy:

Copy-Item C:\CI\FilePublisher.bin C:\Windows\System32\CodeIntegrity\SiPolicy.p7b

Reboot the server to allow code integrity service to load the policy.

 

I hope the default server CI policies can help you speed up your deployment, and as always, you can share your feedback with us by email or submit and vote on requests through the User Voice.

The Adventure Begins: Plan and Establish Hybrid Identity with Azure AD Connect (Microsoft Enterprise Mobility and Security)


Greetings and salutations fellow Internet travelers! Michael Hildebrand here…as some of you might recall, I used to pen quite a few posts here, but a while back, I changed roles within Microsoft and ‘Hilde – PFE’ was no longer.

Since leaving the ranks of PFE, I’ve spent the last couple of years focused on enterprise mobility and security technologies. Recently, I was chatting with the fine folks who keep the wheels on this blog when I asked “Hey – how about a series of guest-posts from me?” They said if I paid them $5, I could get some air-time, so here we are.

My intentions are simple – through a series of posts, I’ll provide high-level discussion/context around the modern Microsoft mobility and security platform to “paint you a picture” (or a Visio) of where we are today then I’ll move on to ‘the doing.’ I’ll discuss how to transform from ‘on-prem’ to ‘hybrid-enabled’ to ‘hybrid-excited.’ I’ll start that journey off in this post by establishing the foundation – hybrid identity – then, in subsequent posts, I’ll work through enabling additional services that address common enterprise scenarios. Along the way, I’ll provide job aids, tips and traps from the field.

It continues to be a very exciting time in IT and I look forward to chatting with you once more. Let’s roll.

Azure AD – Identity for the cloud era

The hub of Microsoft’s modern productivity platform is identity; it is the control point for productivity, access control and security. Azure Active Directory (AAD) is Microsoft’s identity service for the cloud-enabled org.

If you want more depth (or a refresher) about what Azure Active Directory is, there’s no shortage of content out there. I’ll be lazy and just recommend a read of my prior post about “Azure AD for the old-school AD Admin.” It’s from two years ago – which makes it about 2x older in ‘cloud years’ – and as such, it suffers a bit from ‘blog decay’ on some specifics (UIs and then-current capabilities), but the concepts are still accurate. So, go give that a read and then come on back … I’ll wait right here for you.

The Clouds, they are a-changin’

As an “evergreen” cloud service, AAD sees continuous updates/improvements in the service and capability set. Service updates roll out approximately every month – so, we’re at around 36 +/- AAD service updates since my Jan 2015 article.

To stay on top of AAD updates, changes and news, the EMS blog (Link) is always a good first stop.

If you like “Release Notes” style content, starting last September (2017), the ‘What’s new in AAD’ archive is available – https://docs.microsoft.com/en-us/azure/active-directory/whats-new.

Recently, a change to the AAD Portal homepage added a filterable ‘What’s new in Azure AD’ section –

Also, the O365 Message Center has a category for “Identity Management Service” messages:


An Ambitious Plan

Here’s the plan for this post, this series and some details about my “current state” environment:

  • I’m starting out with an on-prem, single AD forest w/ two domains (contoso.lab and corp.contoso.lab)
    • Basically, the blue rounded-corner box in the Visio picture above:

  • In this post, I’m going to establish a hybrid identity system, and bridge on-prem AD to an AAD tenant via Azure AD Connect (AAD Connect)
    • Choose password hash for the authentication method
      • This enables password hash sync from AD to AAD
    • Filter the sync system to limit what gets sync’d from AD to AAD
    • Prepare AD for eventual registration of Domain-Joined Windows PCs from AD to AAD
  • In subsequent posts, I’ll build on this foundation, covering topics such as custom branding for the cloud services, self-service password reset, device registration, Conditional Access and who knows what other EMS topics.
    • I’ll be assigning homework, too, lest yee not fall asleep
  • I’ll end up with an integrated, hybrid platform for secure productivity and management
  • These are pretty bold ambitions – but we’ll get there, and the beauty of the cloud services model is that “getting there” isn’t nearly as hard as that list makes it seem.

Now let’s get down to brass tacks. For the rest of this post, I’ll focus on considerations, planning and pre-reqs for getting Azure AD Connect up and running and then I’ll walk through the setup and configuration of AD and AAD Connect to integrate an on-prem AD forest with an on-line AAD tenant.

  • If you already have AAD Connect up and running, KUDOS! Read-on, though, as you might find some helpful tips or details you weren’t aware of or didn’t consider.

NOTE – As with most blogs, this isn’t official, sanctioned Microsoft guidance. This is information based on my experiences; your mileage may vary.

Overall AAD Connect Planning

Microsoft has done a lot of work to gather/list pre-reqs for AAD Connect. Save yourself some avoidable heartburn; go read them … ALL of them:

AAD Connect has two install options to consider – Express and Custom: https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-select-installation

  • The Express install of Azure AD Connect can get you hybrid-enabled in around 4 clicks. It’s easy and simple – but not very flexible. Express setup requires an Enterprise Admin credential to perform all of the AD changes and you don’t have a lot of control over those changes (i.e. naming service accounts, where in AD they go, which OUs get permissions changes, etc).

  • The Custom install of Azure AD Connect provides more flexibility, such as allowing you to pre-create the service accounts (per your AD naming/location standards) as well as assign scoped AD permissions as part of the pre-work before installing AAD Connect.

Consider AAD Connect ‘Automatic Upgrade’ to keep AAD Connect up-to-date automatically:

Service accounts

AAD Connect uses a service account model to sync objects/attributes between AD and AAD. There are two service accounts needed on-prem (one for the sync service/DB and one for AD access) – and one service account needed in AAD.

Service account details:

  • Sync service account – this is for the sync service and database

  • AD access service account – this is a Domain User in the AD directory(ies) you want to sync.
    • An ordinary, low-privilege Domain User AD account with read access to AD is all that is needed for AAD Connect to sync AD to AAD for basic activities.
    • There are notable exceptions that require elevated permissions and two I’ll cover here are password hash sync and password writeback (for self-service password reset/account unlock)

    TIP – Create your AD access service account in AD and assign any custom permissions to it BEFORE you install AAD Connect.

    TIP – This account itself doesn’t need to sync to AAD and can/should reside in a ‘Service Account’ OU, with your other service accounts, filtered from sync.

    TIP – Make sure you secure, manage and audit this service account, as with any service account.

  • AAD cloud access account
    • This is a limited, cloud-only account in Azure AD, created by the AADC install process, which sets a long, complex password that is set to not expire.

    TIP – The username of this account is derived from the AAD Connect server name

    • For example, my AAD Connect server is named “CORP-AADC01” so the AAD service account ID will be something like “Sync_CORP-AADC01_1@mycorp.onmicrosoft.com”

    TIP – This account won’t be seen anywhere in AD; it’s only part of AAD and the sync system. You can see it in the configuration pages of the Synchronization Service Manager tool – screen snip below.

    • The Synchronization Service Manager tool is sometimes used for advanced sync settings and is out of scope for this article; I strongly urge you to not wander around in there.

    TIP – The ID can also be seen in the AAD portal ‘Users’ section.


Planning on-prem sync filtering

You can limit what users, groups, contacts and devices are sync’d between on-prem AD and Azure AD. This is known as ‘filtering’ and can be done based on forest, domain, OU or even object attribute values. Also, for a pilot or PoC, you can filter only the members of a single AD group.

TIP – Thoroughly plan/test a sync filtering strategy to understand what will/won’t sync. In prod, do it once; do it right.

Read this link for more information/details about sync filtering:

Points to consider:

  • Not everything in AD is sync’d, even if you don’t filter –
    • For example, DNS zones don’t get sync’d. GPOs don’t get sync’d. Objects with the “isCriticalSystemObject” attribute equal to “true” won’t sync – so many sensitive AD objects won’t sync (i.e. Domain Admins group in AD)
    • However, unless filtered, some objects may sync that you don’t need/want in AAD (i.e. the DNS Admins group in AD, your service account OU, etc.)
  • Any OU that has/will have Windows 10 PCs that you want to register/sync to AAD (called ‘Hybrid Azure AD Join’) should be selected for sync, as Azure AD Connect plays a part in sync’ing Win 10 PCs to Azure.
    • Azure AD Connect does not play a part in sync’ing pre-Win 10 PCs; they can sync/register in AAD on their own (after you install an update/MSI to those OSes), regardless of their OU being targeted or not
    • We’ll get into the weeds of Hybrid Azure AD Join, AAD Join and Azure Device Registration Service in a later post
  • For a pilot, you can simplify what gets sync’d by selecting a single group in AD to sync
    • Use a “flat” Global Security group in AD; any nested groups within it won’t sync
    • If you also setup OU filtering, be sure the target group and its members (users, Windows 10 PCs, etc.) are all in OUs that are in-scope for sync – OU filtering is evaluated before the group filter.
    • You can’t browse for the group via the wizard – you need to type in the group name or DN attribute (the ‘resolve’ button will verify it, though)
    • The UI option to filter by group only appears in the initial setup of AAD Connect. If you don’t select it during the first run, it won’t show up in the UI in subsequent runs of the tool.

    TIP – Group-filtered sync isn’t supported for production implementations

  • New OUs/subOUs that are created after you’ve setup your sync filtering in AAD Connect may be sync’d by default. If so, this may be an unwelcome surprise.
    • I’ll cover more on this later in the AAD Connect configuration section

UPNs and email addresses – should they be the same?

In a word, yes. The best experience for your users (seamless SSO with minimal login prompts or pop-ups/sign in errors, etc.) will be achieved when the on-prem UPN matches the AAD UPN, as well as the primary email address (and SIP address for overall consistency). This assumes there is an on-prem UPN suffix in AD that matches the publicly routable domain that your org owns (i.e. … @microsoft.com).

“Ok, but is it required?” No, but over time, it will make lives better with less confused users who make fewer helpdesk calls and are happier with IT.

Points to consider:

  • Recall the pre-requisites doc/link – it lists a line-item to add any custom domain(s); go through the process to add and ‘verify’ your public domain names (called ‘custom’ domains in O365/AAD) before setting up AAD Connect. There is a step during AAD Connect setup that will poll on-prem AD for UPN suffixes and AAD for matching verified custom domains. This is visible in my step-by-step later.
  • To avoid additional work and potential issues, it is strongly recommended that you address UPN/ID issues BEFORE you install AAD Connect

AAD Connect – Install and configuration

I basically break this phase up into three sections:

  1. AAD Connect server setup/tools install
  2. On-prem AD config
  3. Initial sync config
Section 1: AAD Connect server setup and tools install
    1. On my AAD Connect server (these steps are for a WS 2012 R2 x64 instance – again, read all the AAD Connect pre-reqs from the link above; your specific steps may vary):
      1. Disable IE Enhanced Security Config and enable Cookies in the IE browser settings
      2. Install the RSAT AD tools – via Server Manager or PowerShell <from elevated PoSh>
        1. Add-WindowsFeature RSAT-AD-Powershell
      3. Download and update to WMF 5.0 (reboot when prompted), then install AAD PowerShell v1:
        1. Open elevated PowerShell and run Install-Module -Name PowerShellGet -Force
        2. From the same PowerShell console, run Install-Module -Name MSOnline
      4. Download AAD Connect (AzureADConnect.msi) and install it on the target AAD Connect server
        1. https://www.microsoft.com/en-us/download/details.aspx?id=47594
      5. As soon as the install completes, the AAD Connect configuration wizard will auto-initiate – don’t run through it; exit/close out of the tool/wizard.
      6. The AAD Connect setup installs the sync service and several pre-reqs, and copies some PowerShell scripts/functions locally
Section 2: On-prem AD config
    1. Prepare on-prem AD for Azure AD integration (I’ll also initialize AD for Azure AD Device Registration Service – AzDRS)
      1. Use PowerShell to establish the Service Connection Point (SCP) object and associated attributes in AD – More info
      1. This process creates an object in on-prem AD with pointers to the associated on-line AAD tenant name and GUID – this information is used by several AD <-> AAD integrations such as AAD device registration, device write-back, etc.
        1. For example, this information is used by Windows domain-joined PCs to “find” the connected AAD tenant and register there (aka “Hybrid Azure AD Join.”)
      2. From the AAD Connect server:
        1. Run a PowerShell window as an Enterprise Admin account (this process needs to create a container in the Configuration partition in the AD forest):
        2. Import-Module -Name “C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1” <press enter>
        3. Initialize-ADSyncDomainJoinedComputerSync <press enter>
        4. PowerShell will prompt for AdConnectorAccount : enter the AD access service account and press enter
          1. The format is “domain\ID” – CORP\SRV-AADC
        5. A logon box will pop up; enter the Azure AD credentials

          1. This should be a Global Admin ID from Azure AD
          2. The format is upn-style – admin@woodgroove.onmicrosoft.com
      3. Verified results:
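If you’d like to double-check the SCP from PowerShell, here’s a sketch using the ActiveDirectory module (the GUID is the well-known Device Registration Service identifier; the keywords attribute should show your tenant’s azureADName and azureADId):

# Read the Service Connection Point from the Configuration partition
$configNC = (Get-ADRootDSE).configurationNamingContext
$scpDN = "CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,$configNC"
(Get-ADObject -Identity $scpDN -Properties keywords).keywords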


  1. Review/verify/edit the AD access service account has permissions for the desired Azure AD services/features (see above Service Accounts section)
    1. Remember, password hash sync and self-service password reset (SSPR) each require unique manual permissions edits in AD    
      1. This is a commonly missed step or not done correctly

TIP – You can enable SSPR/pwd writeback without enabling password hash sync; you can offer your users self-service password reset even if you’re not ready to sync passwords to Azure AD.

Section 3: Initial sync config

    Let’s take a breath, pause and recap: AAD Connect is installed and several on-prem decisions and configurations have been completed (sync filtering decisions, service accounts created, custom permissions assigned, ‘Service Connection Point’ container created and verified in AD, etc.).

    1. Next, I establish the core AD > Azure AD sync configuration and start actually sync’ing objects to AAD.
      1. From the AAD Connect server, launch the AAD Connect tool/wizard, agree to the license terms checkbox and click ‘Continue.’
      2. We’re doing ‘Customize’ (vs ‘Express’) for the reasons mentioned above (i.e. more flexibility in creating/naming/locating the service accounts)

      1. On the “Install required components” screen, leave all boxes blank – AAD Connect will setup the sync service and a ‘virtual’ service account on the AAD Connect server. This ID and password are system-managed and won’t require any on-going management. Click ‘Install.’

      1. Next, select the User sign-in/authentication method. My thinking has evolved over time on this aspect. I started out favoring federation with ADFS and on-prem passwords/auth, then I moved on to “Pass-through authentication” (PTA) and on-prem passwords/auth (I still really like PTA if there’s a need to keep password hashes on-prem).

        However, now I’ve seen the light and “Password Synchronization” is my preferred choice. This is by far the simplest solution, and I’m comfortable w/ the security of password hash sync/storage. This is usually referred to as ‘password hash sync’ or PHS, since AAD Connect takes the on-prem password hash value, processes it with additional hashing, then syncs that value to AAD. Also, with PHS, I get more complete coverage from the AAD Identity Protection capability and Azure-cloud levels of high-availability.

        Here’s a great blog about the auth choices and decision: Sam D’s auth choice blog.

        1. Also select the check box to “Enable single sign-on”

      1. On the “Connect to Azure AD” screen, enter an Azure AD global admin account (which isn’t saved; it’s only used during setup). Use a cloud-only ID from the tenant – i.e. admin@mycorp.onmicrosoft.com. This sets up the Azure AD tenant for sync and creates the AAD cloud access service account mentioned above in the service accounts section.


  1. On the “Connect your directories” screen, select/verify the target AD forest(s) and click “Add Directory” then select to “Use existing AD account.” Enter the AD access service account credentials (from the above service accounts section) and click OK, then click Next.

TIP – You don’t select the specific domains/OUs you want to sync here; that’s done in a later step

  1. Review/select the Azure AD sign-in configuration – hopefully keeping the default which sets the on-prem UPN value as the login ID for Azure AD.

TIP – In the long red box above, you see I have a UPN suffix in AD that matches a verified custom domain name that I registered in my AAD; this is due to the pre-work that I mentioned in the UPN section above.

TIP – If you haven’t verified a custom domain, you’ll see an option to ‘Continue without any verified domains’ (i.e. for a test or PoC environment)

  1. On the “Domain and OU filtering” screen, select “Sync selected domains and OUs” and select the domains/OUs to sync – or select “Sync all domains and OUs” if that’s how you want to roll.
    1. Remember, even if an entire forest/domain is selected, not everything in the domain will sync.

Repeated TIP – Thoroughly plan/test a sync filtering strategy to understand what will/won’t sync. In prod, do it once; do it right.


TIP – As mentioned above in the sync planning section, recall that as/if new OUs/subOUs are created, they might be sync’d to AAD automatically.

Here’s how to adjust your sync settings to control new OU sync:

The checkbox “state” in this UI indicates if new OUs will sync or not:

  1. If you DO NOT want subsequent new sub OUs to sync (my personal preference), clear all the check marks then click the deepest level, specific OU boxes you want to sync. The parent domain and OU box(es) will flip to solid gray, without a checkmark
    1. In this state:
      1. Only the selected OUs under CORPORATE will sync (white box with black checkmark)
      2. New OUs created anywhere will not sync

    1. If you DO want subsequent new sub OUs to sync, click the parent domain/OU box so it has a black checkmark (all sub-OUs will also get checked). Now, de-select the sub OU box(es) you don’t want to sync, leaving the desired OUs checked. The parent OU box will turn gray with a black checkmark.
      1. In this state:
        1. The selected OUs under CORPORATE will sync (white box with black checkmark).
        2. New sub OUs created under the corp.contoso.lab domain and/or the CORPORATE OU will sync

    2. You can also configure a mixed state:
      1. In this state:
        1. New sub OUs created directly under corp.contoso.lab will not sync (gray box without black checkmark)
        2. The selected OUs under CORPORATE will sync (white box with black checkmark).
        3. New sub OUs created under the CORPORATE OU will sync (gray box with black checkmark)

        Example:

  • New ‘Sync-Test-OU’ was created in AD.
  • The new ‘Sync-Test-OU’ was added to sync filtering without making any changes to AAD Connect

TIP – Recapping:

  • White box without checkmark – won’t sync
  • White box with black checkmark – will sync
  • Gray box without checkmark – new sub OUs won’t sync
  • Gray box with black checkmark – new sub OUs will sync
  1. Review the unique identifier page for the sync configuration – the default is fine for my setup. Click Next.

  2. On the “Filter users and devices” screen, choose ‘Synchronize all users and devices’

TIP – Even though the UI states this will synchronize all users and devices, that isn’t really what happens. This option will sync all users, groups, contacts and Win 10 computer accounts “within the scope of any filtering you defined.”

  1. If you decided earlier that you want to use group filtering for sync (i.e. for a PoC), you choose ‘Synchronize selected’ here and enter the group name or DN and click ‘resolve’ to verify it.
    1. If you don’t see this screen or if you are considering this, review the above details about group filtering – it is a common area of confusion and unexpected results/behavior.

On the “Optional features” screen, verify all “Optional features” except Password synchronization are blank and click Next.

  • The “Password synchronization” option is checked and grayed out due to the earlier selection of “Password synchronization” for User sign-in.

  1. On the “Enable single sign-on” screen, click ‘Enter credentials’ then enter Domain Admin credentials for the domain(s) where your SSSO users reside (don’t be confused like I was when the pop-up asked for “Forest Credentials” – it’s asking for a Domain Admin ID).
  2. Click OK. Then click Next.

    1. This step creates a computer account called “AZUREADSSOACC” and puts it in the built-in COMPUTERS container in the target domain(s).
    2. Don’t pre-create this account – let AAD Connect do it, as it populates some specific attributes/values for this computer account.
      1. You can move the computer account to an OU of your choice and I’d recommend you configure it for protection from accidental deletion (right-click > properties > object tab).

  3. On the “Ready to configure” page, verify the ‘Start the synchronization process…’ option is checked (default) and click ‘Install.’ Click Exit after the ‘Configuration complete’ page displays.

  4. Review the Application Event Log on the AAD Connect server for related events.

  5. Sign in/refresh the Azure/AAD portal
    1. Verify sync by looking for your targeted on-prem objects in AAD and review the Azure AD Connect section of the Azure/AAD portal for successful sync messages.
      1. On-prem users sync’d are listed with a ‘SOURCE’ of ‘Windows Server AD’
      2. On-prem groups sync’d are listed with a ‘Membership type’ of ‘Synced’

TIP – Subsequent delta synchronizations occur approx. every 30 min (and every 2 min the password hash sync process runs, if you’ve enabled it); previous versions.
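TIP – To check the current sync interval and the next scheduled run, you can run this one-liner on the AAD Connect server (the Get-ADSyncScheduler cmdlet ships in the ADSync module installed with AAD Connect):

Get-ADSyncScheduler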

TIP – You can easily trigger a sync via PowerShell at any time. I use a quick one-liner straight from the ‘Run’ dialog box on my AAD Connect server after making on-prem AD changes that I want to sync right away:

powershell -ExecutionPolicy Bypass Start-AdSyncSyncCycle

TIP – To avoid surprises with Automatic Upgrade of AAD Connect, now is a good time to review/verify the state of it for your AAD Connect via PowerShell:

Get-ADSyncAutoUpgrade

HOMEWORK – Go school yourself about AAD Connect Health – I think you’ll like it

If you’re a visual person, like me, here’s where we are on our plan:

Ok folks, there you have it … a brief refresher on AAD as the ID hub of our modern productivity and security platform, a sizeable collection of “points to consider” when planning AD sync and then a walk-through of setting up AAD Connect to hybrid-enable a sample Active Directory forest.

Hopefully, that level of detail was helpful.

Tune in next time when I’ll continue the march towards ‘hybrid-excited.’

Cheers!

“Welcome back, (Hilde) Kotter”

P.S. Did anyone catch how the title of this post pays homage to the awesome movie “Remo Williams: The Adventure Begins”?

System Center 1801 Operations Manager – Enhanced log file monitoring for Linux Servers


System Center Operations Manager 1801 has enhanced log file monitoring capabilities for Linux Servers.

  • Operations Manager now supports Fluentd, an Open source Data collector.
  • Customers can also leverage Fluentd capabilities and plugins published by the Fluentd community to get enhanced customizable log file monitoring.
  • The existing OMI based monitoring for currently supported Linux workloads will continue to work as it is today. 

With this release, we have added support for the following log file monitoring capabilities:

  • Support for wildcard characters in log file name and path.
  • Support for new match patterns for customizable log search like simple match, exclusive match, correlated match, repeated correlation and exclusive correlation. We have released 6 new filter plugins for customizable log search.
  • Support for generic Fluentd plugins published by the Fluentd community. System Center Operations Manager 1801 includes a converter plugin which converts the Fluentd data from generic plugins to the format specific to SCOM log file monitoring.

Architecture

Below are a few architectural changes in the SCOM Management Server and the SCOM Linux agent to support Fluentd.

The new Linux SCOM agent includes a Fluentd agent (shown as (1) in the picture above).

Users define the log file names, the match patterns, and the events to be generated on a pattern match, along with the event descriptions, in the Fluentd configuration file.

On a match of a log record, Fluentd sends the event to the System Center Operations Manager External DataSource service on the SCOM Management Server / Gateway (2). This is a Windows REST-based service which receives the event and sends it to a dedicated custom event log channel, Microsoft.Linux.OMED.EventDataSource (3).

Users need to import a management pack (4) which looks for events in this custom event channel and generates alerts accordingly.

User Workflow:

On Linux Server:

On SCOM Management Server:

Users need to follow the steps below on the Management Server:

 

Step 1:

Users need to import the latest Linux Management Pack (shipped with the SCOM 1801 binaries) and install the new SCOM agent on the Linux servers.

Users can install the agent either manually or through discovery wizard (recommended). For detailed steps, refer here.

Step 2:

Author Fluentd configuration file and place it on the Linux Servers

Customers need to author a Fluentd configuration file and can use any of the existing enterprise tools like Chef/Puppet to place the configuration file to the Linux server.

Recommended practice is to copy the configuration into /etc/opt/microsoft/omsagent/scom/conf/omsagent.d directory on all Linux servers and include the configuration file directory as @include directive in the master configuration file /etc/opt/microsoft/omsagent/scom/conf/omsagent.conf

The Fluentd configuration file is where the user should define the input, output and the behavior (match processing) of Fluentd. This is done by defining the following in the configuration file:

Source directive:

Fluentd’s input sources are defined in the source directive using the desired input plugins. Users define the log file names, along with the file path, in this directive. Wildcard characters are supported in both the file name and the path.

Filter directive:

The filter directive defines the chained processing pipeline. Users define the match pattern and the events to be generated on a match in this section. We have released the following filter plugins with this release:

  • filter_scom_simple_match
  • filter_scom_excl_match
  • filter_scom_cor_match
  • filter_scom_repeated_cor
  • filter_scom_excl_correlation
  • filter_scom_converter

Match directive:

Users define the output processing in the match directive. We have released the “out_scom” match plugin, which sends the events generated by Fluentd to the System Center Operations Manager External DataSource service on the SCOM Management Server/Gateway.

For more detailed instructions on how to author a Fluentd configuration file, refer here.

Step 3:

On SCOM Management server: Import Management pack and enable OMED Service

On Management Server User needs to do the following:

1) Start the OMED service (refer here).

2) Import the management pack for log file monitoring.

Users can import the sample management pack (reference here), save it as an XML file, and import it in the SCOM console. This management pack has a rule that looks for all events from the new data source Microsoft.Linux.OMED.EventDataSource and generates alerts accordingly. The alert severity and priority are set in the management pack. The alert description is obtained from the event description defined by the user in the Fluentd configuration file.

If users want to generate alerts only for specific events, they can author their own custom management pack using VSAE.

Example Scenario:

Suppose a user would like to monitor the following scenarios:

1) Apache HTTP Server URL monitoring

Scenario: Monitor a web URL hosted on an Apache HTTP server and generate alerts on the SCOM Management Server if the URL has any issues.

Log to be monitored: The user monitors the Apache HTTP server access.log for error codes. If the log receives any code other than 200 (the success code), an event will be sent to the SCOM Management Server.

2) Authentication failure

Scenario: If a user tries to access a server more than 5 times with an incorrect password, an alert is sent to the SCOM server warning that an unauthorized user is trying to intrude.

Log to be monitored: The user monitors the Linux server auth.log for authentication failure error messages. If the message appears more than 5 times in 10 seconds, an event will be sent to the SCOM Management Server.

Sample Configuration File:
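As a rough illustration, a configuration for the Apache scenario might look like the sketch below. The directive structure (source/filter/match) is standard Fluentd; the plugin parameter names, values, and file paths shown for the SCOM plugins are my assumptions – refer to the authoring documentation linked above for the exact schema.

<source>
  @type tail
  path /var/log/httpd/access.log          # wildcards are supported in name and path
  pos_file /var/opt/microsoft/omsagent/scom/tmp/access_log.pos
  tag scom.log.apache
  format none
</source>

<filter scom.log.apache>
  @type filter_scom_simple_match          # one of the six SCOM filter plugins
  regexp1 message " 404 "                 # assumption: illustrative pattern and parameter names
  event_id1 4001                          # assumption: event ID raised on a match
</filter>

<match scom.log.**>
  @type out_scom                          # ships matched events to the OMED service on the Management Server/Gateway
</match>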

On a match of a log record, the OMED service on the SCOM Management Server receives an event along with the log record context. Users need to import a management pack on the SCOM server which generates an alert when an event is received from a Linux server.

Events on the SCOM Management Server:

 Generated Alert on the Management Server:

The Alert context will contain the log record which will have more details on the error code received while trying to access the URL.

Other Sample User Scenarios:

For more detailed steps look at the online documentation.

Feedback:

We’d love to hear your feedback on this new feature. Feel free to send your feedback to scxtech@microsoft.com.

Office 365 Integration fails with “Cannot connect to Microsoft online services” in Windows Server 2012 R2 Essentials


We have found a new issue with Windows Server Essentials Dashboard integration wizard with Microsoft Office 365. The Integrate with Microsoft Office 365 wizard may fail to complete with the following error:

In C:\ProgramData\Microsoft\Windows Server\Logs\SharedServiceHost-EmailProviderServiceConfig.log, we may find the following exception:

[7812] 170920.160416.7958: BecWebServiceAdapter: Connect to BECWS failed due to known exception : System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at https://bws902-relay.microsoftonline.com/ProvisioningWebservice.svc?Redir=1098557810&Time=636356539931802459 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. —> System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 157.56.55.77:443

We can see the provisioning endpoint that the wizard is trying to reach by running the command: ipconfig /displaydns

However, when we attempt to browse that URL (provisioning web service) in a browser, it may fail with the following exception:

Additionally, when we attempt to do a telnet test to this remote server through the port 443, it fails:

The issue occurs due to a web exception when the BEC Web Service API tries to reach the remote endpoint for provisioning purposes. The address is written to the following registry key on the server:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Server\Productivity\O365Integration\Settings

Name: BecEndpointAddress

Type: String value

Resolution: To resolve the issue, follow these steps:

1. Launch the registry editor console and take a backup of the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Server\Productivity\O365Integration

2. Click HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Server\Productivity\O365Integration\Settings; in the right pane, delete the BecEndpointAddress value and click Yes

3. Exit the registry editor console and proceed to run the Integrate with Microsoft Office 365 wizard
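If you prefer to script the backup and cleanup, here’s a minimal sketch from an elevated PowerShell prompt (the backup file path is just an example):

# Back up the key, then remove the stale endpoint value
reg.exe export "HKLM\SOFTWARE\Microsoft\Windows Server\Productivity\O365Integration" C:\O365Integration-backup.reg
Remove-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Server\Productivity\O365Integration\Settings" -Name "BecEndpointAddress"

Then rerun the Integrate with Microsoft Office 365 wizard as in step 3.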

