Configuring Windows 10 Core Services
Configuring Core Networking
Hello, and welcome to Configuring Windows 10 Core Services. I'm Glenn Weadock, and I'll be your guide through the subject of setting up Windows 10 networking, storage, data access, applications, and remote management. It's a pleasure to be here; we're going to get to know Windows 10 very well, and maybe crack a few bad jokes in the process. You might be interested in this course because Windows 10 is coming to your organization and you'd like to get ready for the changes it will bring, or maybe you're involved in operating system decisions at your place of business, such as whether or when to roll out Windows 10. Or perhaps you're interested in prepping for the Microsoft Windows 10 certification exam, 70-698, or maybe even all of the above. If you're here because you need to work with Windows 10, that's great, because I'm going to present as many practical tips as I can. This course is the third in a series of four that will familiarize you with just about every aspect of Windows 10. The first course had to do mainly with deploying Windows, and the second one with getting your hardware working and becoming familiar with the OS from both user and administrator standpoints. If you're mainly interested in getting the Microsoft certification, that's great too, because the topics in these four courses line up very closely with the topics in Microsoft's published outline for the exam. The combination of discussion and live demos is a tried-and-true approach, and I hope you enjoy it. The approach I'll take is to talk about something and then show it to you. You can follow along with the demos if you'd like, and I'll give you some tips on creating a virtual machine network just like the one I'll be using in the demos. There are five modules in this course. This one, Configuring Core Networking, focuses on assembling Windows 10's network plumbing. Configuring Physical Storage looks at disks, partitions, volumes, storage spaces, and removable storage. 
The third module, Configuring Data Access, explores how to share files in Windows via the file system and via OneDrive, including permissions. Configuring Applications covers deployment, configuration, and updating of both Windows Store apps and traditional desktop apps. And finally, Configuring Remote Management is a hugely useful module that explores the tools we have for connecting to Windows 10 from a distance: Remote Desktop, Remote Assistance, MMC consoles, and PowerShell. In the first module, we'll look at six topics: details on how you can set up a lab environment so that you can perform the demos that I perform in this course and experiment even further on your own, a review of IPv4 and IPv6 essentials, how name resolution works in Windows 10, setting up the firewall and when to consider using connection security rules, a few notes on wireless networks, and a survey of remote networking solutions in Windows: Virtual Private Networks and DirectAccess. So let's get started with a look at the demo environment so you can set it up for yourself if you want to follow along with my demonstrations.
If you'd like to set up your own lab environment, first, be aware that if you've already been through the previous course in this learning track on implementing Windows 10, then the lab setup is the same as for this course, and you can jump directly to the IPv4 and IPv6 discussion. Otherwise, your first step will be to obtain a copy of an evaluation version of Server 2016. I recommend using the Datacenter edition, but the Standard edition will also work. You will probably want to get the version with the Desktop Experience, as opposed to Server Core. The 180-day eval from Microsoft is available at this writing at the URL shown here, and you can also look at Jason Helmick's GitHub site for a slick, automatic lab environment creation tool. You'll also need an evaluation copy of Windows 10 for the workstation VMs, preferably the Enterprise edition. As with the server product, at this writing you can get a 180-day eval version from Microsoft at this URL. You can also get this at the GitHub site, at the same address just mentioned. Some of you may already have evaluation versions or even full versions of these operating systems that you can play with, and that's fine too. I just suggest that whatever you use, set it up in an environment completely separate from your production environment at work or your network at home if you have one, so that if anything goes awry, you won't have to deal with damage to your work or home network. When you set up your Virtual Machine host, it should ideally be running Server 2016, so you can fully experiment with fancy new features like virtual TPMs. I'd suggest 16 GB of RAM, although you could conceivably get by with even less, and around 75 GB of disk space, solid state drives being preferable for speed reasons. You could run your host on Server 2012 R2 or even Windows 10, but you might not be able to see all the features discussed in this course. 
You'll need the Hyper-V server role on the host, or the Client Hyper-V feature if you're using Windows 10, so you'll want a 64-bit version of Windows, and you'll want hardware with hardware-assisted virtualization, which you can check ahead of time with the Sysinternals tool Coreinfo. Other platforms should be fine if you can't use Hyper-V for some reason, or if you're familiar with them. You'll also need to provide internet connectivity for some of the demos, such as those involving the download of goodies onto your VMs. When it comes time to build your virtual machines, I recommend using dynamically expanding hard drives as opposed to fixed-size virtual drives. I also recommend using the VHDX format over the older VHD format, and using SSDs rather than spinning disks. Set up your guests for multiprocessor support if that's available on your hardware, so things will work faster. Somewhere between 1 and 2 GB of RAM per guest is fine, and if you want to use Dynamic Memory in Hyper-V, that's fine too. In this course you only need three server VMs and two client VMs, and you could probably do most of the labs with even less. Next, as to the setup of the VMs: this is the setup that I used, but you can certainly do things differently; everything that follows is just in the nature of a suggestion, so don't feel constrained by these details. As to the virtual switches, I created two private ones and one external. The Globomantics Denver switch can be a private network type, and the Globomantics internet switch will be an external type so that, for example, our WSUS server can retrieve updates from Microsoft. The Globomantics internet private switch is the one that we can use when exploring remote connections, such as VPNs. Now, GM-DC1 is going to function as a domain controller, file server, and DNS server. If you want to explore all the nuances of VPN authentication, then you could add Active Directory Certificate Services. 
There is a configuration wizard, and for our purposes, you can mostly just accept the default settings, although you should not do that in real life. Pluralsight has some good courses on configuring ADCS, if you want to learn more about that. The workstations are GM-WS1 and GM-WS2, and they'll be our Windows 10 systems. Finally, GM-WSUS is our update server and GM-RAS1 is our remote access server. You won't need all of these VMs for this course, so if you're not pursuing the entire learning track, you could dispense with GM-WSUS and possibly also GM-RAS1, although it is handy to have a gateway to the public internet. The assignments we'll make for the virtual switches are simply Denver for the domain controller and the workstation WS1, Denver plus the private internet switch for the workstation WS2 and the RAS server, and Denver plus the external internet switch for the WSUS server. As a side note, you might want to add the external internet switch to DC1, WS1, and WS2 on a temporary basis if you want to get those VMs up to date via Windows Update. A few quick setup notes for DC1. You'll install and configure Active Directory using whatever domain name you want; I used globomantics.local. Specify when you add ADDS that you also want to install DNS. Install File and Storage Services, and create a file share or two for verifying connectivity, and create a couple of groups and a user in each one so that you can log on with different domain accounts. The IP settings that I used are given here. For GM-WS1 and WS2 running Windows 10, join the domain globomantics.local, and here are the IP settings that I used for the Denver network connection for WS1 and for WS2, the only difference being the IP address for the Denver connection, plus these settings for the private internet connection. 
For the GM-WSUS virtual machine, also join the domain globomantics.local; here are the IP settings for the Denver network, and the IP settings for the external internet connection can be automatically assigned, presuming you've got DHCP and a router connected to that network. Finally, for GM-RAS1, also join the domain globomantics.local and set the IP details as given here for the Denver private network, with these IP settings for the private internet switch. Now, once you have your VMs built and configured, you might take a few extra steps to download some files that we discuss in the course. One fairly easy way to accomplish this would be to share a folder on GM-WSUS, which has an external interface to the public internet, and download to that folder with the intent of connecting to the share from the workstations. The WADK would be something to get; it includes a variety of tools covered in this course, and you can find it quickly by Googling the terms WADK and Windows 10. There are earlier versions floating around, which you should avoid. The Remote Server Administration Tools, or RSAT, should be deployed on one or both workstations, and you'll see me mention other software as we go through the course. To work with OneDrive, I suggest creating a new Microsoft account for yourself at the URL shown, even if you already have a Microsoft account. You can have multiple accounts, that's fine, and they're free, so I always have one for testing that's separate from the ones I use for personal and work purposes. You can use this test Microsoft account to provide credentials that will let you obtain trial subscriptions to other Microsoft services as well, and the same reasoning applies. And there you have it. There's no better way to learn Windows 10 than to work with it yourself, and the good news is that doing so is easier than ever with the current state of VM technology.
IPv4 and IPv6
In this clip, we'll go over the basics of IP addressing: both the four and six flavors. Fun fact: IPv5, or more properly, the Internet Stream Protocol, or ST, was developed in the 1990s for streaming media, but shared the address space limitations of IPv4, and so it was basically skipped in favor of IPv6. Fun fact number two: none of the information in this clip has changed since the days of Windows 7 and 8, so if you already know it, there's no Windows 10-specific knowledge here. IPv4 addresses consist of four 8-bit number groupings called octets, octo meaning 8, adding up to a 32-bit address space, which used to feel like a lot before the internet caught on beyond the defense and academic communities. An IPv4 address actually holds two pieces of information: an ID for the network, and an ID for the host, host being a term simply meaning some device with an IP address. The dividing line between the network ID and the host ID is specified by something called the subnet mask, which is also made up of four octets, but which has a sequence of consecutive 1s, followed by a sequence of consecutive 0s, in binary notation. Here's how the subnet mask works. Let's take the IPv4 address of 192.168.1.50, which we can see here translates out to this long binary value. Here's the subnet mask, 255.255.0.0, which translates out to a string of 1s followed by a string of 0s. If we think of the 1s in the subnet mask as permitting the bits in the top line to flow down and create the network ID, we can see that the first two octets comprise the network ID. The 0s in the subnet mask correspond to the bits in the host ID. So, this is a nice, simple example: 192.168 is the network ID, and 1.50 identifies the specific device on that network, or the host. Now, back in the day, we used classful addresses in which the network ID had a predetermined length: Class A addresses have an 8-bit network ID, Class B addresses have a 16-bit network ID, and Class C has a 24-bit network ID. 
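To make the subnet mask idea concrete, here's a quick sketch of the same AND operation in Python, using the standard ipaddress module. This is purely an illustration of the arithmetic, not anything Windows-specific:

```python
import ipaddress

addr = ipaddress.ip_address("192.168.1.50")
mask = ipaddress.ip_address("255.255.0.0")

# The 1 bits in the mask let the address bits "flow down" into the network ID
network_id = ipaddress.ip_address(int(addr) & int(mask))

# The 0 bits in the mask correspond to the host ID
host_id = ipaddress.ip_address(int(addr) & ~int(mask) & 0xFFFFFFFF)

print(network_id)  # 192.168.0.0
print(host_id)     # 0.0.1.50
```

The bitwise AND is exactly what a router or host does to decide whether a destination is on the local subnet.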
Nowadays, though, we rarely use classful addressing, instead preferring CIDR, or Classless Inter-Domain Routing, notation, which was codified in a Request for Comments document way back in 1993, and which specifies that the subnet mask doesn't have to be on 8-bit boundaries, but can instead have a variable number of bits for the network and host IDs. For example, 192.168.1.50/16 means that the network ID is 16 bits long, but it could have more or fewer bits if we wanted to have larger or smaller network designs. So why have variable-length subnet masks? To make the best use of those 32 bits so that they fit your situation. The more bits you allocate for the network ID, the fewer you have for host IDs. So, if you need a lot of networks but not all that many hosts, you can go that direction. Conversely, if you need more hosts but fewer networks, you can do that too. So, what happens when a computer on one subnet needs to communicate with a computer on another subnet? That's where the computer's default gateway setting comes into play. It names the nearside interface on a local router, which can forward the communications to the remote subnet. In this picture, if GM-WS1 wants to communicate with GM-DC1, it must first contact its default gateway, which is the nearside router interface shown in the diagram. IP addresses can be assigned to hosts in various different ways. With static configuration, we can dictate to the host what address it must have, along with the subnet mask, default gateway, and, optionally, a preferred DNS server. More commonly, Dynamic Host Configuration Protocol, or DHCP, will lease an IP address, subnet mask, default gateway, and preferred DNS server, among other possible values, to a client requesting an address. 
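The network/host trade-off is easy to see with a little arithmetic: every bit you move from the host ID into the network ID halves the number of hosts per subnet. A quick Python illustration, again using the standard ipaddress module (the 10.0.0.0 network here is just an example, not from our lab):

```python
import ipaddress

# Usable hosts per subnet = 2^(host bits) minus the network and broadcast addresses
usable = {
    prefix: ipaddress.ip_network(f"10.0.0.0/{prefix}").num_addresses - 2
    for prefix in (8, 16, 24, 26)
}
print(usable)  # {8: 16777214, 16: 65534, 24: 254, 26: 62}
```

So a /24 gives you lots of small subnets with 254 hosts each, while a /8 gives one enormous subnet with over 16 million hosts.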
If no DHCP server is available, Windows 10 will assign itself a value in the automatic private IP addressing range shown here, also known as a link-local address, so that it can communicate on the network with other computers that have also given themselves APIPA values. Many organizations use private IP addresses for the majority of their internally networked computers. As formalized in RFC 1918, these are address ranges that are not used on the public internet and are not directly addressable from that public internet. The 10.0.0.0 class A private network, with this range of values, provides for a very large number of hosts and a relatively smaller number of networks. The 172.16 class B private range has a 12-bit network prefix (172.16.0.0/12), and the 192.168 class C private range has a 16-bit network prefix (192.168.0.0/16), for more networks and fewer hosts. Computers with these private addresses can still communicate on the public internet through a technology called NAT, Network Address Translation, but they don't deplete the global store of IPv4 addresses. IPv6 differs from IPv4 in having a much larger address space, 128 bits vs. 32, written as 8 groups of 4 hexadecimal digits. IPv6 also uses more efficient routing and can achieve so-called stateless configuration, in which a given host's address can be generated automatically with the help of an IPv6-aware router. Stateful configuration with DHCP is still supported. The host ID is always 64 bits long in the IPv6 scheme. Now, there are various different types of IPv6 addresses. Link-local addresses are used on the local subnet and have the prefix FE80. These are analogous to the automatic private IP addresses in IPv4. Unique local addresses are routable, but not on the public internet. These have the prefix FC00 and are analogous to private IPv4 address ranges. Finally, global addresses are routable on the public internet and analogous to public IPv4 addresses. 
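If you ever need to check programmatically whether an address falls into one of these special ranges, Python's standard ipaddress module already knows the RFC 1918 ranges, the APIPA range, and the IPv6 prefixes just discussed. A quick illustration (the sample addresses below are made up for the demo, apart from our lab's 172.20.1.61):

```python
import ipaddress

samples = [
    "10.1.2.3",        # class A private (10.0.0.0/8)
    "172.20.1.61",     # class B private (172.16.0.0/12), our lab range
    "169.254.10.20",   # APIPA / IPv4 link-local
    "8.8.8.8",         # public
    "fe80::1",         # IPv6 link-local
    "fc00::1",         # IPv6 unique local
]
for s in samples:
    a = ipaddress.ip_address(s)
    print(f"{s:>15}  private={a.is_private}  link_local={a.is_link_local}")
```

Note that APIPA and IPv6 link-local addresses report as both private and link-local, which matches the analogy drawn above.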
The tools that we can use to configure and troubleshoot IP addresses, both four and six, include IPCONFIG, the old standby that provides a quick look at essential settings with the /all parameter. NETSH, or Network Shell, with which we can set IP parameters such as the address and the subnet mask; we can export and import wireless profiles; and we can make firewall settings here as well. Then there's PING, which lets us check communication with other computers as long as ICMP packets are not blocked by a firewall, and PATHPING, like a combination of PING and TRACERT, which lets us examine the hops that packets take across routers. Over time, my expectation is that PowerShell will supplant all of the above with equivalent commands that may not be as familiar or as concise, but that at least have fairly descriptive names. Get-NetIPConfiguration does much of what IPCONFIG does, Test-NetConnection is a replacement for PING, Set-NetIPAddress performs some NETSH functionality, and so forth. Well now, let's take a look at using some of these tools to view and change IPv4 settings. I'm on a Globomantics workstation here, GM-WS2. So, let's open up a Command Prompt, and let's run ipconfig /all. Now, if we focus on the Ethernet adapter Denver Network, we can see that the address is a static one, DHCP Enabled is set to No, and the address is 172.20.1.61 with a 16-bit subnet mask. The Default Gateway is 172.20.1.1, and the preferred DNS server is 172.20.1.50, which is also our domain controller, GM-DC1. Now, let's open a PowerShell window from a tile on the Start menu, and here I can run Get-NetIPConfiguration -Detailed, and we can see most of the same information that we just saw with ipconfig /all. Note the InterfaceIndex of 15; we'll need to use that later if we want to modify the IP address in PowerShell. Let's bounce back to the Command Prompt for a moment, and let's see if we can communicate with our preferred DNS server with the ping command. 
And we get a quick set of four replies indicating successful communication. Over on the PowerShell side, let's type Test-NetConnection, and we get another successful result. Now let's say that we want to change the static IP address to .60 instead of .61. Now, we might try to do this in PowerShell, for example with this cmdlet, but that just returns an error, because this command is kind of counterintuitive. All you can really do with it is change some of the characteristics of an IP address, such as the subnet mask. Now, if I were king, a cmdlet named Set-NetIPAddress would actually be able to set the IP address, but I'm not king, so if we want to change the IP from 61 to 60, we have to use two different cmdlets, Remove and New, as follows. We'll type Y to confirm, and here we get a permission denied message, so we really need to have an elevated PowerShell prompt in order to make changes. You'll notice that we were able to view the configuration in a regular PowerShell prompt, but let's close this one out and start up a PowerShell prompt with admin rights, and now I think we'll have better luck. So again, Y to confirm, and another Y, and now let's set the new address. And voila, the interface now shows up with the new value. To prove it, we'll right-click the network icon in the taskbar and choose Open Network and Sharing Center, I'll click Change adapter settings, and then right-click Denver Network. If we choose Properties and double-click IPv4, we can see that the IP address is now 60. Of course, we can change it, as well as various other IP parameters, here too. The Advanced button lets us add IP addresses and gateways, change DNS settings (more on that coming up in the next clip), and specify a WINS server if you have applications that still need NetBIOS name resolution. For now, I'll just cancel my way out. Oh, by the way, the Obtain an IP address automatically radio button on the IPv4 Properties page is your way of telling Windows 10 to use DHCP or, failing that, APIPA. 
I'll click Cancel here, and we can see in the Properties page that IPv6 here can be enabled or disabled separately from IPv4. But that setting is per interface. IPv6 and IPv4 can coexist without affecting each other. Microsoft refers to this as a dual stack.
In this clip, we'll discuss name resolution in Windows, which these days basically means DNS. Name resolution is the process of associating or correlating IP addresses, which are either numerical as in IPv4 or alphanumerical as in IPv6, with friendly names, aka host names, that are easier for people to remember. Names like GM-WS1. These associations can be forward in direction, in which case we know the host name but we need the IP address, or reverse in direction, in which case we know the IP address and need the host name. The vast majority of the time, Windows uses DNS, but it can also use NetBIOS. So, given that the preferred method for name resolution is DNS, what does DNS actually do? Its primary role is to correlate IP addresses to friendly host names, and it does so by means of a hierarchical database of domain names, one that we're all familiar with from the public internet: forward lookups use A records (or, in the case of IPv6, AAAA or "quad-A" records), and reverse lookups use pointer, or PTR, records. Interestingly, there's another kind of resource record in the DNS world, the SRV record, that helps Windows systems find out where certain types of network roles live, such as domain controllers, global catalog servers, activation servers, and so forth. Now, the older NetBIOS name resolution uses a service called NetBIOS over TCP/IP, also known simply as NetBT. So here's an example of a NetBIOS name: gm-ws1. A DNS name for the same computer: gm-ws1.globomantics.local, showing the hierarchy of domains, and an IP address, again for this same machine. Now, by the way, if we provide Windows with the simple NetBIOS name gm-ws1, it will automatically append the DNS suffix that is set for that network interface in the interface's IP Properties pages. Let's take a quick look at these DNS resource records and where they live. We're on GM-WS1, a Windows 10 workstation that's been equipped with the Remote Server Administration Tools, or RSAT. 
So if I click the Start button and scroll down to Windows Administrative Tools, I'll see a lot of administrative consoles that came with the RSAT, including DNS. Now, expanding the Forward Lookup Zones node reveals a folder for globomantics.local, and if we expand that, we see a few subfolders, but we can also see a number of A records, or forward lookup records, for the various computers in the globomantics.local domain. If I click Reverse Lookup Zones and click the somewhat oddly named node beneath it, 1.20.172.in-addr.arpa (that reflects notation that harks back to the early days of the internet, when it was developed by the Advanced Research Projects Agency of the US Department of Defense), I can see a number of PTR records here, which are used for reverse lookups if I know the IP address but need the host name. Now, I'll show you some of those service locator records. So if I click _msdcs.globomantics.local in the navigation pane here, I can see various subfolders; let's navigate to dc for domain controller, and _sites, and Denver, and _tcp. So here I can see that there's a Kerberos server, that's the authentication protocol for Active Directory, at gm-dc1.globomantics.local. So that's a DC where I can log in if I'm a Windows 10 workstation, and it's in the Denver site, so Windows can tell if that's the same site that the workstation is in. Our server classes go into a lot more detail on these SRV records, but for this course it's enough to know that they're out there and they help Windows find needed services. So finally, where do all these records live? Well, they can live in text files sitting on the local hard drive, but it's more common for them to live in the Active Directory database. 
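That oddly named reverse zone follows a fixed recipe: reverse the octets and append the in-addr.arpa suffix. As a quick illustration in Python (standard ipaddress module, nothing Windows-specific), here's the full PTR name that corresponds to our domain controller's address:

```python
import ipaddress

# Reverse the octets and append the in-addr.arpa suffix
ptr_name = ipaddress.ip_address("172.20.1.50").reverse_pointer
print(ptr_name)  # 50.1.20.172.in-addr.arpa
```

The zone itself, 1.20.172.in-addr.arpa, covers just the network portion; the leftmost label, 50, is the host part held by the individual PTR record.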
So if I right-click the globomantics.local zone, for example, in the navigation pane, and if I choose Properties, I'll see the message that Data is stored in Active Directory, telling me that this is an Active Directory integrated zone living on the domain controller, in which the DNS data is actually stored in the Active Directory database. So how does DNS name resolution work? Well, when Windows starts up, it loads the contents of a text file named HOSTS into the DNS cache memory. Most organizations don't use this file, but you could put commonly accessed server data into it, for example, in a Windows 10 image. Caching occurs in DNS at both the client level and the server level to make name resolution faster, at the risk of occasionally getting a stale or outdated record. Windows 10 knows where its preferred DNS server is, and perhaps one or more alternate DNS servers, because that information is typically provided by a DHCP server along with the IP address lease. Now, if a Windows 10 machine asks about a computer that's not in the local domain, DNS servers can forward the name resolution request up in the domain structure, or down, or even out to the public internet. Finally, Dynamic DNS simply means that Windows computers that change their IP address or host name will automatically update the DNS database, so administrators don't have to do so. The tools that we have available for examining and troubleshooting name resolution include IPCONFIG, which, for example, we can use to show the contents of the DNS client cache, and clear out that client cache if we think it might contain a stale record. NSLOOKUP is actually the primary DNS troubleshooting tool, and it allows us to perform forward and reverse lookups, specify the types of resource records we'd like to see, and so forth. 
PowerShell provides more modern versions of all these commands, including Get-DnsClientCache, analogous to IPCONFIG /displaydns, Clear-DnsClientCache, which is analogous to IPCONFIG /flushdns, and Resolve-DnsName, which is analogous to NSLOOKUP, and so on. Let's now take a quick look at how to test DNS name resolution in Windows networks. We can view the current client DNS cache a couple of different ways to see what lookups have been performed recently. So I'm at a Command Prompt here, and I can simply type ipconfig /displaydns, and we can see the various recent lookups that are still in the cache. Now, if we open a PowerShell window, we can see the same basic information using Get-DnsClientCache. We can perform DNS lookups from the command line using the nslookup tool from the Command Prompt. For example, if I want to find the IP address of gm-ws2, the reply consists of two pairs of lines: the first pair tells us the host name and IP address of the DNS server that is answering the name resolution query, and the second pair of lines tells us the answer to the question we were asking, namely, that gm-ws2 is at 172.20.1.60. By the way, notice that nslookup appended the suffix globomantics.local to the name that we supplied. Where did that come from? Well, we can see the answer with ipconfig /all, which shows us on the sixth line that the DNS Suffix Search List is globomantics.local. Now, we can change that if we want in the IP Properties page, for example, if we'd like to add more suffixes for Windows to append in searching for a computer name or host name for which we do not supply the fully-qualified domain name.
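That suffix-appending behavior can be sketched in a few lines. To be clear, this is a deliberately simplified model in Python, not how the Windows resolver is actually implemented (the real client has extra rules such as suffix devolution and multi-label name handling); the search list shown is just our lab's:

```python
# Simplified model of the DNS suffix search list: a single-label name gets
# each suffix from the search list appended; a dotted name is tried as-is.
def candidate_fqdns(name, search_list=("globomantics.local",)):
    if "." in name:  # already looks fully qualified
        return [name]
    return [f"{name}.{suffix}" for suffix in search_list]

print(candidate_fqdns("gm-ws2"))
# ['gm-ws2.globomantics.local']
```

Adding more suffixes in the IP Properties page corresponds to a longer search_list, and Windows tries each resulting candidate in order.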
Windows Firewall and IPSec
Networks are great, but they're greater when they're secure, so let's turn our attention now to the built-in Windows Firewall with Advanced Security. P.S., ever notice that everything in IT is "advanced"? I remember my $6,000 1986 IBM PC AT had "Advanced" in the name, but today it would feel like an abacus. Well, we'll start with a big-picture review. The Windows Firewall, which hasn't changed for several years now, is a bidirectional one, with separate rules for inbound and outbound traffic. It uses different profiles for different types of networks, namely, domain, public, and private. It comes with a lot of predefined rules, and we can make our own from scratch if we want to. Maybe the biggest advantage of this firewall with respect to other choices in the marketplace is that we can control it via Group Policy, for example, making different settings for different sites or organizational units. And the console includes IPSec connection security rules, which work hand in hand with the firewall rules to help ensure secure communications. Let's look at two quick ways we can make a rule in the Windows Firewall. We're on our Windows 10 workstation, GM-WS1, in the Globomantics domain. Now, if I type firewall into the search field on the taskbar, I have basically two choices: the Windows Firewall Control Panel, or the Windows Firewall with Advanced Security administrative console, which shows up as an app even though it's not really an app. Let's start with the simpler interface, which is the Control Panel, and we can see that the domain profile here is the active one, showing as connected, and what I want to do here is open up the firewall so that a helpdesk technician can access the event logs on my machine. The quickest way to do this is to click the link that says Allow an app or feature through Windows Firewall. So I can scroll down here to the Remote Event Log Management entry and see that the checkboxes are all empty. 
I want to check the one for the domain profile, but we can see that it's grayed out. So first I have to click the Change settings button here at the top. Now things are live, and I can click Remote Event Log Management and click the Domain column entry there; I'll leave the Private and Public choices unchecked. There's no real need to open this app for communication on private or public networks. Now, if I click Details here, I can see a concise description mentioning that this feature uses Named Pipes and Remote Procedure Calls. So file that away in the back of your mind; I'll click OK, and OK again. Let's now move to the fancier interface, the administrative console. We can get there from here by clicking Advanced settings. If I now click Inbound Rules, I can scroll down and see that there are actually three Remote Event Log Management rules that got flipped to green by my previous checkbox. If I double-click the RPC one, I can see the details. The General tab, for example, shows me that the Action is to Allow the connection. The Programs and Services tab allows me to click a Settings button by Services, and I can see that the Windows Event Log is the relevant service here. The Scope tab shows me that any IP addresses are allowed. The Advanced tab shows that it's the Domain profile that is affected. There are other settings here too, but this should give you the idea. By the way, if I go over to the General tab, note the yellow message at the top; it's a predefined rule, so I can't just edit every possible setting within this rule. The only way to be able to edit every setting is to build a rule from scratch. Now, by the way, I could have started here in this console and just enabled these three canned rules in order to open up the firewall for Remote Event Log Management. This console provides more information and more control, but it takes a bit longer than the simpler Control Panel. 
The network location type helps supply the appropriate degree of security for the network based on the network's trustworthiness. Windows is smart enough to know if you're on a domain network, in which case it decides that the network location type should be domain. Otherwise, Windows typically prompts the computer user upon the first attempt to connect to a new network, and that decision will affect what firewall profile should be used for that network. It's an important enough decision that, if Windows doesn't have the ability to determine the location type, an administrator of the computer should make it. That decision affects whether network discovery is on or off (that's the ability to see other computers and have them see you), and likewise, file and printer sharing. The types of network location are Private, Public, and Domain. Additionally, there are two subtypes of private networks: work and home. Private networks are presumed to be safe and secure, where it's okay for computers to be able to see each other and share files and printers. The computer user manually selects a private network when first connecting to a new network, but must choose between work and home. A work network is a type of private network that does not permit joining a homegroup. Homegroups, introduced in Windows 7, make it simple to share libraries and devices on home networks without requiring frequent re-authentication. So you'd choose Home if you want to join a homegroup. A public network is presumed to be risky and insecure, where you don't necessarily want other people on the network to be able to see your computer and any file shares you've made. Networks in public places such as airports, hotels, coffee shops, etc., should normally be tagged with this location type. Domain networks are, as I noted already, selected automatically by the operating system. 
A domain environment is, like private networks, presumed to be secure, but being a managed business environment, we don't necessarily want Windows PCs in a domain to be able to browse and discover other computers and devices. So that capability may be managed by Group Policy, for example, by controlling the services that support it. Or users can turn it on at will, for example, by clicking the network icon in File Explorer, and then clicking the little yellow box that appears. Also, if you have more than one network interface in your computer, it's fine to designate different network location types for the networks that connect to the different interfaces. Furthermore, when you make a selection for a given network, you won't have to make it again the next time you connect to that same network. Changing the network location type was easy in Windows 7, but it's changed in Windows 10, and in fact, you can't actually even do it directly via the Settings applet. You have to go to Settings, Network and Internet, click the network type on the left, Wi-Fi or Ethernet, and then there's a switch labeled Find Devices and Content. Turning this on effectively sets the network type to private, and turning it off sets the network type to public. You can also view and set the location type in PowerShell: Get-NetConnectionProfile lets you view it, and Set-NetConnectionProfile lets you set it. The types are, logically enough, Public and Private, and, illogically, DomainAuthenticated instead of just Domain, so watch out for that. The netsh command also lets you set the location type, and a clunky but functional method is to forget a Wi-Fi connection, which is hidden away under Network and Internet, Wi-Fi, Manage Wi-Fi settings, and then reconnect it, specifying the desired behavior. Now, one of the features that the network location type either turns on or off is Network Discovery. 
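To recap that PowerShell approach in command form, here's a sketch; the interface alias "Ethernet" is just an example, and note the DomainAuthenticated quirk mentioned above:

```powershell
# View the location type (NetworkCategory) of each active connection;
# values are Public, Private, or DomainAuthenticated
Get-NetConnectionProfile

# Change one connection's type; "Ethernet" is an example alias -
# only Public and Private can be set manually
Set-NetConnectionProfile -InterfaceAlias "Ethernet" -NetworkCategory Private
```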
Basically, Network Discovery lets you see other computers while browsing the network, and it lets them see you as well. It actually goes further than that and permits browsing for network devices other than computers too. This feature is on by default for home and work networks, and off by default for public and domain networks. So, how can we change the state of the Network Discovery feature? There are, as usual, multiple places one can do this. The Network and Sharing Center's advanced sharing settings link is available in the desktop control panel. In the more modern interface you can go to Settings and choose Network and Internet and then click either Wi-Fi or Ethernet, and then Advanced Settings, where you can change the switch labeled Find Devices and Content. Note that this option also turns File and Printer Sharing on and off. You could also use the netsh command to enable the firewall rule group called Network Discovery as shown here, and as you might expect, you could go to the Windows Firewall console itself and enable the necessary rules. So, let's now demo setting and changing a network location type. I'm on GM-WS1, and I've recently connected up a new network adapter. Let's go to the Network and Sharing Center by right-clicking the network icon in the task bar. We can see the domain network: globomantics.local, and a new network named Network, that's a private network type. I can't actually change the network location type here, although I can view it. Closing out of Network and Sharing Center, and right-clicking PowerShell in the Start screen, and running it as an administrator, here I can type Get-NetConnectionProfile, and confirm that, yep, it's a private network. So now let's change the network location type to public. I'll go ahead and minimize PowerShell, click Settings, Network & Internet, Ethernet, and Network. Here's the Make this PC discoverable switch, which, if I turn it off, basically makes this a public network. 
If I click the Back arrow, I can now open Network and Sharing Center, and we can see that Network is now a public network. Closing out of this Control Panel, and going back to PowerShell, and tapping my up arrow to repeat the Get-NetConnectionProfile command, we can see further confirmation that "Network" is now a public network. I can change the profile back to Private using this PowerShell command, and now we can confirm it with Get-NetConnectionProfile, and voila, Network is now private again. Setting the network location type modifies both network discovery and file and print sharing, but how can we turn just network discovery on or off? Let's select the Network and Sharing Center. Now if I click Change advanced sharing settings, and open the node for private networks, we can see where it's possible to turn network discovery on or off without simultaneously turning file and print sharing on or off. Notice that network discovery is turned on for the private network type. Now, let's open the Windows Firewall with Advanced Security console. Notice we have different rules for the domain, private, and public profiles. So let's filter the display further by profile, and we'll specify Private. Now we're just looking at the rules that affect the private profile, and we can see that the Network Discovery rules are enabled. So this is how the network type, which affects network discovery, impacts the firewall rules. It's a lot easier to change network discovery with a single radio button than by modifying a dozen or so firewall rules. Going back to the Advanced sharing settings screen in the Control Panel, let's click the radio button to turn off network discovery for the private networking profile, and we'll click the Save changes button, which will work because I'm logged on with an administrative account. Now if I hop over to the firewall console, and tap F5 to refresh the display, voila, the network discovery rules are now disabled for the private profile. 
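By the way, the netsh method of enabling the Network Discovery rule group mentioned earlier looks like this, run from an elevated prompt (use enable=No to turn it back off):

```powershell
netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes
```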
IPSec improves network connection security in situations where the information flying around might be more than usually sensitive. It does this in several ways. Authentication can be required in one direction or both directions, and encryption can be required in more demanding scenarios, although IPSec encryption is famous for imposing significant performance overhead and is therefore not recommended for broad use. IPSec encryption can also be used with L2TP in a VPN situation. Domain isolation allows us to create networks within larger networks and restrict communications to a specific group of computers, sometimes called a logical network. And IPSec can also be used to build secure tunnels between routers or other computers that don't support other forms of secure tunneling.
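As a quick illustration of the authentication piece, here's a sketch of a connection security rule created in PowerShell; the rule name and subnet are made-up examples, not anything from the Globomantics network:

```powershell
# Require authentication for inbound traffic from one subnet,
# and request (but not require) it for outbound traffic
New-NetIPsecRule -DisplayName "Require Inbound Auth (example)" `
    -InboundSecurity Require -OutboundSecurity Request `
    -RemoteAddress 10.0.0.0/24
```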
Wireless networking is becoming more and more prevalent as speeds and reliability continue to improve, so it's important to know the nuances of wireless configuration in Windows 10. As per usual, some of the configuration details live in the Settings applet, and some in the desktop control panels. In the Settings applet, open the Network & Internet tile, and then click the Wi-Fi node in the navigation pane. This is where you can modify whether the wireless connection occurs automatically, whether network discovery is on, whether the connection is metered or not (i.e., you pay for bandwidth), and whether you want to enable Wi-Fi Sense and/or Hotspot 2.0. More on those in a minute. Over in the control panel, we have the good old Network and Sharing Center, where we can configure IP, discoverability, file and printer sharing, specific adapter options, and wireless network properties, much of which we'll see in the upcoming demo. Taking a closer look at what's available in the Settings applet specifically for Wi-Fi connections: in addition to turning Wi-Fi on or off, here we can show available wireless networks, at least if they're broadcasting their SSID; view a select subset of hardware properties, although we can't make any changes here; and manage known networks, which means that for each network we've already connected to at some point in the past, we can choose whether to connect to it automatically and whether it's a metered connection, and for the active connection, whether to turn network discovery on or off. Here is also where we can control Wi-Fi Sense, Hotspot 2.0, and paid Wi-Fi services purchased via the Windows Store. Wi-Fi Sense has changed a bit in the 1607 build of Windows 10; basically, it instructs Windows to connect to hotspots that are known to have good performance based on feedback received by Microsoft. To use it, you have to log in with a Microsoft account. 
There used to be a variety of configuration options for sharing hotspots with your contacts and sharing passwords via social media, but nobody really used those features and they're gone in more recent builds. Now this is basically a yes or no setting. A related capability is Hotspot 2.0, aka Passpoint, also an auto-connect Wi-Fi feature for public networks, but based on 802.11u and characterized by solid security, namely WPA2 authentication and encryption. The wireless choices in the desktop control panel are mainly stored in the Network and Sharing Center, which hasn't changed much over the years. Advanced sharing settings let you modify network discovery, file and print sharing, and so forth. Adapter settings let you modify the IP configuration, network clients and services, and interface-specific capabilities via the Configure button. The settings you need that are wireless-specific are actually buried. You have to right-click the wireless adapter and choose Status. Then you see the choices at the bottom for Properties and Wireless Properties. It's that Wireless Properties button that lets you tailor the Wi-Fi security settings, such as security type (WEP, WPA, WPA2), encryption type, and password. Now you may have heard about something in Windows 10 called Wi-Fi Direct. This is actually an API, or Application Program Interface, that permits wireless communications between devices without a wireless access point. We're talking about purposes such as file transfer, printing documents, and so forth - purposes for which Bluetooth might be too slow. Now you say, hey, that sounds like ad hoc wireless, and you're right, except that Wi-Fi Direct uses an easier device discovery process. Basically, turn the feature on at your printer, for example, and then search for printers in Windows just as you would normally, providing a PIN when you connect. This is more secure, and it requires a compatible adapter. 
I'll show you how to identify whether your wireless adapter is one of the compatible ones. By the way, starting with Windows 8.1, Microsoft has stated that future versions of Windows might not support ad hoc wireless and that you should use Wi-Fi Direct instead. What tools do we have available to us for working with Wi-Fi in Windows 10? Well, the older one is good old NETSH, which is getting a bit long in the tooth but still quite useful. With this command-line tool we can export wireless profiles, add an existing wireless profile to a PC's configuration, and create a mobile hotspot, although there's an easier way to do that now, as we'll see. PowerShell, of course, is the more modern alternative, with cmdlets to examine details of a network interface, enable an interface, disable an interface, and so forth. Let's look at where one can make all these wireless settings in Windows 10. I'm here on a Globomantics laptop, and starting with the Settings applet, if I click the Network & Internet tile, we can see the Status page that was added in the Anniversary Update build of Windows 10. A lot of what's here duplicates what's in the Network and Sharing Center, but notice the Network reset link down at the bottom here. This wipes your network interface settings, removes and reinstalls your adapter software, and schedules a restart, which makes it a sort of last resort for troubleshooting network problems. So just thought you should be aware of that. Clicking Wi-Fi over here in the navigation pane lets me view the available networks, like this; I can view hardware properties, like this; and I can manage known networks, like this, which lets me click a recently visited network, and view its properties, or forget the connection, perhaps to reestablish it later. Unfortunately, the properties are very limited here, including only whether to connect automatically and whether to designate the connection as metered, that is, pay as you go. 
If I want to see more properties I have to go elsewhere; however, if I click the active connection, Frieda in this case, and go to Properties, I can also change the discoverability setting. To change wireless security settings, I need to visit the Network and Sharing Center. I can link to that from the Status page or from the Wi-Fi page, and of course we can get there directly from the control panel too, or by using the search facility. Now if I choose Change adapter settings, I can right-click the wireless interface and choose Properties to see the usual network interface card settings: network client protocols including IP, a Configure button for detailed options that are specific to this particular driver, and so forth. But where are the wireless security settings? Well, that's a secret, but I'll share it with you. We have to right-click the wireless adapter and choose Status instead of Properties, and now we can see the secret button for Wireless Properties. The Connection tab shows the auto-connect choices, and the Security tab is where we can change the security and encryption settings and the connection password. Notice the dropdown protocol list: WPA2 comes in two flavors, Enterprise being appropriate for organizations that have a RADIUS server, and Personal for networks that don't. RADIUS is just a centralized authentication server for remote access connections. Whatever we set here must match the settings on the wireless access point. So let's cancel out of here. Now we need to see how to determine whether the wireless adapter can work with Wi-Fi Direct; a PowerShell session will work for this. I can right-click the Start button and on this system choose Windows PowerShell. I could equally do this from a command prompt. I'm just going to type ipconfig /all, and if I scroll up a bit, notice that the third entry here is for a Microsoft Wi-Fi Direct Virtual Adapter. That's confirmation that this adapter should work with Wi-Fi Direct devices. 
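The NETSH profile operations mentioned a few minutes ago look something like this; the profile name matches the Frieda network from the demo, and the folder path and exported file name are examples:

```powershell
# List the wireless profiles saved on this PC
netsh wlan show profiles

# Export one profile to XML, including the key in readable form
netsh wlan export profile name="Frieda" folder=C:\WiFiProfiles key=clear

# Import a previously exported profile on another PC
netsh wlan add profile filename="C:\WiFiProfiles\Wi-Fi-Frieda.xml"
```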
Now the last thing we'll look at here is the mobile hotspot. Now it used to be that we had to use netsh to configure a Windows 10 system as a mobile hotspot to which other users can connect, now it's a simpler affair. I'll just click Mobile hotspot on the left, and I can click Edit to name the network whatever we want, and enter in a network password of our choosing, and then, we'll save that out, and now we can just change the toggle from Off to On, and you can now connect to this Windows 10 system from a phone, tablet, etc. And that's a quick look at wireless networking in Windows 10.
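For reference, the old NETSH way of standing up a hotspot is still around; the SSID and key below are examples of your choosing:

```powershell
# Define and start a hosted network (the pre-Settings-applet method)
netsh wlan set hostednetwork mode=allow ssid=GloboHotspot key=Gl0b0Pass1
netsh wlan start hostednetwork

# Check the hotspot's status and how many clients are connected
netsh wlan show hostednetwork
```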
VPNs and DirectAccess
Up to now we've been chatting mostly about local networking, but this clip discusses remote networking with VPNs or DirectAccess. Now, when we define Virtual Private Network, the virtual part means that a VPN is not a real network in the sense that all participants are using the same native protocols. A native connection to the internet is insufficient to protect and authenticate business traffic, so we're creating a virtual network, in software, that exists inside the native public internet. The private part means that VPN communications get encrypted so that any observer or eavesdropper on the public internet will not be able to glean any useful information from the VPN. A remote access VPN allows a remote system to connect to the corporate network and access multiple systems on that network, unlike a site-to-site VPN, which actually links two networks, and which we won't be chatting about in this course. VPNs have three primary components. First is the tunneling or encapsulation component. This is software that takes native-format data and puts it into a container so that it can traverse a public network. The outer container or wrapper includes routing info to help the data reach the private network. Encapsulation helps us work around the potential problem of firewalls on the public internet blocking the kind of traffic we wish to generate. The second major component is the authentication piece, which verifies the identity of the communicators so that there's no risk of impersonation, and the third component encrypts content so that it can traverse the public internet without risk of exposure. The complicated thing about VPNs is that sometimes these components co-exist in what we call VPN protocols; that is to say, a VPN protocol might contain specifications for both the tunneling component and the encryption component. Still, the three primary components are a useful way to think about VPN features. 
The tunneling and encryption functions are often treated in combination. So, the sender of a block of data will encrypt that data for security and then place the encrypted data block inside a new wrapper that contains routing details. The tunneling protocols in Server 2016, some of which incorporate the encryption piece, include the following: PPTP, or Point-to-Point Tunneling Protocol, which has been around for many years, but which is no longer really recommended in light of more secure protocols. L2TP/IPsec, which is short for Layer 2 Tunneling Protocol, featuring IPSec for encryption and compatible with Vista and newer. There's SSTP, or Secure Socket Tunneling Protocol, also supported on Vista and newer, characterized by compatibility with most firewalls because it uses port 443, which is generally open. And IKEv2, that's Internet Key Exchange version 2, which is the default protocol in Windows 7 and newer and the one that supports something called VPN Reconnect, which improves the resilience of connections when a client is moving around between access points. The authentication protocols you should know include PAP, for Password Authentication Protocol, which uses plain-text passwords and is not recommended nowadays. CHAP, for Challenge Handshake Authentication Protocol. MSCHAPv2, Microsoft's improvement on CHAP, providing for bidirectional authentication. And EAP/PEAP, Extensible Authentication Protocol and Protected Extensible Authentication Protocol, a highly flexible scheme in which clients and servers can negotiate the specific authentication scheme based on their respective capabilities, and which can use certificates for strong security. The preferred manual method for configuring a VPN connection in Windows 10 involves going to the Settings applet, choosing Network & Internet, and VPN. You could also get there via the Action Center. So the data that you'll need to provide once you get there is the VPN provider. 
Now, this will only show Windows unless you've installed an app from a third-party VPN provider. The connection name is how the VPN appears in the GUI, and if you have multiple VPNs that could be active simultaneously, this will help users distinguish them. Then you specify the server name or IP address, the type of VPN, which could be Automatic if you want to let Windows 10 try to figure it out, the type of authentication, the username and password if applicable, and whether you wish to have Windows remember the sign-on credentials for speedier logon next time. A relatively new feature in the remote access VPN world is the so-called app-triggered VPN, which fires up a VPN link whenever the user runs a specific application. Unfortunately, it's for non-domain computers only. You have to use PowerShell to set it up in the absence of something like Microsoft Intune or System Center Configuration Manager, which we're going to discuss more in a second. So in PowerShell we can use the amazingly long cmdlet Add-VpnConnectionTriggerApplication, and you have to turn on a feature called split tunneling, which you can also do via PowerShell. Split tunneling just means that any user-initiated traffic not intended for the corporate network does not flow through the VPN link, but rather through another pathway. We can configure something called a VPN profile and deploy it via Microsoft Intune or System Center Configuration Manager. The features that we can control via such a profile include that app-triggered VPN capability just mentioned; also network traffic filters, whereby we can limit VPN access to specific apps and/or restrict VPN traffic to specific protocols, ports, or IP address ranges. There's also an always-on capability that can be triggered when the user logs on or when the network state changes, for example, when the user leaves the corporate wireless network. 
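Here's a sketch of those two app-trigger steps in PowerShell; the connection name matches the demo coming up, but the application path is purely hypothetical:

```powershell
# Fire up the VPN whenever this (hypothetical) app launches
Add-VpnConnectionTriggerApplication -ConnectionName "Globo-VPN" `
    -ApplicationID "C:\Apps\GloboClient.exe"

# App triggering requires split tunneling on the connection
Set-VpnConnection -Name "Globo-VPN" -SplitTunneling $True
```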
We could also define an exception for the always-on capability so that if a trusted network is present, such as globomantics.local, the VPN will not be activated. Well, that's an overview of some VPN basics. The subject is considerably deeper, and if you're interested in plumbing the depths more thoroughly, check out my Pluralsight course titled Implementing Windows Server 2016 Connectivity and Remote Access. Now, let's take a quick look at where to set up a VPN connection in Windows 10. Setting up a VPN connection on a Windows 10 client is fairly easy. We'll begin by visiting the Settings applet, Network & Internet, and VPN. There are a couple of settings here that apply to all your VPN connections; metered networks, by the way, are those that you pay for, such as broadband cards. Let's click the Add a VPN connection link here at the top, where we'll specify the VPN provider; this will typically be Windows unless you've received provider software from another vendor. The connection name: we'll use Globo-VPN because Globomantics is too much typing. And then we'll enter the server name or address. We can specify the type of VPN; Automatic means we'll let Windows 10 try different protocols starting with IKEv2, or we can pick the protocol that we want if we know exactly what we're connecting into. We can also specify the type of sign-in info: smart card, password, or certificate, and if we want, if we scroll down a bit, we can provide credentials. Clicking Save will save the connection. Now, if it's necessary to fine-tune the settings, we might have a bit more work to do. So at the VPN page, I see the Globo-VPN icon up here at the top; I can choose Advanced options, and edit connection properties, and come back here to where I was before. There are also some proxy settings that I can control here, and if I want to change the security settings, I can come over here to Adapter options to bounce over to the desktop control panel. 
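Incidentally, the same connection can be created in a script rather than through the Settings applet; this is a sketch, and the server address is a placeholder:

```powershell
# Scripted equivalent of the Add a VPN connection dialog
Add-VpnConnection -Name "Globo-VPN" -ServerAddress "vpn.globomantics.local" `
    -TunnelType Ikev2 -AuthenticationMethod Eap -RememberCredential
```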
So there's Globo-VPN set up for IKEv2. I can choose its properties and bounce over, for example, to the Security tab here if I want to change the encryption settings, or perhaps some of the authentication settings to permit MSCHAPv2, for example, if I'm not using a certificate-based VPN. DirectAccess is like a VPN. It's a PC-to-network connection that traverses the public internet, but it's better in that, for starters, users don't have to establish a DirectAccess connection every time they want to communicate with the mothership. Once it's set up, it connects automatically. Windows can tell whether it's running on a device that's on the internal network or a device that's outside the corporate LAN, and adjust accordingly. If a Windows 10 computer is external but internet connectivity exists, then DirectAccess is active. It's also bidirectional by default, meaning that the client can receive Windows updates and group policy settings as if it were on the LAN. There is a manage-out option, however, in which the user doesn't gain access to the corporate network but administrators can use DirectAccess for managing user systems when they're outside on the internet. Another benefit is that DirectAccess provides finer-grained control over which systems can access the internal network. Let's look at the major components of DirectAccess. First, the internet. If we don't have that, we don't connect. Then of course we have to have something to remote into: our Globomantics corporate network. The gatekeeper on the edge is our DirectAccess server; that's the system our external systems are going to connect to and through. And we have a Windows 10 client, shown here outside the LAN but connected to the internet. As with a VPN, DirectAccess sets up a protected private tunnel across the public internet between the external client and the DirectAccess server. Then, the DirectAccess server passes traffic to systems on the corporate network. 
When we get to the locality features, we see where DirectAccess departs from typical VPN technology. Notice this animal at the upper-right called a Network Location Server, or NLS. A DirectAccess client looks for that machine to figure out whether it's inside the corporate network or outside it. For this method to work, by the way, the NLS must not have a DNS entry on the public internet; it should only be resolvable within the private network. So on the right, if the client is inside, then the NLS is found, no tunneling is performed, the name resolution policy table, which contains rules for which DNS server to use, is ignored, and the domain firewall profile is set on the client. Now on the left, if the client is outside, then the NLS is not found, the client resolves msftncsi.com to ensure that it's on the public internet, the tunnel gets created, the NRPT rules are used, and either the public or private firewall profile gets applied. By the way, any host names that do not match an NRPT rule will be resolved by the DNS server already configured for the network interface, which is usually a public DNS server. So the locality feature is really the special sauce that makes DirectAccess preferable to regular VPNs. The requirements for DirectAccess range from things that we already have at Globomantics to some that we'll need to set up. To start with, we need Active Directory, including DNS and group policy. A setup wizard will build the GPOs that we need. We also need clients that support IPv6, although as long as we're using Server 2012 or newer, the internal corporate network can be IPv4. Windows 10 supports IPv6 by default, so that should be okay. We also need a DirectAccess server, so we'll have to set that up at Globomantics, but if we're using Server 2012 or newer, the setup is easier than it used to be in that we don't have to set up two consecutive public IPv4 addresses. 
The clients have to be running the Enterprise edition of Windows 7 or newer; no problem there. We don't already have an NLS, but that's become easier to set up than it used to be. It doesn't necessarily have to be a separate computer, and we'd like to set one up at each site, so for Globomantics that means one in Denver and one in Toronto. And if we're using Server 2012 R2, no SSL certificate is required. Now, if we want a full-functionality implementation of DirectAccess, we should have a public key infrastructure in place, and we have to have one if we have Windows 7 clients, which must use certificates. We can manage and troubleshoot DirectAccess via the command prompt. NETSH, our networking Swiss Army knife for many years, offers the netsh dnsclient show state command; look for the values listed for DirectAccess settings and Network Location Behavior. The netsh namespace show effectivepolicy command will display the name resolution policy table if the machine is outside the corporate network, and, somewhat confusingly, will simply advise that the NRPT settings are not in effect if the client is inside the corporate network. On the PowerShell side of things, Get-DAConnectionStatus will give you some network location information, but only if the Network Connectivity Assistant service is running. Now, you might lean towards a VPN solution rather than DirectAccess in certain circumstances. For example, if there's no IPv6 support in client applications; if you need support for Windows XP clients, or for Windows 7 clients in the case where your organization has not fielded a public key infrastructure; if you need support for non-Windows clients; if you're using some edition of Windows that is not the Enterprise edition, which DirectAccess requires; or if you don't have a strong need for a convenient way to specify resources that should be off limits to remote access users. And believe it or not, that concludes our monster module on configuring core networking. 
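For reference, the troubleshooting commands just mentioned are entered like so, on the DirectAccess client:

```powershell
# Look for the DirectAccess settings and Network Location Behavior values
netsh dnsclient show state

# Display the NRPT (meaningful when outside the corporate network)
netsh namespace show effectivepolicy

# Requires the Network Connectivity Assistant service to be running
Get-DAConnectionStatus
```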
When you're ready for it, the next module covers physical storage and it contains an interesting mixture of old and new technologies, just like most parts of Windows 10.
Configuring Physical Storage
Disks, Volumes, and File Systems
In this module, we're going to take a look at physical storage. Metaphysical storage is outside my expertise. Specifically, the topics here include disks, volumes, and file systems; an overview of storage tools, including Microsoft management consoles, PowerShell, and the older DISKPART program; virtual hard drive storage formats, which have uses beyond Hyper-V; the newer Storage Spaces technology, which is available in Windows 10 as well as in Windows Server 2016; and removable storage, from CDs to USB devices. Our first clip concerns disks, volumes, and file systems - terms that we often use somewhat loosely, and by we, I'm including Microsoft - but it's helpful to define them a little more precisely before we get into the various options in Windows 10 for arranging storage devices. Now, I realize that most Windows 10 systems will use a single disk, a single volume, and the NTFS file system, but there's a lot more we can do with physical storage, and a CAD workstation might need a different storage setup than a tablet PC. So let's consider the definition of a disk: a storage device that we can slice and dice into smaller pieces called partitions and volumes. Disks can be physical or virtual, a virtual disk being a file that behaves like a physical disk when used with tools, such as the Disk Management console, that are virtual-disk aware. These days, we have various types of disks that we can use with Windows 10. Traditional spinning disks have magnetic platters over which read/write heads move radially while the disks spin underneath them. These are still great choices for high-capacity scenarios. But solid-state disks, or SSDs, which use non-volatile semiconductor storage, are continuing to improve in terms of capacity, reliability, and cost. They are noticeably faster than spinning disks in terms of random access; they may or may not be faster in terms of sustained IO. Reliability is now on a par with spinning disks. 
And maybe even higher than spinning disks in portable systems, because SSDs aren't as susceptible to damage from being knocked around in airplane overhead bins and car backseats. Hybrid disks combine a small SSD with a larger-capacity spinning disk. These are common in tablets, but as SSDs continue to decrease in cost, this category will diminish. And of course, we still have removable disks such as optical and USB, although optical disks are becoming rare as more software is available via direct download and flash drives are preferred for transferring files between computers. Okay, so what's a partition? Well, it's a chunk of space carved out from a disk, and it can be treated as though it were a separate disk. For example, we can format a partition with a file system. A partition might occupy an entire disk, or you might have multiple partitions on a single physical disk. You can usually boot into a different operating system and still see partitions. For example, people often boot to Linux in order to perform repair or management of partitions created on a Windows computer. Windows 10 supports two types of partitions. The Master Boot Record or MBR flavor is the older kind and permits up to four partitions per physical disk, with an upper limit of two terabytes per partition. MBR partitions are compatible with both BIOS and UEFI machines. GUID Partition Table or GPT is the newer arrangement and permits up to 128 partitions per disk, with an upper limit in Windows of 256 terabytes per partition. GPT-style partitions are compatible with UEFI systems and 64-bit versions of Windows and are preferable in those scenarios. Now, the last structure we'll examine is the volume, which is a bit more complicated. Essentially, a volume is a chunk of space allocated from one partition or from multiple partitions that is configured by a specific operating system and formatted with a file system. 
A simple volume lives on a single disk, and a complex volume is built from multiple disks. Now, unlike partitions, volumes are built by the OS, and other operating systems typically cannot see volumes created in Windows. In the Windows world, if we have a single, simple, formatted partition, Microsoft may refer to that partition as a volume, so the terminology overlaps a bit in this case. The types of volumes that we can create and manipulate in the Windows Disk Management console include simple volumes, using a so-called basic disk, built from a single disk but potentially using non-neighboring chunks of space. Non-contiguous is the ten-dollar word. Mirrored volumes require creation of a so-called dynamic disk. You can change to dynamic in the Disk Management console; actually, all of the multi-disk volume types in Windows require dynamic disks. Mirrored volumes provide fault tolerance by mirroring all writes on multiple disks, so the failure of one disk causes no data loss. Spanned volumes are rarely used, but their purpose is to minimize slack space and the resulting capacity loss. In a spanned volume, data fills up completely on one disk before spilling over to the next disk in the span set. There's no fault tolerance here. Striped volumes spread I/O across multiple disks for speed purposes, but again provide no fault tolerance. And finally, RAID-5 volumes, RAID standing for redundant array of inexpensive disks, can provide both speed improvements and fault tolerance in a way that doesn't eat up as much disk space as mirrored volumes. Unfortunately, you cannot use this on Windows 10; it's just for servers. But Windows 10 has something newer that's basically RAID 5, and we'll discuss storage spaces in just a few minutes. In fact, generally speaking, the kinds of volumes that we can create with dynamic disks in the Disk Management console are now pretty much obsolete, given the inclusion of the newer and more flexible scheme embodied in storage spaces.
So after we obtain a disk, partition it, and create a volume on it, we need to format it with a file system. This is a system for naming, placing, and organizing files on a volume for purposes of reading, writing, editing, copying, deleting, forgetting about over time, and so forth. That's at least the bare minimum of what file systems facilitate, but they may also include features to support security, redundancy, searching, compression, etc., as NTFS does, for example. When we format a volume, we get it ready for use with a specific file system. Formatting generally wipes any existing content on the volume. Now, the file systems available with Windows 10 include ReFS, NTFS, FAT32, and exFAT. The table here summarizes some of the upper limits in terms of volume size and file size. Note that the file size limit for NTFS applies to Windows 8 and newer; it was just 16 terabytes under XP and Windows 7. Also note that the maximum volume size for FAT32 reflects the limitation of the Disk Management formatting code; it can actually go up to two terabytes with other utilities. Now, let's consider when we might use these different file systems. The ReFS file system, the lowercase e being not a typo, since the acronym stands for Resilient File System, is usable in Windows 10 only in the single scenario of mirrored storage spaces. So it's not a replacement for NTFS by any means. It is robust, and it has the ability to scrub disk problems on the fly, so it never needs a chkdsk run. exFAT is a format that Microsoft developed for flash drives that may need to hold files larger than FAT32's four-gigabyte limit. A more common format for flash drives is FAT32, which has no access controls à la NTFS but is in widespread use. Well, let's now take a look at disks, partitions, and volumes as they appear in the Disk Management console. I'm on a Globomantics workstation, GMWS1, where I've just added a new hard drive.
So let's right-click the Start button and choose Disk Management. Note that Disk Management requires that the new disk be initialized before the Windows logical disk manager can work with it. You can see that we can choose either the MBR or the GPT partition style, as we just discussed. We'll leave the default of GPT and click OK. Now we can see the new disk as Disk 1 in Disk Management, with 127 gig or so of unallocated space. It's a basic disk, but if we wanted to use it in a multi-disk volume, we could make it dynamic by right-clicking and choosing Convert to Dynamic Disk. Now, if I look up at Disk 0 here in the console, we can see that it has three partitions, only one of which is directly accessible, and that's the C drive. Here's a recovery partition, an EFI system partition, and the C partition, which by the way is also a volume, and which has been formatted with the NTFS file system. So a single disk, Disk 0, contains three partitions, each of which contains a simple volume.
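If you'd rather script that same initialization instead of clicking through the console, here's a minimal sketch in PowerShell. The disk number 1 and the GPT style come from the demo; on your own machine, check the output of Get-Disk first, because nuking the wrong disk is unrecoverable.

```powershell
# List all disks; a brand-new disk shows a PartitionStyle of RAW
Get-Disk

# Initialize Disk 1 with the GPT partition style, as in the demo
Initialize-Disk -Number 1 -PartitionStyle GPT
```

These cmdlets require an elevated PowerShell session, just like the Disk Management console requires admin rights.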
MMC, PowerShell, and DISKPART
In this clip, we'll look at some of the tools available in Windows 10 for managing physical storage and give you a taste of how you might use these tools. The Microsoft management consoles that pertain to physical storage include Device Manager, which we can use to view hardware details such as the disk model number and information about the disk device drivers, cache, and policies. For more on Device Manager, check out the Implementing Windows 10 course in this learning track; we won't repeat that content here. The Disk Management console, which we can invoke via the Start button's context menu or through the Computer Management console, is useful for initializing, partitioning, and formatting disks, as well as for creating dynamic disk volumes, although those are a bit obsolete now with the advent of storage spaces. And speaking of storage spaces, its control panel is where you can create storage pools and virtual drives and perform more modern mirroring and fault tolerance. We've got a whole clip on storage spaces coming up later in this module. But for now, let's look at a few of the management tasks that we can perform with the Disk Management console in Windows 10. So here I am on GMWS2, workstation 2, running Windows 10. Let's look at some of the actions that we can perform in Disk Management. If I right-click the C volume here, I can change the drive letter and paths if I want to access it with a different drive letter. I can also click Shrink Volume, and Windows will analyze how much I can shrink the volume by, what Microsoft calls the shrink space. By the way, third-party tools can usually do a better job here than Windows. Let's shrink it by 200 megabytes. Another right-click menu option is to add a mirror, which lets me select Disk 1 for the creation of a two-disk mirror set to provide fault tolerance. Normally, these days, I'd set up a mirror set in storage spaces, but the Disk Management method works with more versions of Windows.
Now, if I right-click the unallocated space on Disk 1, I can create a new simple volume. I'll click Next at the wizard and specify a volume size of 200 megabytes. Next, I can give the volume a drive letter or have Windows mount it in an empty NTFS folder; we'll assign it a drive letter. And next, I can format the volume. A quick format saves a lot of time, but I'd do a complete format if the drive is a bit long in the tooth. Notice that my choices here are NTFS, FAT, or FAT32; ReFS is not available here. So after a brief delay, we see the new volume E in the console. Now, if I were to change Disk 1 from a basic disk to a dynamic disk, I could use it to create multi-disk volumes. And now Disk 1 is a dynamic disk, and I could create stripe sets, spanned sets, and mirror sets using Disk 1. Let's take a quick peek at the Action menu here. One of the interesting things is the ability to create a VHD or attach an existing VHD, kind of like a mount operation. We'll be chatting about VHD and VHDX files in an upcoming clip. Now, as a final note on this console, you'll see that it does not have the ability to connect to other computers, but we can give it that capability, assuming that we have set up the firewall rules first. I've already done that here, basically enabling the rules for Remote Volume Management. And once that's been done, what I can do is right-click Start, choose Run, and type mmc. Accept the User Account Control prompt, and then choose File, Add/Remove Snap-in, and we'll select Disk Management and Add. Now we can specify the computer on which we'd like to focus, for example, our domain controller, GM-DC1. Finish and OK. And hey presto, here's a console pointing to GM-DC1. Just be aware that you don't have the full range of options that you'd have when running the console locally. For example, if I right-click the C drive, you can see an abbreviated context menu. Now, we can use PowerShell to manage physical storage as well.
Add-PartitionAccessPath lets us specify a drive letter or folder to access a partition. Clear-Disk, warning, scary, will blow away all the partitions and data and even remove the disk's initialization. Format-Volume leaves the partitioning in place but formats the designated volume with a specified file system. Get-Disk and Set-Disk allow you to see a few details of the disk in the case of Get-, such as its health status, online or offline status, partition style, and so forth, and to modify some of those details in the case of Set-, for example, if you wanted to take a disk offline. Get-Partition shows a list of partitions for all drives by default, including partition types: recovery, system, basic, and so forth. The New- and Remove- prefixes are self-explanatory. Get-Volume shows file system and size information for volumes on the system; Set-Volume just renames a volume, and New-Volume creates one. Initialize-Disk sets up a new disk so that Windows can use it, and Resize-Partition lets you perform that action by supplying the disk number, partition number, and a size value. Well, let's see how some of these cmdlets work. I'm back on GMWS2, so let's type PowerShell into the search field here and fire it up as an administrator so I can make changes. Let's type Get-Disk first, and now Get-Partition. Note a quirk of Windows since the Jurassic era: disk numbers start at zero, but partition numbers start at one. Now, if I want to create a new basic partition on Disk 1, give it a drive letter of Z, and format it to NTFS, I can do so with these commands. And I'll format it. And now let's bounce over to Disk Management, and we can see that it worked. Now, let's delete the partition so that our Disk 1 is again pristine. The older command-line tool DISKPART is still available, and that's a good thing, because some companies have created DISKPART scripts for setting up disks in a specific way.
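The on-screen commands in that PowerShell demo aren't spelled out in the narration, so here's a plausible reconstruction. The 200-megabyte size, drive letter Z, and NTFS format come from the demo; the exact parameter choices are my assumption.

```powershell
# Create a 200 MB basic partition on Disk 1 and give it drive letter Z
New-Partition -DiskNumber 1 -Size 200MB -DriveLetter Z

# Format the new volume with NTFS
Format-Volume -DriveLetter Z -FileSystem NTFS

# Afterward, remove the partition so Disk 1 is pristine again
Remove-Partition -DriveLetter Z -Confirm:$false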
As with the PowerShell commands, let's be careful, because you can easily nuke a disk if you get a command wrong. Initially, you'll probably want to list disks, volumes, and partitions, for example, to get their numbers and sizes. When you select a disk, that designates it as the working disk for future commands. The clean command is just as scary here as it was with PowerShell. Create partition is self-explanatory. Attach vdisk lets you mount a VHD or VHDX file for processing. Convert allows you to convert between basic and dynamic, as well as between the MBR and GPT partition styles. Create and delete work with partitions, volumes, and virtual disks, and format sets up a volume with a file system using the fs= parameter. Let's see what DISKPART looks like. Back on GMWS2, we'll fire up an administrative command prompt, and I'll type diskpart to get into the tool. List disk shows me the disks, and list partition, which you can abbreviate, fusses at me for not telling the tool which disk I'd like to use, so I'll pick Disk 1. And now we see the sole partition on Disk 1. Let's create a new 200-megabyte partition, as we did with PowerShell. The new partition is now automatically the selected partition, so let's format it to NTFS. And as before, we can bounce out to Disk Management to verify the result.
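For reference, the DISKPART session from that demo would look something like the following. Disk 1 and the 200 MB size come from the narration; the quick flag on the format command is my addition to speed things up.

```text
DISKPART> list disk
DISKPART> select disk 1
DISKPART> list partition
DISKPART> create partition primary size=200
DISKPART> format fs=ntfs quick
DISKPART> exit
```

Because these commands could be saved in a script file and fed to DISKPART with its /s switch, sequences like this are how companies standardize disk setup.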
VHD and VHDX Storage
In this clip, we'll look at a storage type associated with virtual machines but, as we'll see, not limited to that usage scenario. Let's look at the formats first. VHD has a size limit of two terabytes and supports older systems such as Windows 7. VHDX is the default choice; it permits a maximum size of 64 terabytes but only supports Windows 8 and newer. It's more reliable, especially for dynamically expanding disks. Unless you need to support operating systems before Windows 8, I'd be inclined to use VHDX. Incidentally, you can convert between VHD and VHDX, as long as you're living within the size constraints when going from VHDX to VHD. So, what can we use these formats for? Well, of course, they work with Windows 10's Client Hyper-V feature to provide physical storage for virtual machines. But also, the VHDX format is used by the included Backup and Restore program from good old Windows 7 when you're making a complete or image backup. And there's one other use for these formats that you may have never explored: you can use them for natively booting Windows, if you have a need for such a thing. You can boot Windows Vista, 7, 8, or 10 this way, for example, if you're a support technician who may need to jump into an alternate universe during a troubleshooting session. This basically replicates the old dual-boot scenario, except without the requirement of installing Windows on a separate partition. Handy if Client Hyper-V is not an option for some reason. So, how do you build a VHD or VHDX file? It's actually pretty easy, and as usual, there are multiple ways to do it. Within Hyper-V Manager, choose New and Hard Disk. Within the Disk Management console, choose Action and Create VHD. At the command prompt, we can use DISKPART and the create command, and at the PowerShell prompt, we have the New-VHD cmdlet. Now, when creating a virtual disk, we can go fixed, dynamic, or differencing. Fixed disks use the maximum allocated space immediately.
They can be faster, and they tend not to fragment over time. Dynamically expanding disks start small and grow as required over time. The VHDX format is best if you go this route. Differencing disks link to a parent and represent only the changes made from that parent image. These are really useful if you need several VMs that use a hard drive whose contents are largely the same but vary only slightly from one VM to another. You could use differencing disks in your demo environment if you need to reduce the disk space used, but that's a bit beyond the scope of our discussion here, and it does complicate the setup a little bit. PowerShell offers a variety of VHD-related cmdlets, but if you don't have Hyper-V installed on Windows 10, you'd need to add it, along with the Hyper-V module for Windows PowerShell. You can do that via the Windows Features control panel. That's a pretty heavy add, so you may prefer to just use Disk Management, or even DISKPART, for occasional VHDX chores. The Convert-VHD cmdlet in PowerShell can migrate a VHD to a VHDX or vice versa; it can also create a VHD from a physical drive. Mount-VHD makes a VHD or VHDX file accessible via a drive letter; the Disk Management console uses the verb attach instead of mount, but it's the same concept. New-VHD is pretty self-explanatory; the syntax lets you choose fixed or dynamic and specify a size. Optimize-VHD performs a compact operation for dynamically expanding disks, and Resize-VHD resizes a virtual disk, although be aware that size reduction is only permitted for VHDX files. Also note that despite the names of these cmdlets, they generally work fine on both the VHDX and VHD file formats. Okay, so let's build a VHDX quickly using the Disk Management console. Here I am on GMWS2, a Globomantics Windows 10 workstation. The procedure to create a VHDX is pretty straightforward, and I don't have to have Hyper-V installed, as I do with PowerShell.
So I go to the Action menu, and I can choose Create VHD, and that will bring up the dialog box. We'll specify for the location C:\VHDs\newdisk.vhdx. That's the location; format, VHDX; dynamically expanding. We'll set the size to be 200 megabytes and click OK. Now, if I scroll down in the console, I see Disk 2 representing my new VHDX. If I open File Explorer, I can navigate to the VHDs folder and see the new disk. Its size is only four megabytes, because it's the dynamically expanding type and I haven't put anything on it yet. Back in Disk Management, I can initialize the disk as GPT and then create a new simple volume. Specify the maximum size, and give it a drive letter of V. We'll do a quick NTFS format and finish, and it goes quickly. Now I can bounce out to File Explorer and copy a few files to that VHD, for example, the contents of the file test folder. Now, let's take a look at how big that is. Pretty small, just 12K on disk. I can now go back to Disk Management, right-click the Disk 2 tile, and choose Detach VHD, which basically unmounts it from the file system. Windows 10 reminds me of the location, and I'll click OK. Let's bounce back over to File Explorer, see VHDs, and there's newdisk.vhdx, which is now about 135 megabytes in size, even though the file test folder was less than one megabyte, which tells you that Microsoft has some work to do on this dynamically expanding disk technology.
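If you do have the Hyper-V module installed, the same VHDX could be scripted in PowerShell along these lines. The path, 200 MB size, and dynamically expanding type mirror the demo; the initialization and formatting steps are my reconstruction of what the wizard does for you.

```powershell
# Create a 200 MB dynamically expanding VHDX (requires the Hyper-V module)
New-VHD -Path 'C:\VHDs\newdisk.vhdx' -SizeBytes 200MB -Dynamic

# Mount it so Windows sees it as a disk, then initialize, partition, and format
$disk = Mount-VHD -Path 'C:\VHDs\newdisk.vhdx' -Passthru | Get-Disk
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter V |
    Format-Volume -FileSystem NTFS

# Detach when finished, the equivalent of Detach VHD in Disk Management
Dismount-VHD -Path 'C:\VHDs\newdisk.vhdx'
```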
Storage Spaces
The Storage Spaces feature lets us connect a variety of drives, of different characteristics, and create a shared pool of storage from which we can carve out virtual drives. Storage Spaces is a little bit like a storage area network on a shoestring budget. We can pool together different types of physical drives; we can mix and match just about anything in Storage Spaces, and you can mix internal and external drives as well as drives of different sizes. You can't use iSCSI or RAID, though, because those technologies add a layer of abstraction between Storage Spaces and the physical drives. We can then create different types of virtual disks from the pool: simple, mirror sets, and parity sets, which are like RAID 5 arrays. You remember, the thing that we couldn't build using Disk Management on Windows 10. It's relatively easy to add storage when we need it, although there are some nuances if you have multi-disk sets, such as parity sets. Incidentally, this is not a brand-new feature in Windows 10, but it's fairly new, having been introduced in Windows 8 and Server 2012. You'll use it more often in server environments, but there's no law against using it on a high-performance Windows 10 workstation. You might want to set up a home media server to store large music or movie files, you might have a grab-bag of old disks lying around, or you might just wish to build a portable RAID array on the cheap without having to buy special controllers or disks. Now, the basic procedure for building a storage space is to connect two or more non-operating-system disks, making sure you back up and delete any existing partitions, or alternatively, format the drives. Then open the Storage Spaces control panel and create a pool, which must be less than 480 terabytes and can sport no more than 64 storage spaces per pool. Then create a virtual drive out of that pool, each virtual drive being less than 10 terabytes in the recommended scenario.
Now, by the way, to be clear, a virtual disk in Storage Spaces is a different animal from a virtual hard drive in Hyper-V or VMware. Microsoft really should have chosen a different term here, but they didn't, so just be aware of the context to know what kind of virtual disk is being discussed. Now, when you create your storage space, you have a few choices to make. If you use thin provisioning, you can specify a size that exceeds what you have at the moment in terms of physical disk space, with the idea of adding more later if and when you need it. If you want fault tolerance, which in Storage Spaces lingo is called resiliency, that's another decision to make. These are decisions that you cannot change later without destroying and recreating the storage space. If you're up on your server technologies, you may have read about tiered storage spaces in Server 2012 R2. Microsoft doesn't support this in Windows 10, as it's really considered a server capability, but it does work via PowerShell. It takes advantage of the speed edge that solid-state drives have by caching frequently used files on the SSD, and by improving the write speed to parity sets, which otherwise is fairly dismal. Well, let's take a look at creating a simple storage space in Windows 10. To create a simple storage space, I need to start with an unpartitioned, unformatted physical disk. So, if I open the Computer Management console by right-clicking the Start button and then navigate to the Disk Management tool, I can see that Disk 1 has 127 gigabytes, and no partitions have been allocated. Closing this console, I'll go to Control Panel, not the Settings app, by the way, but Control Panel, and there is Storage Spaces in the small icons view. Note that creating a new pool will require admin privileges, hence the shield icon. Windows spots my disk, and if I want to create a simple storage space with no resilience, I can just click the Create pool button.
Now I'm at the Create a storage space page, where I can assign a name; we'll call it SS1. A drive letter; we'll choose K. And a file system, the choices being NTFS and ReFS. Remember, though, ReFS is only for mirror sets. For the resiliency type, the only choice that will make the Create storage space button at the bottom go live is simple, or no resiliency. For the size, I'll choose 500 gigabytes, which is more than I have right now, but that's okay with thin provisioning. And finally, I'll click Create storage space. When it's done, I can see that the formatted space is ready for use, and I can see the free space corresponding to my thin provisioning setting of 500 gigs. So here's the storage pool, with some associated overhead affecting the total capacity, and underneath is my SS1 storage space, showing the type and provisioned capacity. And underneath that, I can see the physical drives. There are many useful links in blue at the right part of this page. For example, if I click Change next to the SS1 storage space, I see that I can change the drive letter, the name, and the maximum provisioned size. What I cannot change is the resiliency type, so just be aware of that. Now, in a simple storage space, there's no fault tolerance, or resiliency. So if we decide that we want that, we need to go with mirroring or parity sets. A two-way mirror, or RAID 1, requires at least two drives, and one of them can fail without the mirror set losing data. A three-way mirror is a somewhat rarer beast; it requires five physical drives and can tolerate the failure of any two. Finally, a parity set, which is RAID 5, can be set up different ways. If you have three drives or more, you can tolerate the failure of one drive, and if you have seven or more, you can tolerate the failure of two. As you probably know, RAID 5 is somewhat less wasteful of disk space than mirroring. Here's a picture of a two-way mirror, with the numbered data blocks being written in an identical way on the two disks.
The downside of mirroring is that it delivers a fairly low ratio of usable capacity to total capacity: 50%. So a compromise that balances fault tolerance and capacity is a parity set, in which the computer writes data to each drive in the set, including parity, or recovery, information that gets spread across multiple drives so that a drive can fail with no data loss. The capacity utilization is better than with mirroring, but in this implementation of RAID 5, I have seen a major speed drop with write operations, although essentially no reduction in performance with reads. Now, here's a representation of numbered data blocks in a parity set. You can see that the numbered blocks get written across all three drives. The ones with the P are parity blocks, and again, the computer writes these across all of the drives. If you want to make your virtual drive even more resistant to failure, you can use the Resilient File System, or ReFS, in a mirror set. ReFS can perform data repair on the fly, so you don't need to run chkdsk as a separate operation to perform repairs. ReFS is incompatible with some storage features that you may need, though, such as Encrypting File System and quotas. And ReFS doesn't work with parity sets, just mirror sets. If you suffer a failed disk in Storage Spaces, Windows 10 alerts you to an issue in the notification area. You should probably back up the data on the affected storage space, then connect a good replacement disk, navigate in the Storage Spaces control panel to Change settings, and then choose Add drives. Once that's done, you can remove the dead drive from the list. Now, let's chat about capacity in Storage Spaces. When we fire up the Storage Spaces control panel, the pool capacity is not the same as the usable capacity. Instead, it's the total space, which may be greater than the usable capacity. The ratio of usable to total space will depend on the type of resiliency that we choose.
This slide shows theoretical capacity utilization for different types of virtual disks. A simple storage space with no resiliency provides for 100% utilization; a two-way mirror, 50%; a three-way mirror, 33%; and a parity set is N minus one divided by N, where N is the number of drives in the parity set. So, for example, a three-disk parity set would have a capacity utilization of 2/3, or about 66%. Now, as usual, life is a little bit more complicated than simple mathematical formulas. You're not likely to see the theoretical maximum capacity factors if the disks you're using in a multi-disk set are not the same size. For example, a two-way mirror with one one-terabyte drive and a second two-terabyte drive will waste that last terabyte. Windows will let you know, at the 70% full point based on the smallest drive's capacity, when it's time to add storage. And Storage Spaces itself uses some disk space for its own internal housekeeping. When you add capacity, Windows doesn't automatically redistribute existing files across the available drives; you can initiate this operation manually, however, in the control panel, or in PowerShell if you prefer. Also, when adding capacity, if you're using a multi-disk set, the best practice is to add drives in the same multiple. If you have a three-disk parity set and you run low on space, add three more disks. Finally, if you need to remove a drive, you can do so as long as there's enough free space left in the pool to satisfy the storage space requirements. Select Change settings, Physical drives, and Prepare for removal. A delay will follow before you'll be told that you can disconnect the drive. And because it could be a while, you're smart to disable sleep mode so that the PC doesn't go dormant in the middle of the operation. Sleep isn't always smart enough to realize that things are actually happening with the computer.
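The utilization figures from the slide are easy to check with a few lines of Python. The function name is mine, purely for illustration; the formulas come straight from the slide.

```python
def usable_fraction(resiliency: str, n_drives: int) -> float:
    """Theoretical usable/total capacity ratio for a Storage Spaces virtual disk."""
    if resiliency == "simple":
        return 1.0                        # no resiliency: 100% utilization
    if resiliency == "two-way mirror":
        return 0.5                        # every block written twice
    if resiliency == "three-way mirror":
        return 1 / 3                      # every block written three times
    if resiliency == "parity":
        return (n_drives - 1) / n_drives  # one drive's worth holds parity
    raise ValueError(f"unknown resiliency type: {resiliency}")

# A three-disk parity set, as in the narration: 2/3, or about 66%
print(round(usable_fraction("parity", 3), 2))  # → 0.67
```

Remember that these are theoretical maximums; mismatched drive sizes and Storage Spaces' own housekeeping overhead will lower the real-world figures.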
Removable Storage
Now, let's turn our attention to some specific issues having to do with removable drives in Windows 10. The challenges of today's removable storage devices lie, first of all, in their high, and ever increasing, storage capacity. A lot of data can walk out of an office on a flash drive these days. The small form factor is a big part of the challenge; Windows wasn't really designed with very high capacity flash drives in mind. And the potential for problems from data theft or loss is something to which IT professionals have to pay attention. One of the things we can do is implement encryption on removable devices. BitLocker-to-Go is an example. This provides encryption for USB flash drives and SD cards, as well as for external removable hard drives. It does not use the onboard Trusted Platform Module chip that BitLocker uses for controlling access to fixed disks, because removable disks, by definition, are not fixed. BitLocker-to-Go uses a password, or potentially a smart card, to gain access to the drive. If you use encryption, at least the loss of a flash drive doesn't necessarily mean the compromise of sensitive data. In an Active Directory domain-based network, we can also go a long way toward meeting removable storage security challenges with the help of Group Policy settings. The settings that relate to removable media fall into several categories, spread out in various places in the Group Policy organizational structure. There are, for example, settings to restrict what device drivers and driver classes may be installed, to specify quotas to limit the quantity of data, to impose restrictions on what types of devices software may be installed from, to impose other restrictions unique to optical drives, which are still present on desktop computers, settings having to do with BitLocker-to-Go encryption, and Windows Defender malware protection.
So, let's see where some of these settings live in the Group Policy console, and then you can explore them on your own and decide which ones might be appropriate for your organization's removable storage environment. I'm on Globomantics GMWS1, the Windows 10 machine that's been equipped with the Remote Server Administration Tools, or RSAT. So I'll navigate down to Windows Administrative Tools and the Group Policy Management console, or GPMC. Now, let's create a new domain-linked GPO. We'll call it Removable Media, linked to globomantics.local, and now we'll edit it. First, let's navigate to Computer Configuration, Policies, Administrative Templates, System, Device Installation, and Device Installation Restrictions. We won't look at all of these settings, but I want to point out a few here. You'll notice an allow-and-prevent pair of settings for devices that match specified device setup classes, and a bit lower, a similar allow-and-prevent pair of settings that pertain to devices that match Plug and Play IDs. You can discover both the setup classes and device IDs via Device Manager. These are just two different ways of either permitting or prohibiting users from installing specific classes of devices, or even very specific individual devices. Now, further down, there is another setting here that's more broad. It says, "Prevent installation of removable devices," which encompasses all the devices that report themselves to Windows as being removable. Here's a tip, though: any device drivers that are already installed by the time Joe User gets his PC will still be able to function. These settings just prevent or allow the installation of new drivers. Now, navigating back up here to the System container, there is a node for Disk Quotas underneath it, and if we click it, we can see that disk quotas can be made to apply to removable media. By the way, this only works with removable media that has been formatted with the NTFS file system.
Down a bit further is a node called Enhanced Storage Access. Here we can provide a list of manufacturers and product IDs of the security-enabled flash drives that are approved for use. We can also prohibit the use of non-enhanced storage devices, and there are a few other settings here. If we slide further down, under the System node, to Removable Storage Access, I told you these settings were scattered around, here are some settings that allow administrators to deny various types of access to optical, floppy, and tape drives, that is, read, write, and execute access. We can even deny all access to any and every type of removable device, which we might do for highly secure workstations, for example. Now, finally, if I move back up to Administrative Templates and choose Windows Components, leaving the System node behind, we can expand this and navigate over to BitLocker Drive Encryption, where there is a subnode for removable data drives. So there's a grab-bag of settings here, including the ability to deny write access to removable drives that are not protected by BitLocker-to-Go. So that's a fairly quick look at some of the GPO settings that can help our Globomantics administrators deal with the increasingly significant risks associated with removable storage. And congratulations on finishing this module on configuring physical storage. When you're ready to move forward, our next module is Configuring Data Access, where we'll discuss sharing and permissions, topics that are as important for us as grown-up IT professionals as they were when we were little kids in kindergarten.
Configuring Data Access
Welcome to our module on configuring data access: how to share information in Windows 10, but with appropriate controls. The topics in this module include setting up sharing within a peer-to-peer HomeGroup, sharing folders and using the pre-shared Public folders, restricting access to files and folders using NTFS permissions, restricting access with newer Dynamic Access Control conditions, and sharing files via the OneDrive cloud service. We'll begin by considering HomeGroup sharing, the latest iteration of Microsoft's home-oriented network sharing solution. A HomeGroup is a home-PC-oriented network with relatively relaxed password protection that lets Windows 10 users share documents, pictures, music, videos, and printers. It's not a domain, in which user accounts and credentials are stored on a central domain controller. However, it's more than a workgroup in that it's easier to set up. Users can share folders in View, or Read-Only, mode, or in Edit, that is, Read/Write, mode. HomeGroups do not have stringent requirements. You need to be an administrator to enable a HomeGroup, but you don't need a dedicated server machine as you would for a domain network, and you also don't need a separate client access license for your HomeGroup systems. One thing you do have to do, though, is make sure the network type is "Private" for all computers on the HomeGroup. Private networks presume a trusted environment, like a workplace or a home. HomeGroups do require that IPv6 be installed and not disabled, and they only work with Windows 7 and newer versions. That's an important point, by the way: the HomeGroup is not new with Windows 10, and different versions of Windows can coexist on the HomeGroup. Remember that IPv6 support has been included in Windows all the way back to Vista. Now, interestingly, it's possible for a domain PC to join a HomeGroup, although it can't create one.
For example, if Harry at Globomantics brings his domain-joined laptop home, he could connect to his home network, choose the private network type, and join the HomeGroup. But the sharing is one-directional, from the HomeGroup to Harry's computer. Let's talk HomeGroup passwords for a minute. A single HomeGroup password serves to unlock all resources shared on the HomeGroup. This password is auto-generated when the HomeGroup is set up, and you can change it to something friendlier anytime. You can also view the password as long as you have logon access to a HomeGroup member computer. The password protection in the HomeGroup is, therefore, a long way from anything we'd consider secure in a corporate environment. Globomantics isn't going to set up HomeGroups at work, but Globomantics employees may very well set them up at home for use with their family members or housemates. Unlike older peer-to-peer networks, the HomeGroup by default doesn't require you to set up one or more of the same accounts and passwords on each of the participating PCs. To create a new HomeGroup, there cannot already be a HomeGroup on the same network. There are various ways to get to the HomeGroup control panel: via Settings, via the desktop Control Panel, and via the Network and Sharing Center. As usual in Windows, there are several ways to get anywhere. If no other HomeGroup exists, you'll see a Create button. Click it, and then choose the folders you wish to share with other members of the HomeGroup and the level of access you wish to confer, Read or Read/Write. You can change that list later and even go beyond it. Windows will provide a generated password that you can then note down. You'll probably want to change it later. Remember that in the normal Windows-controlled mode, shared resources are available to all user accounts as long as the computer joins the HomeGroup with the password. The GUI prompts you to share your Documents, Pictures, Music, and Video libraries, as well as local printers.
You'll see these prompts when you create the HomeGroup and when you join the HomeGroup from other computers. This kind of sharing can go in both directions, inbound and outbound, although when you configure the sharing in the HomeGroup control panel, you're always configuring outbound sharing on that PC. Working within the HomeGroup control panel, choose the link that says, "Change what you're sharing with the HomeGroup," and you can modify that original list of five things by changing the drop-downs between Shared and Not Shared, which is fine as far as it goes. But the good news is that we can change the list of what is shared over the HomeGroup beyond what Microsoft suggests in the control panel. Simply open File Explorer, right-click any folder, and hover over the Share With element. If the folder is one that's outside of the predetermined list, just choose HomeGroup (view) for read-only access, or HomeGroup (view and edit) for read and write access. If you want to exclude a folder from what you've shared already, for example, maybe there's a subfolder under Pictures that you don't want to share with the HomeGroup, then you can select Stop Sharing. You can also change the underlying sharing mechanism in the HomeGroup. Go to the HomeGroup control panel, choose Change Advanced Sharing Settings, and choose HomeGroup Connections. The normal method lets any user with the password join a HomeGroup and gain access to shared resources. The alternative is the old method, whereby you would need to set up identical user accounts and passwords on each local machine. And here's where you can change that HomeGroup sharing mechanism. Okay, well now, let's take a quick look at sharing a folder with a HomeGroup. Here I am on GMWS1, which is running a HomeGroup in this example. If I open the Settings applet and choose Network & Internet, the relatively new network status page gives me a HomeGroup link, which bounces me over to the desktop control panel.
We can see that when this HomeGroup was set up, I chose to share Pictures, Music, Videos, and Printers & Devices. Well, that's all well and good, but now I'd like to share a folder that's outside the preset list of shareable libraries. Over in File Explorer, I see a folder named Tools. Let's say it contains some files that I'd like to share with other computers on my HomeGroup. Right-click, Share With, and there are the choices, Read Only or Read/Write. And now it's shared, although nothing changes visually; Windows 10 doesn't give us any visual cue anymore about whether a folder is shared or not. Some of you may remember the little hand holding shared folders, like a waiter holding a plate, in previous versions of Windows. We can't even tell from the context menu. If I right-click the Tools folder, there's not even a checkbox to indicate the sharing status. We can see what is shared, though, by right-clicking the Start button, choosing Computer Management, expanding Shared Folders, and then clicking the Shares node. And there's Tools right there. There's no indication that it's been shared with the HomeGroup, but at least we can see that it's been shared. By the way, this console also works to show shares in non-HomeGroup scenarios.
Shared and Public Folders
Sharing files and printers is not necessarily a core capability of a workstation OS; it's normally thought of as a server function, but we can do it in Windows 10. Let's take a closer look. File and printer sharing provides the capability of sharing files and printers from the local machine across the network to other computers. You do not need file and printer sharing to connect to file shares on file servers or to print to printers that already exist on the network. This capability is just for sharing out files and printers in the other direction, which, frankly, most Globomantics employees may not need to do. File and printer sharing is on by default for private networks and off by default for public networks. File and printer sharing can be turned on or off in the Network and Sharing Center. In the Settings applet, click the Network & Internet tile, click Ethernet, and choose Change Advanced Sharing Options. In the Network and Sharing Center, click the link named Change Advanced Sharing Settings, and watch out for the inconsistent lingo: Options and Settings are the same thing. We can also turn this feature on or off by modifying the Windows Firewall with Advanced Security rule group of the same name, and with the good old netsh command using the rule group named File and Printer Sharing. By the way, that command should all be on one line. We chatted in this course's first module about network location types, and you may remember that by changing the network location type (domain, public, private, and so forth), we can change both file and printer sharing behavior and network discovery behavior. Now, I'll demo how to activate file and printer sharing independently of the network location type. We're working here on GMWS1 on the Globomantics.local domain.
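For reference, here's roughly what those command-line equivalents look like, run from an elevated prompt. The rule group name is the one mentioned above; each command should be typed on a single line.

```powershell
# Enable the firewall rule group that corresponds to File and Printer Sharing
# (run elevated; use enable=No to turn it back off).
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes

# PowerShell equivalent (Disable-NetFirewallRule reverses it):
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
```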
Changing the status of file and printer sharing can be accomplished by visiting the Network and Sharing Center, choosing Change Advanced Sharing Settings, and then clicking the radio buttons under the File and Printer Sharing heading here. Right now, that feature shows "On," so let's turn it off and click the Save Changes button. I'll now close the Network and Sharing Center. By the way, as already mentioned, we could have used netsh or PowerShell to make this change. Now, if I search for Firewall and select Windows Firewall with Advanced Security, and we'll maximize that, I can click Outbound Rules and filter by group, and then we'll scroll down here and filter by File and Printer Sharing. Then let's also filter by profile, and we'll pick the domain profile. Notice that the rules allowing this type of traffic are not enabled, as we have just turned the feature off. So far, so good. The sort of goofy thing is that all an administrator has to do to turn the rules back on is to share something. So, if I minimize the firewall console, open up File Explorer, navigate down the C: drive to the Tools folder here, click the Share tab at the top of File Explorer, and select Specific People, I'll click the down arrow here, choose Find People, and we'll add the domain group Marketing, and then click the Share button. I see the message that this folder is shared. But I just turned that feature off! Why did Windows 10 let me share the Tools folder without at least mentioning, "Pardon me, that feature is off; do you want to turn it on?" Well, let's click Done, minimize File Explorer, and go back to the firewall console. It's not a real-time display, so I'll tap F5 to refresh the window, and I can see that the rules are all back on. So, turning on file and printer sharing in Windows 10 can be done simply by an admin sharing a folder. A little weird, but that's how it works. There are two primary ways to share folders using File Explorer.
Method one is to right-click the folder and choose the Share With menu option. Specify whom you want to share the folder with, and whether you want to provide read-only access or read-write access. This is pretty much the same as using the Sharing tab on the ribbon bar, as I just illustrated in the preceding demo. Method two is to access the folder's properties page, click the Sharing tab, and then the Advanced Sharing button. This method lets us specify people and access methods, but it also lets us limit the number of simultaneous users and define the caching behavior for users who are accessing the folder. At an administrative command prompt or PowerShell prompt, there is a somewhat dizzying variety of choices for sharing folders. The venerable net commands allow us to share a folder with a share name, as in the example here. Net share without parameters reports on shares that exist on this system, and net share with the /delete switch removes an existing share, but doesn't delete the folder, by the way. And we have equivalents in PowerShell. The PowerShell cmdlet New-SmbShare is a bit more verbose, but it lets us share a folder with a share name. Get-SmbShare reports on existing shares, and Remove-SmbShare deletes the share. Incidentally, SMB stands for Server Message Block, which is the Windows file sharing protocol. The Public folder is a bit special. It can be shared across the network, which might be a convenient method of providing occasional access to files on someone's local machine, although you can't control that access on a per-account basis; it's either on for all network users or off for all network users. Whether or not the Public folder is shared over the network, however, it is always available to any user with a local account on that machine. The default permissions let everyone have read and write access. Well, let's demo sharing a folder over the network. Now back on WS1, I'm focused on the local C: drive in File Explorer.
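To make those command-line options concrete, here's a hedged sketch of both the classic net commands and the SMB cmdlets; the C:\Tools path and the Tools share name are examples of my own, run from an elevated prompt.

```powershell
# Classic net commands (folder path and share name are examples of my own):
net share Tools=C:\Tools /grant:Everyone,READ   # share C:\Tools as "Tools"
net share                                       # list existing shares
net share Tools /delete                         # remove the share; the folder stays

# PowerShell equivalents:
New-SmbShare -Name Tools -Path C:\Tools -ReadAccess Everyone
Get-SmbShare                                    # report on existing shares
Remove-SmbShare -Name Tools -Force              # delete the share
```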
Now if I open up the Tools folder, I see the rsat installer file, and I could try to share it. But I'm stymied. The only time I can share a file is if that file lives in a user profile folder. Other than that, sharing is done on a per-folder basis, not a per-file basis. So, let's share the Tools folder, this time using advanced sharing. I'll right-click the Tools folder here and choose Properties, and click the Sharing tab. Now I'll click Advanced Sharing, because it gives me more options and also sounds cool. If I check "Share this folder," I can create a share name that may or may not be the same as the folder name. I can also limit the number of simultaneous users, click the Permissions button to set share-level permissions, which we can see consist of Full Control, Change, and Read, and I can modify whether and how the folder and its files can be cached, or locally copied, to the sharee's computer. By the way, the share permissions only affect situations in which the user accesses the share over the network, or by its share name. These are separate from NTFS permissions, which I'll discuss later in this module. I'll cancel all the way out of here, and let's share the same folder using PowerShell. I'll open an elevated PowerShell prompt, and we'll type "New-SmbShare". Now, to verify that the share got made, I can use "Get-SmbShare". Okay, and there it is, and there's the name that I just gave it. The shares that show up with a trailing dollar sign are hidden shares that users on the network cannot see when browsing. Let's now double-confirm that the share works by bouncing over to GMWS2. So here I am on GMWS2, and I'll right-click the Start button, click Run, and type \\GM-WS1 and Enter. And voila, there are the available shares, including the share I just created. Now let's bounce back over to GMWS1. Let's look at the Public folder next.
A quick check of the Network and Sharing Center on this domain-joined machine shows that Public folder sharing is presently turned off. Now if we open up File Explorer, navigate to the Public profile folder, and then take a look at the Properties and the Security tab, we can see that only the Administrators group has Full Control, and there's no entry for the Everyone group. There is an entry for the Interactive built-in group, which includes any currently logged-on, local interactive user. Back to the Network and Sharing Center. Let's click the radio button to turn on sharing of the Public folder, and click Save Changes. Now, if we go back to the Public folder, look at its properties, and click the Security tab, we can see that the Everyone group has an entry now, and furthermore, it shows Full Control. So the effect of enabling sharing of the Public folder is really a permissions issue: defining who can access this public profile, local users only, or local users plus network users.
Learning to share is important, but so is learning to protect what you share, and that's the purpose of this clip on NTFS permissions, aka file and folder permissions. NTFS, which actually stands for NT File System and harks back many years, includes a very important facility that's lacking in the FAT, FAT32, and exFAT file systems, namely Access Control Lists, or ACLs. There are actually two ACLs in NTFS. The DACL, or Discretionary ACL, controls access to files and folders, whereas the SACL, or System ACL, controls the details of any auditing that's been set up. Our focus here is on the DACL. Each of these lists consists of one or more entries called Access Control Entries, or ACEs. Sorry for all the acronyms. The governing principle is called implicit denial, and what that means is if a given user doesn't have at least one ACE that grants access, then that user is denied access. So what happens behind the scenes when a user tries to access a file or folder in NTFS is that the user's Security Access Token gets compared against the file or folder's Access Control List. The Security Access Token is like a key chain comprised of several keys, each key being a security ID, or SID. For example, if Harry is a member of two groups, Engineering and Denver, he'll have at least three keys, or SIDs, in his SAT. The Access Control List, which is associated with, in this case, a folder, has three entries, created by administrators: Engineering is allowed to read, Management is allowed to modify, and Glenn is denied modify privileges. So, because Harry's a member of Engineering, he can read the folder when his SAT is compared against the ACL for the folder. The main principles of NTFS permissions are as follows: child folders normally inherit the permissions of their parent folders; however, we can override that inheritance by essentially disinheriting the parent and setting explicit permissions on the child.
"Deny" wins over "Allow," with the exception that an "Allow" permission on a child folder will override an inherited "Deny" permission. I know, it sounds a little weird to say "a deny permission," but that's just the lingo. Access Control Entries generally specify groups, but they can also specify individual users. And NTFS permissions are cumulative, meaning that if I have one permission by virtue of being a member of Group A and another permission by virtue of belonging to Group B, I get both permissions. There are two kinds of NTFS permissions, Basic and Advanced. Of course, the terminology has changed over the years, so we don't get complacent: Basic Permissions used to be called "Simple Permissions." They're the same animal, just common combinations of the "Special" or "Advanced" permissions. Now, the Basic Permissions are fine for most of what we need to do. The default ones are Read, Read and Execute, and List Folder Contents. Full Control, perhaps the most overused permission, means that you can take ownership of the folder and/or change permissions on that folder. Modify is much better than Full Control in most cases; it includes the ability to edit and delete. The Advanced, or Special, Permissions are more granular and detailed. You might need them from time to time, but we won't delve into them here. So here's an example of a permission entry page for a folder named "Tools". This entry applies to Authenticated Users, i.e., anyone who has logged on. It's an Allow permission, and it applies to the Tools folder, subfolders, and files. There's the list of Basic Permissions; we don't like giving Full Control unless the group truly needs the ability to modify the ACL. And here's the corresponding Advanced Permission page for the "Tools" folder. These permissions combine infrequently used groupings to constitute the Basic Permissions that we saw in the previous slide.
Now as you might guess, determining the bottom-line NTFS permissions can get complex for users who belong to multiple Windows groups, and for folders and files with lengthy ACLs. We should therefore be thankful for the Effective Access tab on the Advanced Security property page. Here we can specify the user and even postulate different group memberships and devices. In other words, we can play "what if?". This tab calculates the net effect of all inherited and explicit permissions, and resolves any conflicts between overlapping group permissions according to the rules that we discussed earlier. What this tab does not do is consider the share-level permissions that we mentioned in the previous clip, so just be aware of that. Some administrators don't use share permissions at all, so that limitation may not be a big deal. As usual, just about anything we can do in the GUI, we can also do at the command line. ICACLS, which is not actually a Wicked Witch ringtone on an Apple phone, lets us grant, deny, and reset permissions. This tool has been around for many years. In PowerShell we have Get-Acl, for example: Get-Acl -Path "c:\ whatever" piped to the FL cmdlet, which is short for Format-List. We can also set a variable, such as OldACL, that will contain the ACL from a particular folder. We could then apply that ACL to a different folder with the Set-Acl cmdlet, as shown here, specifying the OldACL variable in the AclObject parameter. Now, in the upcoming demo we'll look at a brief example of how permissions work for a user who belongs to multiple groups. So I'm here on GMWS1, logged in as Harry, and Harry is a member of two Globomantics groups, Research and Denver. We can verify this via a command prompt. I'll just type the command whoami /groups, and I'll format that in the list format. And we can see that the Research and Denver groups are present, among several others.
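Here's a hedged sketch pulling those command-line tools together; the folder paths and the GLOBOMANTICS\Research group are examples of my own, run from an elevated prompt.

```powershell
# icacls: grant Research the Modify right, inherited by files (OI) and subfolders (CI)
icacls C:\Tools /grant "GLOBOMANTICS\Research:(OI)(CI)M"
icacls C:\Tools                                  # display the current ACL

# PowerShell: view an ACL, then copy it from one folder to another
Get-Acl -Path C:\Tools | Format-List             # FL is the alias for Format-List
$OldACL = Get-Acl -Path C:\Tools
Set-Acl -Path C:\Tools2 -AclObject $OldACL       # apply the saved ACL elsewhere
```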
Now, over on a Server 2016 system: let's bounce over to GMDC1, which is acting as a file server, among other things. Let's configure access to a share for the Research and Denver groups. In Server Manager, I'll click File and Storage Services over here in the navigation pane, and then I'll click Shares. And there's the Graphics share at the top here. I'll right-click it and choose Properties, and I'll expand the Permissions node over here at left. If I scroll down, I can see the folder permissions. Let's customize them with the Customize Permissions button here. Now we'll add an entry for Research by clicking the Add button, and we'll select a principal, so I'll type in "Research" here. We'll leave the basic Allow permissions in place for Read and Execute, List Folder Contents, and Read, but I'm going to go ahead and add Modify here for the Research group, and we'll click OK. Then I'm going to add another entry, and this time we will select "Denver". In this case we'll leave Read and Execute, List Folder Contents, and Read, but we will not include the Modify right. And so we'll OK our way out of here. Now let's jump back over to GMWS1. Let's open File Explorer on GMWS1. In the search field I'll just type "\\GM-DC1" and we'll specify the Graphics share. We clearly have read access if I can open up the file. Nothing in it, but I can open it up successfully in Notepad. And if I type in "Harry was here" and then close and save, we can see that the modify operation is successful. Harry is a member of both Research and Denver, and membership in the Research group gives Harry the Modify privilege. Now let's bounce back to GMDC1 and change the Denver setting. We'll go back to the permission entry for the Graphics share.
So there's Denver, and I'll choose Edit, and this time we'll change this to Deny, and we'll deny all the Basic Permissions, for example, for a situation in which the Denver office might have been compromised and we need to prohibit access to the share for anyone who's in the Denver group. As I OK my way out, I get a warning about the power of Deny entries, which is entirely appropriate, as we'll see. I'll say Yes to continue, and OK. And now let's bounce back to GMWS1. Let's access the Graphics share again. And I get a message telling me that I don't have permission. It doesn't matter that I'm still a member of Research, which has Read and Modify permissions; the Deny setting for the Denver group takes priority. So use the Deny permission sparingly, if at all. Share permissions can affect a user's ultimate access rights to a folder on the network. These are really a remnant of a bygone day, before NTFS, when share permissions were the only way to restrict access. The available permissions are Full Control, Change, and Read, and they only apply to accesses over the network, that is, using a network path such as a UNC path. When share permissions combine with NTFS permissions, the more restrictive settings apply. To keep things relatively simple, many administrators just give Everyone Full Control at the share permission level and do all the access control with NTFS, perhaps also with Dynamic Access Control conditions, the subject of our next clip.
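If you take that "Everyone Full Control at the share level" approach, the SMB cmdlets make it easy to check and set; this is a hedged sketch using the Graphics share name from the demo, run on the server hosting the share.

```powershell
# View the share-level permissions on the Graphics share
Get-SmbShareAccess -Name Graphics

# Common practice: open up the share level and rely on NTFS for real control
Grant-SmbShareAccess -Name Graphics -AccountName Everyone -AccessRight Full -Force
```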
Dynamic Access Control Conditions
In my live seminars, my students often come away from this topic saying, "How did I not know about this?" I think that Dynamic Access Control is just about the coolest Windows data access technology that you've never heard of. NTFS and share permissions are very useful as far as they go, but they can restrict access only based on user and group identities, that is, security identifiers or SIDs. Put another way, they cannot restrict access based on other attributes of user or computer objects, and they also cannot restrict access based on file attributes, such as, for example, confidentiality. Now, the typical SAT, or Security Access Token, in Kerberos includes the SID of the user and the SIDs of all the groups to which the user belongs. You may remember our key-chain analogy from the NTFS permissions discussion. If we're going to go beyond these limitations and create access rules based on user and computer attributes, called "claims," we have to expand the concept of the Security Access Token, and in the domain environment, there's a convenient way to do that: Group Policy. By the way, I should mention that if you want to create rules that are only based on new file tags, you don't need to broaden the SAT to accomplish that. So what does it look like to create a new-style access condition? Here's the way it shows up in the GUI. We can use the drop-down menus to specify attributes of the user object, as well as of the device that the user is working on, to create a compound condition that must be satisfied in order for permissions to be granted. So where do these attributes come from? The Active Directory database. Specifically, the Active Directory schema, which lists all the possible attributes for user and computer objects. We can view the possible attributes in a variety of consoles: ADSI Edit, Active Directory Users & Computers, the AD Administrative Center, and the Schema console. In fact, why don't I just show you where these attributes are?
GMWS1 has the Remote Server Administration Tools, so let's go ahead and click on Windows Administrative Tools, and then, a little hard to see, but there's Active Directory Users & Computers; let's open that up. I'm focused on the Engineering OU right here, and here's a user, Harry. I can take a look at Harry's properties, and basically everything on these tabs is an AD attribute; these are the more commonly used ones. Now if I want to see them all, I can cancel out of here, go up to the View menu, and select Advanced Features. And now let's go back to the Engineering OU and look at Harry's properties. You see more tabs, and in particular, there's a tab here called Attribute Editor, and you can see that there are approximately three jillion attributes here, most of which are not set. And any of these can be used to create a conditional access rule. Of course, you have to populate the attributes, for example with PowerShell, before they become useful for access control. Now, as it happens, we can use device claims as well when building conditional access rules. As with user claim types, device claim types are based on Active Directory attributes; we could look at them in much the same manner that we just did, via Active Directory Users & Computers. Now, sometimes there will be an attribute that may exist for both users and computers, so watch out for that. They're actually stored in different places in the AD database, even though they may have the same name, such as Department, Location, and so forth. In order to use conditional access rules, we need to have our Active Directory forest at the Server 2012 or newer schema level. The file server hosting the accessed files needs to have the File Server Resource Manager installed; you can install that via Server Manager. And if we're using device attributes, or claims, then we need at least one Server 2012 or 2012 R2 domain controller per site, and Windows 8 or newer clients.
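As for populating those attributes with PowerShell, here's a hedged sketch using the ActiveDirectory module from RSAT; Harry and the attribute values are examples from our Globomantics scenario, and you'd need sufficient AD rights to run this.

```powershell
# Requires the RSAT ActiveDirectory module and appropriate AD permissions.
Import-Module ActiveDirectory

# Populate a couple of attributes that a conditional access rule might test
Set-ADUser -Identity Harry -Department 'Research' -Office 'Denver'

# Confirm the values (attributes beyond the defaults must be requested explicitly)
Get-ADUser -Identity Harry -Properties Department,Office |
    Select-Object Name, Department, Office
```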
You may remember me mentioning that we can also use customized file tags, or classifications, to restrict access. We're not going to explore that here, but if you're interested, Google the acronym "FCI" for File Classification Infrastructure. That's pretty cool stuff too.
OneDrive began its existence way back in 2007, when Microsoft called it SkyDrive. Microsoft gave it a major makeover in 2011 and, after a trademark dispute, renamed it OneDrive in 2014. Here we'll focus on sharing files with OneDrive, but first, a little bit of background. There are three types of cloud storage in the Microsoft world: OneDrive for consumers, available free to anyone with a Microsoft account; OneDrive for Business, part of a paid Office 365 subscription; and Microsoft Azure, also a subscription service, through which organizations can select their cloud services a la carte, on a pay-as-you-go basis. OneDrive for consumers is the flavor that you get automatically with Windows 10. You have to have a Microsoft account, which is free, to use OneDrive, and it provides 5 gigabytes free at the time I'm recording this course. Plus, you can pay two dollars a month and get 10 times that amount. You can get more storage with Office 365 subscription plans, and just be aware that the various plans can be expected to change over time. Setting up consumer OneDrive is pretty easy. Set up your Microsoft account if you don't have one already. Configure Windows 10 to log you on with that account, or if you don't want to do that, link the Microsoft account to your local or domain account, or as another alternative, you can just provide your Microsoft account credentials when you sign in to OneDrive. However you do it, run the setup wizard by clicking the OneDrive icon in the taskbar, and if you want, click the checkbox so that OneDrive starts up every time you start Windows. OneDrive for Business uses a separate sync client from OneDrive for consumers. It's based on SharePoint, and subscription plans are available for OneDrive for Business by itself and in combination with Office 365 and SharePoint Online. The capacity limits vary depending on the subscription, from a low of two gigabytes to a high of infinity.
In Windows 10 File Explorer, you can see OneDrive as a node in the navigation pane. Only synchronized folders will appear under that node, along with indicators of the synchronization status and share status. And if you're not logged in to OneDrive when clicking the node, Windows will prompt you to do so, or to create a Microsoft account if you don't already have one. You can right-click folders and files under OneDrive to create links to share with others. And if you edit a file locally, that is, the cached copy of a file, it syncs to the cloud automatically. As for saving files to OneDrive, you can make it the default save location for pictures and for documents. You can also automatically save captured screenshots and snapped photos to OneDrive. These features are available in the OneDrive settings tool on the taskbar. Sharing OneDrive files is easy and intuitive. If you're at the OneDrive.com website, just right-click the file or folder of interest, choose "Share," and then either select "Invite People" if you want to share with a specific person, or "Get a Link" if you want to create a link that you can share with a number of people or via social media. If you're on the local PC using File Explorer, you can right-click the file or folder and choose "Share a OneDrive Link," which will create a OneDrive link and drop it into the clipboard, or click "More OneDrive Sharing Options," which bounces you to OneDrive.com. The options available when sharing a OneDrive file or folder are to specify view privileges, that is, read-only permission; edit permission, which lets others modify the document; and whether the user must have a Microsoft account in order to participate in the sharing. Removing access to a OneDrive file or folder that you've already shared is not difficult.
Go to OneDrive.com, click "Shared" in the nav pane at left, right-click the shared folder or file, choose "Details", and you'll see the explicit and implicit links and share details over on the right side of the browser window. Just click "X" to disable a share link that you want to deactivate. The Fetch capability is interesting, and potentially a little scary. You can enable it on host PCs, but not Macs. It allows you to access the PC from another device, through the OneDrive.com webpage, by choosing the PCs node at left. You'll need to provide authentication, for example by means of a security code delivered via text message to your phone. So what's the scary part? Well, this Fetch capability provides access to the entire computer, not just the files and folders on OneDrive, but also to mapped network drives. Here's the security check screen you'll get to provide two-factor authentication. Here I've set up Globomantics laptop number one so that its files are fetchable over the network via OneDrive.com. And here you can see that once I connect to that laptop from another computer, maybe a shared PC in a hotel lobby for example, I've got access to everything on that GM laptop one machine, and not just the user profile either. I can see anything on any local drive. Notice the C: and D: drive tiles on the bottom row. So you definitely don't want to share your OneDrive logon credentials with other people if you're using this Fetch feature, because that would give them access to your entire PC. Still, it's a way to gain remote access to your desktop at the main office if you're on the road and you need some files that you forgot to put on your OneDrive shares. Congratulations. You've accessed the data in one more module, and if you're going in order, you're now more than halfway done with the course. The next module deals with configuring applications, both the newer, universal kind and the more traditional desktop kind. See you then.
Types of Windows Applications
The reason we're all here today is that computers and Windows can run applications that let us do useful things, or fun things in the case of games. This module explores deployment and configuration of applications, both old and new varieties. The topics we'll cover in this module include the different families of Windows applications, traditional methods for deploying applications to user systems, the special methods for deploying Windows Store and universal apps and finally, a few comments on configuring applications, realizing of course that most configuration will be program specific. Let's begin with the different types of Windows applications. The good news is that there are only three. The bad news is that types two and three are hard to tell apart unless you're a software developer. Desktop applications, sometimes called legacy applications even though they're still the most common type in business networks, are written to the classic Win32 API, or application programming interface, and they contain the typical window controls present since the very first version of Windows: minimize, maximize, close and restore. I should note that since Windows 8, Windows Store apps have gained basically the same set of controls; it turns out that users had become attached to those controls, so Windows Store apps aren't quite as foreign in Windows 10 as they were in Windows 8. Desktop applications may be very full featured, complicated and used for many different purposes, think Access or Excel. They're also built for use with mouse and keyboard input. Finally, desktop applications aren't necessarily reviewed, certified or otherwise approved by Microsoft. Windows Store applications also go by the names of metro, modern and immersive; these generally mean the same thing. These apps made their debut with the infamous Windows 8 and they use a different development API, namely WinRT, not to be confused with the operating system of the same name that is now defunct. 
Windows Store applications are designed for tablets, so they run full screen in Windows 10's tablet mode. These apps are typically more lightweight, straightforward and single-purpose programs, and they're designed to be used with touch screens. Windows Store apps are reviewed, certified and digitally signed before they're put up onto the Microsoft Windows Store. By the way, both desktop applications and Windows Store applications can show up on the Windows 10 start menu and/or start screen. Only Windows Store and universal apps, however, can take advantage of the live tiles feature on the start screen. Desktop applications have static tiles. So, what about that third type of application? It's the universal app, so called because it can access both the Win32 and the WinRT APIs, so the feature set is entirely up to the developer and not limited by the choices available in the API. Universal apps don't necessarily look any different from Windows Store apps; the distinction is what's available to the developer under the hood. Let's look briefly at a few of the design differences between these different application types. Alright, well, I'm on Globomantics workstation, GMWS1. To highlight some of the design differences between desktop applications and universal or Windows Store apps, let's look briefly at a desktop application, Internet Explorer. I'll click start and I'm going to scroll down and find Windows Accessories and there's Internet Explorer, and at the upper right we have the traditional window controls, minimize, maximize, and close. If we flip Windows 10 into tablet mode, we can see that the window controls are still there. Now I'll flip back into desktop mode and close IE. But we can look at the weather app as an example of the Windows Store application style. So, there's a weather app. By the way, the tile can be live with Windows Store apps. 
As you can see here, that's one of the points of distinction, because desktop applications have static tiles. So we'll click that, and here's a Windows Store app. Now, in Windows 10, even a Windows Store app typically has the traditional controls up here at the top right, such as we saw in IE. These were notably absent in Windows 8, but when I flip the computer into tablet mode, those controls go away. Flipping back to desktop mode, the window controls reappear, and by the way, I can also resize the window, also something that was not available in Windows 8 Store apps. Now, although there is some variation between app providers, Windows Store apps often use a quote unquote hamburger menu, like the one at the upper left here, to allow users to make settings for the app. Clicking this icon in the weather app shows a few links and then a settings choice at the bottom. Windows Store apps generally do not use traditional dropdown menus like desktop applications do.
Traditional Deployment Methods
This clip examines traditional methods of deploying applications, that is, methods that work for desktop applications. There are a few methods that Microsoft developed in the context of Windows Store applications, and we'll look at those in the next clip. We have many choices for deploying desktop applications. The interactive installation is of course an option. Typically the application comes in the form of an MSI file, or perhaps an executable file that may or may not function as a wrapper or container for an MSI file. Generally you have to log on with an administrative account or provide admin credentials in order to perform an interactive installation. We can also deploy applications together with operating systems in the form of images; hybrid images contain common applications, and thick images contain all the applications an employee might use. We have a variety of tools for building images: there's the Windows Assessment and Deployment Kit, a free but somewhat huge download; the Microsoft Deployment Toolkit, also free and smaller, but which does require the WADK; System Center Configuration Manager and similar enterprise management tools from other software vendors; and the Imaging and Configuration Designer, a new free tool from Microsoft that comes in the WADK. We can use the ICD to deploy a single application too, separate from an operating system, via something called an SPP, which we'll look at in a few minutes. In a domain environment we can deploy MSI files via group policy to domain members, and finally, we can set up a Windows Store for Business, which I'll discuss a bit more in the next clip. Whoa, lots of choices here. Let's chat a little bit more about a few of them. The WADK has been around since the Windows Vista days, and the basic tools work more or less the same way in Windows 10. First, we'll build a boot image using the Windows Preinstallation Environment, or WinPE. 
Optionally we can create a Windows setup answer file to customize the creation of a reference computer. The tool for that is called the System Image Manager, or SIM. We'll build that reference computer, the one that we're going to clone, and install some applications onto it as well, and then we'll create an operating system install image from that reference computer using a tool such as DISM. Over time we'll update and maintain that install image so it can be used for new deployments, and as mentioned, the WADK contains the ability to create SPPs. Now, the different business scenarios for image deployments are high touch, light touch and zero touch. High touch is the most interactive. This method can use the retail version of Windows and application software, or a customized version of the software. Light touch is pretty much automatic. This method is usually associated with the Microsoft Deployment Toolkit and requires a user or administrator to start the deployment. Zero touch requires SCCM or its equivalent, which is not free, but the deployment can be completely hands free. Now, the very first course in this learning track discussed all of the above imaging and deployment techniques from the operating system standpoint, but here we're focusing on application deployment. Using the MDT to deploy applications along with an operating system hides some of the complexity of the WADK. The MDT includes canned programs called task sequences that help automate the creation, capture and deployment of images. The MDT provides a unified management console and a centralized deployment share for most if not all of the required files. The MDT admin can add applications to operating system installs and monitor the progress of deployments, as well as control just how interactive the deployment should be. So, it's a pretty cool tool. 
Using Configuration Manager for application deployment would make sense if you already have it and use it for other administrative purposes in your organization. SCCM does take advantage of MDT task sequences and can even use MDT images, so there's a nice upgrade path for shops that start with the MDT and move to Configuration Manager later. Configuration Manager provides scheduling flexibility for the deployments, detailed monitoring and centralized reporting. It's a full-featured management solution. Now, I mentioned that the ICD comes with the WADK; be careful, by the way, because there are different versions of this tool for different builds of Windows 10. The ICD's latest version as of this writing has the ability to create something called siloed provisioning packages, and these are files with the PPKG suffix. This capability started with Windows 10 version 1607, and it's cool because we can use the PPKG to install an application against a running system without having to re-image that system. This capability not only works with applications, by the way; it also works with drivers, at least as long as they're in an EXE form, and Windows settings. Now, what if we don't have any of these tools available? Do we still have to be limited to interactive application installs? No, we can use group policy. It ain't fancy. There's no scheduling, inventory management, centralized reporting, bandwidth management or any of the other bells and whistles that come with a tool like Configuration Manager, and it's limited to MSI files, but on the plus side it doesn't cost anything extra and it uses a group policy console you may already be familiar with. We can deploy applications to computers in Active Directory in a mandatory fashion called assigning, but when deploying to users, we can choose between deploying in a mandatory fashion, again assigning, or just publishing applications, which simply makes them available but does not mandate the installation. 
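As a rough sketch of what applying a provisioning package against a running system might look like, here's the kind of PowerShell command involved. The package path is a placeholder, and note that the provisioning cmdlets and their parameters vary somewhat between Windows 10 builds, so treat this as illustrative rather than definitive:

```powershell
# Sketch: applying a provisioning package (.ppkg) to a running Windows 10
# system without re-imaging. Assumes Windows 10 1607 or later and an
# elevated PowerShell session; the package path is a hypothetical example.
Install-ProvisioningPackage -PackagePath "C:\Packages\AppDeploy.ppkg" -QuietInstall
```

The same package can also be applied interactively by simply double-clicking the PPKG file on the target machine, which prompts the user for consent.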
With group policy when you deploy to computers, any user on that computer is going to have that application installed for them. When you deploy to users, any computer onto which that user logs on will get the application that you have assigned to that user object in AD.
Windows Store App Deployment
Windows Store apps and universal apps install somewhat differently than traditional desktop applications, so let's look at the ways to deploy this newer application style. One can install a Windows Store style app interactively through a procedure known as sideloading, which we'll look at more closely in a few moments. One can also perform an install by including the app, or a link to the app, in images that you deploy via the WADK, the MDT, SCCM or Microsoft Intune. The deep linking method works with the last three of these choices, actually, and lets the target computer connect to the store for downloading and installing the application. We could also deploy Windows Store apps with the ICD, which we discussed in the previous clip, and with the Windows Store, certainly the easiest method, and the Windows Store for Business, if your company has set that up, and again, more about that shortly. Let's begin with sideloading. In the Microsoft world this term means that you're obtaining and installing a Windows Store or a universal app without going through the Windows Store or Windows Store for Business. It's kind of similar to installing Android apps without using Google Play. Companies that use this method must provide enablement of the clients, so that they're permitted to perform sideloading, which they're not by default; certification of the application, something that's normally done through the Windows Store; installation; and then updating of the app. Sideloading in Windows 10 is easier than it was in Windows 8. There are no required license keys and no required domain membership. We can sideload apps in several ways: manually, using the PowerShell cmdlet Add-AppxPackage, or in an automated configuration management solution such as SCCM or Microsoft Intune. Now, with Intune we'll use the Intune software publisher and define some Intune groups for deployment, like we defined collections for deployment in Configuration Manager. 
We can use DISM, for example, to add an app to an image that we're preparing for deployment. We can use the Imaging and Configuration Designer, or ICD, either in conjunction with an image or separately via a provisioning package, and finally, we can use logon scripts. Lots of choices. So, how do we enable Windows 10 for sideloading? Manually, it's actually very easy. Fire up the Settings applet, choose the Update and Security tile and click For Developers. I'll show you this in a minute. At that page we can select Sideload Apps to enable sideloading for the device. We could also select Developer Mode. That turns on debugging and deployment options that would be appropriate if we're doing application development. And we can also use group policy to enable sideloading for a lot of systems at one time. The location of the setting is here. By the way, notice that it is called app package deployment and not sideloading. Group policy settings tend to be worded somewhat formally. After the systems have been enabled for sideloading, we then need to import the required cert to the Trusted Root Certification Authorities store of the local machine, which can be done manually or by deployment tools such as the ICD. After the certificate has been installed, we can install the app itself with PowerShell's Add-AppxPackage cmdlet, and I'll show you the sideloading setting now, just in case you've never noticed it before. Back on good old GMWS1, we'll enable Windows 10 for sideloading by clicking Start, Settings, Update and Security, and then on the left is this For Developers option. I'll click that, and we can see that the default is to only install apps from the Windows Store. If I click Sideload Apps, Windows warns me that apps from outside the Windows Store might be evil, and I'll click Yes to acknowledge that possibility. You can see the Developer Mode radio button here as well. That's basically the equivalent of the old developer license in Windows 8. So, that's pretty straightforward. 
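To tie the manual sideloading steps together, here's a hedged sketch of the certificate import followed by the app install. The file paths and app name are hypothetical examples, and this assumes the device has already been enabled for sideloading as described above:

```powershell
# Sketch: manual sideloading, run from an elevated PowerShell prompt.
# Paths and the "Contoso" app name are placeholders, not a real package.

# 1. Import the app's signing certificate into the local machine's
#    Trusted Root Certification Authorities store.
Import-Certificate -FilePath "C:\Apps\ContosoApp.cer" `
    -CertStoreLocation Cert:\LocalMachine\Root

# 2. Install the app package for the current user.
Add-AppxPackage -Path "C:\Apps\ContosoApp.appx"

# 3. Confirm the package is registered.
Get-AppxPackage -Name "*Contoso*"
```

In an SCCM or Intune deployment, the tool handles the certificate and package distribution for you; the commands above are essentially what happens under the hood on each client.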
While we're here, I should just point out that Microsoft has added a bunch of other settings to this page since the initial release of Windows 10, so if I scroll down there's device portal, device discovery, Windows Explorer options. Wait a minute, what? They renamed it to File Explorer years ago. Apparently somebody is nostalgic for the old name. There are some settings for remote desktop here, and the PowerShell execution policy down here, by the way, that permits the execution of local scripts without Windows requiring a digital signature. So, kind of a grab bag of settings that Microsoft feels might be interesting for developers. We chatted a little about the ICD in the previous clip. It's part of the WADK, and again, make sure you get the right version for your build of Windows 10. This tool creates provisioning packages, which can be executed against a running instance of Windows to perform the application deployment without disrupting the user's life by mandating a full re-image operation. By the way, you can also run a PPKG against an existing image file, for example, to add an app to an image without having to go through the process of recapturing that reference computer. The PPKG files can modify both desktops and portable devices, so it's actually a pretty cool capability, especially if you don't happen to have Configuration Manager in your company. By the way, there is more detail on the ICD in my separate Pluralsight course on Implementing Windows 10 in this same learning track, if you hunger for more info. If you don't want to mess with sideloading, you might be able to get by with installing Windows Store apps via the Windows Store, as long as the apps you want are hosted there. The requirements are, first, money, if you want to buy something; some of the apps are free. You do need a Microsoft account, which you can use by logging on with that account, logging on with a linked account or authenticating separately at the store portal. 
You do have to make sure your firewalls and proxy servers are open for the half dozen or so URLs that are needed. You also need to make sure that group policy isn't blocking the Windows Store. The location for that setting is here, and by the way, it now only works with Enterprise and Education editions, not Windows Pro, as of version 1511. Now, a variation on the Windows Store theme is the Windows Store for Business. Any organization can set up its own Windows Store for Business. Now, why would you want to? Well, it makes it a bit easier to manage volume purchases of Windows Store apps, you can distribute private or line-of-business apps that you develop or hire to be developed, and you can deploy apps flexibly, such as assigning to individual users or creating a private store where users can see apps that you've made available for your employees. You can use automatic updating, which is a feature of the consumer Windows Store, and you can sign applications or application catalogs if you're using the new Device Guard feature to limit programs to those that have been signed and approved by your company. Incidentally, blocking the consumer Windows Store via group policy does not restrict users from accessing the Windows Store for Business. The requirements for Windows Store for Business include version 1511 or newer of Windows 10, Azure AD accounts for both administrators and users, a reasonably recent browser, and the appropriate domains opened up on your firewalls and proxy servers. Here's the URL to visit to set up your own company's Windows Store for Business, and now let's take a quick peek to see what the portal looks like. Here I am on GMWS1 with Internet Explorer open to https://businessstore.microsoft.com. This is actually a surprisingly tricky URL, because it just seems wrong to have three Ss strung together like that. 
I've already logged into my company's store with my Azure AD account, and you can see the four main links here along the top: Shop, Independent Software Inc., Manage and Find a Partner. The store opens up at the Shop, with annoyingly large tiles and a really long scroll bar here. If I click the Independent Software Inc. link here, the company name, we can see the apps that are available for company employees via the private store. If I click the Manage link, we can see a navigation pane to the left, whose organization is mirrored in the big tiles in the larger window to the right here, so we can see recently added apps and software, benefits, order history, account-related stuff, permissions, and some store settings down here at the bottom. If you take a look here you can see the link for Device Guard signing, so if you have applications or catalogs of applications, and you'd like to get them signed so that you can require that Windows 10 clients only use those applications via something called Device Guard, you could do that here. And a number of these links, by the way, tie in to the office.com portal. So that's just a quick look at the Windows Store for Business, just one more way to get apps out into your environment.
Configuring Applications
Most application configuration is unique to the app, but some behaviors can be set via Windows. We'll take a look at these and also take a quick look at configuring the Windows 10 feature set. One of the behaviors that we can modify is which apps start with Windows. One of the steps I like to perform to make Windows boot a little more quickly is to disable cheater apps that preload into memory so that the associated full app seems to start more rapidly. The benefit may be minor, but every little bit of added speed is welcome. How can we identify apps like this, or other apps that we may not want to start with every boot? Well, Task Manager is certainly one way. The startup tab has a handy startup impact column that ranks the effect each app has on boot times. If OneDrive has a high impact, you may not want to start it every time you start Windows. The System Configuration tool, MSConfig, has been around for a long time and has a startup tab, but there's nothing there anymore except a note to use Task Manager. What you can use, though, is the services tab, where you can uncheck services you don't think you need to start automatically, although frankly the services.msc console is probably a better tool for that because you can check dependencies there. Best for last is Sysinternals' AutoRuns tool. That's the most thorough display of what starts with Windows, and it has the twin virtues of being blessed by Microsoft and free for us. Task Manager has another nifty tab called app history that can help us identify apps that are using more than their fair share of resources in terms of CPU, network bandwidth and live tile updates. You might choose to restrict the usage of apps that eat up too many system resources if more efficient alternatives are available. Shifting from applications to Windows itself, I should mention a nifty feature in Windows 10 called fast startup. 
This is basically a variation on the old hibernate theme in order to achieve quicker boots after a system shutdown. It has no effect on restart performance if you don't shut down, by the way. Set this feature in the desktop control panel on the power options page. It's hidden under the "choose what the power buttons do" link. If you don't see the options, maybe double check your UEFI firmware settings and look for something like POST behavior or fastboot, POST standing for power-on self-test. Well, let's quickly see some of the places where we can examine and potentially improve the performance of the Windows 10 startup process. The System Configuration tool is easily accessible via the taskbar's search field. The startup tab has been eviscerated since Windows 7 and is useless these days, unfortunately. It used to be a convenient way of seeing startup settings in the registry, but the services tab retains some utility in that we can clear the check boxes for services that we don't necessarily need at startup. Personally, I prefer to do my services surgery in the services console, though. If we cancel here, and search for services and run the services administrative console, I get a lot more information. For example, if I'm thinking of maybe streamlining startup speeds by killing this Background Intelligent Transfer Service thing, I can double-click it for some helpful advice that I might be killing Windows Update in the process. Plus I can take a look at the dependencies tab, which might give me further heads up on the consequences of disabling a service. Now, bouncing over to Task Manager, we can click the startup tab and get an idea of the impact of apps that are set to start with Windows. This is by no means a complete list, but it can be helpful; for example, I see that OneDrive has a high impact, but if I only use it rarely I could right-click here and choose disable, and that just disables the app on startup only. 
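The dependency check I just did in the services console can also be done from PowerShell, which is handy when you're scripting this kind of audit. Here's a small sketch using BITS as the example service; the cmdlet and properties shown are standard, though the output will of course vary by machine:

```powershell
# Sketch: inspecting a service's dependency relationships before deciding
# to disable it, using BITS (Background Intelligent Transfer Service).
$svc = Get-Service -Name BITS

# Services that BITS itself requires in order to run:
$svc.ServicesDependedOn | Select-Object Name, Status

# Services that depend on BITS and would be affected if it were disabled
# (Windows Update relies on BITS, for instance):
$svc.DependentServices | Select-Object Name, Status
```

If the DependentServices list is non-empty, think twice before turning the service off in pursuit of faster boots.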
If we look at Task Manager's app history tab, we can see a slice of data, no more than 30 days' worth, ranking CPU time and network usage, metered network usage if applicable, and live tile updates. This information might be useful for selecting from multiple similar apps on systems with limited resources. Somewhat frustratingly, this tab only reports on Windows Store/universal apps. You won't see desktop applications here. However, here's a secret. We can look at network usage for all apps if we go someplace else, namely Settings, Network and Internet, Data Usage, and click Usage Details. Notice that we see an entry for iexplore.exe, and Internet Explorer did not show up in the Task Manager list. Now, the tip of the day, really, is to go get the AutoRuns tool for examining the Windows 10 startup environment. I've pinned it to my start screen here. Now, that's a startup monitor. You could easily spend a couple of hours with this tool, fine tuning your startup environment. We can tune the behavior of Windows Store apps in various other ways, apart from optimizing the startup environment. The hamburger menu, with the three horizontal lines at the upper left of the app window, is typically how we access settings, but these do vary quite a bit from one app to another. In the Windows 10 Settings applet we can navigate to the System tile and Apps and Features to do things like uninstall an app, although this often isn't available, which makes me wonder why Microsoft feels that system stability will suffer if I don't use Microsoft's Movies and TV app; move an app, again something that is often forbidden for reasons that I don't completely understand; and modify an app via the Advanced Options link, for example, to repair or reset it. 
We can modify which apps work by default with specific file types in the default apps page which presents a simplified page at first but we can choose a link that says choose default apps by file type or by protocol which uses a long list of mostly Microsoft-specific URL types. And finally, we can use the storage node to modify the default data save locations by file type such as music, photos, videos and so on. So, let's see quickly where these settings live before we move to the final topic of this module. Back on WS1 Globomantics workstation let's open the settings applet and we'll click the system tile here where most of the global app configuration options live. Now, if we click apps and features over here at the left, we'll see a list of apps on the right. These are just Windows Store and universal apps by the way and we can click on them one at a time to see what options Microsoft will allow us to change, for example, we can neither move nor uninstall movies and TV but via the advanced options link we could actually reset it. If we take a look at oh, I don't know, Microsoft Solitaire Collection, Microsoft will allow us to uninstall that one. The inability to uninstall unwanted apps is somewhat of a step backwards. Windows used to give us more control here. If we click default apps over here on the left, we can see the major categories of file types and that's fine as far as it goes. If you want more detailed control, we can choose the link that says choose default apps by file type and as we can see, there's a lot of them here. If I click the back arrow, we can click the storage item on the left pane to modify saved locations but this is very limited in the sense that we can only select volume labels, that is drive letters here, we can't select folders. 
We can also take a peek at the drive overall by going to the top and clicking This PC, and we can see a quick overview of how storage is used by the operating system, by apps, and by various different types of documents. Of course, Windows 10 itself comes with many mini apps that Microsoft considers to be part of the operating system. These we can manage using the Windows Features control panel. It hasn't changed much over the years. We'll get to it typically through Programs and Features and a little link saying Turn Windows Features On or Off, or we can use PowerShell's Enable-WindowsOptionalFeature cmdlet with the Online parameter, to modify the running operating system, or the Path parameter, to modify an operating system image that we might be building for cloning. We could also use DISM directly, and in fact, this is the code that the PowerShell cmdlet accesses, so it has basically the same set of capabilities and parameters. And that, ladies and gentlemen, completes this module on deploying and configuring applications in Windows 10. The next module deals with working remotely via Remote Desktop, Remote Assistance, MMC consoles and, of course, PowerShell.
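Before moving on, here's a quick sketch of those two equivalent approaches to toggling a Windows feature, using the Telnet client as a harmless example feature; the image mount path is a hypothetical placeholder:

```powershell
# Sketch: two ways to enable an optional Windows feature.

# PowerShell, against the running OS:
Enable-WindowsOptionalFeature -Online -FeatureName TelnetClient

# Or against an offline image mounted for servicing (path is an example):
Enable-WindowsOptionalFeature -Path "C:\Mount\Win10" -FeatureName TelnetClient

# The DISM equivalent of the first command:
dism /Online /Enable-Feature /FeatureName:TelnetClient

# To see which features exist and their current state:
Get-WindowsOptionalFeature -Online | Where-Object State -eq "Enabled"
```

Note that the feature names used by PowerShell and DISM are the same internal names, not the friendly names shown in the Windows Features control panel, so Get-WindowsOptionalFeature is useful for discovering the exact string to pass.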
Configuring Remote Management
Welcome to Configuring Remote Management, where we dive into the topic of connecting remotely to other computers. Topics in this module are four in number: Remote Desktop, now on version nine zillion, when you need to take control of another computer; Remote Assistance, for help desk troubleshooting scenarios that may or may not require taking control of another computer; remoting into other systems via MMC consoles; and remoting in via PowerShell, either for a one-off command or a session consisting of multiple commands. We'll start with Remote Desktop, the tool with which you can basically take over another computer as though you were actually in front of it. Now, when might this tool be appropriate? One scenario that comes to mind is when one of our Globomantics engineers has some content on a work PC that she needs to access from her home computer. Another is a technician who needs to make a setting on a remote user's PC, for example, to modify a wireless profile to reflect a new password. A Globomantics tech support person might need to access a user's laptop to save time during a troubleshooting procedure. Users don't always have the vocabulary to quickly describe a problem they might be having, and remote access might save some time. Although Globomantics hasn't yet implemented a virtual desktop infrastructure, when they do, users will need Remote Desktop to access their personal or pooled virtual machines, and our server admins may need to log onto a remote server; for example, to restart an operating system service without actually driving into work. Another scenario is a little less common than it used to be, given today's impressive computer processors, but some organizations may still need to run very demanding applications that a given user's local hardware may not have the horsepower to execute. With Remote Desktop, the processing occurs on a central server, which may be better able to handle CPU-intensive engineering or statistical analysis. 
And the other thing to remember about Remote Desktop is that if a user is logged on already, remoting into that system will bump the user off, although the user has 30 seconds to deny the incoming connection. If you want to establish an interactive, two-way remote session for troubleshooting purposes, you could use the separate Remote Assistance feature, which works basically as it did in earlier versions of Windows. We'll talk about it in the next clip. Or you could use the remote control tools built into System Center Configuration Manager. So, what goes on behind the scenes in a Remote Desktop session? A Windows 10 Remote Desktop client can remote into different kinds of systems, including a virtualization host running virtual machines. The traffic that occurs between these computers is input/output traffic; that is, keyboard, mouse, and display data. Another type of server that a Windows 10 user can remote into is called a session host, or what we would have called in the past a terminal server. In this mode, we install applications on the server, and they run in a multi-user fashion in Remote Desktop sessions. Once again, the data that traverses the connection is I/O data. The client can remote into a server that is configured for remote administration, i.e., remote management and maintenance, rather than configured as a virtualization host or a session host, and the client can remote into another Windows 10 PC, as long as it's not running Home edition. Bear in mind that any system you may want to remote into must be powered on, so when thinking about a Remote Desktop strategy, we also have to think about power plans, hibernation, sleep, and stuff like that. So how do we enable Remote Desktop? On the server side, installing the Remote Desktop Services role enables the connection in the firewall, so we really don't have to do anything extra. On a workstation, however, we enable Remote Desktop in the System control panel, which also enables the firewall exceptions.
And by the way, we do this in the desktop Control Panel, not the newer Settings applet. So what about security risks, which you might expect to be significant with this type of utility? Well, first off, Remote Desktop is off by default, so no system can be accessed unless it has been configured to allow access. Secondly, Remote Desktop connections are encrypted, so there's little danger of snooping attacks. Only local administrators have access, unless other users are explicitly added. Now remember, by the way, that domain admins are already, by default, local admins on domain member computers. Next, we have the ability to block or tailor Remote Desktop via Group Policy, which overrides any local settings. We can control RDP traffic in the firewall by blocking port 3389, and we can use a gateway to further control access to remote systems. Something called Network Level Authentication further enhances the security of RDP connections, so when enabling inbound access, an administrator can choose the NLA checkbox. NLA authenticates the user before the session is actually established, which is just a more secure way of doing things, and also reduces the chance of denial-of-service attacks. It works with all modern versions of Windows, so that shouldn't be an issue for us at Globomantics, for example, where the oldest operating system is Windows 7, and we can require NLA via Group Policy, which we're going to do at Globomantics to ensure that users can't turn it off. Speaking of group policy, the bulk of the Remote Desktop settings that control who can access a system and with what constraints are in the computer half of the Group Policy console under Policies > Administrative Templates > Windows Components > Remote Desktop Services and Remote Desktop Session Host. Here's where we can control whether users are allowed to connect remotely, whether device redirection is allowed, what the session time limits might be, and what the default encryption level should be.
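Incidentally, for those who prefer scripting to clicking, the same workstation-side configuration can be done from an elevated PowerShell prompt. This is just a sketch of one common approach; the registry values shown are the ones the System control panel manipulates, and note that firewall rule display group names are localized on non-English systems:

```powershell
# Enable Remote Desktop (the same value the System control panel sets)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
    -Name fDenyTSConnections -Value 0

# Require Network Level Authentication for inbound connections
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
    -Name UserAuthentication -Value 1

# Enable the firewall exceptions for RDP (TCP 3389)
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
```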
Users who access a remote system using Remote Desktop must belong to the Remote Desktop Users group on that system. We can add users to that group manually, for example, via the System control panel's Remote tab or via the Computer Management console, COMPMGMT.MSC. At the network level, we can use group policy to modify the user right designated "Allow log on through Remote Desktop Services," which initially contains the Administrators and Remote Desktop Users groups. Okay, well, let's take a fast look at where to enable or disable Remote Desktop. Here I am on my bright purple Globomantics workstation, GM-WS2, and I'd like to verify that the system's been enabled for Remote Desktop access. We'll do this manually, so you can see the procedure. So I can just right-click the Start button and choose Control Panel, go to System using the small icons view here, and then click Remote settings. Now, here I can see that the setting has been changed from the default of Don't allow to Allow remote connections to this computer, and notice also the checkbox for Network Level Authentication. And that's something I would choose, given that this environment doesn't have any Windows XP systems anymore. I also have the Select Users button here where I can modify the list of people who have the right to access this system remotely, and note the reminder that any members of the Administrators group can connect, even if they're not listed here. And again, domain administrators, by default, are members of the local Administrators group on domain-joined PCs. RDP stands for Remote Desktop Protocol, and it's the suffix that applies to Remote Desktop connection icons. You create an RDP file when you make a connection and choose to save your settings. You can then modify the properties of that connection object later via the RDP file's properties page. These settings include the computer name that we're remoting into and the credentials to log onto that computer.
We can also specify the display resolution, whether we want to use multiple monitors, and whether to use full-screen operation. We can also set whether we wish to use remote audio, which can be a bandwidth hog, so maybe test this out on your network first. We can set the preferred behavior of Windows key combinations to define whether they will be interpreted by the local machine or the remote machine, and we can specify whether to permit local resources such as local drives or printers to be available during the remote session, which is really useful, for example, if we need to drag and drop some files between systems or print some kind of a report. The client-related group policy settings, which apply to computers that are remoting in to other computers, live in a containing node called Remote Desktop Connection Client. Just remember that if you're remoting in, you're a remote desktop client, and if you're being remoted into, you're a remote desktop server. Now, many of the RDP options that affect the perceived performance of the session across potentially busy network connections can be fine-tuned. You can let Remote Desktop figure out the settings that it thinks will work best, optionally telling Remote Desktop your typical connection speed, or you can control the settings individually: the desktop background, which usually isn't needed; font smoothing; desktop composition, which matters more with modern displays; whether you want to show the window contents while dragging; menu and window animation; and the application of visual styles. Speaking for myself, I tend to turn all this stuff off, but you can work out the best compromise for your users. And now, let's take a look at configuring and using Remote Desktop. Now, here I'm on my dark gray workstation, GM-WS1.
Let's click Start and just type mstsc, which is the executable, and now I can click Remote Desktop Connection, and I can manually make all the settings that we just discussed by clicking the Show Options button here, which displays a multi-tab window that some of you have probably seen many times before. Note on the General tab the checkbox to permit the saving of credentials, which may be convenient, but may also reduce security. The Connection settings buttons let us create an RDP file containing the current settings. The Display tab lets us set screen size, multiple monitors if applicable, color depth, and whether to display the connection bar in full screen viewing, which is handy if you might otherwise forget whether you're in a remote session or not. The Local Resources tab is where you can set audio playback and recording preferences, control the standard Windows keyboard combinations, and choose which local devices and resources you want to be available during the session, including drives, which you can only see by clicking this More button. Note that the drives category can include mapped drives and drives that you plug in later. So we can select the local disk C: and the DVD drive D:, and if we had any USB drives connected, we could select those, too. The Experience tab lets us optimize the performance-related settings via connection speed using the dropdown, and notice that if I select a specific speed, for example, LAN (10 Mbps or higher), I can then come in and modify the individual check boxes, which I cannot do if I tell Windows to detect the connection quality automatically. The Advanced tab lets us customize Remote Desktop behavior when performing server authentication, and it also lets us configure the Remote Desktop gateway. I'll choose Hide Options, and here's the name of another computer on the domain, GM-WS2. I could also specify an IP address here.
And if I don't enter the fully-qualified name, Windows will tack on the default domain suffix, globomantics.local. I can click Connect here, and I'm prompted for credentials. When I provide them and click OK, I'm logged into GM-WS2. And by the way, any existing local session on GM-WS2 will get bumped if I click Yes. And at this point, the local user has 30 seconds to say, hey, I don't want you to bump me off. Now, to verify that my local C: drive on GM-WS1 is still available during my remote session into GM-WS2, I'll open File Explorer and This PC. Then we can see the local disk C:, and we can also see the drive C: on GM-WS1. That's my local machine. Now, to see the new Zoom feature in Windows 10, I'll restore down, and then click the RD icon at the upper left. There are a variety of zoom options available here, plus something called Smart sizing, which attempts to zoom the desktop appropriately and automatically as I resize the window. So, these are some of the more common Remote Desktop connection options. Remember that you can preset many of these via group policy, too, in which case they would be grayed out in the Remote Desktop client. Now, we can set up a Remote Desktop Services machine to host sessions, that's the session host, or to host virtual machines, which is the virtualization host. In the session host model, desktops and applications are shared across multiple users, and so the applications must be compatible with that mode of operation. Some aren't, by the way. The VDI administrator will install the needed applications on the session host server. In the virtualization host model, users connect not to a session, but to a full-fledged Hyper-V virtual machine, which provides greater isolation and fewer compatibility issues, so the applications don't need to be multi-user. It also involves installing applications on the VMs, therefore requiring more disk space overall than the session host model. So why should Globomantics consider a VDI deployment?
One benefit might be that administrators can exert tighter control over the user computing environment, because everything executes centrally. Another is that users can run Remote Desktop on client hardware that might not be up to the task of running the desired application or application mix. Changing user configurations is easier, too, and a VDI permits users to run Windows on a non-Windows device, considering that there are remote desktop clients for several non-Microsoft platforms. And finally, users can work using personal devices, without having to install applications or modify settings on those devices. And that's important, because we may not want work applications interfering with our game platform. We've got to keep our priorities straight. What are the different ways that users can connect to remote sessions or VMs using Remote Desktop? Well, first, a user can connect directly by firing up Remote Desktop Connection on an internal network and specifying the computer name or IP address, as we've seen. Another way is by using a device also on an internal network in which the user fires up a browser and points it to a server running Remote Desktop Web Access. That might be useful if there are several VMs or remote apps and we want to make it easy for the user to see what's available. Third, the user with a browser can connect from outside the corporate network if we add a Remote Desktop Gateway to the equation. And finally, the Windows 10 user can fire up the RemoteApp and Desktop Connections utility and connect that way, too, as long as an RD Web Access server exists. RD Web Access and RD Gateway are interesting topics, but they're really more relevant in a server course than a client course. So, we'll leave it there.
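One more tip before we leave Remote Desktop: if you find yourself making the same connection repeatedly, the mstsc client supports command-line switches, so you can skip the dialog box entirely. Here are a few of the common ones; the saved-file path is just a hypothetical example:

```
rem Connect directly to GM-WS2
mstsc /v:GM-WS2

rem Same connection, but in full-screen mode
mstsc /v:GM-WS2 /f

rem Connect to the administrative session on a server
mstsc /v:GM-WS2 /admin

rem Launch a previously saved RDP file (path is an example)
mstsc C:\Connections\GM-WS2.rdp
```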
Now, let's look at Remote Assistance, a tool for interactive troubleshooting on Windows networks. Remote Assistance has been part of Windows for many years, and it's still present in Windows 10. It's designed for the classic help desk interaction scenario. When a user needs tech support, the user and the tech get on the phone, and both people can access the user's computer at the same time, usually by taking turns. The core functionality is to view and/or control the remote desktop, but there's also a chat facility for situations where a phone call isn't practical. Remote Assistance differs from Remote Desktop in that a Remote Assistance session does not kick the local user off of his or her system, and the mouse and keyboard may be controlled by both the local and the remote party. Invoke the program with MSRA.EXE or by searching on a word like invite, or you can go there directly through Control Panel > System and Security and then Launch Remote Assistance. So-called Solicited Remote Assistance means that a user submits an invitation in one of several ways to a support person. The user can save the invitation as a file to share on the network and then communicate the related password to the technician by phone or email. The user can send the invitation via email or use something called Easy Connect, which depends on an internet connection and which uses a temporary password. Network administrators can configure Remote Assistance using group policy. The relevant setting is Configure Solicited Remote Assistance, and it permits configuring the allowable invitation method, the maximum lifetime of an invitation ticket, and whether tech support can just view the user's system or also take control of it. Now, Easy Connect usually isn't all that easy. In most networks I've worked with, it just produces an error message saying, "Can't connect to the global peer-to-peer network."
It turns out that a specific protocol, the Peer Name Resolution Protocol, is required for Easy Connect, but most routers don't support it. PNRP and its associated services work fine within the context of a homegroup, but when you're traversing the public internet, it becomes an iffy proposition. If a technician doesn't want to make a user submit an invitation, such as, for example, when a user's already on the phone with a technician, Unsolicited Remote Assistance lets the technician start the process by sending the user an offer of assistance. This, too, can be configured with group policy via the Configure Offer Remote Assistance setting, which lets us specify which Windows groups are allowed to be helpers, and whether those helpers can control remote systems or just view them. Now I'll show you what Remote Assistance looks like. Okay, let's say that I'm working on GM-WS2 with the purple background, and I run into a support issue, so I decide to request Remote Assistance. Now, a fast way to accomplish this is just to type the word invite into the search field on the taskbar and then click the link that appears in the results. I'm the user in this scenario, so I'm going to choose the top option, Invite someone you trust to help you. And now I can choose the method of inviting a technician. So I can save this invitation as a file, for example. I could choose to save this file locally, then attach it to an email message if I'm using a web-based email system that Windows doesn't automatically recognize. But in my case, a network location is already set up for this purpose, \\gm-dc1\assistance, and so I'll go ahead and save the invitation, and I'm going to receive a password that I'll need to provide to the helper. Now, bouncing over to GM-WS1, where help desk technician Harry is already logged on. Let's say Harry gets a call from the user that there's a pending invitation. So, here Harry can navigate to \\gm-dc1\assistance and open up the .msrcIncident invitation file.
Now, Harry's going to enter the password that he got from the user and click OK. Now, notice the message at the bottom of the screen saying Waiting for acceptance. At this point, the user requesting help needs to take action. So we'll bounce back over to GM-WS2, and we can see the prompt. Remote Assistance is asking whether I'd like to allow Harry to connect to my system. So there are two safeguards here. First the password, and second, this granting of permission. So I'll click Yes, and then in the Windows Remote Assistance window, I can see that Your helper can now see your desktop. And now, from Harry's system, I can indeed see the user's desktop. If I want to take over control of it, I can just click Request control along the top menu bar. There's also an option to open up a chat session if I want to do that; for example, if a phone link is not available. And when I'm done troubleshooting, I can just close the Windows Remote Assistance window, and bouncing back over to GM-WS2, I see a message stating that the session has ended. And that's just a quick look at what Remote Assistance looks like in Windows 10.
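For reference, both sides of that exchange can also be driven from the command line with msra switches. These are real switches; the share path, file name, password, and computer name here are just examples matching our demo network:

```
rem User side: create an invitation file protected by a password
msra /saveasfile \\gm-dc1\assistance\invite P@ssw0rd!

rem Helper side: open a saved invitation
msra /openfile \\gm-dc1\assistance\invite.msrcIncident

rem Helper side: offer unsolicited assistance (if permitted by group policy)
msra /offerRA GM-WS2
```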
One of the drawbacks of tools such as Remote Desktop and Remote Assistance is that they tend to consume a fair amount of bandwidth. For many remote management scenarios, we don't have to use heavy tools like these. Instead, we can fire up an administrative console that can communicate with a remote system using more efficient methods. The general method for using such remotable consoles is to find the topmost node in the Navigation pane, which will generally feature the name of the local computer. Right-click that node and choose Connect to another computer. Examples of consoles that offer this capability include Event Viewer, Computer Management, Hyper-V Manager, Performance Monitor, Services, Task Scheduler, and most of the consoles that you download via the Windows 10 Remote Server Administration Tools (RSAT) package. There are some notable exceptions to the list of consoles that are easily remotable. Device Manager, Disk Management, Task Manager, and Resource Monitor are among these; however, there's a little trick we can use even in some of these cases. Build yourself a new console by running the MMC.EXE console shell program. Add the specific snap-in, such as Device Manager, and when prompted, specify the focus of the snap-in by typing in the name of the remote computer. I'll show you this technique in a minute. Now, console remoting doesn't always work the first time. The remote system may need to be running specific services in order to support the console that you're using: Windows Remote Management, the .NET Framework, the Remote Registry service, and so forth. So don't give up if your console doesn't connect successfully right away. A little web research should help you identify any prerequisite services on the target computer, and you can even use group policy to ensure that those services start automatically.
Sometimes, too, you may need to open up ports on the firewall in order to use console remoting, and other times, the technician may need to belong to a special Windows security group, such as, for example, Event Log Readers, in order to access the data on the remote machine. Well, let's take a quick look at how console remoting works in Windows 10. I'm on GM-WS1 logged on as Harry. Now, an example of a console with built-in remoting is Computer Management. Let's fire that one up by just typing compmgmt in the search field, and we'll choose Computer Management in the search results area. Now, if I expand this out, we can see at the navigation pane's topmost node the designation Computer Management (Local). Now, if I right-click that node, I can then connect to another computer and type a name or an IP address. If I specify GM-WS2, I can connect to that remote system, and then if I open the System Tools node, after a delay, I get an error, even though the path is correct and the computer is available on the network. So according to this message, we could have a firewall issue. And here's some advice in the middle of the message about some of the rules to enable, including the Remote Event Log Management group. Let's bounce over to GM-WS2 and take care of that. Okay, here I am on GM-WS2, the purple machine. And I'll open up Windows Firewall with Advanced Security. And I'll choose my Inbound Rules. Let's filter by domain profile, and let's filter by the Remote Event Log Management group. And let's enable these three rules. Now we'll bounce back over to GM-WS1. I'll close and reopen the console here. Connect again to GM-WS2 and open System Tools. No error now. So this is the sort of thing you may have to do when remoting into other systems for management purposes, and we might not be done yet. For example, if I click Device Manager, I'll see a new message that advises me to turn on Plug and Play and the Remote Registry service on the target machine.
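If clicking through the firewall console gets tedious, the same fixes can be scripted. Here's a sketch, assuming an elevated PowerShell prompt on the target machine; bear in mind that rule display group names are localized on non-English systems:

```powershell
# Enable the inbound rule group the Computer Management error pointed to
Enable-NetFirewallRule -DisplayGroup 'Remote Event Log Management'

# For the Device Manager node, the Remote Registry service must also be running
Set-Service -Name RemoteRegistry -StartupType Automatic
Start-Service -Name RemoteRegistry
```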
So as you go to the individual nodes, you may have further ports and rules that need to be opened up and services that may need to be started on the target system. Now, if I close out of this console and open a console that does not have that built-in remoting capability, let's see how we can work around it. So I'll type diskmgmt. Now, notice that there's no place in this console to connect to a remote computer. Here I have a couple of choices. I could go back to Computer Management, which actually incorporates this Disk Management tool, and connect like I just showed you, or I can run MMC.EXE, and I'll choose File and Add/Remove Snap-in, and I'll scroll down here, and I'll add Disk Management. And now the wizard asks me which computer I want to manage. So I can specify GM-WS2 here. Finish, OK, and, voila, I can see the volumes on GM-WS2. Now, if you're managing Active Directory from Windows 10, some of the RSAT tools have more options than just connecting to a different computer. For example, if I fire up Active Directory Users and Computers, I can right-click the topmost node here and choose to change the domain controller or to change the domain if I want to stop managing globomantics.local and start managing some other domain in my Active Directory forest. Also, some of the RSAT consoles let me manage multiple computers at once, such as the DNS console. Finally, if I fire up Server Manager, another RSAT tool, I'm not managing any servers right now, but I can actually manage multiple servers at once by choosing the link, Add other servers to manage. So I'll go ahead and add my domain controller, click it, and click the arrow, and I'll click OK, and now, when I look at All Servers, I see GM-DC1, and I can see the services that are available on that server. I can now add more servers if I'm so inclined, or actually create a server group if I want to manage lots of servers and organize them into groups.
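One more console-remoting shortcut worth knowing: some MMC consoles accept a /computer switch at launch, so you can open them already focused on a remote machine and skip the right-click step. Computer Management is one that supports this, as far as I'm aware:

```
rem Open Computer Management focused on the remote PC GM-WS2
compmgmt.msc /computer:GM-WS2
```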
You may have noticed Microsoft's increasing emphasis on PowerShell for configuration and management. It's not just for scripting anymore. We can use PowerShell remotely in Windows 10, but as with console remoting, PowerShell remoting often requires a bit of finagling. Let's see how it works. There are actually three ways to run PowerShell cmdlets against a remote computer. First and easiest is when the specific cmdlet supports remoting natively, typically via a parameter like Computer or ComputerName that we can just tack on to the parameter list when we type the cmdlet. Second, for cmdlets that don't have such native support, we can embed the cmdlet in another one, Invoke-Command, which always has a ComputerName parameter. And then third, if we want to execute multiple cmdlets and we don't want to have to type the ComputerName parameter each time, we can establish a remote session. Let's look at these three methods, and then I'll demo them. Method one is pretty straightforward. The bummer is that not all cmdlets support native remoting. For the ones that do, this is the easiest method. There's no prep work to set things up for remoting. We just include a ComputerName parameter in the cmdlet. Examples of cmdlets with native remote access capability include Get-EventLog, Invoke-GPUpdate, which is handy for those of you who might be working with group policy, Get-Process, Get-Service, Restart-Computer, and so on. As with console remoting, you might need to open up some firewall rules and start some services on the target computer, depending on what data you're trying to access or what operation you're trying to perform. Method two involves a little sleight of hand. For cmdlets that don't have a ComputerName parameter, we just wrap the cmdlet inside another one that does, namely Invoke-Command. For example, we can run Invoke-Command with the ComputerName parameter GM-WS1 to execute a script block that starts the eventlog service.
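Here's roughly what methods one and two look like at the prompt; a sketch assuming the firewall and service prerequisites we've discussed are already in place on the target machines:

```powershell
# Method one: the cmdlet supports remoting natively via -ComputerName
Get-Service -ComputerName GM-WS2

# Method two: wrap a cmdlet that lacks -ComputerName inside Invoke-Command
Invoke-Command -ComputerName GM-WS1 -ScriptBlock { Start-Service -Name eventlog }
```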
In another example with somewhat different syntax, we again run a cmdlet against GM-WS1, but this time, we're running a script named script.ps1. We can use Invoke-Command with multiple target computer names separated by commas. Now, the third method is useful when you need to execute multiple PowerShell cmdlets as a batch. This method requires Windows Remote Management to be running on both computers. The usual way to set this up is to type winrm quickconfig, or winrm qc for short, if you're in a hurry, and then we make sure the remote computer is enabled for remoting; for example, on Windows 10, using the Enable-PSRemoting cmdlet. As an example here, Enter-PSSession establishes a session with the remote machine, GM-DC1 in this case, and then we can execute whatever commands are required, such as Get-Service, and at the end of the session, we terminate it with Exit-PSSession. And now it's time for a brief demo of PowerShell remoting, all three flavors. I'm here on GM-WS1, and I've opened a PowerShell window. Let's first demo using a cmdlet with native remote access built in. So I just got a list of all the services running over on GM-WS2. So that's about as easy as it can be. Now let's try a cmdlet that does not have the ComputerName parameter. Let's say we wanted to stop and restart the print spooler on GM-WS2. We could do that using Invoke-Command, putting the meat of the commands into a script block as follows. Okay. Let's see if it worked. Okay, it did. The spooler is stopped. Now, let's restart it remotely. The up arrow is our friend here. And using the up arrow again to see whether it worked. And it did. The spooler has restarted. Now, by the way, this sometimes fixes printing problems in Windows. Now let's look at the third method, which is to set up a session. We'll do the same exercise with the print spooler, but this time, we're going to set up a session, since there's more than one command and it might save us some keystrokes.
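The session-based version of the spooler exercise would look something like this; a sketch assuming PowerShell remoting has already been enabled on GM-WS2:

```powershell
# One-time setup on the remote machine, from an elevated prompt:
#   Enable-PSRemoting -Force

Enter-PSSession -ComputerName GM-WS2   # the prompt changes to show [GM-WS2]
Stop-Service -Name spooler
Start-Service -Name spooler
Get-Service -Name spooler              # confirm the spooler is running again
Exit-PSSession
```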
Okay, so I've started the session, and you notice that the prompt has changed to indicate that I'm working now against GM-WS2. You can see how setting up a session can save you some typing if you need to run more than one cmdlet. The prompt reminds you that you're connected to a remote computer to prevent confusion. And so those are the three ways to use PowerShell cmdlets across your network. Well, you've reached the finish line for this course. Many congrats. I hereby declare you a member of the Windows Core of Engineers, and convey the good news that this is the last bad joke in this particular course. Thanks for watching, and I hope to see you soon in another Pluralsight class.