PowerShell Toolmaking Fundamentals
  1. Prerequisites In this course, I'm assuming you at least have a passing familiarity with PowerShell. I'm expecting you to have at least done some copying and pasting of scripts. You don't have to be at a full intermediate level, but you at least need to understand core PowerShell concepts like what a script is, what the console is, etc. If you'd like to follow along with the demos, and you should, you'll need a small lab with a couple of VMs. Technically, you only need a single machine with Windows 8.1 and PowerShell v4 installed if you're skipping module three. If you'd like to follow along with module three, which consists of managing accounts in Active Directory, you'll also need a virtual machine running at least Windows Server 2008. There's no specific scheme or recommendation beyond that. It goes without saying, you're also going to need permissions to do this as well. To remove the complexity of figuring out permissions, I'll be demoing everything with a domain administrator account. I don't necessarily recommend this, so if you want, you can try the demos with a less-powerful account; however, be warned that things may not work out like you intend, and you might not get the concepts down if you're too busy figuring out permission issues.

  2. Summary With the completion of this first introductory module, you now know what you're in for. I hope you're on board with the whole tool and toolmaking concept and that you have a firm grasp of the skills you should possess. Finally, again, this course intends to teach you to fish, not necessarily to lay out a grand plan for how to do everything. Feel free to stop the course at any time, experiment, and come back.

  3. Moving from the Console to a Script Let's Go Scripting Hello, this is Adam Bertram, and this is module two of the PowerShell Toolmaking Fundamentals course, Moving from the Console to a Script. In this module, I'll be briefly going over some beginner concepts like the PowerShell console and what a PowerShell script consists of. This module is for beginners who may not necessarily have built their own script from scratch before. By the end of this module, you should have an understanding of PowerShell basics and what it takes to take a command from the console and convert it into a script. The PowerShell console is home base. It is the place where your PowerShell journey begins. It is the place where you'll get familiar with PowerShell and what it can do. The console, in its simplest terms, is a command prompt replacement. You know cmd.exe, right? Think of the console as that, but rather than just running exes and batch files, it runs compiled .NET code as well. The PowerShell console is so similar to cmd.exe that you can and should completely replace cmd.exe with it. Inside PowerShell are aliases. Some of these aliases were created to ease the transition from cmd.exe to the PowerShell console. In the command prompt, you typically run commands like dir or del, right? You can do the exact same thing in PowerShell and accomplish the exact same result; you're just getting there a slightly different way. This is one of the features the PowerShell team at Microsoft put into place to make PowerShell less scary for beginners. If you've mastered the console, moving your skills to a script is a piece of cake because a script is just a group of console commands inside a text file. Rather than running one command at a time in the console, you group these commands in a text file, or script, and then execute them in sequential order. In this module, we'll go over creating your first script.

  4. The Console vs. Scripting If you're already executing commands in the console, moving them to a script should be no problem at all, since a script is just a combination of console commands. Let's dive into a demo here. We'll be running some basic commands in the console and then move those commands into a script and see what happens. So you've been using PowerShell as a command-line replacement for a while now. It's time to move your skills to the next level by creating your first script. The moment you find yourself wanting to execute more than one command at a time is the time to up your game and create a script, so let's go over a brief demonstration of this. I've got the console already open, and let's say I just want to list the files in the current directory. To do this, I'll just type dir, hit Enter, and get some results as you see here. Notice how the results come back immediately. In the console, you type what you want, hit Enter, and you get the results back. That was an extremely simple example. Let's take it a step further and say I not only want to get a list of files in the directory, but to keep track of what server I'm running this on, I also want the server's hostname shown before the directory listing. For this, I need to use a variable, which is completely unrelated to the dir command. So how would I add another command if, as soon as I hit Enter to see the results, I can't do anything else? The solution is to use a semicolon in between them. So let's try this again, and this time I'm going to use a semicolon in between the variable and the dir command. To do this, I'll just run cls to clear the screen, and then I'll use the computer name variable followed by dir again. See how I've now added the two pieces together? The semicolon is the way you can perform multiple commands at the same time. It's much like the double ampersand in cmd.exe, so if you're used to that, I'll drop down into cmd.exe and show you the same thing. The equivalent command in the old-school batch world of cmd is hostname. So I'll do both of these, and you'll see the same thing: WINDOWS-VM shown as the hostname and the directory listing of all the files in the current directory. So I'll exit back out of the command prompt, clear the screen, and get back into PowerShell. The semicolon works to perform multiple commands on one line, until you've got dozens, hundreds, or even thousands of lines. So what then? Then you need a script. Writing a script simply consists of editing a text file. This isn't a programming language per se that makes you write out code and compile it before it will work. PowerShell is what's called an interpreted language, which means you can just write out PowerShell commands in a text file, point PowerShell.exe to the text file, and it will run without a problem. Because a PowerShell script is just a text file, I'm just going to open up Notepad and create my script there. Granted, you'll probably use a script editor more suited for this eventually, but for this simple example I'll just create a plain text file in Notepad. And I'm going to do the same thing we did in the console, in a script. So I'm just going to type dir and save it as SampleScript.ps1. The ps1 file extension is the extension that all PowerShell scripts use, so whenever you save a script, you always have to save it as a ps1.
So I'll go ahead and save this, I'll go back into my console here, and I will execute it. And you'll see, as soon as I executed the script, the exact same results as I got just typing the command in the console, although this time I executed the ps1 script. What's happening in the background there is that the PowerShell engine is actually taking that ps1 script, interpreting it, and executing every command that it sees in there. In this instance it's the same thing; I can type dir or run my sample script and get the same result. But the biggest advantage to using a script is that you now have the opportunity to add an endless number of commands in there, so you don't have to constantly chain everything together with semicolons. So let's go back and add that computer name variable in the script before our dir. And I'll save it, clear the screen again, and run it. You now see I have the exact same output as I did when I typed it in with the semicolon: the WINDOWS-VM hostname and the list of files in the current directory. The reason that happens is because, in a PowerShell script, a line break signifies the end of a command. PowerShell knows to look at the line break. If it sees a line break, it knows that the computer name variable is a different command than dir, so it can differentiate the two. I guess if I really wanted to, I could still use a semicolon and it wouldn't bother PowerShell at all, so I'll just do that. It's the same thing, but you can save a few keystrokes by eliminating the semicolons. So that's the basic difference between the console and a script. Let's now get into the next clip, where I'll go into a little more detail on using PowerShell as a scripting language instead of just a command interpreter.
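In code form, the console one-liner and its script equivalent from this demo look roughly like this (a minimal sketch; the demo's hostname variable is $env:COMPUTERNAME):

    # In the console: two commands on one line, separated by a semicolon
    $env:COMPUTERNAME; dir

    # In SampleScript.ps1: the same commands, one per line, no semicolons needed
    $env:COMPUTERNAME
    dir

    # Run the script from the console
    .\SampleScript.ps1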

  5. PowerShell as a Scripting Language You've seen an example of writing a very simple script and seeing how PowerShell executes it. Now let's take a quick step back and show you some of the details I glossed over in the last script. PowerShell has a security feature known as an execution policy, and it will be the first roadblock you hit when you execute your first script. The execution policy is a mechanism that either allows or disallows the local computer to run PowerShell scripts. There are essentially four execution policies, from most restrictive to least restrictive: Restricted, AllSigned, RemoteSigned, and Unrestricted. By default in Windows 8.1, the execution policy is set to Restricted. This means that console commands can run, but no scripts can be executed. You then have AllSigned and RemoteSigned. These are execution policies that have to do with script signing and certificates, and they're out of scope for this course. And finally, we have Unrestricted, which, as you may have guessed, has no restrictions at all. You can run console commands and all the scripts you want without the execution policy getting in your way. This is what we'll be setting the policy to in our upcoming demo. One of the first roadblocks you're going to run into as soon as you try to start running scripts is the execution policy, which is why we need to talk about this sooner rather than later. In the previous demo, I already had the execution policy set to Unrestricted, so I cheated a little bit, but we need to address this now so you don't go punching your monitor when you try to get a script to run. So this is a brand new install I'm on, and I just launched the console, so let me try to run that sample script we just wrote. Red text is bad, right? Read the error message a little bit. It says running scripts is disabled. Now that's not very convenient, is it? This is Microsoft's Secure by Default initiative, which means they lock as much down as possible until you ease it up yourself; that way, I guess, you're the one that can be blamed for any security problems, who knows. Anyway, this is an error about the execution policy, so first let's check what the execution policy is set to right now. Okay, I can see that it's set to Restricted, which is a problem since I can't execute a single script this way, so let's change that. To do that, we use Set-ExecutionPolicy with its ExecutionPolicy parameter. If I want to see all the options I have, the ExecutionPolicy parameter has handy dandy little tab completion here, so I can just tab through the options. As a side note, tab completion is extremely handy, and I use it all the time, so try it next time you're in the console: just type a few letters of a command or parameter and watch it magically appear. Anyway, we're just going to set this to Unrestricted for now, so I'll tab over to Unrestricted, because I'm lazy and I don't like to type, and hit Enter. It asks me if I want to. Yes, I want to. Now let's test it out and see what it is now. And now it's set to Unrestricted. So let me try that sample script again, and it works. The execution policy is kind of like a gate you have to get through whenever PowerShell is first started. There are more levels of the execution policy than the four I covered, as you saw here tabbing through the ExecutionPolicy parameter: AllSigned, Bypass, Default, and so on.
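The sequence of commands used in this demo boils down to something like this (a minimal sketch):

    # Check the current execution policy
    Get-ExecutionPolicy

    # Allow all scripts to run; prompts for confirmation
    Set-ExecutionPolicy -ExecutionPolicy Unrestricted

    # Verify the change, then try the script again
    Get-ExecutionPolicy
    .\SampleScript.ps1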
We're not going to go over most of these policy levels in this course because they're out of scope, so for now we're just setting the policy to Unrestricted so we can actually get some stuff done. Another thing: thankfully, you only have to do this once because the setting is stored in the registry. So I can close the console here, open it back up, check again, and it's still set to Unrestricted. You can reboot, close the console, do whatever you want; it's reading from a value inside the registry. So that does it for setting the execution policy to Unrestricted. I showed you earlier how to create and edit a script with Notepad. I did that not as a best practice, but to show you how simple it is and how the PowerShell engine itself really doesn't care what you use to edit the script. We humans, however, need some kind of structure and organization to the code to understand it. We care about the code looking readable almost more than what the code actually does. Because of this, we have script editors, sometimes referred to as IDEs or Integrated Development Environments. These editors not only give you a place to edit your code, but provide useful features like alerting you to potential problems with your script, auto-completing the code for you, and many debugging features, which assist you when tracking down problems with your code. There's a script editor that comes with PowerShell known as the PowerShell Integrated Scripting Environment, or PowerShell ISE. This is a fairly robust script editor that can really help expedite the script-making process. It will also be the editor I'll be using throughout this course. I've got the ISE open here at its default settings. Immediately, the first thing you'll notice is that it looks a lot like the PowerShell console itself. Well, technically it is. The PowerShell console is technically a PowerShell host. A PowerShell host is an environment that allows you to run commands against the PowerShell engine. The PowerShell ISE has its own host, and this is what you're seeing here. It's identical to the console that we've been working with. So I want to use the ISE to edit scripts. How do I do that? The ISE has multiple window panes available for use. One of those is the script pane. To expose this, I need to go up to View and click on Show Script Pane. You immediately see a spot where I can start creating or editing scripts. If you look at the View menu a little more, you'll see there are multiple panes we can turn on and off. One of the very useful ones when you're just getting started is the Command Add-on. So I'll show that here, and you'll see that it opens up a command window to the right with a bunch of commands, or cmdlets, displayed to the side. These are all the cmdlets available to you right now. By clicking on a cmdlet, the ISE brings up that window and shows all the parameters. So I can click on one of these here, hit Show Details, and see all the parameters I can use for that cmdlet. This is extremely helpful when you're trying to find just the right cmdlet and don't know exactly what its parameters are. You can even fill out the parameter arguments here and click Insert, and that will insert the cmdlet with all the parameters already filled out for you into the script pane. There's also a nice search feature up here. So if I want to find all the cmdlets that start with Get-, for example, I can easily see all the cmdlets available to me right now that start with Get-.
And you can also limit the cmdlets you see to a particular module using the drop-down above. So let's say I only want to look at the MMAgent module. The only Get- cmdlet inside of MMAgent is the Get-MMAgent cmdlet. If you've never created a module before, stay tuned for module seven, and we'll create a very robust one. The ISE also includes all the common stuff like creating, opening, and saving scripts. So here I can go New, Open, Save, Save As, that sort of thing, all the common stuff that you're used to. Another cool feature of the ISE is IntelliSense. IntelliSense is a feature of many script editors that auto-completes code for you and gives you hints. So let's say I'm looking for a certain variable name, and I don't remember what it is. Since all variables start with a dollar sign, I can just type a dollar sign, and the ISE automatically knows what I'm trying to do and displays a drop-down of all the variables that are currently available to me. So I can just scroll down here and pick the one I want. The same goes for cmdlet names. So let's say I just type a few characters here. Get- shows me all the cmdlets I have available to me. Set-, same thing. IntelliSense kicks in as soon as you complete a verb: Set-, Get-, Add-, that sort of thing. And finally, the last feature I'd like to show you is running snippets of code. Remember how I was able to execute the script earlier? That was the entire script I was executing. But what if I just want to test a piece of it? Using the ISE, you can. So let's say I have the computer name variable, $env:COMPUTERNAME, then dir, then hostname, something like that. If I were to execute all of them, I would have three separate outputs here. But what if I just wanted to execute dir? I could just highlight dir here and then click on Run Selection, and it runs just that piece of code. Or I can optionally press F8 if I'm a keyboard guy and do the same thing. This option is very handy when debugging code. Imagine you have hundreds and hundreds of lines here, and you just want to test a few of them. All you have to do is highlight them, click it, and it will run just that piece of code. So, remember this for module six especially.
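For reference, the three-line snippet from this demo looks roughly like this; highlighting only the dir line and pressing F8 (or clicking Run Selection) runs just that command:

    $env:COMPUTERNAME   # echo the computer name variable
    dir                 # list the files in the current directory
    hostname            # classic executable that prints the hostname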

  6. Summary If you take away anything from this module, it is this: don't be afraid of a script. Think of a PowerShell script as simply a batch file from the cmd.exe days. Also, remember that the execution policy is just PowerShell's gate you have to go through before you can run scripts on a system. And finally, open up and learn the PowerShell ISE. It will be your lifeblood when editing and running PowerShell scripts. In this module, I introduced you to the PowerShell console and writing your first script. We went over what an execution policy is and how it can be changed. And finally, we briefly went over the PowerShell ISE and gave an overview of some of its features.

  7. Tool #1: Active Directory Account Management Automator Tool Introduction Hello, this is Adam Bertram, and this is module three of my PowerShell Toolmaking Fundamentals course. In this module, we get started building our first tool, the Active Directory Account Management Automator tool. This module will go over creating and managing Active Directory accounts. The tool will use the Active Directory PowerShell module to create a typical Active Directory user account for an employee at our company, as well as their computer account. It will have the ability to not only create user and computer accounts, but also add users to groups and reset user passwords. The purpose of this tool is to be a script that's reusable. New employees are always being hired at our companies; if not, you should probably be worried. So it's a task that's repeatable. Doing all of this in Active Directory Users and Computers is a pain and is open to a lot of fat-fingering. We'll be creating a very useful tool here to use as our main Active Directory account management solution.

  8. The Active Directory PowerShell Module Before we can use PowerShell to do anything with Active Directory, we must have the PowerShell Active Directory module loaded. The Active Directory module comes included with Microsoft's Remote Server Administration Tools, or RSAT. RSAT is a group of Windows features that allows you to remotely manage various services. To get RSAT, we must download and install it, so let's go ahead and do that now. To set up the Active Directory module, we need to download and install the Remote Server Administration Tools kit. I'm running Windows 8.1 here in the demo, so I'm going to download and install that one. However, this will work for both Windows 7 and Windows 8; just make sure you download the right one. To find it, just type the full title into your search engine of choice, and it'll always be the top link. I'm here at that URL, so now I'm just going to go ahead and download this package. So I'll just go ahead and click Download. I'm running the x86 version of Windows 8.1. Hit Next, and just go through the process here, and I'm going to save it. And through the magic of editing, the download is complete, so let's open it and get it installed. So I'll go ahead and open it now and wait a little bit for it to start up here. It is a Windows update, and yes, I do want to install it. Accept. Wow, that was fast, huh? I have the luxury of video editing, so don't expect such an install time; imagine anywhere from two to five minutes to get it installed. So now that it's installed, the next task is confirming that the Active Directory module inside of RSAT is enabled. RSAT is a Windows feature, so we'll go into the Control Panel here and check it out. (Working) And if I scroll down a little bit, I will see the Remote Server Administration Tools, and here it is. If I drill down a little here, you will see AD DS and AD LDS Tools. That's Active Directory Domain Services and Lightweight Directory Services. And you do see the Active Directory Module for Windows PowerShell there, and it is checked, so that's what we want. So now that we've confirmed that it's installed and enabled, it's time for the final test to see if it's actually available in the PowerShell console. So I'll go down here, open up the console, and run Get-Module to see all the loaded modules. It's not showing up in there. The reason is that PowerShell now loads modules automatically when it needs them. So these are just the modules that are currently loaded, but it doesn't show all the modules currently available. To do that, we have to use the ListAvailable parameter. Doing that, there are a lot more modules available. So let me go up here to see if Active Directory is in here. Yep, there it is. The very first one is ActiveDirectory. So that's great. We've downloaded Remote Server Administration Tools, or RSAT, we've ensured the module is enabled, and we've finally ensured that PowerShell can see it. The next step is just playing around with it. The Active Directory module has a lot of cmdlets included that we can use, so let's investigate. The first thing I do whenever I install a new module is take a look at all the cmdlets that it gives me. To do this, I use the Get-Command cmdlet. Let's say I want more information about the Get-CimInstance command. All I have to do is type in Get-Command and the name of the cmdlet, and from that I can see it's a cmdlet, its name, and the module that it's in.
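The module-checking commands used here, roughly (Get-CimInstance is just the example cmdlet from the narration):

    # Only the modules loaded into the current session
    Get-Module

    # Every module available on the system, loaded or not
    Get-Module -ListAvailable

    # Basic information about a single, known cmdlet
    Get-Command Get-CimInstance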
For our purposes, I don't necessarily know a cmdlet name. I just want to know all the commands in the Active Directory module. Luckily, Get-Command comes with a Module parameter. So if I do Get-Command -Module ActiveDirectory, you should see a bunch of cmdlets come down. I have cmdlets here to do just about anything with AD. For our purposes, since our tool will be managing only user and computer accounts, we'll only be addressing those. So let's start playing around a little bit with the account management cmdlets. The first and safest cmdlets I start with are the ones that start with Get. These are the cmdlets that just read information and don't write, unless you've got some psycho who names his function with a Get verb and actually does write. In that case, he'll get his one day. But anyway, let's explore all the Get cmdlets in the Active Directory module. To do that, we use Get-Command again with the Module parameter, and this time we add the Verb parameter with the argument get. As you can see, there are still quite a few cmdlets. We're going to be working with user and computer accounts, so which ones would probably do the trick? We know it's probably going to have the word computer or user in it, right? I don't feel like eyeballing each of these cmdlets one by one, so I want to see which Get cmdlets have the word computer or user in them. Let's do a little bit of filtering. Get-Command also has a Noun parameter, but I don't necessarily know the noun; I just know the cmdlet I'm looking for has either the string computer or user in it. Thankfully, the Verb and Noun parameters both allow wildcards. This way I can surround the noun with asterisks to get any cmdlet with the word user or computer in the name, so let's try it out and see. So I'll try it out for computer. I do -Noun, and I put computer in asterisks here. And you see I'm getting just the cmdlets inside of the Active Directory module that have the verb Get and the noun computer anywhere in them. And we can do the same for user as well. So this is much more manageable. There are only a couple of cmdlets for each. I'm not going to be messing around with the service accounts, and I'm not going to be messing around with password policies, so the only two cmdlets left are Get-ADComputer and Get-ADUser. These are the most likely ones I need to read both user and computer accounts with. So let's run Get-ADComputer and see what it does. Oh, it looks like this cmdlet is making us use the Filter parameter. On this cmdlet, the Filter parameter is mandatory, so it's forcing us to put something here. That's probably smart, because if I had an environment of 100,000 computers, it might not be wise to just read them all, but I don't have that many in this instance. I used a wildcard with the Noun parameter earlier, and I'm wondering if I can just use the asterisk here too, so I'm going to go ahead and try that. It looks like we got lucky, and I was able to see all the computer objects in the domain. So let's try the same thing for Get-ADUser, except this time I'll specify the Filter parameter manually here. Alright, great. It looks like we've got a couple of winners. So what exactly is each of these cmdlets getting from Active Directory? Let's get some help on the matter with the Get-Help cmdlet. So I want to get some help on Get-ADComputer. Oh great, that's not much help at all. Notice in the remarks section there that it's only displaying partial help.
This is because we need to issue the Update-Help cmdlet to get the latest help info. When you run Update-Help, it goes out and downloads all the latest help information from Microsoft. It's the beauty of PowerShell's help system: no more static help. So let's go ahead and try that. It will bring up the progress bar and go through every module we currently have to get all the latest and greatest help content, so we'll just sit here for a second and let it go. (Waiting) Alright, looks like in this instance we had an error: Failed to update help for the modules TLS with the UI cultures. You don't necessarily have to pay attention to the error itself, but there's a parameter on Update-Help called Force that sometimes helps you get all the information, so let's try that as well. Alright, it looks like Force didn't help either, so I'll look at the error a little more, and it says the module TLS. I really don't care about the TLS module in this example, so let's just see if the help was downloaded for the Active Directory module that I do care about. Again, we'll do help Get-ADComputer, and it looks much better. We now have a lot more help than we had before. And just reading the description on this: The Get-ADComputer cmdlet gets a computer or performs a search to retrieve multiple computers. That looks like exactly what we want to do. Let me look over Get-ADUser and just make sure it does what we think it's going to do. The Get-ADUser cmdlet gets a user object or performs a search to retrieve multiple user objects. Again, exactly the same thing we want to do. So it looks like we have our two cmdlets we can play with to get computer and user objects: Get-ADUser and Get-ADComputer. Now that we've got an idea of what the Active Directory PowerShell module consists of, feel free to do some further poking around just to get used to its cmdlets. There are really quite a few useful cmdlets in there.
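Pieced together, the discovery commands from this clip look something like this (a sketch of the sequence, not a single script):

    # All commands in the Active Directory module
    Get-Command -Module ActiveDirectory

    # Narrow to the read-only Get cmdlets with computer or user in the noun
    Get-Command -Module ActiveDirectory -Verb Get -Noun *computer*
    Get-Command -Module ActiveDirectory -Verb Get -Noun *user*

    # Filter is mandatory on these; a wildcard returns everything
    Get-ADComputer -Filter *
    Get-ADUser -Filter *

    # Refresh the local help content, then read up on a cmdlet
    Update-Help -Force
    Get-Help Get-ADComputer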

  9. Forget Active Directory Users and Computers (ADUC)! Active Directory Users and Computers, or ADUC, is a GUI tool that's meant to create, modify, and remove single users and computers. Sure, you can remove multiple objects at the same time, but it's not pretty. With PowerShell, you can remove 1, 10, or 10,000 objects in Active Directory with the same effort. Once we get done with our tool, you shouldn't even need to get into ADUC again. Well, maybe sometimes, so I'll forgive you if you open it up every now and then. ADUC wastes an admin's time by making you navigate multiple OU levels, click through numerous wizards when making changes, or try to remember that certain tab that you swear was there somewhere, but isn't. Not only that, try changing a single attribute on multiple objects at the same time in different OUs. It's a pain. Why not have a tool you can create yourself in PowerShell that makes AD account management a snap? Well, you're in luck. I have just the thing for you. We're going to start building this tool with a fictional company. Our company has a particular standard we follow to create new AD user accounts. Every new user account for an employee has to meet the following standard: First, it must contain the employee's full first name and last name, middle initial, and title. Second, the username must be the first initial and last name of the employee. If that is taken, then it must be the first initial, middle initial, and last name. If that's taken, it would then have to be handled manually. Does this kind of scheme sound familiar in your environment? Third, it must be created in a certain OU. Fourth, it must have a default password that's set to be changed on first login. And fifth, it must be a member of a company-wide group. I'll use these requirements to demonstrate doing this in ADUC and also build them into my tool to show the difference in the next clip. In this demo, I'm going to demonstrate creating a user based on my company's standards with the Active Directory Users and Computers tool, or ADUC. I'll go through some of the thought process during this, so pay attention to the many opportunities where mistakes could happen and how long the process takes. If you forget, I'll remind you at the end. First, I need to find ADUC to run it, so I'll just hit the Windows key, type a few characters here, and I'll find it and open it up. I'll wait a little bit here. And now that it's up, my user account has to be created in a certain OU. This is the OU for corporate users, so let me right-click on that and select New User. So I'm creating the user for Bob Smith, and our standard says the first name, middle initial, and last name must be filled in, so I'll go ahead and do that now. I think he goes by Rob, so I'll put that in there. Or was it Bob or Robert? I can't remember, but I'll go with Rob and put his initial in here, and Smith. Alright, so Bob, Rob, whatever his name is, Smith is in there. Alright, for the user logon name, our standard says the username's first preference is to use the first initial and the last name. So let's go ahead and do RSmith for Bob Smith. We're going to go with RSmith, and we'll try that. The user logon name already exists. Looks like I have to try the next option: the first initial, middle initial, and last name. His middle initial is D, so we'll try that. Okay, it looks like that one worked. Alright, now the password. I'll put the company's standard initial password in here.
Alright, we'll try that. Finish. And we got Rob, Robert, Bob Smith's account set up. So now we need to get his title in here. That was part of our company standard. So I need to set the title. Which one of these tabs was title? I can't remember. Organization. Alright, Organization. So there is Job Title. So I believe he is the Accounting Man---oh, typo, typo, okay, Accounting Manager. No, one more. Okay, there we go. Or was it Accounting Exec? I don't know. Accounting Manager, I think, so we'll go ahead and put that in there. Alright, I'd better check to get this right, because I know we have a script that runs in the background and emails certain information to specific job titles. If I get this wrong even by one character, I'm going to hear about it. Okay, it is definitely Accounting Manager. Alright, so we're done here. We'll apply that. We're good there. Alright, we're done. Alright, three days go by, the user starts their job, and I'm getting a call from somebody that Rob, Bob, Robert, or whatever his name is can't get to the company intranet. Why? I don't know. I mean, I'm looking through his account here, and really everything looks fine to me. I don't see anything that's necessarily wrong with it. Oh, you know what? I forgot to add him to the group that we put all of the users in. Great. Alright, so let's add him to the group. Alright, done. Alright, finally, he's good to go. If you were to time me without commentary or anything like that, that took me around three minutes after all the back and forth and all the mistakes I was making. And you never know how many mistakes you're actually making; I could've made several more doing that. So look ahead to the next clip to see what kind of experience our new tool can provide.

  10. Onboarding New Employees in AD The first task our tool needs to do is assist us when onboarding new employees. When a new employee is hired at my company, I have to create a user account and a computer account using "Company Standard Procedures." On occasion, I'm given a CSV file of employee names that I have to run through, which is a real pain. I intend for my tool to create these user and computer accounts based on the company standard, both manually via the PowerShell console and via a CSV file, so I don't have to worry about that stuff anymore. Up here in the ISE, to build the first part of our tool, called New-EmployeeOnboardUser, you can see I've already saved the file as that, and I'm ready to get started. So where do we start first? Well, this tool needs to create all kinds of different user accounts, and I'll have it around for a long time, so I'm going to have to figure out a way to make it reusable. The way to do this is through script parameters. It's always a good idea to avoid statically adding anything you don't have to when writing PowerShell scripts; it makes your script more flexible and easily changed. In order to support different employees, I'm going to make parameters out of the attributes that make each employee unique. These are first name, middle initial, last name, the OU location where the user account will be placed (with a default argument), and the employee's title. So with that being said, let's go ahead and add these attributes now as parameters to my script, as sketched below. With these parameters, we can now feed different inputs into our script. This definitely comes in handy when we need to add a lot of users with a CSV file, for example. Let's quickly see what is possible with these parameters. So I'll go ahead and just echo out all of the parameters via some Write-Host commands here and then run the script to see what happens. (Typing) Alright, and I'll save it, and I'll run the script. Notice that the only value that showed up was OU=Corporate Users. This is because this is the only parameter that had a default set. The others didn't. Whenever a default is set like that, the parameter will always be that value unless overridden. Now I'll run the script and specify some values for the parameters, excluding the Location parameter, and run it again. Notice how all the parameter values were populated and passed into the script. Location was still OU=Corporate Users. Now I'll just go ahead and change the Location parameter and run the script again. So I'll go ahead and change this to somelocation. The script now returns what I specify for the Location parameter instead of the default. You can see there it says somelocation instead of OU=Corporate Users. That's just because I overrode that parameter. So what's the next thing the script needs to do? Ah, yes. We need to go through that crazy username naming scheme. Because this script will be doing multiple things, it's a good idea to write some comments before I start coding to keep on track, so I'll just go ahead and do that now. So I'll go ahead and remove all these Write-Host commands here. And let's see. The first thing we're going to do in this script is figure out what the username should be. The next thing I need to do is create the user account. And then we're going to need to add the user account to the company standard group.
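Putting those pieces together, the top of New-EmployeeOnboardUser.ps1 looks roughly like this at this point (a sketch; the exact default OU string and comment wording are assumptions):

    param (
        [string]$FirstName,
        [string]$MiddleInitial,
        [string]$LastName,
        [string]$Location = 'OU=Corporate Users',   # the only parameter with a default
        [string]$Title
    )

    ## Figure out what the username should be
    ## Create the user account
    ## Add the user account to the company standard group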
So I've got a rough outline now of what I want this script to do, so I'll go back up and get started on figuring out what the username should be. Oh, but wait. I forgot there's that default company password that's assigned to all new user accounts. I'm not going to make this a parameter, but you never know; one of these days it might change, so I'll make it a variable at the top of the script. Now, this isn't the best practice by any means, placing the password in clear text like this. The correct way of doing this is encrypting it on disk, but that's out of scope for this demo. Feel free to search for something like "PowerShell store passwords on disk"; you'll find very good resources on that. I've found some great tutorials. Alright, next, for you LDAP guys, you'll notice that the location is actually a distinguished name, or at least part of one. This is the OU that the user will be going in. However, the cmdlet to create the user needs the fully qualified distinguished name, with the domain portion as well. Now, this could be put in manually, but that's not a good idea, so we use a cmdlet from the AD PowerShell module called Get-ADDomain. This gives basic information about the domain itself, and one of the properties it provides is the domain's distinguished name, so I will go ahead and create that as a variable also. If you're curious, I'll drop into the console and run this command to show you what the output would be. You can see that it shows the current domain's DistinguishedName, which we will append the location to when creating the user later on. And next we have that default group all users need to go in. Again, this group may change at some point, so I'm going to make it a variable near the top of the script as well. Okay, I think we've got all the parameters and the variables that we need, so we can go ahead and get started creating the username. The first preference is to use the first initial of the first name plus the last name, so let's create that as a variable. Alright, the Substring method I'm using here is a handy method that allows you to pick a number of letters out of a string starting from a certain character position. In this instance, you can see that I'm starting at position 0, which is the first letter, and getting one character, which is the first letter of the first name. I'm then concatenating that together with the last name variable, which ends up being the exact string we need. So we've got the preferred username in a variable now; we just need to see if it's in use. To do that, I'll use the Get-ADUser cmdlet from the AD PowerShell module. All that needs to happen here is to run Get-ADUser with the username, but first I'm going to assign a variable to the current value of ErrorActionPreference, and I'll explain why in a minute. So with that in there, we then need to add in the if construct. So what I'm doing here is saving the current value of ErrorActionPreference. And if you're not familiar with ErrorActionPreference, it is a preference you can set in PowerShell that tells it what to do when an error is thrown; set to SilentlyContinue, for example, it will just go on and keep processing. There are a lot of different values you can have for ErrorActionPreference.
So if I go into the console here, right now it's at Continue, but I can also set values like SilentlyContinue or Stop. What that's doing is giving instructions to the script: if an error occurs, take this particular action. For now, let's just put it back to Continue. And I'm saving the value because of line 11, the Get-ADUser $Username call. By default, that's going to throw an error if the user does not exist. I can show you here what this does. If I do Get-ADUser with some random user, it's going to throw an error like that. I don't want to see the error at all; all I actually care about is whether it finds a user or not. So what I'm doing on line 9 there is first getting whatever value ErrorActionPreference is and assigning it to my own variable, so I'm storing it. Then I'm changing it to SilentlyContinue so that line 11 won't show me that error; it'll just run through without it. And when we get to line 11, this line says that if the result of Get-ADUser is anything at all, it will be matched and we continue into the if construct below on line 12. If not, which would mean the username isn't taken, it just skips over the if construct completely and moves on. This is perfect, because I've already defined the username variable above, and I only want to change it if it's taken. So I'm now testing the first initial plus last name. Let's say that's already taken. I need to add some code here on line 12. Alright, on line 12 I'm assigning another value to the Username variable. This time it's our second preference: the first initial, middle initial, and last name. I'm assigning that to Username and doing the test again. Does that username exist? If it does, then I write a warning that no acceptable username could be created, because our company standard doesn't define what to do in that case yet. And finally, the return statement there just says that if we can't find an acceptable username, go ahead and return out of the script. I don't want it to go any further, because if I can't find an acceptable username, the script is pretty much done for. We can just exit out. Now I'm at the part of the script that will run if we were able to get a username successfully. So let's say we went through that whole if construct and found a username, so right now we have the username we want to use in $Username. Now we need to actually create the user account. I was messing around with the Get-Command cmdlet earlier and found the New-ADUser cmdlet, and created a test user I was playing with, so let's use New-ADUser. I know this creates the user. I also saw it had a ton of parameters that I needed to use. So if I go over here to my Command Add-on that I showed you earlier and look at New-ADUser, you'll see it has a ton of different parameters. I'm probably going to need a lot of these parameters. What I do in this instance, if a cmdlet requires more than, I don't know, four to five parameters or so, is use a concept called splatting to feed parameter values to it. I do this because it's cleaner and doesn't make you use line continuations in your script.
That sometimes gets confusing to read. So to do splatting, you create a hash table with all the parameters and their values in it. I'll go ahead and create this hash table now, matching up all the required parameters that I need to use. Alright, but first, on line 20 there, you saw that I'm finally following up on our ErrorActionPreference variable. Once I've gotten through lines 11 and 13, the Get-ADUser lines that I know throw that error, I just set ErrorActionPreference back to what it was before. So all of the parameters there in the NewUserParams hash table are pretty self-explanatory, except for maybe the AccountPassword parameter. If I drop down into the console here, do a help on New-ADUser with the detailed help, and scroll up to look at what the AccountPassword parameter shows, you'll see to the right of each parameter a type name in angle brackets. So AccountPassword takes a SecureString, and AllowReversiblePasswordEncryption takes a Boolean. Those are the data types each parameter accepts. Now, typically we deal with strings, but in this instance AccountPassword is a SecureString. I can't just use the password I defined above for the account password; it needs to be converted to a SecureString. Thankfully, this is pretty easy with the ConvertTo-SecureString cmdlet that I have on line 28 there. What I'm doing on line 28 is converting the default password, which is just a simple string. The AsPlainText parameter tells ConvertTo-SecureString that I am actually passing in a plain text value, and Force just tells ConvertTo-SecureString not to prompt us, because we know we're passing in plain text. That's fine; we understand that. You'll also notice that I have that whole string of code in parentheses. That's just order of operations. For the AccountPassword parameter, it's going to execute that expression first, converting the password into the SecureString type that AccountPassword accepts, and then that SecureString is the value that eventually gets passed into New-ADUser, as you'll see here in a minute. And the final parameter I'll explain is the Path parameter. You can see how I'm concatenating the location and the domain distinguished name together. This creates a fully qualified distinguished name, which is the entire path that shows New-ADUser where to put the user account when it's created. So in this instance, if I keep the default location of OU=Corporate Users, and you saw earlier what the domain value is, it's going to concatenate those two together into OU=Corporate Users, DC=lab, DC=local, the fully qualified distinguished name. Alright, so it looks like we have all the parameters, and we're finally ready to actually create the user. We've done all the hard work getting these parameters defined. Now all I have to do is use the at sign and the name of the hash table, and we're done. So I'll go ahead and put that line in there. With splatting, you just define the hash table, which is NewUserParams with all its key/value pairs, and then on line 35 there, I'm using New-ADUser with an at sign instead of a dollar sign. That says I'm going to pass this whole NewUserParams hash table into New-ADUser, giving it all the parameters at one time. Okay, so at this point the user is going to be created.
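Assembled from the narration, the variables and the splatted New-ADUser call look roughly like this (a sketch; the password value, group name, and exact parameter list are assumptions based on what was described):

    $DefaultPassword = 'P@ssw0rd'                  # demo value; encrypt on disk in practice
    $DefaultGroup = 'Gigantic Corporation'
    $DomainDn = (Get-ADDomain).DistinguishedName   # e.g. DC=lab,DC=local

    $NewUserParams = @{
        'GivenName'             = $FirstName
        'Surname'               = $LastName
        'Name'                  = "$FirstName $MiddleInitial $LastName"
        'SamAccountName'        = $Username
        'Title'                 = $Title
        'AccountPassword'       = (ConvertTo-SecureString $DefaultPassword -AsPlainText -Force)
        'Enabled'               = $true
        'ChangePasswordAtLogon' = $true
        'Path'                  = "$Location,$DomainDn"
    }
    New-ADUser @NewUserParams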
Now what do we do next? The final piece is to actually add the user to the group, which is a piece of cake. This is another cmdlet in the AD module called Add-ADGroupMember. To use this, I just need the username that we have in $Username and the name of the group, so I will go ahead and do that. Okay, on line 38 there, Add-ADGroupMember is using the Identity parameter with the value of $DefaultGroup. And if you remember, we defined that up there as the Gigantic Corporation group. And then finally, Username, which is whatever username turned out to be available, so we should be done at this point. So now let's give it a try and see if it works. I have the script in the C drive, so I'm going to do New-EmployeeOnboardUser -FirstName Rob, MiddleInitial D, LastName Smith. The -Location we'll just leave at the default, the -Title is Manager, and we'll see what happens. You can see that Get-ADUser tells us that RDSmith was already taken. And if we go over here into Active Directory Users and Computers, we can briefly see what that looks like. Look at RDSmith. So RDSmith was already taken here. And then we'll see that Rita Smith is already here, so Rita Smith is RSmith. Well, let's see what happens. What I'm going to do here is a little bit out of the scope of this demo, but I'm going to put this in a try/catch block. What the try/catch block says is that if anything in lines 12 through 18 throws an error, rather than throwing it out to the console where it would show that red text to us, throw it into the catch block there on line 20 and just continue on. So if we put this in a try/catch block, it will be silent. So let's go ahead and try this again. If I refresh this, I'll remove RDSmith. Remember, we still have Rita Smith here, so we still have RSmith. Okay, so we'll try it again and see what happens. And notice now that the red text is gone. The error would come from line 12. Since we know that RSmith, the Rita Smith account, was already there, line 12 should've thrown an error, and when it did, it was thrown into the catch block here on line 20. But since it was in a try/catch block, all that red text was silenced. So now we should have RDSmith. Our first preference is RSmith, but since Rita already had that username, we had to create RDSmith instead, and we did create it. So let's see if it was actually created correctly. The GivenName and Surname parameters were set from FirstName and LastName; I have Rob D Smith, so that's all correct. AccountPassword, I assure you, is correct. Enabled: AD user accounts don't come enabled by default, so I did enable it, and it is enabled. The Path, built from the Location and the DomainDn like I was referencing earlier: you'll notice it is in the Corporate Users OU like it should be. And the ChangePasswordAtLogon parameter is true; that was part of the corporate standard we use, and you can see that "user must change password at logon" is set. And finally, what's the last thing? Put it in the AD group in $DefaultGroup, which you recall was the Gigantic Corporation group up here. So let's look here real quick, and we'll see if it's in there. And look at that. It is in the group. So we adhered to all of the company's standards through all of this, and we got the first part of our tool to work. So this is done. There's no more documentation to look at.
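Pieced together from this walkthrough, the username-selection logic, the group membership line, and the call that was just demonstrated look roughly like this (a sketch; the exact try/catch layout is an assumption):

    # First preference: first initial + last name
    $Username = "$($FirstName.Substring(0, 1))$LastName"
    try {
        if (Get-ADUser $Username) {
            # Taken; second preference: first initial + middle initial + last name
            $Username = "$($FirstName.Substring(0, 1))$MiddleInitial$LastName"
            if (Get-ADUser $Username) {
                Write-Warning 'No acceptable username scheme could be created'
                return
            }
        }
    } catch {
        # Get-ADUser throws when an account doesn't exist; that means the name is free
    }

    # ...New-ADUser @NewUserParams happens here...
    Add-ADGroupMember -Identity $DefaultGroup -Members $Username

    # Invoking the finished tool from the console
    .\New-EmployeeOnboardUser.ps1 -FirstName 'Rob' -MiddleInitial 'D' -LastName 'Smith' -Title 'Manager'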
So where before the company may have had a checklist of do this, do that, configure the user this way, configure the user that way, there are now no more mistakes. You can see that I simply ran New-EmployeeOnboardUser, fed it a few parameters, and it automatically went down through all those rules and created the account with no problem. The script just knows what to do. I've created the start of my tool and developed it to adhere to my company's standards for a new user account, but what about computers? Every employee at my company has their own computer, and my company has other standards that dictate how computers are rolled out as well. Here's what has to happen. One, the computer account in Active Directory has to be created ahead of time, before the computer is added to the domain, because the techs that roll them out don't necessarily have permissions to move it into the right OU. And second, the computer account must be created in a standard default OU. Using these rules, let's roll this functionality into our tool. Now that our tool has the ability to create user accounts, we're ready for the next task, which is creating computer accounts. Luckily, our company doesn't have nearly as many rules around computer accounts, so this will be much easier. Again, because all our tools are going to be reusable, we need some parameters. These parameters will be very similar to the user parameters, as you can see here on line 1. This time, instead of the first name, last name, and middle initial, we just have Computername, so that will be one parameter. Next, the computer, like the user, will need to be placed in a default OU, so again we'll use a Location parameter with a default, just in case the company ever decides to change the default OU location. First and foremost, a good rule to follow when creating tools is early validation. In this case, I need to first check if the computer name is taken, and I'm doing that on line 3 here, just as we did earlier to see if the username was taken. And if it is taken, I don't want to continue at all, because we don't have any company-standard rules set for that case yet, so it's just going to write an error saying the computer already exists and return, which completely returns out of the script, just like in the user script. Next, if the Computername is free and we get to line 7 there, the first requirement was to place the computer into the default OU. Since we already specified the default OU path in the parameter, just like with the user, we can use the Get-ADDomain cmdlet again to get the domain's fully qualified distinguished name and concatenate the OU path onto it, exactly as we did with the user, there on line 9. Finally, we need to actually create the computer account. Since we only have two parameters here, we won't need to use splatting like we did earlier. The cmdlet we need is New-ADComputer, with a Name parameter and the Path to where we need to create it, because if we don't use the Path, it's just going to create it in the default Computers container. So really, that's it. Notice how the names of the cmdlets are similar. When creating a new user account, the cmdlet name was New-ADUser, and now it's New-ADComputer. This is very common with cmdlets.
If you're unsure of a cmdlet name, the great verb-noun convention that PowerShell and module designers use is pretty self-explanatory. So let's just go ahead and try this out and see if it works. I have the script in the root of C here, so I'll try New-EmployeeOnboardComputer with the Computername, let's just say, COMPUTER1. Alright, notice that we get that red text again. As before, ErrorActionPreference won't silence it, and neither will the ErrorAction parameter; it doesn't work with Get-ADComputer the way it was coded, so again I will put this in a try/catch block. That way we won't see the error message when the script runs. And we will try it again. You can see now that there is no error text, and we can check to see if the account was created, and yes, it was. And really, that's all there is to creating the computer account. You can get as fancy as you want. So if your company has multiple rules to create a computer, say the computer name has to be something specific just like our username was in the last one, you can easily add any kind of logic you want in here to go down through every single step and create a robust tool that does a lot of the stuff for you that you previously had to do with a checkbox here and some documentation there.
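Based on the description, New-EmployeeOnboardComputer.ps1 as a whole looks roughly like this (a sketch; the default OU string is an assumption):

    param (
        [string]$Computername,
        [string]$Location = 'OU=Corporate Computers'   # assumed default OU name
    )

    # Early validation: bail out if the computer account already exists
    try {
        if (Get-ADComputer $Computername) {
            Write-Error "The computer $Computername already exists"
            return
        }
    } catch {
        # Get-ADComputer throws when the account doesn't exist; safe to continue
    }

    # Build the fully qualified distinguished name and create the account
    $DomainDn = (Get-ADDomain).DistinguishedName
    New-ADComputer -Name $Computername -Path "$Location,$DomainDn"

    # Invoked from the console as: .\New-EmployeeOnboardComputer.ps1 -Computername COMPUTER1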

  11. Ongoing Account Maintenance I've now got a tool that can provision new user and computer accounts, so I've moved into managing what I've created. This tool will be built out so that it's my one-stop shop for Active Directory account management. Two common tasks I perform as an Active Directory administrator are resetting passwords and updating names. People always forget their password and need it reset, and single women tend to get married and want to proudly show their new last names - story of my life. So let's go ahead and add some further functionality to the tool, such as changing passwords and updating attributes of a user account. Once the accounts are created, we shift into the care and feeding phase called account management. Accounts rarely stay the same, and developing a comprehensive tool requires us to account for this. Just as before, we need parameters. What attributes could possibly change in this scenario? Well, first the obvious one is the username. What else could we add? Well, I might want to change the last name, so I'll add that, then the first name, resetting the password, the job title, maybe the manager. Wait a minute. This is getting out of control. Pulling up an account in ADUC, you can see that I have multiple tabs. So let's go to one of these users here, so we'll go to Rita Smith. You see all the tabs I have, and under each of those tabs there are multiple different attributes. It's not really feasible to create parameters for each of these, so how else can this be done? The answer is to define a single parameter that's capable of holding multiple attributes inside of it, just like a hash table. A hash table can contain many key/value pairs, as we saw earlier in the splatting demonstration. It's perfect here, as you can see in my Attributes parameter, which is a hash table. Again, I'm ensuring the username is actually there before I go any further. And if you remember from previous demonstrations, we'll also need that try/catch block to make sure we don't get that red error text again, so I'll go ahead and put a try/catch block around the code where I use Get-ADUser. On line 5 there, I'm confirming the account exists by assigning the output of Get-ADUser to the $UserAccount variable. If $UserAccount is empty, the script writes an error and returns, because if the user account isn't there, there's nothing left to do, so I just kill the script and be done with it. On line 17 there is where I veer off from the previous demos in this module. This is where I'm using the ContainsKey method of the hash table. In this script, since there were way too many attributes to set as individual parameters, I use the Attributes parameter and fill it up with key/value pairs that represent the parameters. If you remember, in previous demonstrations I had one, two, three, four, sometimes five parameters defined at the top of my script. In this instance I only have two, but the Attributes parameter, which is a hash table, allows me to put an unlimited number of key/value pairs in there, which I'll show you here in a minute. Since I'd rather not have both an Attributes parameter and a separate Password parameter, I've decided to accept both of these in one. The benefit of doing this is that it makes the script input much simpler, and the user doesn't have to worry about which attributes go into the Attributes parameter and which ones go outside of it.
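A minimal sketch of that parameter setup and existence check, using the names from the narration ($Username, $Attributes, $UserAccount); the course's exact code isn't reproduced in the transcript.

    param (
        [Parameter(Mandatory)]
        [string]$Username,

        # One hash table carries any number of attribute key/value pairs
        [hashtable]$Attributes
    )

    # Confirm the account exists before doing anything else; Get-ADUser throws
    # when the identity isn't found, so wrap it in try/catch
    try {
        $UserAccount = Get-ADUser -Identity $Username
    } catch {
        $UserAccount = $null
    }

    if (-not $UserAccount) {
        Write-Error "The username '$Username' does not exist."
        return
    }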
The downside, though, is that the Set-ADUser cmdlet I'll be using to make the modifications doesn't accept a Password attribute. The Set-ADAccountPassword cmdlet does. So on line 17 there, I'm checking to see if the user included a Password key inside of that Attributes hash table. If they did, I need to catch that and use the Set-ADAccountPassword cmdlet, which I'm doing there on line 18. After the password has been reset, I then remove the Password key from the hash table, because a Password parameter will not work with the Set-ADUser cmdlet. It will fail. Removing it now allows me to pass the entire Attributes hash table to Set-ADUser without any kind of conversion. And on line 23 there is where the actual change occurs. Since I already assigned the user account to the $UserAccount variable above to test for the existence of the account, it can still be used, so there's no sense grabbing it again. In this instance I'm using the PowerShell pipeline to pass $UserAccount to the Set-ADUser cmdlet. And then again there's the splatting, which I use to feed all those parameters inside the Attributes hash table directly to Set-ADUser. So let's go over a quick example of using this script. Now that the script is done, let's give it a shot. In the console here in the ISE, I'm going to execute the script. I want to change an account called RSmith, so I'll run my script with RSmith as the Username parameter. Next is the Attributes hash table. I want to change this user's first and last name a little bit. To do this, instead of using separate last name and first name parameters as I did in previous demos, I'll just put it all into the hash table. So I'll define the Attributes parameter, create the empty hash table, and give it the attributes that Set-ADUser wants. The GivenName is actually the first name, so let's name her Trudy, I guess, and then the Surname, Smithley. Yeah, that'll work. One thing to mention here is those hash table keys, the GivenName and the Surname. You can see they look a little weird; they're not literally "first name" and "last name." These are the actual parameter names that Set-ADUser uses, and the hash table keys must coincide with them. So I have the hash table with GivenName and Surname as the keys, and on line 23 there I'm passing the Attributes parameter directly to Set-ADUser, with no conversion or manipulation. This is how I get away with using the Attributes script parameter directly with the Set-ADUser cmdlet. So first, before I run this, let's just see what RSmith looks like now. We have Rita M Smith, so she's Rita M Smith here, and we'll verify the username. Yep, it's RSmith. So now, let's run this. Alright, if this worked right, her first name should be Trudy, and her last name should be Smithley. Let's refresh it. The display name hasn't changed, because that's not an attribute we changed, but you do see that her first name is now Trudy, and her last name is Smithley. Looks like it worked great, and the user was changed. So this is a great example of modifying an existing user. It's going to be a great addition to our tool. Along the same lines as the user account, domain-joined computers also have attributes that must be changed sometimes. I want to keep my tool as comprehensive as possible and not forget anything.
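Sketched under the same assumptions (a plain-text password in the hash table, a hypothetical script name), here's how the Password key can be peeled off for Set-ADAccountPassword and the remainder splatted to Set-ADUser, followed by a call shaped like the demo.

    # If a Password key was passed in, reset the password separately, then remove
    # the key so the rest of the table splats cleanly to Set-ADUser
    if ($Attributes.ContainsKey('Password')) {
        $securePassword = ConvertTo-SecureString -String $Attributes.Password -AsPlainText -Force
        Set-ADAccountPassword -Identity $Username -Reset -NewPassword $securePassword
        $Attributes.Remove('Password')
    }

    # Pipe the account found earlier and splat the remaining attributes
    $UserAccount | Set-ADUser @Attributes

    # Usage shaped like the demo; the script name is hypothetical, and the keys
    # must match Set-ADUser parameter names
    .\Set-MyADUser.ps1 -Username RSmith -Attributes @{ GivenName = 'Trudy'; Surname = 'Smithley' }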
I'm going to add functionality to modify computer accounts as well. Let's go over a demo of how I've implemented the feature of changing these attributes in my tool. This is going to be a pretty quick demo, but I still wanted to give you a quick glimpse here. This small script is essentially doing the exact same thing as what we just did with user accounts, only this time it's with computers. Since Set-ADComputer and Set-ADUser operate in the same manner, we can replicate very closely how our user modification tool was created. You can see the only difference here is that I removed that extra step of resetting the password. The reason is that there's no password to reset on a computer account. So here's an example. Let's say I have a computer called WINDOWS-VM in the domain. Let's get some information on that. I'll grab the properties now to show that it currently doesn't have a Description or a DisplayName associated with it. I want to change this, and I'm going to use my tool to do it. So as with the user modification tool, I'll give the script the Computername I want to modify and create a hash table that represents the attributes, which will then eventually get passed to Set-ADComputer, so let's go ahead and do that now. You can see here I have the Computername parameter with WINDOWS-VM, and the Attributes parameter is a hash table with DisplayName and Description as the key names; the values are what I want those attributes set to. So we apply that, and we run it again. And you can see the Description and DisplayName were changed to how I would like. So really, that's all there is to this computer modification tool. You can easily add more functionality to it at a later time if you'd like.
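A sketch of that call; the script name and attribute values are hypothetical stand-ins for the course's own.

    # Hypothetical script name and values; keys must match Set-ADComputer parameter names
    .\Set-MyADComputer.ps1 -Computername 'WINDOWS-VM' -Attributes @{
        DisplayName = 'Windows VM'
        Description = 'Test virtual machine'
    }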

  12. Building the Toolset By now, you've seen how you can save time by creating PowerShell scripts, but I haven't actually created a tool just yet. A tool built with PowerShell is easily reusable and, preferably, can be shared with the community. The problem is that all we've done so far is create a bunch of disparate scripts. There's no cohesiveness in the functionality. These scripts must be converted into PowerShell functions so they're easily reusable and transferrable, and so that, in module seven, we can roll them into our tool belt by creating a PowerShell module, which requires the code to be in function format. In the upcoming demo, I'll be showing you how to gather up all these scripts and convert them into advanced functions. Advanced functions are a new concept to us and take some time to introduce properly. From now on, I'll be building all module functions as advanced functions, and we'll progress more deeply into the topic as each module progresses. Before we begin converting the scripts to advanced functions, let's briefly go over this AdvancedFunction example I have here. This is what I'm calling an AdvancedFunctionShell. It's the bones of an advanced function. When first creating an advanced function, it's important to have a template to work with. Let's go over what makes up this shell. The first thing you notice is the function block. This is how every function in PowerShell is defined. It tells PowerShell that all the code inside of the block is meant to be part of the function. Next you have all the comments, in green. This is called comment-based help. Do you recall earlier when I was using the help to find information about the AD cmdlets? This is how you can create your own help. If I go over to the console, you can see where this help will show up, so let's try it out on a command. Let's do Get-Help on Get-Command. You can see that I have a few different areas here: SYNTAX, SYNOPSIS, DESCRIPTION. And you can see that the SYNOPSIS down in the console matches the SYNOPSIS up in my script pane. You can also add a DESCRIPTION if you like. And you can even see the parameters if you use -Full: there are the examples, the outputs, and all the different parameters. So anything I add into this comment-based help, including the text under each parameter, like the description you see here for -Verb, will show up the same way under our AdvancedFunction's help. The comments aren't required for a function to work, but they're very important, especially if you'll be sharing code with others. At a minimum, you should at least have the SYNOPSIS, which is a summary of what the function does, and one or more examples of its usage. If you put an example up here, say New-AdvancedFunction -Param1 MYPARAM with a line of text saying what the example does, that text will show up in the EXAMPLE section of the help output, just like the "Gets the specified number of commands" text you see under the TotalCount example for Get-Command. Next is the CmdletBinding line. This line is extremely important. In fact, this is the line that defines an advanced function.
I'm not going to go into a whole lot of detail about this in this course, but suffice it to say, you need this line in order to eventually make your function do a lot more than just execute the code inside of it. CmdletBinding is what makes the function advanced, which gives you the opportunity to do verbose logging, warnings, and a lot of other things. It's fairly advanced, and we're not going to cover it in this course, so just go ahead and put it in all of your advanced functions. Next is the parameters block, the param block on line 15. This is where we'll place all the parameters to the function. These are exactly the same as the parameters we had in the script; however, they're parameters to a function instead. So, for example, let's say I have a parameter called Param1 up here with a description, "This param does this thing." We go down into the param block and define it as a string, $Param1. That's how I would define the parameter in this instance. You'll notice that in this AdvancedFunction the param block is inside the function, but if I go back over to one of the scripts we just did, the param block is at the top of the file. That's the difference between an advanced function and a plain script. Next, we have the process block. This is where we'll be placing all the code from our script, and it's a required block. There are also two other blocks, called begin and end, which can be placed before and after it, but in our situations here they won't be needed. Sometimes you'll see blocks like this. (Typing) There are a lot of different nuances that go on in those blocks, but as a beginner to PowerShell you typically won't need them, so I'll go ahead and remove them. Okay, finally is the try/catch block. Technically, this isn't required for an advanced function, but I highly recommend making it default practice. This is for error handling. Have you ever had that nasty red text come up when doing things in PowerShell? I'm sure you have. That's generated from error objects. When an error is generated anywhere inside the try block, PowerShell will throw the error, the catch block will catch it, and then whatever code is in the catch block runs. So let's say I throw a string inside the try block: "an error occurred." You'll see on line 22 there, I throw the "an error occurred" string, and control lands at line 25, the Write-Error inside the catch block. Let's go ahead and try that out. The first thing I have to do is copy and paste this function into memory, so I'll copy it into my console. After that, I'll run New-AdvancedFunction. And you can see it says New-AdvancedFunction, which is the name of the function, then the string "an error occurred - Line Number" and the line number, 22. And you'll notice that line number 22 is the throw line, where I actually threw the error from the try block. So that's just a simple example of what the try/catch block does. It allows control over the flow of the script in order to easily weed out errors. You've now had an introduction to the advanced functions we'll be creating for the rest of the tool modules. Let's get started transforming our first piece of the AD Account Manager tool into a few advanced functions.
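The shell itself isn't printed in the transcript, but based on the pieces just described (function block, comment-based help, CmdletBinding, param, process, try/catch), a minimal sketch of an AdvancedFunctionShell might look like this; the names and help text are placeholders.

    function New-AdvancedFunction {
        <#
        .SYNOPSIS
            A one-line summary of what this function does.
        .EXAMPLE
            PS> New-AdvancedFunction -Param1 MYPARAM
            A sentence describing what this example does.
        .PARAMETER Param1
            This param does this thing.
        #>
        [CmdletBinding()]
        param (
            [string]$Param1
        )
        process {
            try {
                # The function's real code goes here; throwing sends control to catch
                throw 'an error occurred'
            } catch {
                Write-Error "$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"
            }
        }
    }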
I've already built the toolset to save time, so let's go over how this was done. You can see that what I'm defining as a toolset is actually just a PowerShell script called ADAccountManagementAutomator with four functions inside. Each of these functions is designed around a similar object, which is Active Directory. Notice that each function name is the same as the individual scripts in the tabs above that we created earlier. You don't have to do this, but I did just to keep things simple. We've already done the hard work in creating the scripts; for this toolset, I just converted them to advanced functions. So let's get started, and I'll show you how this was done. To begin, I simply took the shell I talked about in the last demo and copied four instances of it into this script, creating each function. So I took the AdvancedFunctionShell script here, which is just that typical shell we created earlier, made four individual functions from it, and then filled out the individual information that was required for each one. Next, I filled in the comment-based help, which will assist me if I forget how to use these functions, or really anyone else who uses them at a later time. As we talked about with the AdvancedFunctionShell, I filled out the SYNOPSIS, an EXAMPLE, and the PARAMETERS for each of the functions you see here. Next, I filled out the parameters in all of the functions. You'll notice these are the exact same parameters I had in the scripts. For example, the New-EmployeeOnboardUser function here has five different parameters. If I go to the New-EmployeeOnboardUser script we created earlier, you'll see it also has five parameters, although in a different spot. That was just a basic script, whereas in an advanced function you define parameters in a param block underneath the CmdletBinding keyword. And after creating the comment-based help, adding the CmdletBinding keyword, and adding the parameters, it's just a matter of copying and pasting the code from the scripts into the process block. For example, in the New-EmployeeOnboardUser function here, you can see I have very similar, if not the exact same, code that was in the script. We've already done the hard work; it's just a matter of copying and pasting it over into the advanced function. It's also nice for sharing your code: once you make a PS1 script, you can easily convert it into an advanced function this way. So now that you have a brief overview of how I made these functions, let's go over an example of how this toolset can be used. You'll see that I have a script called CSVImportExample here. This is a script that will use all the functions we created in our toolset by reading a CSV file full of new employee information and just making everything happen. Here's how it's going to work. On line 1, we're doing something new. This is called dot sourcing. Dot sourcing is a way to load all of our functions into memory so they can be used. This is an elementary way to load functions, but in an upcoming course module we'll be going over a better way to do this via PowerShell modules. Next, we're reading a CSV file, which, if I pull it up to show you here, you can see the structure of.
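A sketch of how the top of that CSVImportExample script might look; the file paths and the loop shape are assumptions based on the narration.

    # Dot source the toolset script to load its four functions into memory
    . C:\ADAccountManagementAutomator.ps1

    # Read the CSV of new-employee rows (assumed path)
    $employees = Import-Csv -Path C:\Employees.csv

    foreach ($employee in $employees) {
        # ...provision and modify the user and computer accounts for each row...
    }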
I have seven different fields here, covering name information, the location where the user account needs to be placed, the department of the user account, the title of the employee, and, finally, the name of the computer we'll be provisioning for this employee. On line 4 we enter a foreach loop and begin processing all the CSV rows. We use splatting to build the parameters for creating the user account, and there's a minor exception for the Location field. This says that if the Location field has anything in it, add it to the parameters hash table; if not, don't add it at all. This is here to prevent a problem with the New-ADUser cmdlet, which doesn't really like it when you try to pass a location with a null value. Sometimes the Location field is blank, like the Adam Bertram row here, and it would throw an error if I didn't have that step in there, so let's add that in. Next we provision the user account per our company standard, and this time I'm capturing the username that was created from the output of New-EmployeeOnboardUser. This is a minor change from the previous demo for this script. In the original New-EmployeeOnboardUser script, you can see I don't have anything after Add-ADGroupMember. In this advanced function, I simply added $Username at the end to output the username that was generated. So if I go over here, you'll notice there on line 64 that I have $Username; that is outputting the username that New-EmployeeOnboardUser created. Next, we create the employee's computer account, then modify the computer account to add a description of "employee name's computer," and finally modify the user account to set the department. So really, we're just combining all of the functions and tools that we created into an entire toolset. This summarizes all of the work we've done in this module, so let's try it out and see if it works. Okay, you noticed that it took a little while, and the reason is that we did quite a bit here. We have 31 lines, and in those 31 lines there are multiple references to functions; we were using all four of our functions. So let's see if this actually worked. Going back to our CSV file, we see I wanted to create a user named Adam D Bertram, and notice I didn't have anything in the Location field. So what happens when I don't have anything for the location? It should have created the account in the default location I have set. Let's go to our function here, and we'll see that the location defaults to OU=Corporate Users. So if this worked right, the Adam Bertram account should be in Corporate Users. Refresh it, and it looks like we do have an ABertram account. And yep, Adam D Bertram. You'll notice it used the first name, middle initial, and last name as I have here in the CSV. It should be set to the Accounting department, and the title should be Manager. Checking, the Job Title is Manager, and the Department is Accounting, so all of that worked. And finally, we added a computer named ADAMCOMPUTER for this employee, so we'll go down to the step where we added the computer. Since we're not using a Location parameter for the computer, that's also going to the default, so let's check what the default path was for that. You'll see the default path is OU=Corporate Computers.
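Here's a sketch of that conditional splatting step; the CSV column names (FirstName, LastName, MiddleInitial, Title, Location) are assumptions based on the narration.

    # Build the splatting table from the CSV row (column names assumed)
    $params = @{
        Firstname     = $employee.FirstName
        Lastname      = $employee.LastName
        MiddleInitial = $employee.MiddleInitial
        Title         = $employee.Title
    }

    # Only add Location when the field isn't blank; the New-ADUser-based code
    # errors out if it's handed a null location value
    if ($employee.Location) {
        $params.Location = $employee.Location
    }

    # Capture the username the function outputs for use in later steps
    $username = New-EmployeeOnboardUser @params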
If we go over to that OU, you can see in fact it did create it in there, and it did do the description. So it says Adam Bertram's Computer here in the description, so that one worked. We go to the next one, which was Joe E Murphy, and in this one you see I did a location. I did a location of OU=SomeOtherOU, so let's see if it put it in there. Right there is JMurphy, and Joe E Murphy is in there. The department is Information Services. There you go. And Job Title is The Big Dog. He's the boss man. And finally, JOECOMPUTER is his computer name, so we check that. And again, we didn't do any kind of location there, so JOECOMPUTER is there in the default location. So this was a demonstration of our entire toolset. We'll be doing this two more times in this course. This is kind of the moment where you really get to enjoy the fruits of your labor. It took us a long time to create all this functionality, but once you're able to put all that functionality together in a single tool, it's very rewarding, and it's going to save you tons of time over the long run when you keep using this tool.

  13. Summary The number one takeaway from this module is to wrap your head around the tool concept. Now that you have a real-world example of what a tool consists of, I hope you can see the potential here. This example tool I created saves time, but it can be just the beginning for you. Next, you should have seen that using GUI management tools to get things done not only takes more time, but also opens you up to the mistakes any human can make. You can easily fat-finger something, accidentally copy and paste the wrong information, among other things. Even though it took a while to build the tool, that's a one-time cost; you don't have to do it again, and you'll get that time back in ROI over the lifespan of the tool. Finally, always build the tool with expansion in mind. You'll typically come across some other functionality you'd like to add later on, so be sure not to write code in a way that prevents you from easily changing it. In this module, we built the Active Directory Account Management Automator tool. I hope I gave you a sense of the kind of time and effort you can save by using the tool over using the GUI. The tool we created handled two main functions: onboarding new employee accounts and managing them. And finally, I challenge you to implement the removal piece of the tool. Once you have the process down for provisioning a new employee and managing the account, you eventually need that removal piece. I intentionally didn't include it in the tool, to give you the challenge of implementing it yourself. The tool would then manage the entire lifecycle of both an Active Directory user and a computer account.

  14. Tool #2: Log Investigator Log Sleuthing with PowerShell Hi! I'm Adam Bertram, and this is module 4 of my PowerShell Toolmaking Fundamentals course. In this module, we'll be covering our second tool, called the log investigator. By the end of this module, you should have a very useful tool to gather logs from various sources in an easily readable format. One of the many hats an IT pro wears is troubleshooting. A server goes down in the middle of the day. Why? An application crashed last night. How come? Many times, the cause of these problems lies in a log file somewhere, but where? Sometimes you don't really know where to start looking. This is what the log investigator tool will help with. This tool is being created to assist me in tracking down the root causes of various problems on Windows servers. Its purpose is to search through every piece of interesting logging data on a system within a specific timeframe and give me a glimpse of the state of the system at a certain time. When diving into a long troubleshooting session, this tool will get me started. There aren't many prereqs for this module. The log investigator tool can run on any Windows machine that can get PowerShell v4 installed. That's the only real requirement. However, to follow along with the demos and get the same experience I am, you'll also need another remote machine that you have administrator control over.

  15. Interrogating Windows Event Logs The main source of information regarding the goings-on in a system is the event log. A Windows system has dozens of different event logs with thousands of records strewn about them. PowerShell has the Get-EventLog cmdlet to look through event logs, but it soon falls flat when faced with parsing through dozens of event log sources looking for various specific events. As a first task, I'll be building our log investigator tool to quickly and efficiently parse through all the event log sources on the local or a remote computer based upon a specific timeframe. We've got a problem: we can't find out what went on on a server when we get a notification that something went wrong. Let's say a bunch of users call the help desk and say the server is slow. Well, what happened? When did it start happening? That sort of thing. We need to figure this stuff out. And we're tired of sifting through event logs and text log files and, instead, decide to build a tool to help us. That's what this tool is going to be, so let's get started. Our first task with this tool is to easily find events in event logs within a specific timeframe. Again, just like the last tool, we're going to need some parameters. What are the attributes you think we'll need to be able to pass into the tool? Well, obviously, I want to be able to run this on multiple computers, right? So I'll need a $ComputerName parameter. I'm defining a default of localhost here because I want to query the local machine unless I override it by specifying a remote computer name. This tool is going to be designed to search for events within a specific timeframe. So what makes up a timeframe? A start and an end date and time, right? We're going to need a start time and an end time to define the time window. I'm specifying datetime data types for these variables because they're especially important for our timestamp parameters. This will allow me to pass strings into each parameter, and PowerShell will know they're supposed to be dates and automatically convert them, which I'll show you here in a minute. Initially, this tool is going to search all event logs; otherwise, we could also have a parameter like an event log name, for example. In this instance, we're just going to search all event logs, but feel free to add that parameter at a later time if you want to optimize this script. For now, it's not required. So, now that the parameters are defined, we can search for the events. We have two options here. I can use the Get-EventLog cmdlet or the Get-WinEvent cmdlet. They both do very similar things. I'm going to be using Get-WinEvent here. The reason is that Get-WinEvent allows me to query all event log sources, not just the typical System, Application, Security, etc. It's also much faster than Get-EventLog. I don't expect you to know the nuances between Get-EventLog and Get-WinEvent off the top of your head. If you're new to PowerShell, you probably don't know these things. But once you create more and more of these tools, you learn these little things here and there that will optimize your code. Before I get the events, though, I first need to find all of the event log names. We can use Get-WinEvent for this too, using the ListLog parameter. I'm using the $WhereFilter here to remove all of the event logs with no records. There's no sense querying a log that has no records in it to begin with.
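A sketch of that parameter setup and log-name discovery, with names following the narration; the course's exact code isn't shown in the transcript.

    param (
        [string]$ComputerName = 'localhost',

        # [datetime] lets callers pass date strings and have PowerShell convert them
        [Parameter(Mandatory)]
        [datetime]$StartTimestamp,

        [Parameter(Mandatory)]
        [datetime]$EndTimestamp
    )

    # Find every event log on the system, skip logs with no records,
    # and keep just the log names
    $WhereFilter = { $_.RecordCount -gt 0 }
    $logNames = (Get-WinEvent -ListLog * -ComputerName $ComputerName |
        Where-Object $WhereFilter).LogName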
This will allow me to find all the event logs on a particular system. I'll test this out now by manually using the computer name of localhost and seeing what I come up with. So let's go ahead and run this and see what happens. I'm going to replace localhost here just to test it out. You notice that a bunch of log names came up. Now, on line 3 here, I don't want the log mode, the maximum size in bytes, or even the record count. For our purposes, I just want the log name. To do that, I enclose this whole snippet in parentheses and add a .LogName to the end. What that does is give us just the log name. You can see it removed a lot of that other stuff, and we just have the log names here. So that's what we want, and it looks like I've got lots of event logs to search through in a bit. I'm now ready to query for events. Get-WinEvent has a couple of ways to do this. One method is using a filter hash table. This is a hash table that allows you to specify different parameters to narrow down your search, so let's just build that now. On line 4, you'll notice I'm creating a hash table called $FilterTable, and since I'm looking for all events in all event logs between a start and an end time, I'm using three keys here: StartTime, EndTime, and LogName. I'll use this hash table as the FilterHashtable parameter value for Get-WinEvent here in just a minute. And, finally, on line 10, I build the command that goes out and actually does the event log query I'm after. It's pretty self-explanatory: it's getting all the Windows events from the computer, using the $ComputerName we have in our parameters and the filter hash table we created above. And, finally, there's ErrorAction SilentlyContinue. From my experience, you'll sometimes get errors where the script doesn't have permission to query a certain event log. In this instance, I really don't care about that, so I just skip over it. So, let's give this script a shot on my local machine here. Get-WinEventWithin, and since it's the local machine, we don't need the computer parameter. For StartTimestamp, we'll just do today, which is April 16, 04-16-15, at 4 a.m., and for EndTimestamp, 04-16-15 at, I don't know, 2 p.m. Remember how I told you about the datetime data types above? Here's what I was talking about. When I type the date and time like that, it's just a string, but when the script runs, it's going to convert those to datetimes. Before I run this, let me briefly show you how this works. Let's say I have a month, day, and year, and a time, 04:00, 4 a.m. By itself, that's just a simple string. Now, if I cast this to a datetime, you'll see that it changes significantly. It now says it's Thursday, April 16, 2015, 4 a.m. PowerShell puts a lot more detail and intelligence behind it than a plain string, and that's exactly what my Get-WinEventWithin script does whenever it runs. So, with that said, let's run this and see what happens. StartTimestamp, we'll do today at 4 a.m., and EndTimestamp, today at 2 p.m. Let's see if there are any events. You can see immediately that a lot of events are coming through. But are they actually the events we wanted? I'll scroll up here, and we'll look.
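A sketch of that query and of the datetime cast being demonstrated; names follow the narration.

    # The filter hash table narrows the query to our window across all discovered logs
    $FilterTable = @{
        StartTime = $StartTimestamp
        EndTime   = $EndTimestamp
        LogName   = $logNames
    }

    # Skip logs we don't have permission to read rather than erroring out
    Get-WinEvent -ComputerName $ComputerName -FilterHashtable $FilterTable -ErrorAction SilentlyContinue

    # The datetime conversion the parameters rely on, shown standalone:
    '04-16-15 04:00'            # just a string
    [datetime]'04-16-15 04:00'  # Thursday, April 16, 2015 4:00:00 AM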
So, it looks like in the ProviderName: Microsoft-Windows-Wcmsvc section, there are some events, ordered from most recent to least. It looks like on 4/16 at 1:59:30, which is 30 seconds before our EndTimestamp, there was one. And going down: 1:09, there was one there; 1:07, yup, that's in our window; 1:07, yup, that's in our window. All of these are in our window. So it looks like it correctly queried only the events within our window. And here's a quick thing you can do. By default, this just spits out all of the events in all of the event logs within the timeframe, but I'm curious to see how many events there actually are. I can enclose the whole command in parentheses and use the .count property, which will count all of the events that come back. If you wait here a minute, we'll see how many events actually happened between the $StartTimestamp and $EndTimestamp. It took a little while, which is why we edited out some of the time, but we came out with 1878 events. That means my local machine generated 1878 events between 4 a.m. and 2 p.m. on 04/16/2015. You can see how helpful this can be when doing some troubleshooting. So, it looks like we're all done with this part of the tool. Let's keep the momentum up by adding text log functionality to this tool.
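The counting trick, sketched with an assumed invocation of the demo's script.

    # Wrap the whole command in parentheses and use .count to total the events
    (.\Get-WinEventWithin.ps1 -StartTimestamp '04-16-15 04:00' -EndTimestamp '04-16-15 14:00').count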

  16. Interrogating Text Logs Applications sometimes either don't use an event log source, or only some of their events are written to an event log. For my tool to gather all pertinent troubleshooting information, I'm willing to cast the net wider than just event logs. I need to look at generic text logs as well. As a second and final task, my tool will help me search for log files that were last written to within the specified time period. This might not necessarily indicate a problem inside the log file, but it is worthy of further investigation. To build this piece of the log investigator tool, we again need to start with parameters. Are you starting to see a trend here? The purpose of this piece is to find all log files on a remote or local system that have been written to within a specific timeframe. First of all, we'll again need a $ComputerName parameter, since I'd like to use this on more than one system. Second, like with the event log querying, I'll need a start and an end time for the timeframe. And, finally, I'll add a $LogFileExtension parameter to allow control over the file extensions I'm defining as a log file. Some log files don't end with .log, so I want to be able to control that. Regardless of whether this script is querying a local or remote computer, I want to run it identically. So, what's the first task I need to do before I find a single file? Well, first I need to find all the drives on a system. How would I actually find the drives on a local system? One way is through WMI or CIM. There's a class in WMI called Win32_LogicalDisk. This class lists all the drives and their drive type, such as a locally attached disk, network drives, heaven forbid floppy drives, etc. The drive type for a locally attached hard disk is drive type 3. You can see there on line 5 that I'm finding all of the logical disks with drive type 3, which, as I just mentioned, are the local disks, and I'm grabbing just the DeviceID by wrapping that command in parens. So, I'll run this now on localhost to show you some example output. You see on this machine I just have one, the C: drive. However, you could have multiple drives, and all of them would eventually show here. Starting on line 7, since I can't query drives like C: or D: directly over the network, I had to go a different route: enumerating the shares. I need to find all of the admin shares that exist on the machine. More specifically, these are the admin shares like C$, D$, E$, F$, the default shares that are assigned to every drive on a machine. Thankfully, we can again use WMI, queried remotely, to get all the pertinent share names. So, starting on line 8, I'm using Get-CimInstance to find all of the share names on the remote machine. You'll see there in that where block, where the path matches that crazy-looking string, I was forced to use an obscure regex, because if I show you the default paths of the shares, you'll see that I could be gathering a number of different paths. So, let's just run this on the local machine and see what comes back. I'm doing the local machine now, but I could add a computer parameter just as easily. I'm just looking for root drives: the C: drive, D: drive, E: drive, that sort of thing.
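A sketch of those two CIM queries; the course's exact regex isn't shown in the transcript, so the pattern below is an approximation that matches drive-root paths like C:\ only.

    # Local disks only: DriveType 3 means a locally attached hard disk
    $drives = (Get-CimInstance -ClassName Win32_LogicalDisk -Filter 'DriveType = 3').DeviceID

    # Admin shares (C$, D$, ...) whose path is a drive root like C:\
    $shares = Get-CimInstance -ClassName Win32_Share -ComputerName $ComputerName |
        Where-Object { $_.Path -match '^\w{1}:\\$' }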
But when you query Win32_Share, you get all the shares on the system, and they could point to down-level folders like you see here, C:\Windows or C:\Shares\XA. I don't want all those. I just want the shares that have a root path like C:\, D:\, etc. And that's what that regex string in there, the \w{1} and all that, actually means: give me just the shares whose path is a letter, a colon, and then a slash. Next, on line 9 there, I'm creating a locations array. In this instance, I'm creating an ArrayList. You may wonder why I'm creating an ArrayList instead of your typical array. I could have done something like $Locations = @(), a blank array. An ArrayList is a very similar data type, but it's much faster when you add a new element. If I want to add an element to a regular array, I would do something like $Locations += 'test'. What that actually does is create a brand-new array; it destroys the previous array and builds a new one, which makes things a little slower. As an alternative, you can use the System.Collections.ArrayList data type, which is very similar but doesn't destroy the array every time; it just appends the element to the end. You can see there on line 15 that whenever I find a suitable share to add to $Locations, instead of using += to append it, I have to use the Add method, which does exactly what += does: it appends the element to the end. And I have to use Out-Null, because adding an element to an ArrayList outputs the new index number by default, and I don't want that in this instance. So, starting on line 10, I enter the foreach loop over all the shares you can see down below, the ADMIN$, C$, IPC$, that sort of thing. Inside the loop, I define the share path as \\$ComputerName\ plus the share name, so in this instance, that would be something like \\COMPUTER1\C$. Once I define the share, I have to test it. If I can't access the share for some reason, I don't want to worry about it, and I just write a warning to the console. But if I can, I add it to the $Locations ArrayList. Continuing down here to line 21 is where I start building that familiar splatting hash table you've seen before. I call it $GciParams because these are the parameters we'll feed to Get-ChildItem on line 34. Again, this should look familiar by now. I'm defining the Path parameter as $Locations, which is all the accessible locations we found. The Filter is where the $LogFileExtension parameter comes in handy. If you recall, up in the parameters I used log as the file extension, so it's going to find all the files with a .log extension. Recurse means I'm going to look in every file and every folder. Force looks for hidden files as well; I want to be as comprehensive as possible and get everything. ErrorAction SilentlyContinue: there are going to be some files I may not have permission to read for some reason. In that instance, I don't care about them; I just want to skip them. And, finally, File tells Get-ChildItem that I just want files, not folders and directories.
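A sketch of the ArrayList approach and the share-testing loop being described; the accessibility check via Test-Path is an assumption, since the transcript only says the share is "tested."

    # ArrayList appends in place; += on a plain array rebuilds the array each time
    $Locations = New-Object -TypeName System.Collections.ArrayList

    foreach ($share in $shares) {
        $sharePath = "\\$ComputerName\$($share.Name)"
        if (Test-Path -Path $sharePath) {
            # Add() returns the new index, so pipe it to Out-Null to keep output clean
            $Locations.Add($sharePath) | Out-Null
        } else {
            Write-Warning "Unable to access the share $sharePath"
        }
    }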
Continuing down to line 31, I build out the $WhereFilter. Now, if you're fairly new to PowerShell, you may wonder why I've built the $WhereFilter up here instead of putting it on line 34, right after the Where-Object cmdlet. I tend to do this to make my code look better. I could do it inline, which would be the exact same thing, but the problem is that when I do, the line extends out to column 146. You can see another feature of the ISE here, the line and column count. It extends out to column 146, whereas if I define the $WhereFilter earlier, the line only goes out to column 121, and it looks cleaner on line 34 when I pass the $WhereFilter script block to the Where-Object cmdlet. So, on line 31, I'm building the $WhereFilter, and I have three conditions. The first is that the LastWriteTime is greater than or equal to our $StartTimestamp. The second is that the LastWriteTime is less than or equal to our $EndTimestamp. Together, those get us files whose LastWriteTime is between our variables. And, finally, the Length must not equal 0. This is a shortcut, because I don't even want to see a file if its length is 0; I already know there's nothing in it, so this saves a little time. And, finally, line 34 is where we actually do the work. We use Get-ChildItem, feed our $GciParams splatting hash table to it, and then pipe that to Where-Object, because there are properties like LastWriteTime and Length that I don't have the opportunity to filter on with the Filter parameter inside the $GciParams hash table. So, let's try this out and see if it works. I'm going to do Get-TextLogEventWithin, and since I'm running against localhost, I don't need ComputerName. For StartTimestamp, today is April 18, so I'm going to do between April 18 at 4 a.m. and April 18 at 2 p.m. If I run it, you'll see it go through every single file on every drive. In this instance, you can see there are a couple of files in C:\Windows\System32\sru, and a few in Microsoft\RAC\StateData. Notice that they're all .log files, and the LastWriteTime on all of them is between 4 a.m. and 2 p.m. So this is just a quick way to find all files that were changed within a specific time period. You can do a lot more with this, and I challenge you to add additional functionality to it. The whole purpose of this course is to give you ideas, maybe get you excited about a particular feature, and have you build on it. I would love to hear about any other features you've added to the scripts we're creating today.
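A sketch of the splatting table and filter just walked through; the names follow the narration.

    $GciParams = @{
        Path        = $Locations               # all accessible admin shares found above
        Filter      = "*.$LogFileExtension"    # e.g. *.log
        Recurse     = $true                    # walk every folder
        Force       = $true                    # include hidden files
        ErrorAction = 'SilentlyContinue'       # skip files we can't read
        File        = $true                    # files only, no directories
    }

    # Defined ahead of the pipeline so the Where-Object line stays short
    $WhereFilter = {
        ($_.LastWriteTime -ge $StartTimestamp) -and
        ($_.LastWriteTime -le $EndTimestamp) -and
        ($_.Length -ne 0)
    }

    Get-ChildItem @GciParams | Where-Object $WhereFilter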

  17. Building the Toolset Each tool has different inputs, outputs, and processing methodologies. It's important to take what we've built in this module and move everything into a set of functions. This way, our tool can be easily reused at a later time, which will be critical when we combine all these functions into a module later on in the course. So, we've built a couple of scripts to track down some interesting event log entries and log files. How should we bring these scripts together and build a toolset from them? The way I've chosen, again, is to convert these scripts into functions. This is exactly the same process I used with our last tool, the AD Account Management Automator. I've got four scripts open in the ISE here, as shown by the four tabs at the top. The last two on the right are the scripts we created earlier. The one I'm on now, LogInvestigator, is what I'm calling the toolset. Again, I've taken the contents of both scripts and created functions from them. If you remember those scripts, the code is exactly the same: I've used the same parameters and the same code as in our ps1 scripts here. Creating functions from scripts is a great toolmaking technique; it makes it easy to add more functionality later. You can see I've also got another long-winded script name here. Forgive me; I'm not really the best at naming. It's called Get-InterestingEventsWithinTimeframe. I've taken the tool concept a bit further this time by creating an example script that leverages the toolset itself. This is an example of a typical script that might be created to use this toolset. It combines both the event query script and the log file query script to create a single troubleshooting tool, and it's what I'll be focusing on in this demo. You can probably tell by now that this is an "advanced script" just by seeing that familiar CmdletBinding keyword, and we get all the features that come with making a script advanced. Take a look at the parameters. You'll see that I've combined the parameters of both functions. Both functions have $Computername, $StartTimestamp, and $EndTimestamp parameters, and the Get-TextLogEventWithin function additionally has a $LogFileExtension parameter. So if I go over here, you'll see that the Get-TextLogEventWithin function has a $Computername, $StartTimestamp, $EndTimestamp, and $LogFileExtension, and similarly, the Get-WinEventWithin function has a $Computername, $StartTimestamp, and $EndTimestamp. I'm going to need all of these parameters in the param block of this main script if I'll be using them in the functions later on down in the script. Now, on line 20, you might notice something different. This is called parameter validation. It's a great way to validate information passed to a parameter before anything runs. In this instance, I have a $Computername parameter, which represents the computer I'll be querying both events and log files on. What would happen if I passed an offline computer here? The script would probably blow up, since we're assuming the computer is going to be online. Wouldn't it be nice if we could somehow ensure that any computer name used here is online? Yup, that's where parameter validation comes in. There are many different kinds of parameter validation. Here, I'm using ValidateScript. ValidateScript is a form of parameter validation that runs a script block before things even get started. For this $Computername parameter, I want to make sure it's online.
The Test-Connection cmdlet is a great way to do this. Every parameter validation attribute has to include parentheses with something inside of them; with the ValidateScript option, that something is a script block. You can tell it's a script block by the curly braces on the left side of Test-Connection and the right side of Count 1 there. When this script block runs, it returns true if the computer is up and false if it's not. You can see the $_ there for ComputerName; that represents the argument being passed to the parameter. If the script block returns false, the entire script won't run. It's a very handy way to ensure only legitimate data is passed to our parameters. So, let's run this with a bogus computer name and see what happens. I'm going to try just some bogus computer name. Notice the error message. It says Cannot validate argument, which is exactly what we want. If the computer name can't be reached, I want to stop now and not bomb out in some unknown way later on. The rest of the parameters are what you're used to. Now, coming down the script, you'll come across another new concept, the begin block, and on line 29 something called dot sourcing. In an advanced script or function, there are three blocks you can use: a begin block and a process block, as we've got here, and an end block, which isn't going to be covered in this course. The begin block is a place to put any code that you only want to run once, regardless of whether the script is used in the pipeline or not. It's also a great place to put any small functions you'd like to use in the script, or, in this case, a dot-sourcing line. Dot sourcing is a way to include code from other scripts. If I go over to the LogInvestigator script again, you'll see I've got these two functions. I'm trying to keep my scripts modular, so I don't want to just paste them into this script. Technically, I'd get the same functionality by copying the functions out and pasting the code in here, but you can see it just adds a whole lot of extra lines that aren't really necessary. We need to modularize that a little bit, so let me undo all of this, and you'll see it comes back to one line. Both functions live in the LogInvestigator script, and this is why I'm dot sourcing. Dot sourcing just allows me to call another script and run any code inside of it. In this case, since LogInvestigator contains two functions, dot sourcing it lets this script call them. When that dot-sourcing line on line 29 runs, it actually executes LogInvestigator.ps1 in memory and loads our two functions, as you can see. Once they're loaded into memory, I can use them later on down the script; there on lines 38 and 39, I'm using them. Coming down into the process block, I'm using splatting again. You can see I'm using the $Params hash table, with @Params on both of those function calls. Technically, I could create two different hash tables, but since we're using the same $Computername, $StartTimestamp, and $EndTimestamp, there's no reason to duplicate code. It's always better to practice DRY: don't repeat yourself in code. That's what I'm doing here. One thing to note with Get-TextLogEventWithin is the extra LogFileExtension parameter. I can combine the splatting technique with additional parameters if I need to. This is allowed.
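A sketch of the pieces just described: the ValidateScript attribute, the dot-sourcing line in the begin block, and the shared splatting table with one extra parameter tacked on. The file path and defaults are assumptions based on the narration.

    [CmdletBinding()]
    param (
        # Refuse to run at all if the computer doesn't answer a ping
        [ValidateScript({ Test-Connection -ComputerName $_ -Quiet -Count 1 })]
        [string]$Computername = 'localhost',

        [Parameter(Mandatory)][datetime]$StartTimestamp,
        [Parameter(Mandatory)][datetime]$EndTimestamp,
        [string]$LogFileExtension = 'log'
    )

    begin {
        # Dot source the toolset so its two functions are loaded into memory
        . C:\LogInvestigator.ps1
    }

    process {
        # One hash table serves both calls (DRY); the extra parameter is simply
        # added alongside the splat where it's needed
        $Params = @{
            ComputerName   = $Computername
            StartTimestamp = $StartTimestamp
            EndTimestamp   = $EndTimestamp
        }
        Get-WinEventWithin @Params
        Get-TextLogEventWithin @Params -LogFileExtension $LogFileExtension
    }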
Finally, we're wrapping the whole thing, as usual, in a try/catch block in case an error is thrown. If an error does happen, it will print a nice error output to the console with the line number the error happened on. In this instance, I have a Write-Error line in my catch block. I'm writing the error message and the line number; that $_.InvocationInfo.ScriptLineNumber is an extremely helpful little snippet you can put in your catch blocks, and whenever an error is thrown, you'll automatically see what line number it came from. So, let's run this for real and see what happens. I'm going to take out my computer name and just run it against localhost. You can see it waits a little bit, and once it starts finding events, it works through the event logs. What it's doing now is line 38, Get-WinEventWithin. It's running the function we created from the LogInvestigator script, going through all of the event logs and finding all of the events between the $StartTimestamp and $EndTimestamp I gave it. After all this is done, you'll notice it pauses for a minute, and then it starts doing the log files. So, let's just wait here while it finishes all of the events. And after that, notice that pause; now it starts processing the log files as well. There, it started on the log files. You'll see that the LastWriteTime is between our $StartTimestamp and $EndTimestamp, along with the names of the log files. So, let me cancel this out, and we'll briefly look at what we got. There are our log files, all between our $StartTimestamp and $EndTimestamp, and if we scroll up a little more, there are all of our events. So, it went to line 38 there and got the Windows events, and as soon as it was done with that, it started getting all the text log events. Now, this is nice and all, but I'm going to challenge you a little with something in this demo. Notice how the events and the log files are being sent to the console. It's nice to see them, and we can go in and manually look things up if we need to, but it's not extremely useful. How about practicing your toolmaking skills by outputting this information to a CSV file or some other structured format? Wouldn't it be nice if you had all of these files and events in a CSV that you could easily parse through? That would be a lot easier, right? Just a thought. It's still helpful as is, but just like all the other tools we're creating here, it can be improved in a number of different ways. I challenge you, not only in this demo but in the other demos too, to figure out ways to make the tool better and add additional functionality. It will not only give you better practice with PowerShell toolmaking concepts, but it will also give you great tools to use in your job.
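The catch block being described looks something like this.

    try {
        # ...the script's real work happens here...
    } catch {
        # Surface the message plus the exact line the error came from
        Write-Error "$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"
    }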

  18. Summary If you learned anything in this module, I hope it was that there's more than one way to get stuff done in PowerShell, and to always be on the lookout for faster, more efficient ways to do things. I'm referring to the event log clip. The Get-EventLog cmdlet is pretty common, yet the Get-WinEvent cmdlet is less so. They generally do the same thing, but as you saw, Get-WinEvent offers a drastic performance improvement over Get-EventLog. In this module, we created the log investigator tool. This tool allowed us to easily search through event log sources and numerous text files at the same time as a first step in tracking down an issue on a server. With this tool, we can now scour an entire machine for any remnants of what happened within a particular timeframe.

  19. Tool #3: File and Folder Management Automator The One File System Tool to Rule Them All Hi! This is Adam Bertram, and this is module 5 of my PowerShell Toolmaking Fundamentals course. In this module, I'll be building the third and final tool in this course, called the file and folder management automator tool. A common task among IT pros is managing files and folders. I intend for this tool to be a one-stop shop for all my file system management needs and to be built in a way that allows me to add onto its existing functionality. The file and folder automator tool I'll be creating in this module is going to concentrate on three areas: finding files and folders, setting ACLs, and, finally, creating a robust archiving solution that will allow me to archive old files and folders in user home folders.

  20. Building the Tool to Find Files and Folders If my tool can't find the stuff I want to work with, it's not much of a tool. The problem this tool is going to solve for me is providing an easy way to find these files. PowerShell provides a commandlet for this called Get-ChildItem, and it's pretty simple to use. However, this tool is going to be built with the capability of plowing through tens of thousands of files and folders across numerous servers, along with some additional functionality, so we do need a custom script for this. So, let's head over to the first demo and see what we can do with this tool. The first task this file and folder management automator tool is going to do is provide a comprehensive way to find files and folders. Get-ChildItem can do this but, depending on what you need to find, requires a few different techniques. I want this tool to have a standardized and extensible method to find files and folders in any way that I want, so let's get started. Again, we're starting with a script, and we'll build from there. Since this is our third tool, I'm going to skip around a little and only go over the new concepts. I figure you didn't want to hear me ramble on about stuff you probably already know. So, first off, notice on line 1 there in our params, I have a $Computername parameter, which looks a little bit different. Instead of specifying a string type with just the word string surrounded by brackets, I have three brackets at the end: [string[]]. This is how you'll typically see parameters defined that accept one or more inputs. What I'm doing there is saying I want an array of strings, not just one string. So, I can have multiple computer names in there. For example, I'm setting localhost as the default. I could have just one, but I want the option to pass multiple computer names. The second parameter there is $Criteria. What that $Criteria is going to do is give me a chance to say things like I want to see files with a specific extension, files or folders that are a specific age, or ones that match a name or maybe are in a particular directory, that sort of thing. And the $Attributes parameter, which is a hash table, is going to be one or more values that relate to the criteria. So, for example, an extension criteria has an extension, which is .log, .txt, .doc, that sort of thing. Age may be five days old, six minutes old, five seconds old, that sort of thing. Name is going to be a name that matches a specific path, a specific file name, or maybe a folder name, that sort of thing. So, on line 3 there, I'm starting to loop through all of the computers in the $Computername parameter. Since it's possible I could have an array of strings, I need to process each one of those. So, for every computer, starting on line 5, I'm building the CimInstance params. Again, I'm going to use splatting to pass to Get-CimInstance to enumerate all of the shares on the local or the remote computer. Line 6--if the machine is localhost, I cannot use the $Computername parameter because it will error out. But if it's not, I need to add that in as a parameter to Get-CimInstance. There on line 9 is where I actually get the drive shares. This is where I enumerate through all of the shares that are on the machine. We did this earlier in a previous module, so I'm not going to go over this.
But we are only enumerating the default shares, so C$, D$, E$, that sort of thing. That's what that A-Z regex there is going to match for us. Then on line 10, once we enumerate all of the shares on the machine, we come down to a switch construct. A switch construct is new to us, but it's not new to PowerShell by any means. It's a pretty simple construct that is essentially the same thing as an if/then/else, but it's a little bit easier to write when you have multiple conditionals. So, in this example, I have the switch with the $Criteria at the top here. What this is saying is if $Criteria equals Extension, do this. If $Criteria equals Age, do this. If $Criteria equals Name, do this. If it doesn't match anything, just match the default block there at the end and just say unrecognized criteria. So, this is the exact same thing as saying if $Criteria equals Extension, then do this, else if $Criteria equals Age, then do this, else if $Criteria equals Name, then do this, else run the default script block. It's a little bit easier to write and a little bit easier to understand once you have more than two conditionals that you need to check for. So, on line 12 there we check for the first conditional, which is extension. In this one, I'm just concatenating the $Computer and $Drive together so we can target whatever share we're processing. And then using the Filter parameter, I am putting the extension name in there. So, in this instance, I am setting the Filter parameter to something like *.log, *.txt, *.doc, or *.exe. That allows us to feed an extension to this. With -Recurse, I'm just getting all of the folders and subfolders underneath that default share. So, let's just go over a quick example of this. So, let's see. I want to do Get-MyFile, extension, I want to find all the files on the local machine that match .log. So, I run this, and you'll see that if you come up there, there're more, but I'll go ahead and cancel that out. So, you can see that it grabs all the files on all the shares. In this instance, I only see localhost\C$ because I only have one drive. But this would enumerate all of the shares on all of the drives on a local or remote system. Let's just say I want to use age for example. Again, I'll do Get-MyFile, and then I want to say a criteria of age. And the attributes, since it's a hash table, I'm going to add my attributes in here. So, let's just say DaysOld = 5. I want all the files on a machine that are 5 days or older. I start that, and it starts to enumerate all the files on the machine that are 5 days or older. I can do that because on line 15 I have that age block. I'm setting $DaysOld to the DaysOld key in our $Attributes parameter. That just gives me an easy way to see what DaysOld actually is. Then on line 18, I'm using Get-ChildItem again. And then in the Where-Object block, I'm saying if the LastWriteTime is 5 days ago or older, then go ahead and show that. Now, one thing you might be wondering is, Why am I taking the time to build all of this out when I could just simply do something like that in the console? I'm not really saving a whole lot of time as far as writing the code. But what I am doing is saving a lot of time by creating a standardized interface to build this tool on. So, to get files by a specific extension, it may be easy just to do that manually. If I want to get all the .log files, I could type the Get-ChildItem line in the console myself, and that would do the exact same thing.
But that negates the point of this tool. With this tool, we're not just doing one extension. We're building a tool which is going to be a framework for other things. So, right now I just gave you an example of being able to easily find an extension, an age, and then a name all within the same tool and all within the same interface. So, instead of typing this, -Path, -Filter, -Recurse, I could do a lot of different things by setting one standard parameter, which is $Criteria, and another standard parameter, which is $Attributes. I can easily add different criteria inside of there, so that's a great toolmaking concept. It may seem frivolous to replicate a line like line 13 there, the Get-ChildItem with -Filter. Sure, that's easy enough to put in my console, but that's not really the point. With a robust tool, it will automatically figure out, Well, should I use the Filter parameter, or should I use the Where-Object clause here, or should I use some other kind of method? And once you have all that logic inside of this script, you can just add a standardized parameter in there, and it's the same every time. One other thing to note with this that we'll cover a little bit later is the notion of parameter validation. I showed you parameter validation earlier with a ValidateScript, but we'll need what's called a ValidateSet in this one, among a few other things. So, for example, a Criteria of Age and Attributes of DaysOld = 5 works great. But what if I fat-finger something here and say something like this? You'll see it'll just parse through everything, and it'll throw errors like crazy. What we need to do is constrain that input and validate the parameters before it goes any further. We'll show that a little bit later, but I just wanted to put it in your head that this script is not production ready per se because it allows way too much input. Let's say I want Extension here, and I want the extension of log again; it works great. But let's say I fat-finger this or something, and it's going to find everything--files and folders and everything--because I fat-fingered it. Or let's say here I put something like 44 that has nothing to do with a file extension. It's not going to find anything. That's because I'm allowed to put anything into this $Attributes parameter, which we don't want. We want to constrain it all as much as possible. And in a later demo, you'll see how we do that.
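Pieced together from the narration, here's a rough sketch of how a Get-MyFile script like this might be laid out, with the ValidateSet constraint just mentioned already added to $Criteria. The names and structure are assumptions based on this clip, not the course's exact code:

    [CmdletBinding()]
    param (
        [string[]]$Computername = 'localhost',      # an array of strings, so one or many computers

        [ValidateSet('Extension', 'Age', 'Name')]   # constrain input before the script body ever runs
        [string]$Criteria,

        [hashtable]$Attributes                      # values that relate to the criteria, e.g. @{ DaysOld = 5 }
    )

    foreach ($Computer in $Computername) {
        # Placeholder: the demo enumerates the default admin shares (C$, D$, ...) with Get-CimInstance
        $Drives = @('C$')

        foreach ($Drive in $Drives) {
            $Path = "\\$Computer\$Drive"
            switch ($Criteria) {
                'Extension' {
                    # The file system provider filters for us, which is fast
                    Get-ChildItem -Path $Path -Filter "*.$($Attributes.Extension)" -Recurse
                }
                'Age' {
                    $DaysOld = $Attributes.DaysOld
                    Get-ChildItem -Path $Path -Recurse |
                        Where-Object { $_.LastWriteTime -le (Get-Date).AddDays(-$DaysOld) }
                }
                'Name' {
                    Get-ChildItem -Path $Path -Filter $Attributes.Name -Recurse
                }
                default {
                    Write-Error "Unrecognized criteria: $Criteria"
                }
            }
        }
    }

Called the way the demo does, that would look like Get-MyFile -Criteria Extension -Attributes @{ Extension = 'log' } or Get-MyFile -Criteria Age -Attributes @{ DaysOld = 5 }.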

  21. Demystifying ACLs The next task my tool will tackle is permissions. Files and folders have ACLs, or access control lists, made up of individual ACEs, or access control entries. Each ACE has an identity or security principal like a user account or a group, the type of access like allow or deny, propagation and inheritance settings, and a right like full control, modify, etc. ACLs can get pretty complex in a hurry. This is why I've chosen to include ACLs as one of the first tasks my tool will work on. I have a specific way I set ACLs, and I'm tired of remembering syntax every time I want to make a quick ACL change. In the upcoming demo, I'll be adding this functionality to my tool. The tool will easily get, change, and remove access control entries in an ACL with an easy-to-remember syntax. One recommendation when exploring something new in PowerShell is to first figure out how to get information. It's not a wise idea to immediately begin tinkering by adding, modifying, or removing things. This is where we're going to start with ACLs. We're following the same pattern here as the other tools. We'll start with a few scripts, which are easy to test, and then eventually roll these into functions. The first one we're going to be looking at is called Get-MyAcl. This script is advanced since we're using that CmdletBinding keyword again. We've only got one parameter here, which is $Path. This parameter is mandatory. What this means is that in order for the script to run, I must specify something here. I can't get an ACL for a file or a folder without a path, for example. I'm again using parameter validation with that ValidateScript. In this instance, I'm using Test-Path to test the parameter argument, the $_. And, again, the $_ represents what you are passing as a value to that $Path parameter. That ValidateScript says if Test-Path evaluates to be true, then let it go. If not, then it's going to bomb out. The rest of this code is common to you by now other than line 16 there, where the actual functionality is. For this script, I'm using the Get-Acl commandlet. Let me give you a brief demonstration of the Get-Acl commandlet here. So, let's say I want to get the ACL of just a random file here, so this one here. You'll see that the Get-Acl commandlet outputs the Path, the Owner, and the Access properties. The Access property is what we're actually after. The Access property has all of the security principals and all of the different rules in here. So, when we just look at the Access property, you'll see, for example, that on this Archive-File.ps1 file, BUILTIN\Administrators has full control, SYSTEM has full control, BUILTIN\Users has ReadAndExecute and Synchronize, and Authenticated Users has Modify and Synchronize. So, this is the property that contains multiple other properties underneath it, which are the ACEs of the ACL itself. So, let's go over an example here of this script. So, the first thing I want to do is run this script, and we'll do Get-MyAcl -Path, and we'll do the same thing. We'll use the Archive-File. You'll see exactly the same thing. It's doing exactly the same thing. There's a lot of code in here that sets up the script, but in reality, it does the exact same thing as just running Get-Acl -Path. So, why would I spend the time to create an entire script just to run this tiny line? The reason is standardization and extensibility.
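A minimal sketch of what a Get-MyAcl script like this might boil down to, assuming the parameter name from the narration:

    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path -Path $_ })]  # $_ is whatever value is passed to -Path
        [string]$Path
    )

    process {
        # The actual functionality: one line, wrapped in a standardized, extensible script
        Get-Acl -Path $Path
    }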
If you're going to build a tool, it's important to have a standardized method of input across all functions and to provide a framework to build on. For example, if I have another script that does something similar to Get-Acl, so another function of my ACL tool, I can program that to accept the same kind of inputs as Get-MyAcl. And also, by building a script around Get-Acl, it allows me to build upon a framework and change something in the future. So, for example, let's say I want to add some kind of logging to this. I might go down here and say Write-Warning "This did something" or something like that. I can add in multiple lines here and still run it the same way. If I wanted to add in those logging lines and add in different functionality, I couldn't do it if I were just running that one single line by itself. So, it's sometimes necessary to build wrapper scripts around functionality when you're building tools just to really get that framework in place to add different functionality. The easy part is done, and now it's on to the hard stuff. We're now going to modify ACLs. ACLs are a fairly complex subject, and I don't expect you to know the intricacies here. ACLs are also a very large topic. ACLs can be applied to files, folders, printer objects, Active Directory objects, etc. Perhaps later you can expand this MyAcl tool to do those other sorts of things. But for now, we'll be sticking to file and folder ACLs. Those can be hard enough as it is. Setting ACLs requires using a .NET object called FileSystemAccessRule. This object requires five parameters. Since we'll be using this object for the main functionality, our script will have to have these same parameters, as you can see here. First, we'll need a $Path parameter to define where the file or folder is on the file system. That object has to be there, so I'm using a ValidateScript parameter validation routine here. Parameter validation is highly recommended whenever you can use it to constrain user input. The $Identity parameter, also referred to as a security principal, is a username, a group, something that's contained in the ACL. The $Right parameter is the level of access you'll be granting or denying, so terms like full control, modify, or ReadWrite should be familiar to you here. $InheritanceFlags is next. This parameter has to do with how the ACE you'll be setting in the ACL turns on inheritance. This parameter has to be set to None for files but can be used to force inheritance on folders, for example. This parameter, along with the $PropagationFlags parameter, may require some trial and error to get just right. Next is the $PropagationFlags parameter, which defines how the ACE you're setting will propagate to down-level files and folders. Finally, I have the $Type parameter, which simply defines if I'm allowing or denying permission to a file or folder. So, notice I have a new parameter validation routine there on the last three parameters. I previously used ValidateScript, but now I'm using ValidateSet. ValidateSet is a routine that allows you to define the only static entries that the parameter argument can be. In these cases, I know that the FileSystemAccessRule object I'll be using later only allows certain strings here. So, I'm constraining my parameter input to exactly what can be accepted. It also gives me very nice tab completion, which I'll show you here in a minute. So, that's a lot of parameters, but it's okay.
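Based on the five FileSystemAccessRule arguments just described, the parameter block probably looks something like this sketch. The exact ValidateSet entries below are a subset I'm assuming from the .NET enums involved; the course's actual sets may differ:

    param (
        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path -Path $_ })]
        [string]$Path,

        [string]$Identity,    # a user or group, e.g. 'lab\abertram'

        [ValidateSet('FullControl', 'Modify', 'ReadAndExecute', 'Read', 'Write')]
        [string]$Right,

        [ValidateSet('None', 'ContainerInherit', 'ObjectInherit', 'ContainerInherit, ObjectInherit')]
        [string]$InheritanceFlags = 'None',

        [ValidateSet('None', 'InheritOnly', 'NoPropagateInherit')]
        [string]$PropagationFlags = 'None',

        [ValidateSet('Allow', 'Deny')]
        [string]$Type = 'Allow'
    )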
PowerShell allows a nearly infinite number of parameters if you want. Each tool will be unique in how it needs to be created. I could have used a single parameter here like I did in the previous modules, using a hash table to fill in multiple key-value pairs, but I chose not to. This really depends on what you like to do at the time. But if I were to use a single hash table to store all of these parameters, then I couldn't use parameter validation. So, for example, up here on line 31 with $InheritanceFlags, I only want $InheritanceFlags to be one of ContainerInherit, None, or ObjectInherit, that set. If I were to make this a hash table, that ValidateSet routine would not work. So, sometimes it's important to actually use string parameters and just do multiple parameters that way for the script. So, moving down into the process block here, actually making the change only really requires a few lines. I'm getting the current ACL of the object on line 42 there, creating a new ACE on line 43, applying that to the current ACL, and finally committing the change with Set-Acl. So, notice on line 42, I'm not using the Get-Acl commandlet like I did with the previous script. This was learned the hard way. Without going into too much detail here, the Get-Acl commandlet does not retrieve enough information about a file or a folder, which Set-Acl needs to process it correctly. This is why the GetAccessControl method must be used. This is a great example of the power of a tool. If I had had multiple instances of that Get-Acl commandlet strewn about a bunch of scripts and finally realized something was wrong, it would be a hassle to make all those changes. So, for example, let's say I had a Get-Acl in this one, a Get-Acl here, and two more scripts, each with a Get-Acl in it, and I was calling these scripts from maybe another script. It works most of the time, but if I ran across an issue where I noticed that Get-Acl wouldn't work, I would have to go through every one of those scripts and change each one out. It would be extremely time consuming and inefficient. So, this is where a tool comes into play. When you're filtering everything through a single script like Set-MyAcl, it provides one single point of change if something needs to be changed. If you're using this Set-MyAcl for all of your file system ACL changes, you just simply change a line in here one time, and that changes all of your other scripts that are dependent on it. So, let's demonstrate how this works. So, I believe I have a file called MyFile.txt. There's a file called MyFile.txt. Now, let's go in the GUI here, and we'll see what MyFile.txt looks like from an ACL perspective. So, I have Authenticated Users, System, Administrators, and Users. Notice that they're all inherited, shown by the light-gray checkmarks. So, let's say I want to add a user, add my account, for example, my Active Directory account. I believe I have an Active Directory account under my name. Let's see if it's in here still. Yes, so I have an Active Directory account called Adam Bertram. The name is ABertram. So, let's say I want to add ABertram as full control to this file.
To do that with our tool, I can just do Set-MyAcl, then the path to the file, then the identity; in this instance, it would be my domain, lab\abertram. If you're not in a domain, you can just remove the lab\ and just use the username. The right is full control, and InheritanceFlags is ContainerInherit, ObjectInherit. I know that's not very intuitive now, but if you want to learn more about the FileSystemAccessRule, there's a lot of information about it. It gets a little hairy. But, again, since we're using a tool, you just modify it in this one spot. PropagationFlags is None since it's going to be a file. And let's see if we have our new parameters here. Yup, we have Type, and we want to allow full control. We'll try that. And then it says Cannot process argument transformation on parameter InheritanceFlags. Cannot convert the value to type System.String. What that means is I forgot my quotes. You'll notice that whenever we have parameter validation with a ValidateSet--you'll see up there on line 30, I have that ValidateSet--that is what allows me to see that IntelliSense feature. So, let me go over here again, and I'll briefly show you this. So, you'll see that the IntelliSense feature has all of these values. This isn't available when you do not use ValidateSet. The ValidateSet routine is what gives you this. And sometimes you just get lazy or don't think about it and just hit Tab and go on. But a value with commas in it needs quotes, and you have to put those quotes around your parameter values yourself. So, we'll try it again. This time it says No flags can be set. That is because I have the InheritanceFlags wrong this time. So, again, let's try None, None. So, we'll see what happens here. Let's look at the ACL. You see there that ABertram does have full control now. You'll notice that I made a few mistakes here and got the errors. The argument transformation error was because I didn't include the quotes, but the No flags can be set error was thrown by the SetAccessRule method. Again, there're going to be instances where the tool that you're building is going to be simpler than the code you're building it around. What I mean by that is this: ACLs are a fairly complex subject. And when you're building a tool like this, you'll do a lot of googling, or binging, whatever search engine you want to use, and you'll eventually come to a conclusion where, okay, well, that works. In this instance, this is how I built this tool. I don't know all the intricacies of the FileSystemAccessRule object here. It took me a while to figure out and research and troubleshoot and bang my head against the wall to get this to work, and that's exactly what I did here. But the reason that I'm able to present this to you and to get it working in a semi-functional manner is just because I spent the time up front, built this script around it, and then got it to work for a file. Then maybe I got it to work with a folder. You just iterate like that. When you're building a tool like this, let's say it worked with a file; well, you want to extend it to a folder. Maybe it doesn't work with the folder, so you change this script up a little bit, and you make it work with the folder. Okay, well, then I want to add on more functionality to do printer objects or Active Directory objects. You just add the functionality to this single script here. And once you do that, then it's available to all your other dependent scripts.
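And here's a hedged sketch of the process block described above. It mirrors the four steps (get the ACL, build the ACE, apply it, commit), with the GetAccessControl method in place of Get-Acl; variable names are assumptions:

    process {
        # Get-Acl doesn't return enough information for Set-Acl, so use GetAccessControl instead
        $Acl = (Get-Item -Path $Path).GetAccessControl('Access')

        # Build the new ACE from our five parameters; strings convert to the matching .NET enums
        $Ace = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
            -ArgumentList $Identity, $Right, $InheritanceFlags, $PropagationFlags, $Type

        # Apply the ACE to the ACL and commit the change back to disk
        $Acl.SetAccessRule($Ace)
        Set-Acl -Path $Path -AclObject $Acl
    }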
Finally, we're on to removing ACEs, or access control entries, from an ACL. We're through the complication of modifying ACLs now, and now that the tool is built, you don't have to worry about that stuff, right? I assure you that removing an ACE from an ACL is much easier. I'm going to skip down to the process block here and just go over the meat of this. Removing an ACE is a matter of getting the ACL object, finding the appropriate ACE you want removed, removing the ACE using the RemoveAccessRule method, and, finally, committing the ACL back to the disk. And that's it. To briefly go over this, line 20 is the familiar line you'll see of grabbing the ACL from a file or folder. Next, I'm looking in this ACL for references to the identity, or user or group, that's in an ACE. If I find it, I'm then using the RemoveAccessRule method to remove it, finally sending the output to Out-Null, which will silence anything coming from that method. So, I now have an ACL object without the references I want removed. The only thing left to do is commit that back to disk using Set-Acl, and we're done. So, let's go over a demonstration here of how this works. So, again, I'm going to be using that MyFile.txt file that I used in the last demo. And I'm going to be removing the ACE that I created. So, in the last demo I created this ABertram ACE here. It's set to full control. I just want to remove this completely. Using our tool, we can easily do this. So, let's do this here and remove my ACE. Path is the path to MyFile.txt. Identity is my domain\username. And we don't need anything else. We just have Path and Identity parameters on this one. So, hit Enter, and nothing shows up. And we'll see if it's done. You see it's completely gone. We can easily do this for multiple files too. So, let's say you want to add some files here. We'll add some files over here, and we'll do a quick demonstration. So, let's do a Files folder. We'll go in here, and we'll create a few here. So, let's do 1.txt, 2.txt--actually, we'll just do two. So, we have two files. And let's just change both of these here in the GUI just to add the ACE. I'm going to add ABertram and give him full control of that file. And we'll go to this one, and we'll give him full control of this file as well. So, ABertram has full control of both the 1.txt and the 2.txt files. Let's just remove them. To do that for multiple files, we can use Get-ChildItem with the -Path of the Files folder, so that will enumerate all of the files. And since we're not using pipeline input in this function, we can just use the ForEach-Object commandlet here and then call our function. So, we do Remove-MyAcl, -Path is the pipeline variable, so that's going to reference 1.txt and 2.txt, and -Identity is me again. Run that, and you see that we got error text here. Now, what happened? Why did we get an error? Let's do a little bit of debugging. The reason is that I used just the $_. What I get from Get-ChildItem is not just string output. It's actually a number of objects with properties. So, if I just pipe that to Select-Object *, you'll see I have numerous properties in here. So, what I want is the property that represents the actual file path. And you'll see there that there's a property called FullName, and the value is C:\Files\2.txt in this instance. So, that's the actual property name we need. So, I'll go back here, and I'll use just that property name.
And you'll see the reason that actually failed was because I had parameter validation on here. You see Cannot validate argument on parameter. The reason I got that was line 12 here, the ValidateScript. That validation, again, is a great way to ensure that only valid arguments to those parameters are passed. If we had let that go through, it would have gotten past the parameter block here and failed somewhere in the body. So, the change could have been left halfway in, halfway out, for example. So, it's always a great idea to constrain input with your parameter validation. So, let's go and see if that actually removed ABertram. That one's gone, and that one's gone. So, it removed both of them. So, our tool worked. And, again, you can use this for files and folders and whatever you need. And just like the last demo, if you need to extend this now that we have a framework, you can add in other objects like Active Directory objects, printer objects, files, folders, really whatever has an ACL. You can add all that functionality in here once you have the framework of the tool built. So, managing ACLs is a fairly complex topic. But now that we have a tool built, we can abstract away a lot of that complexity, especially the complexity of that Set-Acl script. So, I can now simply use my tool instead of reinventing the wheel every time.
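For reference, a minimal sketch of the removal logic walked through above; again, the names are assumptions based on the narration:

    process {
        # Grab the ACL, find any ACEs for the identity, remove them, and commit
        $Acl = (Get-Item -Path $Path).GetAccessControl('Access')
        $Acl.Access |
            Where-Object { $_.IdentityReference -eq $Identity } |
            ForEach-Object { $Acl.RemoveAccessRule($_) | Out-Null }  # Out-Null silences the method's output
        Set-Acl -Path $Path -AclObject $Acl
    }

And the multiple-file demo would then look something like Get-ChildItem -Path C:\Files | ForEach-Object { Remove-MyAcl -Path $_.FullName -Identity 'lab\abertram' }.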

  22. Finding and Archiving Unused Files For the third and final task of our tool, I'll be using it to archive old files no longer used by my users. I have a file server that contains all of my users' home folders. I need some additional free space on this server, so I need to move some files off. I've noticed off and on that users seem to treat their home folders as a dumping ground and rarely use the majority of their files. I need this tool to first find all the old files and then create the same folder hierarchy in an archive location so the user can easily find the file if they need access to it again. So, let's get started on the third and final task. The final task our tool is going to do is archive old files. This is a common activity you have to do on a regular basis, and it'd be nice to have a standard tool to do this easily for me. So, let's check out how this tool's built. This tool is designed to take a specific folder path and find all the files in that folder and all subfolders that are a certain age. Once it finds a file, it will then archive it off to another folder while still maintaining the folder structure that the original file was in. Again, we're using a ps1 script here. We're using commonly changed variables, which are defined as parameters. We've got the $FolderPath parameter that will point at the root of our users' home folders, the $Age that the files must be to match what we're calling "old," the $ArchiveFolderPath where we're going to archive the files off to, and, finally, a $Force parameter. This parameter will come into effect if the file already exists at the $ArchiveFolderPath. If Force is used, it'll overwrite the previously archived file. If not, it'll skip it. So, notice the Force parameter is not our usual string data type. It's a switch type. A switch type is used without a parameter argument. Force is a common parameter that overwrites files, and you can see if I use Get-ChildItem, it has that Force parameter there. That's a default commandlet, and many commandlets have that Force parameter. It's the very same concept. A switch parameter does not need an argument and works all by itself. Since this script will be about finding the age of files, we need to define today's date to make our calculations. So, I'll go in here and find today's date. You'll see there on line 30, I am getting today's date. So, notice I'm defining this early rather than in our foreach loop a little farther down here. Get-Date is a command that takes resources to run. Comparing a date variable to a file age is a lot less resource intensive, so it's always a good idea to assign Get-Date to a variable. So, on line 30 there, you see I'm just doing it one time. I'm using Get-Date and then assigning it to the $Today variable, which will be stored in RAM. So, whenever I access that $Today variable, I'm not actually running Get-Date over and over and over again. I'll just be referencing the $Today variable. So, next we're defining old files. Similar to previous demos, I'm using Get-ChildItem and a where block to get files that are older than $Age days. So, on line 32 there, I'm getting everything that is older than the parameter value, which we specified as $Age. So, let's say I want to only find files that are older than three months, for example. $Age would be 90 for 90 days. That would match all files that haven't been written to in 90 days.
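Here's a rough sketch of the pieces just described, assuming the parameter names from the narration:

    param (
        [string]$FolderPath,
        [int]$Age,                  # days since last write before a file counts as "old"
        [string]$ArchiveFolderPath,
        [switch]$Force              # a switch needs no argument; if present, overwrite existing archives
    )

    # Call Get-Date once and reuse the variable instead of running it in every comparison
    $Today = Get-Date

    # Find all files under $FolderPath not written to in $Age days or more
    $OldFiles = Get-ChildItem -Path $FolderPath -Recurse -File |
        Where-Object { $_.LastWriteTime -le $Today.AddDays(-$Age) }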
It's always a good idea to account for as many outcomes in your script as possible. I'm expecting to get at least some old files, but you never know, I may not. So, I need to account for that. On line 33, I'm accounting for the instance where no files are matched at all. If so, I'm using a Write-Verbose line here. Now, Write-Verbose is a new introduction to the course and requires a short explanation. Write-Verbose is a great way to add informational messages throughout your scripts. Write-Verbose outputs yellow or light blue text depending on whether you're in the PowerShell console or here in the ISE. It's a great practice to use lots of Write-Verbose lines, as well as comments, in order to follow along with what your script is doing. So, when I demonstrate this script, I'll show you what Write-Verbose does. Continuing on down the script, if there are some old files, we then need to figure out where to put them. First, we'll define the $DestinationFilePath. This is the path where the individual file will be placed in the archive folder. Next, we're testing to see if the file exists in the archive location already. If it does not, I'm using a neat trick here to create an entire folder structure. It's important I keep track of where these files existed in their original location, so I want to keep the folder hierarchy. So, just going down here on this one, in these files here we have C:\Files with 1.txt and 2.txt. Whenever I archive these off, I don't just want to throw 1.txt and 2.txt into a big folder with a bunch of other files. I want to know that they came from the Files directory, so I want to create this Files folder in the archive ahead of time as well. So, to keep that folder hierarchy, I can use New-Item, which inherently creates this structure automatically. Now, it may not seem obvious because I'm using New-Item -ItemType File, so I'm actually creating a file. If you're used to the Linux or Unix touch command, it's doing the same thing. It creates a file, but the side benefit of this is it also creates the entire folder structure without going through some complicated method. So, it just creates a 0-byte file and keeps that whole folder structure. It's a pretty cool little technique. So, going down here on line 45, I'm checking if the file exists, and if Force is not present, we'll just go ahead and skip it. So, notice the continue keyword there. That's new. So, look up at line 36, and we'll see that we're still in the foreach loop. While inside of this loop, if continue is used, it will stop the current iteration of the loop and start the next. So, in this instance, it'll skip the actual archiving of the file and begin checking the next one. So, the code is going to go down here into this foreach loop, and let's say the file is 1.txt, for example. If 1.txt is in that archive location and I did not use the Force parameter, it's going to write out a verbose statement saying the file 1.txt already exists in the archive location and will not be overwritten. And it will continue, which means it will stop that current iteration, move on to the next file, and keep going on like that. And then, finally, on line 41 there, I'm doing the actual move to move the file from the home folder to the archive location. So, let's go ahead and give this a shot. So, we'll run Archive-File, -FolderPath. What folder do I want to archive?
Let's see, well, I'm on my workstation, and I want to access my company's file server, so I'm going to use a test machine here. And the folder is UsersHomeFolder. So, this is going to contain all of the files that all my users have. Our next parameter is -Age. How old do we want the files to be before they're archived? Since this is just a demonstration, I'm just going to do 1 day to get the majority of them. -ArchiveFolderPath--let's create one here on my local machine. So, let's say I have an archive folder of ArchivedStuff. So, we have that folder, and let's say we want to move everything to ArchivedStuff. So, we have that. Force? I'm not going to use Force, but I am going to use -Verbose. Since we have those Write-Verbose lines in there, I'll be able to monitor how the script is executing. Run this, and let's see if it works. Remember I said the verbose lines are light blue in the ISE and yellow in the console. In this instance, they're light blue. So, let's go down here and see what happened. The first verbose line says The file, the file that it found, so it did find a localhost.mof file in that UsersHomeFolder directory, is older than 1 days. And that came from the Write-Verbose line on line 39. And it'll say it will be moved to C:\ArchivedStuff. And you'll see that the path is actually labdc.lab.local\c$; that is actually creating that folder structure as I've told you. We'll see that in just a minute here. And the next verbose line says the destination path C:\ blah, blah, blah does not exist. Archiving--and I have a little typo there, but you get the point. Copying--the next one is the one that actually starts the copy. So, it's copying from that UsersHomeFolder to the ArchivedStuff. And let's see what this looks like. You can see I have the ArchivedStuff folder, and you notice that the localhost.mof is not just dumped in here. This created a folder structure. So, from this, I know that it came off of this server. It was on the C:, so this is the C$ admin share. It was in the UsersHomeFolder directory, and there it is. This is a great way not only to get files off of the machines that you need some space on but also to track where they came from. There're a few different ways to do this. You could maybe write a text file alongside them, saying this came from this folder and such and such. But I find it's a lot easier to just create that same folder hierarchy so that you know exactly where all your files are coming from. So, I challenge you to add to this piece of the tool. What additional functionality do you think you could add to make it better? Perhaps a way to check not only the LastWriteTime but maybe also the LastAccessTime, or maybe give it the ability to check on multiple servers. You've learned everything you need in previous modules to make this happen. So, what are you waiting for? There's a lot of opportunity. All these scripts that I'm telling you about, it's PowerShell. It's not compiled code. You can easily change all of this stuff. So, I challenge you to improve upon it and add to the tools. It's the only way you're really going to learn.
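To tie together the New-Item trick and the continue keyword from this clip, here's a hedged sketch of what the archive loop might look like; the path math and variable names are assumptions, not the course's exact code:

    foreach ($File in $OldFiles) {
        # Mirror the original folder hierarchy under the archive location
        $DestinationFilePath = $File.FullName -replace [regex]::Escape($FolderPath), $ArchiveFolderPath

        if (Test-Path -Path $DestinationFilePath) {
            if (-not $Force) {
                Write-Verbose "The file $($File.Name) already exists in the archive location and will not be overwritten"
                continue  # stop this iteration and move on to the next file
            }
        } else {
            # New-Item -ItemType File -Force creates the whole folder structure as a side benefit (a 0-byte file)
            New-Item -Path $DestinationFilePath -ItemType File -Force | Out-Null
        }

        Move-Item -Path $File.FullName -Destination $DestinationFilePath -Force
    }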

  23. Creating a Cohesive Tool This tool solved three problems for me--finding files and folders, managing file system ACLs, and archiving old files. In this third and final toolset demo, I'll be taking the scripts that were created and moving the code into functions. Breaking the code down into functions, again, allows me to easily test my code and to accept and output objects from function to function rather than having to worry about how I'm going to send output to script 1, script 2, and so on. So, we're done building the file and folder management automator tool. So, what's next? Again, like all of our other tools, I took all of these scripts that we built and made them into functions in one single ps1 script. As you can see here, I have five functions. Each of these functions represents one of the scripts that we built earlier. So, I have the Archive-File function. Again, the code is exactly the same as what we had before. So, each of these functions is simply the exact same code as before, just like the previous modules. So, with the functions being built, we can then call them like we have in the previous modules. So, let's go ahead and go over an example of using this tool. So, over here I have a script that I just came up with that may be a typical example of how you would use this tool. This script is an "advanced" script because I'm using that CmdletBinding keyword again. And I chose to have four parameters for my script--the $RootHomeFolder, $Age, $ArchiveFolderPath, and $FullPermissionGroup parameters. Now, these are parameters that are going to be passed to our functions. So, for example, the $RootHomeFolder is the root home folder of the files that are going to be archived. The whole point of this script is to point our tool at a root home folder. It will first archive all of the files that are older than the specified age. Once it archives the files, it will then add an ACE for an administrative group to the ACLs of all the archived files. In an enterprise, there're times when there may be multiple files or folders that don't have the appropriate permissions, where not even administrators can get to these files. This is a typical real-world example of using our tool. So, we have the $RootHomeFolder, which is the home folder that has all of our users in it; the $Age, which is the age that a file must be before we want to archive it; the $ArchiveFolderPath, which is where we want to put the files; and the $FullPermissionGroup, which is the group that we want to give full control over all the files. So, this could be the domain admins group, an administrators group, really any group you want. This is the group that will be set as full control over all of our archived files. Next, I'm doing dot sourcing again, so I'm dot sourcing our file and folder automator tool. This is the script where I converted all of those scripts into functions, so we have all of our functions here. And in our example, I'm only going to be using two of them, Set-MyAcl and Archive-File. But you can easily use any of the others the same way. So, let's start in there on line 10. That's where I'm going to get an archive file count. I chose to see how many files are actually being archived when that actually happens. You don't have to, but it's just an example of the extensibility of our tool. You can do anything you want here. There may be other, better ways to do that.
I challenge you to look into that, but for now I'm just going to get the count before and the count after, and then I'll show you how we're going to report on that afterwards. So, on line 11 there, I'm using Write-Verbose, which is a great way to write out your progress as you move through the script. I'll be getting the $ArchiveFileCountBefore, so I'm getting how many files are in that archive folder before anything happens. Then starting on line 16 there, I say I'm beginning the archival process. That's when I actually call the Archive-File function. And then I'm passing the -FolderPath of the $RootHomeFolder as we have up here, the $Age, the $ArchiveFolderPath, and, finally, -Verbose. In this instance, I'm using verbose on the Archive-File function. So, let's go over here and just see what's going on. So, there's the Archive-File function. And since I used verbose here on the Archive-File function, I'll be able to get a glimpse into the goings-on of what's happening in this function. So, I'll see the progress. There're many Write-Verbose lines here. These will all be output in our final example. And scrolling down here a little bit, after I perform the Archive-File process, I take a count of all the files afterwards. And then on line 25 there, I'm reporting how many files have actually been archived. Now, there may be a better way to do this. I specifically left that out on purpose. Whenever you're iterating through many files like I am here, so on line 17 in the Archive-File example, I'm actually reading every one of those files as it goes through. The more efficient way to do this would be to count those files as I hit them. So, I go through Suzy's home folder. Okay, there's a file. There's a file in Georgia's home folder. There's a file in Bob's home folder, that sort of thing. This approach is a little bit inefficient because I'm getting a count before doing the archive and then doing it after. So, I'm intentionally hitting the files three different times. I'm getting all the files before, so that's touching every file. I'm doing the archive process; that's touching every file. I'm getting the files after the archive process; that's touching every file. This works, but the performance is not very good. I challenge you to remove the before and after counts and try to figure out a way in the Archive-File function to perhaps output the count, or perhaps output the files that you're actually archiving. That way we could do a count on the output. For now, as this is coded, there is no output at all, so we cannot do that. If you were to output the files while you're archiving them, you could do a count, and that way we wouldn't have to hit the files three times. But I intentionally left that out to give you a small challenge there. Then, finally, on line 28, I say we're beginning the ACL change process. This is where we're actually going through and modifying the ACL if it's necessary. So, I'm using, again, my Set-MyAcl function. So, see, I'm using this function here, and it's that same script that we used earlier in our Set-Acl example. And, again, I'm using the identity here. I have the $FullPermissionGroup. I could have put administrators in here statically, but it's never a good idea to do that.
So, if you ever think you're going to change something, you always put it up in the parameters here. So, that's what I've done. So, that's about it. Let's see if we can get this to work. So, we'll run our ToolSetExample. Our RootHomeFolder--by default, I'm using labdc.lab.local\c$\usershomefolder; that's fine, so we're not going to set this parameter. So, what other parameters do we have? The $Age--I want to do older than 4 days. $ArchiveFolderPath--on line 5 there, we already have a default, so we don't need to set that. And $FullPermissionGroup--we already have a default of administrators, so we don't have to set that. Before we run this, we need to use the verbose switch to get our verbose output and see what's going on. So, let's run this and see what happens. It looks like it archived a total of 0 files. So, what happened with that? Let's go back up here and look. Because of our extensive Write-Verbose lines, we can easily look at that. So, let's see, we have a ps1 there older than 4 days. It will be moved. Let's see. Now, you can see there're a lot of instances here of already exists in the archive location and will not be overwritten, not be overwritten, not be overwritten. So, it looks like I've already done a previous archival process. And because I had that logic in there in the Archive-File function, it actually didn't do any of the copies, so it went by very quickly. Let's just delete this, and we'll do this again. So, let me prep the home folder here. So, let's just delete this; I have a copy of exactly what these files are in this other directory. And I'll just check this. So, this is my UsersHomeFolder. I have Adam--you know, just some files and folders in here randomly. There's one without a folder. There's one with a folder with nothing in there. So, just your typical UsersHomeFolder. So, let's try that again now that I know I have nothing in ArchivedStuff. Let's see what happens now. Notice how it's taking longer now. Now, it's actually going through the process. It's first checking to see if a file exists in the ArchiveFolderPath. And if it doesn't, then it's just going to go ahead and put it in there. You'll see that it's actually finding quite a few files now and going through that entire process. So, let's wait a little bit until it gets done, and I'll come back whenever it's fully complete. Okay, it's done, so let's check on the verbose output and see what happened. Looks like it archived a total of 30 files that time. And just going up through here to make sure we didn't have any errors, it looks like it actually went through fine without any errors. You'll see that I put in these Write-Verbose lines, which I highly recommend. Pepper your scripts with Write-Verbose. The more you add, the more informational the output can be. It went through the process: first, the archive file count before the archival process--that's when I was actually doing the file count--Done. Then I began the Archive-File process, and these Write-Verbose lines here actually came from the output of the Archive-File function. It went through and did lots of checking and did the archival process. And when it was done, it did the count afterwards so I could get that count--Done, Archived a total of 30 files. So, it gives me a nice informational output of how many files I actually archived.
And then, finally, we did the ACL change process of setting the local administrators of my machine to full control. So, let's see what's in here. It looks like it did create the folder structure--C$, usershomefolder--and it created everything I needed. So, it archived everything that was necessary. Now, I hope when I go to an ACL of one of these that local administrators is there. Yup, it is. See how it's there, and it was not there before. So, that did work. So, what I did here was leverage the tool in a specific example, a real-world example. This is an example that I have run into multiple times. This is just a single tiny example of what you could do with these toolmaking concepts that you're learning here. By leveraging a tool, you can easily make multiple scripts that do multiple different things. So, toolmaking is a very good skill to learn, as I've shown you in this example.
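Putting the whole example together, here's a hedged sketch of what a wrapper script like ToolSetExample might reduce to. The file paths, defaults, and the dot-sourced file name are placeholders, not the course's exact values:

    [CmdletBinding()]
    param (
        [string]$RootHomeFolder = '\\labdc.lab.local\c$\UsersHomeFolder',
        [int]$Age = 4,
        [string]$ArchiveFolderPath = 'C:\ArchivedStuff',
        [string]$FullPermissionGroup = 'Administrators'
    )

    # Dot source the tool so its functions are available in this script
    . 'C:\FileFolderTool.ps1'

    Write-Verbose 'Getting the archive file count before the archival process...'
    $ArchiveFileCountBefore = (Get-ChildItem -Path $ArchiveFolderPath -Recurse -File).Count

    Write-Verbose 'Beginning the archival process...'
    Archive-File -FolderPath $RootHomeFolder -Age $Age -ArchiveFolderPath $ArchiveFolderPath -Verbose

    $ArchiveFileCountAfter = (Get-ChildItem -Path $ArchiveFolderPath -Recurse -File).Count
    Write-Verbose "Archived a total of $($ArchiveFileCountAfter - $ArchiveFileCountBefore) files"

    Write-Verbose 'Beginning the ACL change process...'
    Get-ChildItem -Path $ArchiveFolderPath -Recurse -File | ForEach-Object {
        Set-MyAcl -Path $_.FullName -Identity $FullPermissionGroup -Right 'FullControl' `
            -InheritanceFlags 'None' -PropagationFlags 'None' -Type 'Allow'
    }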

  24. Summary In this module, the most important takeaway you could have is to see the kind of power you have around managing files and folders. This was just a sample of what you can do. I encourage you to take what you have learned here and add on more functionality to automate all the common file management tasks you run into. Second, learn to use the Get-ChildItem commandlet's Filter parameter a lot. With the Filter parameter, PowerShell is leveraging the file system provider's filtering capabilities, and it's much quicker than using a Where-Object filter, for example. There will be times when you're forced to use the Where-Object commandlet to do more advanced filtering, but that's okay. It will still work out; it'll just be a little bit slower. In this module, we built the file and folder automator tool. This tool consisted of three sections--finding files and folders, managing ACLs, and, finally, archiving old files. We originally built these sections as scripts. But going along with our theme, we then converted them into functions, which will allow us to eventually move these into a PowerShell module in the last module of this course.
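As a quick illustration of that Filter-versus-Where-Object point (the paths and extensions here are just examples):

    # Faster: the file system provider does the filtering before objects reach PowerShell
    Get-ChildItem -Path C:\ -Filter '*.log' -Recurse

    # Slower but more flexible: every object comes back, then Where-Object filters
    Get-ChildItem -Path C:\ -Recurse | Where-Object { $_.Extension -eq '.log' -and $_.Length -gt 1MB }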

  25. Debugging Techniques Debugging Techniques Hello! This is Adam Bertram, and this is module 6 of my PowerShell Toolmaking Fundamentals course, Debugging Techniques. In this module, we won't necessarily be creating tools like we have in the last few. The more you write your own tools, the more code is created. Thus, the more opportunities for your scripts to break. You'll find that scripts can break in any number of ways, and debugging skills are critical in figuring out what went wrong. In this module, we'll go from a situation that's easiest to debug all the way to a hard one. By the end of this module, you should have the troubleshooting skills to track down lots of common problems.

  26. Common Causes of Problems What is the most common source of errors in a script? Is it using the wrong object, the wrong function, or perhaps the wrong PowerShell version? Nope, it's plain old typos and fat fingers. You know what I mean. You have in your head you're going to type x, but you type y instead. And you could swear you typed x until PowerShell tells you otherwise and your script bombs out. Typos are a fact of life that aren't going to stop, but it is my intention to show you exactly how to quickly pinpoint them and get your script running like a top again. I'm here now in the ISE, and I just want to find the serial number of my local machine. That's it. It's been a long day, I'm tired, and I just want to get done for the day. So, I just type up something like what you see here on line 2. Let's get the serial number so we can just get this demo over already, right? So, I'm just going to highlight this and run it. Red text, awesome! What's the first thing you do whenever you see this error text? Do you actually read the error, or do you just try again out of despair, thinking you did something stupid? If you're like me, you'll probably just try it again if it's something small. So, let's just do it again and see what happens. Red error again. Now, I'm actually going to have to do some reading. Looking at the first error text, I can see that the term imInstance wasn't found. I don't recognize that command, but that does look an awful lot like some text from the Get-CimInstance commandlet that I'm using. Glancing up at the line here, it looks like I actually didn't highlight the entire thing. So, it looks like I may have just selected, you know, like that. That might do it. So, let me just be a little more diligent here and make sure I highlight the entire thing. Red text again. What now? I made sure I highlighted the whole thing that time. First, Get-CimInstance is throwing the error, so I actually did highlight the whole thing. You can see I have Get-CimInstance down here. It actually did recognize it, so that's good. That's not my problem anymore. The error is now saying it can't find some parameter named KeeyOnly. Well, first, what's a parameter? Oh, it's those words prepended by a dash to use with the commandlet. Okay, so I know what a parameter is. So, let's look at all the parameters I'm using up here. Scanning across them, I see that KeeyOnly it was talking about, but it looks like it didn't like that one. So, let's drop down to the Help screen to see what it needs. So, run Get-Help on Get-CimInstance, and we'll just do -Full. Looks like there's quite a bit of stuff. It doesn't like KeeyOnly, so I'm going up here looking through all these--okay, look. There's one that looks like it, and it's just KeyOnly; I put in an extra 'e' here. So, take that out. Let's save our file again. And let's highlight this whole line again. That looks much, much better. There we go. There is my serial number. That's all it was. Just had a typo. Okay, let's just go down here and try another example. So, in this example, I need to find all files in the root of C:. I've created a hash table called $Parameters where I'll use splatting to feed it to the Get-ChildItem commandlet. Then I'll just run this here. It should be really quick. I see something, but I saw some red text. The term * is not recognized as a commandlet, function, script file, or operable program, it says here. Why is this trying to execute this *?
I have no idea where to start here. So, I'll just go up to my script and do a search on an asterisk. Maybe I can find it that way. So, it's there. Let's search it. Oh, cannot find an asterisk. Well, maybe I have to search up. Maybe I'm below it. Okay, there it is. It found an asterisk. Alright, great, so we found one. Maybe that is the one it's talking about. So, looking at my error text again, it says line 2, character 13. Well, I did just highlight that code, and I guess technically line 2 is line 2 of the highlighted code. So, I did highlight it, and line 2 of the highlighted code is that filter thing there. And putting my cursor on it, you can see that it is character 13; you'll see down there at the bottom, line 8 and column 13 in the lower right-hand corner of the ISE, which is line 2, character 13 of the highlighted code. So, it looks like we got it. I don't want PowerShell to try to execute this. So, let's just put some quotes around it and run it again and see what happens now. No red error, and I got all the files in here. I realize this is a simple example, but it demonstrates the simple debugging techniques that you'll use. Debugging is really an art. There is really no set-in-stone way to do it. It's just as simple as recognizing patterns, connecting the dots here and there, and really using the knowledge that you already have to figure out a problem. And it's all about following that error text, using Help--PowerShell has a great built-in Help system--and, really, all the other resources at your disposal to connect the dots. The more experienced you become, the easier it will get. Sometimes a script or command won't give any output at all. Here's an example: I'm using Get-ChildItem here, and I just want to get all files and directories in the root of C:. I'm setting the -Path and the parameters -Directory and -File because I want to get all files and all directories. Sounds easy enough. Let's run it and see what happens. I got nothing. Definitely not what I expected. There have to be some files or folders in the root of C:, right? Not even an error message. I don't know what's going on. It turns out some PowerShell commands, although run successfully, don't output anything at all. It can be frustrating if that's not what you're expecting. The very first thing you do when debugging is to make your command as simple as possible. In this instance, it's already pretty simple, right? I'm just using a single line, but it can be even simpler. So, I'm using three different parameters here, -Path, -File, and -Directory. I know I'm going to have to specify the -Path. There's no way around that. But what about the -Directory and -File parameters? Let's remove one and try this again. Let's see what we got here. I got only directories, no files. Perhaps the -Directory parameter means to only get directories. I guess that makes sense. Let's make it even simpler. Let's remove this. Run it again and, yes, it looks like I got both directories and files this time. So, it looks like the -Directory and -File parameters just negate each other. Let me verify my assumption here by checking Help. So, we'll go here, and we'll get the help for Get-ChildItem. And I'll use the -ShowWindow switch here. This is a pretty cool switch to use, and it'll bring up your help in a separate window. So, I'm looking down here, and let's see what the -Directory one does. It says Gets directories (folders). To get only directories, use the Directory parameter and omit the File parameter.
Let me verify that assumption by checking Help. We'll get the help for Get-ChildItem, and this time I'll use the -ShowWindow switch, a pretty cool switch that brings the help up in a separate window. Looking down at the -Directory parameter, it says Gets directories (folders). To get only directories, use the Directory parameter and omit the File parameter. On the -File parameter, it says To get only files, use the File parameter, and omit the Directory parameter. Yup, sure enough: I can only use one or the other, because each one excludes the other. I'll remember next time: if I want everything, I won't use -File or -Directory at all, and if I just want files, I'll use -File by itself. Case closed. Now for the second scenario. I'd like to query the WMI class Win32_OperatingSystem; however, I'm thinking I might want to query other classes later on, so I'm going to use variables to make it more dynamic. I have a badly named variable, $a, holding the WMI prefix Win32. I'm then defining a string separator that's going to separate the Win32 prefix from the rest of the class name. Then I'm concatenating those variables together to build the full class name in a variable called $MyClass. Finally, I'm using Get-CimInstance to make the WMI query. So, let's clear the screen, run it, and see what happens. Great, more red text. Invalid class. Wonder what that means. I must have tried to use a class name that doesn't exist; that's my guess, anyway. It doesn't tell me what the class name was, so I'm going to have to figure out what the variable contained when it was passed to Get-CimInstance. Enter Write-Host. Let's put a Write-Host line here that outputs $MyClass and save the file. I don't actually want to run Get-CimInstance right now, so let's stop the script early with the return keyword. Return just halts the script right there; it doesn't go any further. It's another quick and dirty way to do some debugging. Let's run this again. Other than the output the ISE put in here, it looks like I have Win32 followed by what looks like two underscores: Win32__OperatingSystem. I know that's not right; I'm pretty sure the separator should be a single underscore. Looking up here, I can see the separator value is indeed two underscores. That was probably it. So, let's fix that and run the whole thing again, still skipping the WMI query, and see what the variable is now. Okay, there it is: Win32_OperatingSystem. So, I had the concatenation right all along; I just had an extra underscore in the separator. Now I'll back out of this and remove the return and the Write-Host, since I don't need them anymore. I actually want to do the WMI query. Let's run this again. And there we go. It works. You can see the strategy here: first I simplified, and as part of that simplification, I output the variable's value as soon as it was created, and I used a return statement to stop the script right there. There might be one line underneath that point or a thousand; if I know the script is going to bomb out in multiple places, I don't want to sit through the entire run. Stopping it right at the point of interest is a great way to avoid that.
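Here is a minimal reconstruction of that debugging pass; the variable names come from the narration, and the rest is assumed:

    $a = 'Win32'
    $separator = '_'                       # the bug was a two-underscore separator: '__'
    $MyClass = $a + $separator + 'OperatingSystem'

    Write-Host $MyClass                    # temporary: show what the variable holds
    return                                 # temporary: halt before the query runs

    Get-CimInstance -ClassName $MyClass

Once the value checks out, delete the two temporary lines and the query runs for real.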

  27. The Simplest Solution Is the Best The most important skill you can master when debugging is simplification. Removing complexity from a script will not only help you debug the problem faster but will also keep your script more efficient. The first tactic you should use when debugging a script that's more than one line long is simplification: removing any places where the script could go wrong and narrowing it down to just the snippet that's causing the problem. In the upcoming demo, I've run into a problem. I'm trying to create home folders for a bunch of new employees, and the folder names are wrong. My script is doing three things: it's reading a CSV file full of employee names, it's creating the username based on the first and last name of the employee, and it's creating a folder with the name of the username. The script is getting some bad data somewhere, and I'm getting some strange-looking folder names. I need to figure out what's going on by breaking the script down and finding where those names are coming from. This is a small script I use to create user home folders for Active Directory. It takes input from a CSV file full of names and is supposed to create home folders based on each user's first initial and last name. But something's not right. As you can see here, I have a folder called UsersHome, and it has nothing in it right now. I also have a file called Employees.csv, which has five employees in it. Let's run the script and see what happens. It looks like it created some folders. Let's go over here. And, well, it did create some folders, but they don't look quite right. My own account is in here; that's my last name, but that's not my first initial. Let's delete these, because they aren't right, and investigate a little more. First, I'll check my CSV file to make sure nothing's wrong in there. Okay, I have a FirstName and a LastName column. That's exactly what I expect, and all the names look right. The folders should have been named ABertram, GSmith, and BJones, but they weren't for some reason. So, how is the first initial off? Going back to the script, let's try to figure this out. I know I'm creating five folders for five employees with a foreach loop. Debugging code when loops are involved can get overly complicated, so I need to simplify this as much as possible. I just want a single iteration here, so just a single employee. On line 2 here, I'm grabbing the entire contents of the CSV. Let's simplify that by getting just a single row. To do that, I'm going to use the Select-Object commandlet: I'll pipe to Select-Object and just do -First 1. What this says now is import the CSV rows but only output the first one and skip all the rest. Seeing all those other folders is just going to confuse me right now. So, let's run the script again and see what happens. Okay, we only have one folder now, and that's perfect. That folder is oJones. Looking at the CSV again, the very first row is for Bob Jones. It should have been BJones, not oJones. So, let's investigate a little further and continue our simplification process. On line 4, I'm creating a username, and then I'm going ahead and creating the folder. Since the folder is being created but just named wrong, I'm betting the username is being built wrong somehow. For now, I don't need to actually create the folder at all, so I'll just go ahead and comment that out.
So, I commented that out; I don't need it anymore. Let's just output what the username variable is and run this again. This time it didn't go through the folder creation at all; I'm just seeing what my variable is, and it looks like it's oJones. So, the problem is in the username variable creation piece. On line 4, I'm actually creating that variable, so it has to be in there somewhere. I'm using both the first name and the last name of the employee, but I'm using a Substring method on the first name to get only the first initial. So, let's look into how this Substring method works. Using the ISE's IntelliSense feature, I can usually see what arguments a method expects, so let's break this apart. We'll take the employee's FirstName by itself and look at its Substring method. It looks like there's a start index and a length. The start index is the character position in the string to start at, and the length is how many characters I want to pull out. So, I'm getting an o with my current code. Where could that be coming from? Well, there are two o's in Bob Jones, one in the first name and one in the last name. Since I'm only working with the first name here, the o must be coming from the second letter of his first name. I actually remember now, from a computer science class years ago, that indexes typically start at 0 rather than 1. I bet an index of 1 actually means the second character of his first name. That would match up, because if I look over here in the spreadsheet, the second character of his first name is an o. So, let me change that to Substring(0,1), and we'll run this and see what happens. Okay, now it's a B. That's perfect; that's exactly what I want. So, let's backtrack a little and put that fix into the username line, changing the 1 to a 0, since we just found out 0 means the first character. Let's try this. Great, it's BJones now. Now we back everything out again: remove the temporary output, since we know the username is BJones without changing anything else, and uncomment the folder creation so it actually runs. It looks like it did it right this time. We still have the old oJones folder, which we can remove, but BJones is there. It worked great. So, again, simplification is crucial to narrowing down bugs. Take what you have, break it down as simply as you need, test until it succeeds, then slowly back everything out again until your entire script works.
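Pulling it all together, the repaired script looks something like this; the CSV columns and the Substring fix come from the demo, while the paths and the New-Item call are my assumptions:

    # One object per CSV row, with FirstName and LastName properties
    $employees = Import-Csv -Path 'C:\Employees.csv'
    # While debugging, narrow the input to a single row:
    # $employees = Import-Csv -Path 'C:\Employees.csv' | Select-Object -First 1
    foreach ($employee in $employees) {
        # Substring(0,1): start at index 0 (the first character) and take 1 character
        $username = '{0}{1}' -f $employee.FirstName.Substring(0,1), $employee.LastName
        New-Item -Path "C:\UsersHome\$username" -ItemType Directory
    }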

  28. Introducing Breakpoints A typical script's runtime is probably only a few seconds. A script can run faster than you think, which poses a problem: how are we supposed to figure out what's going on if it fails that fast? We need a way to pause a script midstream and just take a look around. Traditionally, halting code execution with breakpoints has been a common practice among software developers, but not so much for scripters. Scripters tend to be more lackadaisical with their code; we typically aren't fooling around with 10,000+ lines of code in a script, which is how we get away with it. However, the bigger the script, the more methodical you must become, and halting your script midstream with a breakpoint is an important skill to learn. A breakpoint is an instruction that tells PowerShell to halt the script it's currently executing. It's a way to tell PowerShell, hey, wait up for a second, while the script is running. A breakpoint is also a great way to test a script before you let it run for real, so you can be sure it runs successfully and doesn't cause any major harm. There are two ways to create breakpoints in PowerShell: by using the Set-PSBreakpoint commandlet or by using Write-Debug. Each has its own advantages and disadvantages, so let's go over each method and discuss some situations where one or the other might be needed. Let's start with the Write-Debug commandlet. If you're not familiar with Write-Debug, it's a simple way to stop execution of a script at a certain point. In this example, I have a very simple script. I'm setting a variable on line 1. I have an informational message that says I'm prepping to do something. On line 3 there, I have the Write-Debug commandlet commented out. On line 4, I'm doing something bad; pretend that's 5 lines of code, 10 lines, 20 lines, whatever. This script is doing something bad. If I simply run this, it says it's going to blow up the server, and then it does; it proceeds to blow up the server. Well, let's say I want a confirmation first, just to make sure that what I believe is set correctly really is, so I don't blow up the server. So, let's uncomment that Write-Debug line and run it again. I don't understand. The Write-Debug should have set a breakpoint. It should have asked me for a confirmation, but it went and blew up the server anyway. It didn't do anything at all. Remember from the previous demo that advanced functions and scripts need to have that CmdletBinding keyword; that's what makes them "advanced." They also have to have a param block, even if there are no parameters at all. So, I went ahead and added those things, and now this script is technically "advanced," which gives it a lot of built-in features that a typical script or function wouldn't have. So, let's run this again. Exact same thing! What's going on? I have my CmdletBinding here with my param block, and still nothing's happening. It made no difference at all. Well, okay, maybe it was something about the way I was executing it in the ISE, so let's just run it a different way. Same exact thing. Through PowerShell's IntelliSense feature, though, I can see the script now has a whole set of parameters built in. I didn't specify any parameters, but it automatically gave me these; they're the common parameters that come as defaults when you make a script or a function advanced. One of them is -Debug, and it's what actually activates Write-Debug for a given run. So, let's use -Debug.
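For reference, the demo script at this point looks roughly like this; the variable value and the message wording are assumptions beyond what's narrated:

    [CmdletBinding()]
    param ()

    $AwesomeSauce = 'gimmesome'
    Write-Host 'Prepping to do something...'
    Write-Debug -Message 'Something bad is about to happen'
    Write-Host 'Blowing up the server!'

Run plain, the Write-Debug line stays silent; run with -Debug, it prints its message and prompts Continue? before going on.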
Now, it looks like it worked. DEBUG: Something bad is about to happen. And now it's prompting me: Continue? It's giving me the option. I'm feeling lucky, so Yes. It blew up the server again, but this time it asked me first. Run it again, answer Yes again, and yup, it blew up the server again. Now, let's say you expect the value of $AwesomeSauce there to be a certain value. Let's put the variable right into the debug message and run it again with the -Debug parameter. You can see it now says gimmesome, so the message shows the value of the variable at that moment. Let's stop this, and say I need the value to be gimmesome, and if it's not, it'll blow up the server. We change the variable to something else and run it, and now you see the message shows a value that might blow up the server. Well, that's not what I was expecting. So, I can choose Halt Command, and it completely skips line 6 and everything underneath it, saying the running command was stopped because the user selected the stop option. That's a great way to set failsafes in your script. Using Write-Debug is a very good way to set these breakpoints inside your script, and I highly recommend putting any variables you expect to hold something at that point, or any other useful information, into the message. That way, they're expanded and you can see exactly what each variable is ahead of time. So, Write-Debug is a great, simple way of setting these failsafes. The next way is to use Set-PSBreakpoint. This takes breakpoints to a different level. With Set-PSBreakpoint, the breakpoint is registered in memory with PowerShell, so we don't have to put anything in the script at all. So, let's take the Write-Debug line out and, just for demonstration purposes, put another variable in here. This script is saved at C:\SampleBreakpointScript.ps1. Let's say I want to create a breakpoint at the same spot where I had Write-Debug before, but I don't want to put all that junk in the script itself. I can set breakpoints outside the script using Set-PSBreakpoint. In this example, I'm saying Set-PSBreakpoint in the script SampleBreakpointScript.ps1 in the C: root, at line 3. Let's see what that does. We'll run that, and it loads the breakpoint into memory. You can see in the ISE here that we have a breakpoint; whenever we use the ISE to set breakpoints like this, it highlights the line to tell us a breakpoint is there. When you run this script, that breakpoint is going to hit. So, let's run it. And notice I didn't use -Debug; that's another nice feature of setting breakpoints this way. I don't have to remember to pass -Debug. What it did was execute the script and stop it at line 3, just like I wanted: Hit Line breakpoint. Now, I'm in a special debugging mode. So, what can I do in this mode? I can hit the question mark, and there are quite a lot of things I can do here. I can look around and figure out what's going on. I'm not going to go over all of this, but let's play around a little and see what happens. The list command gives us a listing of the script itself, with a little asterisk marking line 3, the line we're actually stopped at. So, we have an interactive console here where we can get all sorts of information. We're stopped at line 3, and we can do a lot of different things: step into lines, step over functions, and plenty more that's not really in the scope of this module.
But it just gives you a great way to stop the script and figure out what's going on. Let's say I don't want to let this happen at all. Well, you can just type quit, and it skips everything else. PSBreakpoints are also very cool in that you can break on more than just lines. First, let's remove the breakpoints loaded in memory by piping Get-PSBreakpoint to Remove-PSBreakpoint. That removes all the breakpoints, and you can see it removed the highlighting in the ISE as well. Now, let's say I want to break on whatever is happening with that $AwesomeSauce variable. I can just do Set-PSBreakpoint again, but this time on the variable name AwesomeSauce. Same idea, and it worked out fine. Notice nothing was highlighted this time; that's because the breakpoint isn't tied to a line, so the ISE can only apply the highlight at runtime. But if I execute the script again, you can see it hit a breakpoint, with a different, yellow highlight, and it broke right where the $AwesomeSauce variable is defined. From here I can look around or, if this isn't what I wanted to happen, quit stops the whole thing. There are a lot of different things you can do with breakpoints; these are just a couple of examples. But if you want to get into advanced debugging, Set-PSBreakpoint will be a very, very useful tool.
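Condensed, the Set-PSBreakpoint variations from this demo are:

    # Break when line 3 of the script is about to execute
    Set-PSBreakpoint -Script 'C:\SampleBreakpointScript.ps1' -Line 3

    # Break whenever the AwesomeSauce variable is written (note: no $ on the name)
    Set-PSBreakpoint -Script 'C:\SampleBreakpointScript.ps1' -Variable AwesomeSauce

    # List every breakpoint loaded in memory and remove them all
    Get-PSBreakpoint | Remove-PSBreakpoint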

  29. Summary This module's most important takeaway is to simplify your code when debugging. I cannot stress that enough. Simplify it in any way you want, just as long as it's as simple as it possibly can be. Remove as many variables as possible from the equation to get down to the root of the problem. Next, use breakpoints frequently, not only for stopping the script before something bad happens but also for outputting critical pieces of information to the console, to ensure a variable contains what you think it contains. In this module, I discussed debugging techniques. We went over some common ways you can pick out typos and fat-finger incidents, discussed the importance of removing complexity from scripts when debugging, and finally went over some basic debugging tools that come with PowerShell out of the box, like breakpoints. Debugging is a learned skill; the more you do it, the better you get at it. Take what you learned here in this module, and I challenge you to build upon it.

  30. Use Modules to Create a Tool Belt Introduction to Modules Hello! This is Adam Bertram, and this is module 7, the final module of PowerShell Toolmaking Fundamentals. In this module, we're going to cover PowerShell modules. Once we get acquainted with the module concept, we're going to create three different modules to represent the work we did in modules 3 through 5 of the course. However, if you didn't go through those modules, you will still learn what modules are, how to create them, and how to use them. In simple terms, a module is just a grouping of like functions contained in a single file; it's a means of organizing like functions. So, what do I mean by like functions? A module is typically created around a single application or theme. In our examples, we're going to create modules based on the tools we created. Other popular modules are ActiveDirectory, DnsServer, SQL, etc. When you use PowerShell with a vendor's product, chances are you're using that vendor's module as well. Modules aren't only good for organizing code; they're also created to easily share code with others. You know that vendor example I just talked about? All the vendor has to do now is give you a file or two. You import it into your PowerShell session, and you're good to go. No more installing and configuring software, although some vendors still ship complicated scripts that require some intervention; it's just not technically necessary. There are four types of modules: script, binary, manifest, and dynamic, although you could argue that a script module can also be a manifest module. A script module is the type we'll be covering exclusively in this course. A script module is simply a text file with a PSM1 extension. It contains various functions and sometimes variables. It is the simplest type of module and the one most often used by IT professionals. Second, you have binary modules. Binary modules are typically created by software developers; these modules are compiled code presented as a DLL file. Third, you have manifest modules. Manifest modules are script modules that have an associated manifest. A manifest is another text file, with a PSD1 extension, that contains information about the module such as the version, prerequisites, and various notes. Finally, we have dynamic modules. Dynamic modules are different in that they never exist as a file at all. They're created on the fly in code, used exclusively in code, and stored only in memory. They're used to perform various ad hoc functions inside scripts. You'll probably never come into much contact with dynamic modules.

  31. Building the Module Let's get started creating our first module. As I mentioned in the intro, we'll be creating a script module, which consists of a single PSM1 file. Technically, you can create a dead simple module by just opening up notepad and saving a file with a PSM1 extension, but that wouldn't be very useful, and I wouldn't have much more to talk about. In reality, there's a little more to it than that. Once you have the PSM1 file, it's a matter of deciding what's going to go into your module. Typically, a module is a grouping of similar functions. If you had a chair module, for example, you might have Get-Chair, Set-Chair, Move-Chair, etc. If these functions previously existed as scripts, you'd simply copy them into your module and wrap function blocks around them, like we did when we built our tools. We'll go over this a little later. Finally, you can also have variables inside your module. These variables are typically used by the functions inside the module, or even by the module user if you, the module designer, allow it. We'll go over each of these concepts in more detail in the upcoming demos. You can create a module anywhere and still use it, but that's not really recommended. Microsoft recommends placing your modules in the user-based location at C:\Users\<your username>\Documents\WindowsPowerShell\Modules. There's also a system-level path at C:\Windows\System32 (or SysWOW64, depending on 32-bit or 64-bit architecture)\WindowsPowerShell\v1.0\Modules, but that one is reserved for system-level modules. To find the module paths, you can use the $env:PSModulePath environment variable. This variable contains one or more paths where PowerShell will look for modules, and more paths can be added if necessary. With an intro to modules out of the way, let's jump into the demo, where I'll take our work from one of the previous tools and convert it from a simple script to an official PowerShell module. Let's get started building our module for our AD account management automator tool. Luckily, we've done all the hard work already. We built scripts that encapsulated the functionality we want, and in the toolset-building demo, we converted those scripts into functions. It is these functions that will make up 90% of the process of building a module. So, I'm now at the AD account management automator script with the functions we created earlier. To build this module, we must first create a PSM1 file and copy and paste these functions in. That's what I've already done here, albeit with a slightly different name and a few extra things, so let me explain what's going on. First, a module should be built around a central concept; in our case, that's AD users and computers. Since the functions in the module are built around that central concept, it's considered good practice to start the noun part of each function name with a common prefix. In our example, I've chosen Ama. As you can see here, I have New-AmaEmployeeOnboardUser and New-AmaEmployeeOnboardComputer, so I have Ama in front of all of these. This is a great way to keep commandlets organized once they've been imported into the PowerShell session. Next, you'll notice an additional function called Get-AmaAdUser. In our function list earlier, I didn't have any Get functions. I chose to create this function to be what some people call a helper function: a function that compartmentalizes common code that many of the other functions in the module execute.
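To make the helper-function idea concrete before we look at the real one, here is one way a Get-AmaAdUser could be written; this is a sketch under my own assumptions, not the course's actual code:

    function Get-AmaAdUser {
        [CmdletBinding()]
        param (
            [Parameter(Mandatory)]
            [string[]]$Username
        )
        # Build a single filter expression from one or more usernames, then
        # make one Get-AdUser call; every other function in the module goes
        # through here, giving us a single point of change
        $filter = ($Username | ForEach-Object { "samAccountName -eq '$_'" }) -join ' -or '
        Get-ADUser -Filter $filter
    }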
So, let's see what this function is doing in the actual module. As you might guess, it gets an AD user, but why not just use the standard Get-AdUser commandlet? Because when all the other functions in the module call this function instead of Get-AdUser directly, it gives me a single point of change if one is ever necessary. Before I go into it, let me do a quick search for references to this function so I can show you how the other functions use it. You can see in Set-AmaAdUser, I'm calling Get-AmaAdUser -Username; previously, I just had Get-AdUser in there. Same thing with New-AmaEmployeeOnboardUser: I had Get-AdUser in there previously, but now I have Get-AmaAdUser. All of these functions have code that points to the Get-AmaAdUser function, so I can modify it in one place, add logging, change the output, whatever, and that change flows through to all the other functions. By doing this, I'm creating dependencies on the Get-AmaAdUser function, and that's a great way to write scripts and modules: force all common tasks through a single function to provide a single place for change if it's ever needed. Going back to the helper function, you can see it has a single parameter that accepts one or more usernames. This is not natively supported by the Get-AdUser commandlet, so let me show you what I'm talking about. Let's say I want to use the Get-AdUser commandlet to get two users, abertram and jmurphy. It's going to bomb out, because you can't do two users at the same time. Now let me demonstrate my function. I'll copy and paste it into the console temporarily to load it into memory, and then run Get-AmaAdUser -Username with abertram and jmurphy again. See, it actually got both of them. That's because it's my custom function, so I can make it work however I want, which I can't do with Get-AdUser itself. That gives you a lot of flexibility. This function works by dynamically building the filter string that eventually gets passed to Get-AdUser. So, helper functions are a great way of modularizing code to prevent code duplication. Going down here to the very bottom, another thing I wanted to show you is these Export-ModuleMember calls. These can be added to the bottom of your module to expose all or some of the functions inside it. Notice how I'm exporting all of the functions besides Get-AmaAdUser? This is by design. Since that function is only needed by the other functions in my module, and not by the module user, it's best practice to not even let the user see it. Exporting module members lets you control what the module user can use and what stays hidden. So, that's everything that went into this module. Now let's import it so we can actually use the functions. First, let me close this out and open the ISE again for a fresh session. Let's see if the module is available by listing the available modules. To do that, I'll use Get-Module -ListAvailable -Name, and then the name of the module. It's not available. So, why not? It needs to be in its proper place before PowerShell can see it. To find the proper place, I can use the environment variable PSModulePath.
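A quick way to read that variable, one search path per line:

    $env:PSModulePath -split ';'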
I can see that the module should be in the user path there, so Users\administrator\Documents\WindowsPowerShell\Modules. That's where we need to place it, so let's go ahead and do that now. First, I need to see if the path even exists. To do that, I can use Test-Path against C:\Users\administrator\Documents\WindowsPowerShell\Modules. Okay, it does not exist. By default, that WindowsPowerShell folder doesn't exist and needs to be created. If you remember from our previous demo, a great way to build an entire folder structure is to create a blank file at the bottom of the folder tree you want. This module also needs to be in its own subfolder, named after the module, under that Modules parent folder, so I'll go ahead and create that entire folder tree now. To do that, I'll just use the New-Item commandlet and specify the path; let's see if I can get this spelled right: Modules\AdAccountManagementAutomator, with a PSM1 file named the same as the folder. The item type is File, and I'll use -Force so it creates anything that's missing. It went ahead and created it. Now, let's test that path again, and you can see it's there; it created the whole path, all the way down to the PSM1 file. So, now let's copy our real PSM1 file into that folder, overwriting the blank file. For this I'll use Copy-Item. My working folder is in the root of C:, so I'll copy from there. It's kind of a long path, I know; this is where tab completion comes in handy. Now that it's copied over, it's in its final resting place, because this is where PowerShell looks for it. Let's try to get the available modules again and see if it shows up now. See, now it is available, and you can see all the commands available to us. Next up is the manifest. A manifest is a plain text file with a PSD1 extension that follows a specific structure and includes information about the module. This time, you shouldn't just open up notepad and save a file with a PSD1 extension. A manifest contains a hash table with various pieces of metadata inside, such as the module version, the author, the minimum PowerShell version, etc. A manifest is not required, but it is recommended, to give the user a better understanding of what your module can do. To properly create a manifest, you can use the New-ModuleManifest commandlet. This commandlet has a multitude of parameters you can use to populate the manifest, like the author, company name, or copyright, and the manifest should be placed in the same folder as your PSM1 file. So, let's create one for our AdAccountManagementAutomator module. The manifest consists of a PSD1 file inside the same folder as the PSM1 module, and I'm using the New-ModuleManifest commandlet here to create that file. Since our module requires the ActiveDirectory module, I'm using the RequiredModules parameter. Since a few commandlets in my module need a minimum version of PowerShell, I'm enforcing that too. It's good practice to always put in the author, your company, and a description of the module. I've also specified a particular host in there to ensure it runs only in the console and not in the ISE, for example, although that isn't technically needed. And, finally, I'm specifying the path where the PSD1 file will be created. So, let's just run this syntax here and see what happens. It looks like it worked.
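Reconstructed from the narration, that call was shaped something like this; the description and version number are placeholders:

    New-ModuleManifest `
        -Path 'C:\Users\administrator\Documents\WindowsPowerShell\Modules\AdAccountManagementAutomator\AdAccountManagementAutomator.psd1' `
        -Author 'Adam Bertram' `
        -CompanyName 'My Company' `
        -Description 'Tools for automating AD account management' `
        -PowerShellVersion '4.0' `
        -PowerShellHostName 'ConsoleHost' `
        -RequiredModules 'ActiveDirectory'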
Let me bring up that PSD1 file, and we'll see if it actually created it right. It looks like we got something. Notice that there are a lot of different options in here we didn't go over. There are a lot of things you can enforce: the architecture of the machine, the .NET version, the PowerShell host version, and plenty more. But you can see we have everything in here that I specified. The author of this module is Adam Bertram; that got in. Company name, my company. It put in my description. It enforced the PowerShell version. It has the PowerShell host name of ConsoleHost, which is just your typical console. And it has RequiredModules of ActiveDirectory, so this module won't be allowed to run unless the ActiveDirectory module is available, which is great. Going down through here, everything else seems to be pretty much default. One thing I did want to mention is the FunctionsToExport and CmdletsToExport entries. These do the exact same thing as the Export-ModuleMember calls we put in the module itself. So, technically, I could have removed all of those Export-ModuleMember lines, listed the functions here in the manifest instead, and it would have done the same thing; the comparison below shows the two approaches side by side. So, that's creating a module manifest. It's not very hard at all, and it's technically not needed to create a module, but it's definitely recommended, especially if you're building tools like we are today.
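Side by side, the two export approaches look like this; the exact function list is illustrative:

    # In the .psm1 itself:
    Export-ModuleMember -Function New-AmaEmployeeOnboardUser, New-AmaEmployeeOnboardComputer, Set-AmaAdUser

    # Or, equivalently, in the .psd1 manifest:
    # FunctionsToExport = @('New-AmaEmployeeOnboardUser', 'New-AmaEmployeeOnboardComputer', 'Set-AmaAdUser')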

  32. Giving the Module a Spin We've got our module created now, so let's start using it. Using the module is as easy as importing it with Import-Module or, thanks to PowerShell's auto-loading feature, it may be imported already. Once loaded, you can run any of the functions, or use any of the variables, exposed by the module. So, let's jump into a demo now so I can show you a little of how you can use the functions inside your module. Our module is created, and we're ready to use it, so let's see what we've got to work with. The first thing I do when I investigate a new module is enumerate all the functions I can use with Get-Command. So, I'll do Get-Command -Module and then the name of the module, AdAccountManagementAutomator, and you'll see we have the four functions there. Notice I don't have five; there's no Get-AmaAdUser function, because I didn't allow it to be exported from the module. Now, I've got an account in AD called ABertram. I'll run Get-AdUser against it, and you'll see the account is there. I want to change its first and last names, and I just so happen to have a nice function to do that called Set-AmaAdUser. I'll specify a username of ABertram and use the -Attributes parameter, which takes a hash table. You can see up there that the given name right now is just gobbledygook, and so is the surname, but I want to fix that: the given name is Adam, and the surname is Bertram. We'll run that, and it looks like it worked. Run the lookup again, and you can see I now have a given name of Adam and a surname of Bertram. You can see I'm getting the exact same functionality as the dot-sourcing approach I used earlier in the course. I can even close the console, open it up again, and start typing Set-AmaAdUser; since it tab-completes, I know it's still there. I'll do Get-Command again (I should have made that module name shorter), and the functions are still available. They're not in a profile or anything like that; the module is getting auto-loaded every time PowerShell starts up. So, modules are a very cool way to write really robust tools. Now the world is your oyster: use any of the functions we created right in your console, and share the module with others. You can just copy the PSM1 file or the whole module folder, zip it up, and give it to somebody else, and as long as they copy it to the right spot, it will run for them just like it's running for you here. It's a very robust and efficient way to share code.
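Condensed, the session looks like this; Set-AmaAdUser's parameters are as narrated, and the -Attributes keys are whatever the custom function was written to accept:

    Get-Command -Module AdAccountManagementAutomator   # enumerate the exported functions
    Get-AdUser -Identity abertram                      # before: garbled given name and surname
    Set-AmaAdUser -Username abertram -Attributes @{ givenName = 'Adam'; surname = 'Bertram' }
    Get-AdUser -Identity abertram                      # after: Adam / Bertram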

  33. Summary The main takeaway from this course is to remember that you're building tools, not just a simple script here and there. So, if your tool has more than just a few functions, always use a module; it'll make life a lot easier. You'll immediately have all of your functions at your fingertips and be able to easily share your module with others. When you really start writing PowerShell scripts, you'll find that you constantly need the same functionality over and over again. There's no sense in rebuilding the same script every time or endlessly tweaking existing scripts. Why not build a fully featured tool that you can call on at any time and reuse? In this final module, we took our AD account management tool and converted it into a PowerShell module. We went over creating a module and an accompanying manifest in the correct location. We then imported the module and used some of its functions as a demonstration. When you use modules, your tools are always right at your fingertips.