HTML5 Web Storage, IndexedDB and File System
by Craig Shoemaker
Learn to use Web Storage (local and session storage), IndexedDB and the in-browser file system. Beyond the basics of the API you'll also learn to wrap up the raw APIs and use them in context of a web application.
Okay now I've gone through each one of these APIs fairly quickly, so I'd like to show you how they can fit in context of an overall application. Now this application isn't one that we're building in the course, but rather a chance to help you get a deeper understanding of how each API is used in a practical setting. So cookies are used in nearly every application on the web, cookies are used for many different things, but at the end of the day they're often used to identify a user or save some small bit of information about a user's preference or behavior. For this application we could use a cookie during the Login procedure to help identify an authenticated user to the system. Now as I said before web storage is similar to cookies, but has some significant differences as well. In this case you could consider using web storage to do things like saving search histories, preferences, or even progressively saving form data. Next if you consider a website that may want to grant the ability to continue to work whether or not an internet connection is available, then you've got an application that can run offline. The offline API allows pages to be reliably served up to the clients even if there's no present connection to the internet. So in this case we could build the application to continue to add homes into the system and save that data off somewhere, but the key point is that the pages still work even if there's no internet connection available. Sometimes you may want to save large sets of data on the client. So like with the previous screen shot if we had the add home screen we'll want to save that data somewhere. Now we could put it into web storage or perhaps we could write it into the database found right here within the browser. So if the application is working offline, you have that client-side database available, and that can be really handy. 
In other contexts perhaps you may just want to load up a set of data into the database for quick searches even though you're continuing to work with that application as a regular web app. Really when you have a database available on the client the possibilities are endless. Lastly the file system, file, and file reader APIs are often used together, though not always, but here consider an application where you had the opportunity to create files containing notes about a specific home that you're working with. And later you want to send that file to the server and add it as an attachment to an email. Other uses for the file system API might be to perhaps have a web-based game where you have images and sounds and all the files that you need in order to make that game work already present on the client and available through the file system.
Narrowing Down the Scope
I hope this brief overview covering each one of these APIs has given you an opportunity to see how each one of them fits into the context of an overall application. Certainly there's a little bit of overlap between different APIs, but for the most part each API fits a specific type of use quite naturally. So now that you've seen the broad spectrum of the client-side storage related APIs, it's time to narrow down the scope for the purposes of this course. Here you'll be getting a working knowledge of web storage, indexedDB, and the HTML5 file system.
Concepts: Immediately Invoked Function Expressions
Concepts: $$result Module
Now as we wrap up this module I wanted to make one guarantee for you: there are no guarantees. What I mean to say is that as we're dealing with client-side persistence APIs we're talking about saving data on the client, but by no means are you ever guaranteed to always have that data there. That's because there's often more than one way to remove saved data or disable the APIs from working within the browser. Now the average person won't know how to delete this data manually, but you need to factor the chance that it might not be there into your development strategy. So anything that you save on the client may not be there the next time you want to use it. Now speaking of deleting all this data, when you're in development scenarios sometimes it's really handy to know how to clear it all out so you can start with a fresh version of the browser, and I'd like to show you how to do that now.
Clearing Local Data
Since I'll be using Chrome as the development browser for the code samples found in this course I'd like to show you how to clear out not only cookies but also data stored in web storage, indexedDB, and even the file system. So here I'll go up to the menu and I'll choose Settings. Then I'll come down to Show advanced settings, scroll down a bit, and then go into Content settings. From there we can go into All cookies and site data and at this point we want to search for localhost. And when I click on that you can see that I have data in the file system, indexedDB, local storage, and on and on. Now you can click on this X to delete them all or you can go to individual ones and remove them individually, but either way if you want to clear out all that locally stored data within Chrome it's as easy as a few clicks.
In this module we took a high-level survey of the different APIs used to save data on the client. Now while the data may not always be there, we as developers can rest assured that we have the tools required to save not only small bits of data on the client, but also large amounts in the browser. While there's much to choose from, in this course you'll learn how to use web storage in both local and session flavors. You'll also learn how to use indexedDB in order to save data into the local database and also how to save, read, and manipulate files directly on the browser's file system. Coming up next we'll dive right into web storage and I'll show you how you can save large amounts of information directly into storage and have it persist for a long time or just for the browser session.
Hello and welcome back to Pluralsight's HTML5 Web Storage, IndexedDB and File System. This is Craig Shoemaker and in this module you'll learn all about the HTML5 API called web storage. Just to recap where we're at in the course overall, the web storage module is the second module in this course. And things are about to get interesting. So as we move on I have a question for you. What's in a name?
What's in a Name?
The web storage API is one of those technologies that has had a number of different names associated with it during its lifetime. As this module is named, the official name given by the W3C specification is web storage, but at times in the past the term DOM storage has been used for this API as well. So this is important for you to know as you search around the web because you might see the same API referred to by either name. As we look inside the API there are really two flavors of web storage. The first is local storage, and when you save data in local storage the data persists beyond closing the browser, rebooting the computer, or even a visit from the in-laws. The only time local storage will not stick around for you is when the user is using your website with in-private or incognito browsing. And in that case it acts more like session storage. When you save data into session storage it only persists for the current session. Now this session is a bit different than one you may be used to thinking of in the context of a server based environment. Often when dealing with a session on the server there's a time limit on the session, like 20 minutes or so, but here there's no time limit. Here the context of a session is scoped to the use of a browser tab. So if you change the window or tab then you have a new session.
What is Web Storage?
Here I'm running the Web Storage Support Test application. This website will write individual bytes and will report back the capacity it encounters. So here I'll go ahead and start the test, and as that's running we can scroll down to the browser compatibility chart. And in this table you can see a number of different browsers with a number of different version numbers. And this gives you an idea of what kind of capacity is expected in each one of those browsers. Notice that the capacity for most browsers is about 2.5MB. So it's a pretty extensive table and you can scroll through here and look to see if the browser that you're looking to target is on the list. So here it now says that the results have been saved, so if I go back up to the top of the page you can see that it was able to write out 5101 k characters into both local storage and session storage. And again global storage is not supported; that's a deprecated branch of web storage that's pretty much not supported in just about any browser. So it's all surrounding local storage and session storage at this point, but during your development process if you're working with different browsers and you'd like to know what kind of storage capabilities you're looking at, you can run this tool and it'll give you a fair idea of how much space you have to work with.
As you look at web storage there's a number of high-level features that will be important to you as you move on. First of all we're working with sandboxed data, just as you're used to with cookies, data in web storage is sandboxed. Which means that as you save data in the browser under website A, website B cannot read website A's data. Here let's take a look. So here as we take a look at the samples in the browser you can see that this shows Local Storage and all the values from this website which are set into Local Storage, but if I go to a different tab and where I've navigated to Google, you can see that all of the values that were saved under Local Storage are no longer visible. Here I just have access to what's been saved under google.com. And the same thing for Pluralsight and on and on. So as you're writing or reading values into Local Storage you only have access to the values that link to the domain of origin. The next important aspect of web storage is to know that all the data remains on the client. Now with having roughly 2.5MB of storage space available it's important that that data does stay on the client. Because unlike cookies, where data is sent to the server upon each request, data stored in web storage stays put. In coming modules we'll look at a few other APIs that allow you to save full object graphs into the browser, but for web storage you're only allowed to save strings. Now you can either opt to stick with just strings or you can easily serialize your objects to JSON and then save them that way. In the upcoming demos I'll show you how to work with both simple strings as well as serialized objects.
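Serializing an object to JSON before saving it can be sketched like this. A plain object stands in for window.localStorage here so the snippet runs anywhere; the real API stores strings the same way. The `home` record is purely illustrative.

```javascript
const storage = {}; // stand-in for window.localStorage

const home = { address: "123 Main St", beds: 3 }; // hypothetical record

// Web storage only holds strings, so serialize before saving...
storage.home = JSON.stringify(home);

// ...and parse when reading back.
const saved = JSON.parse(storage.home);
console.log(saved.beds); // 3
```

In the browser you would swap the stand-in for `window.localStorage` and the same two calls, `JSON.stringify` on the way in and `JSON.parse` on the way out, do all the work.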
When we start to look at browser support the outlook is actually pretty encouraging for an HTML5 API. Here's the screenshot from caniuse.com taken in late 2013. The first thing that I want you to notice about this chart is how tall it is. Chrome has implemented web storage since its infancy and even Internet Explorer has supported it since IE8, which says something. The only real outlier here is Opera Mini, but in light of that, if you take a look at the global stats at the time of recording, a whopping 89% of browsers in the world support local storage. Now you'll have to decide for yourself if that number is acceptable to you or not, but suffice it to say that there are polyfills and fallback libraries that you can use to boost that number even higher.
Fallbacks and Polyfills
Okay now I've mentioned a few times the fact that there are a number of different options available as fallbacks and polyfills. The all-in-one entirely-not-alphabetical guide to HTML5 fallbacks is a community maintained list of available options to help less capable browsers attempt to support web storage functionality. What I like about this list is that it's maintained on GitHub so the community can curate the list and keep it updated. So it doesn't suffer the same type of aging as a blog post out there that just tends to get old and stale. So as you begin using this API I suggest you follow the link down at the bottom and take a look at some of these polyfills if you're looking to get close to universal support of web storage.
Demo: Getting and Setting Values
The web storage API has a very simple interface and we'll take a look at it here with the local storage flavor. Now first I'll just run this code and if I run it you'll see that at first it evaluates and says that firstName is not in local storage. I'll click it once again and then it writes out the fact that firstName is set to Craig. So I'll start off by setting a variable of storage and make that equal to window.localStorage. Then from there I can take a look at storage.firstName and see whether it's defined. If it's not then I can log out to the RESULT pane, say that it's not in local storage, and then I can set the value. So notice here it wasn't defined within storage, but in order to create it all I need to do is use storage.firstName and I can set the value. This line of code creates a key within local storage and also sets its value. There are functions you can use, but I wanted to show you right off the bat that you can simply set nonexistent values using this approach. And then to extract the data out of web storage you can use the dot operator here to call storage.firstName and that returns the value. Now the dot operator is just one way that you can get and set values; you can also use the get and set functions. So as I run this you'll see that lastName is not in local storage, but when I run it once again it reports back that lastName is set to Shoemaker. So here I'm doing the same thing: I'm creating a variable, setting it equal to window.localStorage, and then using the getItem function. GetItem takes in the key that I'm looking for, so I'm looking within storage for the key of lastName; if that equals null then lastName is not in local storage. And then I can use the setItem function, passing in the key and also the value, to set it within local storage. And if it is defined then I can call storage.getItem passing in the key and that returns the value that's set in local storage for that key.
Now notice one small distinction here: when you're calling getItem, if the key is not defined within local storage it will return null, but when I use the dot operator, since that key is not defined on the object, I'm looking to see whether or not it's undefined. So depending on how you access data within local storage you need to be able to tell whether you're looking for undefined or null values. Now there's one other way that you can get values in and out of local storage and that's by using brackets. So here my twitter handle is not in local storage, but when I run it once again it returns my twitter handle. So here with local storage you can call storage and pass in the key within brackets and look to see whether or not that value is undefined. Here we're doing the same evaluation; the getItem function will return null, but either the brackets or the dot operator will return undefined if the key is not within local storage. And of course setting it is just as easy: by using the brackets I can set my twitter handle equal to @craigshoemaker. So those are three different ways in which you can set and get values in and out of local storage. And how you're working with local storage within your application will often dictate which approach you want to use. For the most part though I like using the dot operator because it's easy to read and cleans up the code quite a bit. Now certainly there are reasons why you might want to use the bracketed approach, say if you don't know ahead of time which key you're working with, then this is a really good approach to use as well, but you have all three of those as options depending on what you're doing in your application.
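The three access styles, and the null-versus-undefined distinction, can be sketched side by side. A small in-memory stand-in (built with a Proxy) mimics localStorage's behavior so the snippet runs outside a browser; in a page you would use window.localStorage directly and skip `makeStorage` entirely.

```javascript
// Minimal stand-in for window.localStorage: getItem/setItem plus
// dot and bracket access, with the same null/undefined miss behavior.
function makeStorage() {
  const data = Object.create(null);
  const api = {
    getItem(key) { return key in data ? data[key] : null; }, // null on a miss
    setItem(key, value) { data[key] = String(value); },
  };
  return new Proxy(api, {
    get(target, prop) {
      if (prop in target) return target[prop];
      return prop in data ? data[prop] : undefined; // undefined on a miss
    },
    set(target, prop, value) { data[prop] = String(value); return true; },
  });
}

const storage = makeStorage();

storage.firstName = "Craig";              // 1. dot operator
storage["twitter"] = "@craigshoemaker";   // 2. brackets
storage.setItem("lastName", "Shoemaker"); // 3. setItem function

console.log(storage.getItem("lastName")); // "Shoemaker"
console.log(storage["twitter"]);          // "@craigshoemaker"
console.log(storage.missing);             // undefined (dot/bracket miss)
console.log(storage.getItem("missing"));  // null (getItem miss)
```

The last two lines show exactly why your code has to check for `undefined` with dot or bracket access but `null` with `getItem`.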
Demo: Remove Item
Now if you want to remove a single item out of local storage you can do that by simply using the removeItem function. So here I'll run the code and you'll see that lastName is removed. So now when I go back to the item that uses the lastName key and run it again, there's the value for lastName. So by accessing local storage you can simply call the removeItem function and pass in the key of the item that you want to remove out of storage and that takes it out of local storage for you.
Demo: Keys and Length
Now there's a few other aspects of the API which can come in handy and that's having access to all the keys within local storage and also being able to report on the length of the keys. So let's run the code here and you can see that using the key function it can iterate out all the values within local storage and that I can also extract those keys using Object.keys. So let's take a look at the code and see how this works. Once again I'm creating a variable called storage and setting that equal to window.localStorage. Then by interrogating storage.length I'm able to loop through each one of the keys found within local storage. Throughout each iteration the i variable will have the index of the storage item and so by passing in that index to the key I can extract out the value of the key stored for that item. And then here I can just log out to the RESULT pane the value of that key, so here you see firstName, and then at this point you can see how using the brackets can be valuable. Because now I can call storage, pass in that key, and then I'm able to get the value out of local storage for each one of the keys. So that's one way that you can extract the keys out of local storage, another way is that you can use Object.keys and then pass in the storage object and that returns an array of all the keys within that object. And then once I have that array I can pass it into my log function and that will iterate over it and log out each one of the items within the array to show the keys that are associated with the local storage object.
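The loop described above can be sketched as follows. The stand-in object exposes just the members the demo uses (`length`, `key(i)`, `getItem`); in the browser these live on window.localStorage itself, and the starting data is illustrative.

```javascript
const data = { firstName: "Craig", lastName: "Shoemaker" };
const storage = {
  get length() { return Object.keys(data).length; },
  key(n) { return Object.keys(data)[n] ?? null; }, // real key() returns null past the end
  getItem(k) { return k in data ? data[k] : null; },
};

// Loop over every entry via length and key(i), as in the demo.
const found = [];
for (let i = 0; i < storage.length; i++) {
  const k = storage.key(i);                  // the key name at index i
  found.push(k + "=" + storage.getItem(k));  // then read its value
}
console.log(found); // ["firstName=Craig", "lastName=Shoemaker"]

// Object.keys returns all the keys as an array in one call
// (in the browser this works directly on localStorage as well).
console.log(Object.keys(data)); // ["firstName", "lastName"]
```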
And finally if you want to clear the entire contents of local storage you can simply call localStorage.clear. Now what this does is this clears out everything within local storage. Remove Item is the function you want to call if you want to remove a specific key value pair out of local storage. Once you run this though all the contents of local storage are destroyed and there's no going back. So now if I go back to these other items they'll all report that the values are not there, put lastName back in, my twitter handle, and once I call clear that's all gone again. So if you find yourself in a situation where you need to clear out all of the values within local storage you can do that with this very easy to use function.
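The difference between removeItem and clear can be sketched like this; the stand-in mirrors the two localStorage calls and the sample keys are illustrative.

```javascript
const data = { firstName: "Craig", lastName: "Shoemaker", twitter: "@craigshoemaker" };
const storage = {
  removeItem(key) { delete data[key]; },                         // one key/value pair
  clear() { Object.keys(data).forEach((k) => delete data[k]); }, // everything, no undo
};

storage.removeItem("lastName");
console.log(Object.keys(data)); // ["firstName", "twitter"]

storage.clear();
console.log(Object.keys(data)); // []
```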
Demo: Session Storage
Now the interface to the session storage API is identical to local storage. You interface with both storage types exactly the same way; the only difference is that data stored in session storage only sticks around through the lifecycle of a single tab. Remember it's not time based, it's not like a server type of session; as long as you have that browser tab open it can last for as long as you want, but once you close it down that session is done. So each one of these code samples is exactly the same as what you saw with local storage, it's just that they're using session storage instead of local storage. So here I can go through and set each one of the values and even refresh the page, press F5 to refresh, and now when I run this again it reports out the values that are found within sessionStorage and they're all still there, but let's close the browser now. And open up a new one and when we go back to the page you'll see that none of those values exist anymore. So each one of the samples, like I said, operates exactly the same way. Here I'll add back in lastName, I can remove the lastName item, put the values back in so that we can iterate through them here with the keys and length demo, and there's each one of the values saved within sessionStorage. And also if I want to clear out the contents of sessionStorage it's sessionStorage.clear and that removes everything out of sessionStorage. So as you've seen me do in each one of these code samples, what I like to do is set aside the strategy, whether it's local storage or session storage, into a variable so that I can code against the interface of web storage. And decide ahead of time, or even change later, whether I'm using local storage or session storage. So create a variable early and set that aside and it'll help clean up your code a lot.
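The "set the strategy aside in a variable" tip can be sketched like this: pick the flavor once, then code against the shared interface. The `useSession` flag, the `pickStorage` helper, and the two stand-in objects are illustrative; in the browser you would assign window.localStorage or window.sessionStorage.

```javascript
// Choose the storage flavor once, up front.
function pickStorage(useSession, local, session) {
  return useSession ? session : local;
}

const local = { name: "localStorage" };     // stand-in for window.localStorage
const session = { name: "sessionStorage" }; // stand-in for window.sessionStorage

// Everything below works the same no matter which flavor was chosen,
// because both objects share the web storage interface.
const storage = pickStorage(true, local, session);
console.log(storage.name); // "sessionStorage"
```

Flipping one argument switches the whole sample from session to local persistence without touching any other line.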
Demo: Exceed Quota
Now if you recall there's a capacity limit on web storage, around 2.5MB, so I've created this sample here for you to see what happens if you try to write too much data into web storage. Now I've separated out the data into this big-file.txt file, so that if you download and work with the code your text editor won't lock up because it's opening up such a big text file. So that's the reason that I'm using this AJAX request, it just makes it easier to work with the code file and leaves the data separate. So here I'll request this big text file, which is about 8MB in size, and then I'll set that item into local storage and we'll see what happens. So you notice it returns back an error object: "An attempt was made to add something to storage that exceeded the quota." What I received was a quota exceeded error and we can even take a look at this in the developer tools and notice that it comes back as a DOMException, and so when you have a DOMException with code 22 you know that you have the quota exceeded error. So if there's a chance that you might be hitting those thresholds of adding too much data through web storage or you're just working with some big files, you'll want to be looking for this error object in order to make sure you can deal with the fact that you've gone over quota.
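Guarding a write against the quota can be sketched as below. The stand-in throws the same shape browsers raise (an error named "QuotaExceededError" with code 22); real code would call localStorage.setItem inside the try, and the tiny quota here is purely for illustration.

```javascript
const QUOTA = 10; // tiny illustrative quota, in characters
let used = 0;
const storage = {
  setItem(key, value) {
    if (used + String(value).length > QUOTA) {
      const err = new Error("quota exceeded"); // mimics the browser's DOMException
      err.name = "QuotaExceededError";
      err.code = 22;
      throw err;
    }
    used += String(value).length;
  },
};

function trySave(key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (err) {
    if (err.code === 22 || err.name === "QuotaExceededError") {
      return false; // over quota: trim data, or fall back to the server
    }
    throw err; // some other failure, let it surface
  }
}

console.log(trySave("small", "ok"));          // true
console.log(trySave("big", "x".repeat(100))); // false
```

Checking both `name` and `code` is a defensive choice, since older browsers varied in which field they populated.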
Demo: Storage Event
Now when you first take a look at the web storage event you might think that it fires anytime you make a change to a value in local storage or session storage, but in fact it does not. Like the note I added to this demo here, what you need to do in order to see the effects of the storage event is to subscribe to the event, but then it only fires if a change in web storage is made in another tab. So here I'll run this code and now I'll open up another one of the demos in a separate tab, and now I'll write into local storage and notice I made a change here to a value that's in local storage. And now I'll come back to the storage event tab and you'll see that I have the results of the event. Now Modernizr is writing into web storage, so you see that here, and then also if you come down you can see the change that I made where I was writing into the key of firstName; the new value is Craig and the old value is null. Now if we take a look at the developer tools we can dive into the storage event here and you can get at the current target. Like you saw in the RESULT pane: the key, the new and old value, and even the URL of where it came from. So if you happen to find yourself in a position where you need to keep data synchronized that's being written from multiple open tabs, the web storage event can help make that happen for you. Now the other thing I want to point out is the fact that I subscribed to this event using addEventListener. If you try to subscribe to the storage event using jQuery, using the on syntax, or some other method, you probably won't get the results that you're looking for. The most reliable way to subscribe to this event is to use window.addEventListener and that way the event args that are passed into the callback function correctly reference the event args of the storage event.
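The addEventListener wiring can be sketched like this. A plain EventTarget stands in for window so the handler can be exercised outside a browser, and the simulated event is given the same key/oldValue/newValue fields a real StorageEvent carries; in a page you would simply call window.addEventListener("storage", ...) and let another tab trigger it.

```javascript
const fakeWindow = new EventTarget(); // stand-in for window

const received = [];
fakeWindow.addEventListener("storage", (e) => {
  // A real StorageEvent also exposes e.url and e.storageArea.
  received.push(`${e.key}: ${e.oldValue} -> ${e.newValue}`);
});

// Simulate another tab writing firstName = "Craig" (remember: the real
// event only fires for changes made in a *different* tab).
const evt = new Event("storage");
Object.assign(evt, { key: "firstName", oldValue: null, newValue: "Craig" });
fakeWindow.dispatchEvent(evt);

console.log(received); // ["firstName: null -> Craig"]
```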
Demo: Persistent Form Demonstration
Okay now it's time to bring everything together in something that's a little more of a real-world type of application. Now the idea behind this form is that it's a persistently saved form. So consider an application where you have a form that users are filling out and perhaps it takes some time; maybe it's an insurance form or maybe it's something from some type of financial institution where perhaps you have to look up data in order to fill everything out. What would be nice in a case like that is if, as you're filling out the form, the data that's in the form is being saved. So in case something happens on that page, if the user has to abandon the page and then comes back, all the data that they've entered can be reinstated onto the form. Now as I've been talking you've probably noticed in this area a little message, there it is, flashing saying that it's been saved. So every 10 seconds this form saves in the background. So let's go ahead and fill it out here. So here I fill out the form, now I'll wait for that save message to appear and there it is, and now when I refresh the page, I'll press F5 to refresh, there's all my information. Now you'll notice that I'm scoped to localStorage; if I switch it to sessionStorage all of my information is gone. So we'll add in some different values here. And we'll wait for that to save, there it goes. And now when I refresh the page I have my values. Now I can go back to what's in localStorage and that seems to show up just fine. And so what we can do is terminate the session, so let's come out of full screen here, close the browser window, now bring it up. Back to my page you'll notice all the data from sessionStorage is gone, but if I switch back to localStorage there's all my values. So I have this Send button down here which really does absolutely nothing, but there's a scheduled save action that happens every 10 seconds on the form.
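The scheduled-save idea behind the form can be sketched as below: serialize the field values to JSON on a timer and restore them on page load. The storage and form objects are stand-ins, and the `formData` key, field names, and function names are all illustrative rather than taken from the demo's actual code.

```javascript
const storage = {}; // stand-in for localStorage or sessionStorage

const form = { firstName: "Craig", lastName: "Shoemaker" }; // current field values

function saveForm() {
  // In the demo this runs on a timer: setInterval(saveForm, 10000);
  storage.formData = JSON.stringify(form);
}

function restoreForm() {
  // On page load, reinstate whatever was last saved.
  if (storage.formData) Object.assign(form, JSON.parse(storage.formData));
}

saveForm();                 // a scheduled save fires
form.firstName = "Mallory"; // the user keeps typing, then abandons the page...
restoreForm();              // ...and on return the last saved state comes back
console.log(form.firstName); // "Craig"
```

Swapping the `storage` variable between the local and session flavors is all it takes to change how long the saved form survives.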
Okay now that you've had the chance to see it in action let me show you how I built it.
Demo: HTML Markup
Demo: Stepping Through the Code
Web storage is a client-side persistence medium that allows you to save data in both local storage as well as session storage. You have about 2.5MB worth of storage space available and you can save the data that you're working with in key value pairs. Not only as regular strings, but also as serialized objects, which gives you quite a bit of flexibility in saving data in the browser. Up next we'll look at indexedDB; you'll learn not only how to set up and delete databases, but also how to handle CRUD operations, as well as how to separate concerns of code on the client so you can isolate data access and the model of the database itself.
IndexedDB: Introduction and Concepts
Hello and welcome back. This is Craig Shoemaker, and in this module we'll begin the section of modules on indexedDB. Now there's a lot of good stuff to come, but if you started this course at this point, welcome, though you may find some value in checking out the concept clips in the course introduction, which get you up to speed on some of the patterns and techniques I'm using throughout this section of modules. Here you'll learn about the foundations of indexedDB including the event lifecycle, capacity, capabilities, browser support, fallbacks, polyfills, and more. Let's start by defining indexedDB.
What is IndexedDB?
Understanding the event lifecycle of indexedDB is important as there are certain actions that must happen in specific events. And everything starts with a request. In order to work with the database you need to start by initiating an open request. We'll talk about this more in a minute, but just about everything you do in indexedDB will involve an asynchronous call. So you need to start thinking in terms of making requests right off the bat. When that request is made you'll pass to the database the version number of the database you're trying to open. If the version you're requesting is higher than the current version, and this includes opening the database for the first time, then the upgrade needed event fires. When this event fires you now have the opportunity to create and remove object stores, you can remove and create indexes, and this is the only time when you can make those types of changes to a database. You can even opt to seed data into a store during the upgrade needed event, but you can obviously do that at other times as well. And you'll see this in action very soon in the demos. Once work in the upgrade needed event is complete then the success event fires. Here you're able to get a reference to the open database for you to use in your script. And of course if you're attempting to open a database and you're requesting the current version of the database then the upgrade needed event is not fired and the success event fires, as long as there are no problems. And of course at any time if there's an error during upgrade or trying to open the database the error event is fired. Now one thing for you to note: as you begin to read about indexedDB on the web you'll see remnants of a changing specification. In the past the setVersion function was used to help control database versions, but now that function is deprecated. So if you see any mention of setVersion, just be aware that you're reading some old content.
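The open-request lifecycle just described can be sketched as follows. The indexedDB factory is passed in as a parameter purely so the flow can be exercised outside a browser; in a page you would call window.indexedDB.open directly. The database name, version, store name, and callback shape are all illustrative.

```javascript
function openHomesDb(idb, onReady) {
  const request = idb.open("homes", 2); // database name + requested version

  // Fires only when the requested version is higher than the current one
  // (which includes the very first open). This is the only place object
  // stores and indexes may be created or removed.
  request.onupgradeneeded = (e) => {
    const db = e.target.result;
    db.createObjectStore("listings", { keyPath: "id" });
  };

  request.onsuccess = (e) => onReady(null, e.target.result); // open db handle
  request.onerror = (e) => onReady(e.target.error);          // any failure
}
```

Note that both handlers are assigned after `open` returns; because every call is asynchronous, the events fire later, which is why you hand the open database to the rest of your code through a callback rather than a return value.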
Let's take a few minutes to discuss the features that you can expect from indexedDB. The most common way you'll interface with indexedDB is through the asynchronous interface. This is a good thing because as you begin working with large sets of data you'll want your webpages to remain responsive and available to the user. And the asynchronous model allows this to happen naturally. If you've had experience using AJAX then you have a pretty good idea of what it means to work with an asynchronous API. You make requests for data and then provide callback functions for success, failure, and other conditions. Now there is a synchronous API available to you as well, but the only appropriate place to use this is in the context of a web worker. Web workers are the equivalent of threading in the browser, if you're interested in learning more about web workers may I suggest you take a look at my course here in Pluralsight, HTML5 Advanced Topics where I cover web workers and a number of other useful HTML5 APIs. Just as the asynchronous APIs will have you thinking in terms of requests, just about everything you do in an object store is executed in a transactional context. Even read operations use a transaction to execute. You'll see this in action quite a bit upcoming in the demos. Now we already discussed versioning a bit, each iteration of the database is associated with a version number and that's how you'll manage the evolution of a database with its stores and indexes. And I have a specific demo available for you that shows how versioning works. And of course as I mentioned before, all data is only accessed via the same origin domain making all data sandboxed.
IndexedDB was built to hold significant amounts of data, as in lots. So let's talk about capacity for a moment. If you take a look at the specification for indexedDB you'll soon see that the W3C makes no recommendation of how much space should be allocated either to indexedDB as a whole, to a specific database, or even a single object store. Therefore it's up to the browser makers to decide how to deal with storage quotas. I have two demos for you coming up that deal specifically with capacity and performance. The first one attempts to load a 57MB object into a datastore. Now this is the demo executing in Chrome and here you can see that there were 10 successful inserts into the object store of a 57MB object. In here you can see the data in the store. Now this is just raw data, there are no indexes on the store. And if you take a look at the amount of space that all of this takes up on disk you can see that over 600MB are consumed by this one database. So you might read on the web about there being a 50MB limit, well that's simply not true. What is true however is that 50MB seems to be a quota threshold where some browsers begin prompting the user for permission to continue before allocating more space. And each browser handles this situation a little bit differently. As you see here Chrome, as of version 30, does not prompt the user. IE10 pops up a confirmation box at the bottom of the screen and Firefox 25 has a little more noticeable dialog near the top by the location bar. Opera 17 does nothing, it allows you to do the inserts just as Chrome did. And sadly Safari does not support indexedDB yet, so well, yeah. We'll dig into browser support more coming up, but for right now I'd like to make another point. In the performance demo the page will generate 500K objects and load them into the database. Now granted these objects are very small and rudimentary, but the point is that 500K objects consume a little over 100MB on disk.
So my point is that you'll probably never need to load 500K objects into a local database. And a quick hint from Uncle Craig: if you really think you need to do something like that, you're probably doing something wrong. You probably also shouldn't be trying to load 500+MB objects into a single store. So the bottom line is that you can still work with large amounts of data and keep it performant and relatively small in size, but be judicious about what you really need to load onto the client. Just because you can do something doesn't mean you should.
Okay, so now we start to really talk about the real world here. As I showed you in the course introduction module, the browser support for indexedDB is far from universal, but on the desktop the story is actually quite encouraging. Save for Safari, indexedDB has a fair amount of support. Yes, support only starts in version 10 for IE, but Firefox, Chrome, and Opera have included support for a number of versions now. On mobile, support is just getting off the ground, but again the outlook is good everywhere but Safari. My expectation and hope is that iOS Safari as well as desktop Safari will add support for indexedDB soon. You might want to take down the URL for this page on caniuse.com to keep up to date on which browsers and what versions support indexedDB. In the end, at the time of recording here in late 2013, there's 61% worldwide support for indexedDB. Now you and I both know that's just not enough penetration to be satisfactory. So let's turn our attention to available fallbacks and polyfills.
Fallbacks and Polyfills
And before I talk about the libraries available to help fill in the gaps, let's talk for a moment about progressive enhancement as well. Just because the indexedDB API may not be available in all browsers, that shouldn't necessarily rule it out as an option for your application. Depending on your needs you could build an application to use indexedDB only if it's available. And for those browsers that are late to the party, perhaps you use local storage or even AJAX to talk to a database on the server. Again it all depends on your needs, but I just want to make the point that you might do well to simply use it where it's available until the browser support landscape is more widespread. There are a few libraries that are perhaps worth your time found in the all-in-one entirely-not-alphabetical guide to HTML5 fallbacks. And here, as you can see, is a polyfill which falls back to using web SQL if indexedDB is not available. Alright, well, I want to talk about web SQL here for just a moment. What's web SQL? Well, web SQL is an in-browser database, but this is a relational database and it uses structured query language to interface with the database, as opposed to the object database we find with indexedDB. Unfortunately web SQL has now been abandoned by the W3C, which you can probably tell from the subtle little message that they've included on the first page of the spec. This is a specification that suffered from a number of setbacks, which caused the consortium to recommend the use of indexedDB instead. If you want to read more about the deprecated spec you can go to this bit.ly address here, but what's interesting is what happened between the time the spec came out and when it was deprecated. Browser makers spent a significant amount of time adding web SQL support to many browsers; in fact, this is the current state of browser support for web SQL. Notice specifically how iOS Safari and desktop Safari both support web SQL, among other browsers which don't support indexedDB.
So when you start to consider fallbacks and polyfills, the story for client-side databases becomes really compelling. In fact, let's lay the support for indexedDB on top of this screen shot to see what the landscape looks like for client-side databases in general. Here the indexedDB support is outlined, which brings the database support to virtually every modern browser, although IE is still a sticking point. So while supporting a fallback or polyfill will introduce some complexity and perhaps some redundancy to your code, if you really need a database on the client, falling back to web SQL may be one viable path for you to take.
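The progressive enhancement idea above can be sketched as a simple feature check. This is an illustrative sketch, not code from the course; the `chooseStorage` function and the strategy names are made up for the example:

```javascript
// Sketch: pick a client-side storage strategy based on what the browser supports.
// env is whatever global holds the storage APIs (window in a browser).
function chooseStorage(env) {
  if (env.indexedDB) {
    return 'indexeddb';    // preferred: asynchronous object database
  }
  if (env.openDatabase) {
    return 'websql';       // fallback: deprecated, but covers older Safari
  }
  if (env.localStorage) {
    return 'localstorage'; // small key/value data only
  }
  return 'server';         // last resort: AJAX to a server-side database
}

// In a browser you would call chooseStorage(window).
```

A polyfill library automates this kind of decision for you, but the underlying check is the same.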
Now there are a handful of issues that you need to be aware of as you delve into indexedDB development. And the first one is kind of a big deal: there's no "like" operator in indexedDB. This means that building wildcard ad hoc queries, perhaps like you might be used to doing on the server, is not possible in indexedDB. Now that doesn't mean that you can't still continue to refine results after you get a set from the database, but as far as the API of the database itself is concerned you're basically left with doing searches against existing indexes in the database. As you'll soon see, the API for indexedDB, while straightforward, is based off of pervasive transaction scopes and requests for every asynchronous operation. This can test your patience at times in that you can't just code linearly; you have to make sure you account for all possible callbacks, which include success states as well as errors and aborted requests and the like. Near the end of this module I'll demonstrate to you one approach to simplify working with the database. While this may not necessarily be the best way, you'll be able to clearly see the dividing lines between separations of concerns in dealing with the database, as well as learn some techniques on how to create a simpler interface to the database. Now we've discussed this at length, but I'd be remiss not to include browser support as a caveat to using indexedDB. You'll have to decide when it makes sense for you to use it. And lastly, this is an issue that doesn't seem to be brought up very often, but when you start populating and using a database you can potentially load a significant amount of data onto the user's machine. But what happens if your user's niece, little Suzy Sparkles, accidentally changes the default browser on the machine? Even though the user is at the same website, data entered into the database won't be there because it's in the other browser.
The higher level issue here is that you need to build into your development strategy ways to compensate for expected data not being present. And changing browsers is just one instance of when that can happen. Okay, I think we've covered enough of the high-level information, so let's go ahead and get started on the demos.
In this module you learned about the foundations of indexedDB. You learned about the transactional nature of the database, seeing how you can store objects without serialization. You learned about the purpose of the event lifecycle, and saw how with versioning you can manage change sets that allow you to maintain the structure of your database. Coming up in the next module you'll learn to initialize, create, and delete databases, as well as implement the basics by handling create, read, update, and delete, or CRUD, operations.
IndexedDB: Initialization & CRUD
Welcome back to Pluralsight's HTML5 Web Storage, IndexedDB and File System. This is Craig Shoemaker. Now with the foundation set we can turn our attention to some of the basics of working with any database. And that's implementing CRUD operations, but before we do that let's take a look at what it takes to initialize an indexedDB database.
Demo: Opening a Database
The first demo here covers the basics that you need to know in order to begin working with an indexedDB database. Now the first thing that you'll notice here is that I'm creating a variable called openRequest. Remember, like I said, much of what you do with indexedDB is surrounded by requests. So I suggest that as you're naming your variables you name them based off of what you're doing, with a Request suffix, so that as you have different requests in context within your code you can easily see which one is which. So then from here I'm going to window.indexedDB and calling the open function. Now whether or not your database is created and configured, you still call the open function in order to have access to it. By calling open you pass in the name of the database you're attempting to open and the version number that you're attempting to open. And right now this database is not created within the browser, so as I call open it will go in and fire the upgradeneeded event. Now once you have your request created you want to make sure that you subscribe to the different events that are necessary in order to handle just about any condition that may happen. So you notice here I'm creating an implementation for onupgradeneeded; we'll come back to that in a moment. I'm also making sure that I'm dealing with onerror and onblocked, and I'm setting those equal to my log function, which will take the event args and log them out to the RESULT pane. And then also onsuccess. So let's go back up to upgradeneeded. Remember, this is the event where you can create data stores within your database. So as this executes I can take a look at the event args and then access target.result, and that's the new version of the database. At this point I can take a look at the new version and look at the objectStoreNames to see if it contains an object store called courses.
So if it does not have courses in the object store names then I can call newVersion.createObjectStore and pass in the name of the object store I'd like to create. Now we'll talk about keys more in just a little bit, but for right now what I'm doing here with the second parameter is passing in an object that says how I want the key to be handled within the object store. And in this case I'm saying that I want it to autoIncrement. So as I'm dealing with a key value pair this will auto increment that key value each time you add a new object into the object store. So by providing the name and the options which say how to handle the key, once you've executed the createObjectStore function that object store is now created. Now once again, if things go wrong for onerror, or if the database happens to be blocked, I'll just log out any of that information to the RESULT pane. And if you remember the event lifecycle, what will happen is that onupgradeneeded will first run in order to set up the database and then it will fire onsuccess. So here it'll say the database is open, and then I can get a reference to the open instance of the database by going to the event args, looking at the target, and accessing the result. Now as this comment says, I've declared initDB as a variable outside of the scope of this function so that when we go to delete the database, the function that's run under that tab will be able to see it. So let's run the code. First you'll see that it fired the Upgrade Needed function and then it says the database is open. So that's what happens if the database did not exist. If I run this once again, really what I'm doing is calling open, and then whichever events need to fire will happen after that. So when I run it again, notice it just says Database open. So that's a quick overview of basically the lowest common denominator that you need in order to open and set up a database.
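The open/upgrade flow the demo walks through looks roughly like this. A hedged sketch, not the demo's exact code: the database name `CourseDB` and the `log` callback are placeholders.

```javascript
// Sketch of the open/upgrade pattern described above. The indexedDB call
// only runs where the API exists (i.e. in a browser).
function openCoursesDb(onOpen, log) {
  // Name and version are illustrative; open() is called whether or not
  // the database already exists.
  var openRequest = indexedDB.open('CourseDB', 1);

  openRequest.onupgradeneeded = function (e) {
    var newVersion = e.target.result; // the database being created/upgraded
    if (!newVersion.objectStoreNames.contains('courses')) {
      // autoIncrement tells the browser to generate the key for each object
      newVersion.createObjectStore('courses', { autoIncrement: true });
    }
    log('Upgrade needed');
  };

  openRequest.onerror = log;
  openRequest.onblocked = log;

  openRequest.onsuccess = function (e) {
    log('Database open');
    onOpen(e.target.result); // the open database instance
  };
}

if (typeof indexedDB !== 'undefined') {
  openCoursesDb(function (db) { /* use db here */ }, console.log);
}
```

On the first run both onupgradeneeded and onsuccess fire; on subsequent runs only onsuccess does.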
Demo: Deleting a Database
Now deleting the database basically comes down to running the deleteDatabase function off of indexedDB. So as I run this you can see it's Closing the database, Attempting to delete the database, and then it's finally deleted. So right here at the top, first I'm accessing the reference to the open database that I had. You can't delete an open database, so you need to make sure that you close it first. Now you might think, like I did when I first encountered this, that this should be an asynchronous call to the database, since closing a database may take a certain amount of time, but it's not and in fact it returns immediately. The execution for closing the database happens on a separate thread within the browser. The connection doesn't get closed until all transactions against the database are complete, and so if you try to do anything on the database while the close operation is pending you'll get an error. So error trapping, error logging, and being aware of errors is very important throughout working with indexedDB. So next I'll just log out that I'm Attempting to delete the database, and then set up my deleteRequest here and call indexedDB.deleteDatabase, passing in the database name. And I set up my error and blocked handlers and again just log those out to the RESULT window. And then for onsuccess I can log out to say that the database is deleted. And for the final bit of the code here I just have some guard conditions, basically to say that if you've already created and opened the database, so as long as there's an instance of initDB to work with, then I can run all of this code. So if I Run this again. So as long as you make sure your database is closed, then you can delete it. Now in coming demos what I'll often do is delete the database and recreate it on page load so that it can always start with a fresh database; that way I know that the code samples will work as expected.
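Putting the close-then-delete steps together, here's a sketch along the lines of the demo. Again, `CourseDB` and `log` are illustrative names, and `initDB` stands in for the open database instance from the previous demo:

```javascript
// Sketch of the close-then-delete sequence. Browser-only; initDB is the
// open database instance obtained from a successful open request.
function deleteCoursesDb(initDB, log) {
  if (!initDB) {
    log('Open the database first'); // guard: nothing to delete yet
    return;
  }

  log('Closing the database');
  initDB.close(); // returns immediately; the connection actually closes
                  // once all pending transactions complete

  log('Attempting to delete the database');
  var deleteRequest = indexedDB.deleteDatabase('CourseDB');
  deleteRequest.onerror = log;
  deleteRequest.onblocked = log;
  deleteRequest.onsuccess = function () {
    log('Database deleted');
  };
}
```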
Demo: The db Model Object
This next demo shows you how to do some basic CRUD operations against an object store. Now the database that's created for this page is much like what you saw in the first demonstration. I've created a new database called CrudDB, and I created an object store within that called courses. The key in the courses store is set to autoIncrement. Now as you can probably tell in working with the database, and as I've been alluding to, the asynchronous API kind of creates a lot of code. So right here I'm beginning to work with what I call a db model. I want to be able to model the database in an object and provide some ways to help streamline the code a little bit, so that as we go through some of these samples you're not lost in all the ceremony that's needed in order to create a database, create a store, and do all the basic actions, and we can focus more on the task at hand. So what I've done here is I've created an object and I've called it db. Again, the purpose of this object is to model the current state of the database. So I've got the name here. The reason I'm doing this is because any time I need to access that database name I want to be able to get rid of stray strings or magic strings lying around in the code. So by placing it inside this object, anytime I need the database name I can access it through the object and I don't have to keep typing the string over and over again. The same type of thing with the version. So by creating an object and tucking the version number into that object, if I now need to change versions for the database I can do that all in one spot. This is a placeholder for the instance of the database once it's opened. So now when the success event fires I can go to the db object and set the instance property to the open database. And along the same lines, it becomes even more important as the number of stores within your database begins to grow.
So here I've created a nested object for all the store names within the database, and so by calling db.storeNames.courses I can get to the name of that store. As we begin to use the object, the value of it will become more clear if you're not able to see that right now. Now the other thing that I want to do is to make sure that for a number of the different events, whether there's an error, the request is blocked, or something else goes wrong with the request, I have a default function that can handle what's going on. So here I've created a function called defaultErrorHandler, and as I call that it'll log out any error information to the RESULT window. Along with that I have a function here that allows me to set the defaultErrorHandler. Now the value that this will give me is that anytime I have a request, instead of having to code against each one of the events every time I use a request, I can call this function, and if the request has onerror it will be set to the defaultErrorHandler. If it has onblocked it will be set to the defaultErrorHandler as well. Again, the idea is just to clean up the code so that as we look at these different operations we're dealing with what it takes to create an object, read an object, update an object, and delete an object, rather than some of the ceremony that's required to make sure that we're handling all the cases within the code.
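A minimal version of the db model object described above might look like this. The exact shape in the demo may differ, but the idea is the same: one place for the name, version, store names, and a shared error handler.

```javascript
// Sketch of the db model object: keeps magic strings in one place and
// provides a shared handler for request failure events.
var db = {
  name: 'CrudDB',
  version: 1,
  instance: null, // set when the open request succeeds
  storeNames: {
    courses: 'courses'
  },
  defaultErrorHandler: function (e) {
    // the demos log to the RESULT pane; here we just use the console
    console.log('Error: ', e);
  },
  setDefaultErrorHandlers: function (request) {
    // Wire every failure-style event a request exposes to the one shared
    // handler, so each operation doesn't repeat this ceremony.
    if ('onerror' in request) request.onerror = db.defaultErrorHandler;
    if ('onblocked' in request) request.onblocked = db.defaultErrorHandler;
    if ('onabort' in request) request.onabort = db.defaultErrorHandler;
  }
};
```

A caller then just does `db.setDefaultErrorHandlers(someRequest)` before wiring up its own onsuccess.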
Demo: Create Object (Insert)
Demo: Read Object
Demo: Update Object
Demo: Delete Object
Now deleting an object out of the store just requires that you create a delete request. So once again I'm creating the transaction the exact same way that I did last time. I have the readwrite transaction type and I'm creating the transaction scope against the courses store. Also, same as before, I'm getting a reference to my objectStore within the transaction scope by calling transaction.objectStore and passing in the name of the store. Also the same way for getting the key out of the text box and converting it to a number. And so now I can call store.delete and pass in the key, which creates a deleteRequest for me. After I set my defaultErrorHandlers, when the success event fires I know that the course is deleted out of the object store. And then for the purposes of this demo I just clear out the value that's up in the text box. So I can Run this to delete the course, and if I go to read it, it had the key of 1, I Run it, it shows that a course with a key of 1 does not exist. And in fact if I go through to create a new course you'll see that the key is 2, key is 3, key is 4; I can read the item at key 4 and then also delete it and then try to read it again, and it is in fact out of the object store.
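The delete flow above can be sketched as a single function. The `db` parameter stands in for the model object described earlier in this module, and the names are illustrative:

```javascript
// Sketch of the delete-by-key flow: readwrite transaction, objectStore,
// then store.delete(key). Browser-only when actually invoked.
function deleteCourse(db, key, log) {
  var transaction = db.instance.transaction([db.storeNames.courses], 'readwrite');
  var store = transaction.objectStore(db.storeNames.courses);

  var deleteRequest = store.delete(Number(key)); // keys are numbers here
  db.setDefaultErrorHandlers(deleteRequest);

  deleteRequest.onsuccess = function () {
    log('Course ' + key + ' deleted');
  };
}
```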
In this module you've learned to model the database to help keep your code clean and learned how to insert, read, update, and delete objects in an object store. In the next module we'll learn to take the next step and see how to manage large sets of data. Here you'll learn to use a cursor to list objects as well as explore strategies for filtering and sorting data found in the database.
IndexedDB: Cursors, Indexes and Ranges
By now working with indexedDB should start to feel a little more comfortable. This is Craig Shoemaker, and once again welcome back to Pluralsight's HTML5 Web Storage, IndexedDB and File System. In this module you'll learn to work with sets of data, as well as see how to perform searches and filter and sort results. So let's get started by discussing ways to simplify code by modeling the database itself.
Demo: db Model for Cursor, Index and Range Demos
Now as we get into the demo here for cursors, indexes, and ranges, notice what's happening as the page loads. The first thing that it does is create a database and then create the people store. It then automatically seeds that store with 100 people objects and makes that database open and ready for use. Now let's take a look at the db model over here. You'll notice that again I have the name of the database here, which is CursorDB, and the version of 1. And then the placeholder for the instance, where the open database is placed when that open request succeeds. Just like I had last time, I have my defaultErrorHandler, and I've also made sure that when I set that defaultErrorHandler I can look at each request that comes in, and if it has an onerror handler I provide an implementation for it, the defaultErrorHandler. And the same for blocked and abort. That way I'm handling all the conditions that can happen when I'm running cursors against just an object store or an index. Now in order for you to see what's happening as the new version of the database is being created, I've provided an excerpt here of what runs under the event that fires for onupgradeneeded. So just like you saw before, I set aside the new version, which is available through the event args at target.result, and I can take a look and see if the people object store is in that version. If it's not, I need to create the object store, so here I'm calling createObjectStore and passing in the string people in order to name it, and saying that the key is set up with autoIncrement set to true. Then once I have that store I can create an index within the store. What the index will do is create a way to do searches against the data that's not against the primary key of the object within the object store. So if I want to search against last name I'll create an index for it. Now the parameters that you pass in to createIndex are, first, to give it a name. So here I'm calling the index lastName.
The second parameter here is the path within your object, so although these look identical they kind of mean something different. Here, let's take a look at the data so you'll see what I mean. So you can see the two indexes that I've created on the store, and I want to create this index here for last name. So the name of the index is lastName, but the key path that it uses, that second parameter, says to look at the root object and then find the path. It's not deeply nested or anything for these purposes, but this lastName points to the name of the index and this lastName points to the path within the object on which the index is being created. Now I don't want this index to be set up as a unique constraint, so I set unique to false and then pass that in as an options object as the third parameter to createIndex. So I create the index for lastName, create the index for age, basically doing the same thing, and then I can seed the store with some data. Now if you're not familiar with mockJSON, I covered that in the course introduction. Basically what I'm doing here with this call to generateFromTemplate is creating 100 new people objects and setting them equal to an object array here called people. So then I can iterate through each one of those people and add them into the store. I have some buttons up here that will help you out as you want to test working with the code. So I can delete the database and then refresh the page, and it'll create those new objects because it's running the upgradeneeded event. So now that all of that's set up, the store is set up and there's data in the store, we can run a cursor against the store in order to extract a list of items from that store. And we'll do that next.
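The store and index creation described above can be sketched like this, with `newVersion` being the database handed to the upgradeneeded event (e.target.result). The function name is illustrative:

```javascript
// Sketch of the onupgradeneeded body: create the people store, then two
// non-unique indexes on it.
function createPeopleStore(newVersion) {
  if (!newVersion.objectStoreNames.contains('people')) {
    var store = newVersion.createObjectStore('people', { autoIncrement: true });

    // First argument names the index; second is the key path into each stored
    // object; unique: false allows duplicate last names and ages.
    store.createIndex('lastName', 'lastName', { unique: false });
    store.createIndex('age', 'age', { unique: false });
  }
}
```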
In order to extract a set of data from the database you need a cursor. Cursors in indexedDB work asynchronously and follow the same request pattern that you've seen for the other types of interactions with the database. So in order to request a set of data from an object store you execute a cursor against the store. As the cursor runs, the success event fires for each object which is extracted out of the database. In order to keep the cursor running, or in other words to get the next result in a series of results that are returned from the cursor, you need to call the continue function on the result of the cursor. Calling continue inside the success function will continue to return the next result in the series until there are no more results, and at that time the cursor ceases. So now let's take a look at an example of a cursor running against an object store.
Demo: Cursors - Selecting Sets of Data
Now that you've seen how a cursor works within indexedDB, let's see it working in action. So here I'll get my transaction by taking a look at the database instance and opening it up as a readonly transaction. Of course I get to the objectStore from the transaction scope and have that available here. And then I can open the cursor, which creates a cursorRequest, off of the store. Now I'll set my defaultErrorHandler against my cursorRequest and then handle the success condition. So each time there's a new record being retrieved out of the objectStore, onsuccess will fire. So the first thing that I want to do is create a placeholder for the person that's coming out of the object store. And I can take a look at the result by referencing e.target.result. So if the result does not equal null then I can go and extract the value out of the result, and that's the person that's being extracted out of the database. From there I have access to the key, which is separate from the data. Now you might be used to using O/R Mappers, or object relational mappers, on the server, where when you do an insert it updates, say, the person id in the database. Here this is a key value pair, so the key is separate from the value of the object, and you access it separately from the data itself. So to get access to the key I'm looking at result.key, and then I can get the values from my person object by looking at the object itself: person.firstName, person.lastName, person.age. All of this will fire every time there's a new object being extracted out of the store. And if I want to go to the next item, or allow the cursor to continue to run, I need to call result.continue. Once it no longer finds data within the store, it won't fire the success event and execution will cease. So when I press Run you can see that it lists out each one of the objects within the objectStore, all 100 of them.
So the key to the cursor is that you go to the store, you open the cursor, and then in the success handler you need to call result.continue in order to continue to get the results that come out of the objectStore.
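Put together, the cursor pattern looks roughly like this. A sketch, not the demo's exact code; `listPeople` and the `log` callback are illustrative names:

```javascript
// Sketch of the cursor pattern: open a cursor on the store, then call
// continue() in each success callback until the result is null.
function listPeople(db, log) {
  var transaction = db.transaction(['people'], 'readonly');
  var store = transaction.objectStore('people');
  var cursorRequest = store.openCursor();

  cursorRequest.onerror = log;
  cursorRequest.onsuccess = function (e) {
    var result = e.target.result;
    if (result !== null) {
      var person = result.value; // the stored object; key is separate
      log(result.key + ': ' + person.firstName + ' ' +
          person.lastName + ', age ' + person.age);
      result.continue();         // ask the cursor for the next object
    }
    // when result is null the cursor is exhausted and iteration stops
  };
}
```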
Demo: Indexes - Selecting Individual Objects
Now the cursor, the way we set it up in the last example, simply went to the store and grabbed all the data that's in the store, but if you want to narrow it down to specific values within the store then you'll need to search against that store. Now if you don't use an index you're stuck with dealing with the primary key value, as when I showed you the CRUD operations. So if you need to extract a particular item by its key that's very useful, but here what I want to do is be able to search for all of the people within my object store that have the last name of Anderson. So let's type that in, and when I Run this I get 1 record back. But if I go back over to the cursor and run it you can see that I've got Charles Anderson, I also have Larry Anderson, and there are also other Andersons in the data store. So first I want you to understand the concept of the index, which in this case returns only one value. And once you have a firm grasp on that we'll add ranges into the mix, which will bring back an entire list. So here let's look at the index individually. The first thing that I'm doing within this sample is extracting the last name that's typed into the input box. And then I just have a guard condition here that says you need to type in a last name. This should become fairly familiar to you by now: I'm creating a transaction here and it's a readonly type of transaction. Now one of the things that you can see through what I'm doing here is that you can begin to chain together the commands in dealing with indexedDB. Now I won't always recommend this, it's usually a good idea to separate things out line by line, but I did want to demonstrate to you that it is possible. So from the transaction I can access the objectStore. I'm looking at the objectStore of people, and then I go directly to index, and I'm looking for the lastName index. Now, from that index I can call get.
Now if this were a unique index and you just had a specific value you wanted to pass in, then just getting this single result back could be very useful. Here I'm working with lastName, and then I open up the getRequest. Of course I set my defaultErrorHandlers, and then I can take a look at e.target.result; if I have an instance of that person then I'll log it out to the RESULT window. So that's calling get on an index, which will match against the first value it finds within the index. So next let's take a look at passing ranges into the index so we can get a list back instead.
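The chained index lookup might be sketched like this; `findByLastName` is an illustrative name, and the chaining mirrors the demo's style:

```javascript
// Sketch of a single-object index lookup: chain from the transaction to the
// store to the index, then call get with the search value.
function findByLastName(db, lastName, log) {
  if (!lastName) {
    log('Please enter a last name'); // guard condition, as in the demo
    return;
  }

  var getRequest = db.transaction(['people'], 'readonly')
    .objectStore('people')
    .index('lastName')
    .get(lastName); // resolves to the FIRST match in the index

  getRequest.onerror = log;
  getRequest.onsuccess = function (e) {
    var person = e.target.result;
    if (person) {
      log(person.firstName + ' ' + person.lastName);
    }
  };
}
```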
Ranges are necessary when working with cursors in order to constrain the results brought back from the cursor request. Consider for a moment a dataset like this in an object store. Now there are a handful of ways you can pass a range into a cursor request in order to shape the results that come back in the cursor. First you could ask for an upper bound range. What an upper bound range does is place a constraint on the returned results so that all the data is returned up to the given bound. In this instance, defining a range with an upper bound of 28 would produce a result that included all the data but stops at the given value. So what you're seeing here is how the data would look if the cursor request was made against the age index, so that the boundary of the constraint is at the upper level and everything else, including the given value and below, is returned. Now bounds can either be called open or closed. In this case, by passing true as the second argument, the given value is not included in the result set. So the same data is returned as in the previous call to upperBound, but this time Heidi is not in the result set, since her age is 28 and that's the value that was passed into the function. This argument is called open, where a true value means that the given value is excluded from the result set. I find the labels of open and closed kind of confusing, so to help you remember, just think that the second argument is asking: should I exclude the given value from the results? The default value for this argument is false. Now lowerBound gives you the opposite of upperBound. Here, by calling lowerBound with 28, the result set goes from Heidi to Jason, where Heidi at age 28 is the lowest value in the result set, making it the lower bound. Again, by passing true as the second argument Heidi is removed from the set and just the greater values are returned. Remember, the argument's asking: shall I exclude the given value from the results?
If you want to specify a range that's bound at both the top and the bottom ends you use the bound function. Here you can create a range that is restricted with a lower bound at a value of 26 and an upper bound at 29. Calling the function like this produces a result of Dan through Mary, where the ages start at 26 and end at 29 and include the given values. Of course this function has options for open parameters as well, so you can decide if you want to include both values, just one, or neither of the values that you pass into the bound. The third argument is the lower open parameter and the fourth argument is the upper open parameter. So calling the function with these values produces a list where any objects within the age range of 26-29 are returned, but the records equaling 26 are excluded. So depending on how you want to shape your results, you can change the open constraints for the upper and lower bounds of the range. The last range type allows you to specify a single value. The only range accepts a single value and returns a result made up of the set of objects that all match the given value. So in this case, if you're looking only for the people 27 years of age you can do that with the only range constraint. Okay, well now that you've had a chance to get a grasp of the concept behind ranges, let's go ahead and put them into practice.
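To make the open/closed semantics concrete, here's the boundary logic expressed as a plain predicate. This is not the IDBKeyRange API itself, just a sketch of its inclusion rules under the "open means exclude" convention described above:

```javascript
// Sketch of range membership: lowerOpen/upperOpen true means "exclude the
// given boundary value", matching IDBKeyRange's open flags.
function inBound(value, lower, upper, lowerOpen, upperOpen) {
  if (lower !== undefined) {
    if (lowerOpen ? value <= lower : value < lower) return false;
  }
  if (upper !== undefined) {
    if (upperOpen ? value >= upper : value > upper) return false;
  }
  return true;
}

// upperBound(28)        -> everything up to and including 28
// upperBound(28, true)  -> strictly below 28 (Heidi, age 28, excluded)
// bound(26, 29)         -> ages 26 through 29 inclusive
// only(27)              -> behaves like bound(27, 27) with both ends closed
```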
Demo: Numeric Range
So now let's start taking a look at using ranges. Let's start off by entering 40 for the range and running it to see what happens. You notice here that this is running an upper bound range, so it's starting at the lowest age in the objectStore, which at this point is 19, and going all the way up to 40. Now I can also pass in a range to this. So let's say I wanted to look at everyone between, let's say, 38 and 40. I can Run that and now I have all the records between 38 and 40. So let's take a look at the code now. The first thing that I'm doing is looking at the input box, grabbing the value, and splitting that value based off of the dash. So at that point, if I have just a single number then I'll have an array with a single value in it; otherwise, if I have something like this, I'll have an array with two values in it. I'll initialize startAge and endAge to 0, and then I can convert whatever's found at the index 0 position of my parts array and call that the startAge. Now if I've passed in an ending age then I can set endAge equal to parts at index 1, converting that to a number. Now I know that the values that I've generated for this data store start at 18, so here I just have a guard condition that makes sure that the startAge is greater than 18. Next I'll create my transaction, much like you're used to seeing me do in the other demos; this is a readonly transaction type. Then I can get a reference to the index by going to the transaction, locating the objectStore, and then calling the index function, passing in the name, in order to get the index that I'm working with. Now I'm creating this range variable which will hold the range that I'm using. So if I'm dealing with a startAge and an endAge then I want to use a bound key range; otherwise I'll just do an upperBound. And then I can uncomment this so you can see how this changes as we switch to lowerBound, and decide whether or not to include the given value in the results.
So after running through here I'll either have a range that's bound or upperBound, depending on whether or not I have a start and end age. Then from there I can create a cursorRequest. So I go to the index, say openCursor, and pass in the range that I'm looking for. Of course I set the defaultErrorHandlers, and then onsuccess runs the cursor just like you've seen in the past, firing each time a result is extracted from the objectStore. So again I pass in, say, 25 and Run that. Then I get all the values from the beginning up to 25, and I've set this to include it, so it looks like there's not a person who's the age of 25. So let's put it all the way up to 30. You see here it skips from 24 to 26, so I get everything from the beginning all the way up to 30. Alright, let's change that to a lower bound now. We'll refresh the page, type in 30, come to the numeric range, and you'll see now I've got a lowerBound and I'm saying not to include the given value in the result set. And so here I'm starting with 30, and I get 31, since that's at the lowerBound, all the way up to 85. And that's how you can work with a numeric range inside a cursor.
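To make the pattern from this demo concrete, here's a minimal sketch of the numeric range query. The store and index names (people, age) follow the demo's narration; the helper names and the exact wiring are my own assumptions for illustration, not the course's actual source.

```javascript
// Pure helper: "38-40" -> { startAge: 38, endAge: 40 }; "40" -> { startAge: 40, endAge: 0 }
function parseAgeRange(input) {
  var parts = input.split("-");
  var startAge = Number(parts[0]) || 0;
  var endAge = parts.length > 1 ? (Number(parts[1]) || 0) : 0;
  return { startAge: startAge, endAge: endAge };
}

// Choose a key range: bound when both ends are given, otherwise upperBound.
function buildAgeRange(startAge, endAge) {
  if (endAge > 0) {
    return IDBKeyRange.bound(startAge, endAge);     // inclusive by default
  }
  return IDBKeyRange.upperBound(startAge);          // everything up to startAge
  // return IDBKeyRange.lowerBound(startAge, true); // startAge upward, excluding startAge
}

function listByAge(db, input, log) {
  var ages = parseAgeRange(input);
  if (ages.startAge < 18) { return; }               // guard: the generated data starts at 18

  var index = db.transaction("people", "readonly")
                .objectStore("people")
                .index("age");

  var cursorRequest = index.openCursor(buildAgeRange(ages.startAge, ages.endAge));
  cursorRequest.onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {                                   // fires once per matching record
      log(cursor.value.firstName + " " + cursor.value.lastName + ", " + cursor.value.age);
      cursor.continue();
    }
  };
}
```

Swapping the commented line in `buildAgeRange` shows the lowerBound behavior from the demo, including the second argument that excludes the boundary value from the results.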
Demo: String Range
Next I'll show you how to use the only range. Here I'm using it with a string, but you can obviously use it with numeric values as well. So let's add in a last name; here I'll use Walker. And now when I run the range against the cursor I'm getting only the records that match the last name of Walker. So here I'm extracting the last name out of the text box, and then I have a guard condition to make sure I have a last name. I'm getting the transaction, again a readonly transaction type, and then I can get access to the index by going into the transaction, into the objectStore, and then calling out the index by name. Now I can create a range here by saying I want an only range and passing in the last name that'll be used as the constraint for the cursor. So now I have a range that I can pass into the cursor. And then I have the same pattern I've been using before: I have the cursor request, I set the defaultErrorHandler, and then as each object is extracted out of the objectStore I can log it into the RESULT pane like I've been doing in all the other demos. The important thing to remember here is that in order to get to the result you need to go to e.target.result. Once you know you have a value for that, then to get to the actual data you go to result.value. So that's the only range, but I have a few other ranges here that I've commented out. So let's switch between those and see how it affects the results. I'll refresh the page and you'll notice now I'm working with an upperBound, so I'll type in Walker once again. And when I Run this I get all of the objects in the object store that are seen through the lastName index. So notice how they're sorted alphabetically here, and since this is the upperBound I get everything all the way up until Walker. This is almost everybody in the store because W is at the end of the alphabet. Let's take a look at it with a lowerBound.
I'll refresh the page, type in Walker once again, and you can see here it is set for the lowerBound on the range. And now I have Walker all the way down through Young. So depending on how you have your indexes set up within your objectStore, providing ranges to the cursor that you open up gives you quite a bit of flexibility to filter and narrow down the objects that come back from your request.
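The string-range variants walked through here can be sketched like this. The store and index names follow the demo's narration; the function wrapper and the commented alternatives are illustrative assumptions.

```javascript
function listByLastName(db, lastName, log) {
  if (!lastName) { return; }                        // guard: need a last name to search on

  var index = db.transaction("people", "readonly")
                .objectStore("people")
                .index("lastName");

  var range = IDBKeyRange.only(lastName);           // exact matches only
  // var range = IDBKeyRange.upperBound(lastName);  // start of the index up to lastName
  // var range = IDBKeyRange.lowerBound(lastName);  // lastName down to the end of the index

  var cursorRequest = index.openCursor(range);
  cursorRequest.onsuccess = function (e) {
    var result = e.target.result;                   // the cursor lives at e.target.result
    if (result) {
      log(result.value.firstName + " " + result.value.lastName); // result.value is the data
      result.continue();
    }
  };
}
```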
Demo: Controlling Cursor Direction
Now once you begin working with ranges, oftentimes what you'll want to do is change the order of the data as it comes back in a cursor. And there's an argument you can pass in when you open up the cursor which allows you to do that. So for the last demo we took a look at a lowerBound range, so let's take a look at that same lowerBound range, except this time we'll change the direction to reverse it. And here, using the abbreviation prev for previous is what allows us to do that. So let me Run it. And you'll notice that before, we started off by having the listing from the cursor end at Young and start at Walker. Now we're going from Young to Walker. So it took the same range out of that cursor and reversed its order. Now next is the default direction, so if you want to reverse the order you can pass prev. Then nextunique and prevunique take a look at the key and return unique results based off of that key. So if you have more than one object with the same key, and in this instance I have a number of different Walkers, then by calling nextunique or prevunique you'll only get one result. So that's what it looks like with lowerBound. Just so you have an opportunity to see it, we'll constrict it down to just one name, so I'll switch the range to only. And now when I refresh the page I only get the Walkers, because I'm using the only range. And if I switch it back to next and refresh the page again, now by using next as my direction I get the opposite order based off of that range inside the cursor.
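Here's a minimal sketch of passing a direction when opening the cursor. Note that the API constant is spelled prevunique, even though it reads as "previous unique" in the narration; the store and index names follow the demo, and the wrapper function is an assumption.

```javascript
// Valid cursor directions per the IndexedDB API.
var CURSOR_DIRECTIONS = ["next", "nextunique", "prev", "prevunique"];

function isValidDirection(direction) {
  return CURSOR_DIRECTIONS.indexOf(direction) !== -1;
}

function listWalkers(db, direction, log) {
  if (!isValidDirection(direction)) { direction = "next"; } // fall back to the default

  var index = db.transaction("people", "readonly")
                .objectStore("people")
                .index("lastName");
  var range = IDBKeyRange.lowerBound("Walker");

  // "prev" reverses the order; "nextunique"/"prevunique" return one result
  // per distinct key, even when several Walkers share the same key.
  var cursorRequest = index.openCursor(range, direction);
  cursorRequest.onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
      log(cursor.value.firstName + " " + cursor.value.lastName);
      cursor.continue();
    }
  };
}
```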
In this module you learned to model the database to help keep your code clean and learned how to insert, read, update, and delete objects in an object store. In the next module we'll learn to take the next step and see how to manage large sets of data. Here you'll learn to use a cursor to list objects as well as explore strategies for filtering and sorting data found in the database.
IndexedDB: Keys, Capacity, Performance and Versions
Welcome back to Pluralsight's HTML5 Web Storage, IndexedDB and File System. This is Craig Shoemaker and I'm glad you're here, because in this module we'll discuss some strategies for uniquely identifying data, as well as their implications, and we'll push the limits a bit by taking a look at the capacity and the performance of databases with large amounts of data stored in the browser. Lastly, you'll learn to use the versioning system to help maintain different versions of your database as they evolve. Alright, let's go ahead and get started.
Unique Identifier (Keys) Concepts
Demo: Creating Object Store Keys
So this demo shows you how to create object stores with the different key types. You'll notice over in the RESULT pane that I've created a number of different stores, each relating to a different type of key. And then I added 10 people objects into each store, and all of that is placed into the KeysDB database. Now the code that you'll find under the tabs is code that's run in the upgradeneeded event. As you've seen in previous code samples, usually I call the database that we're working with during the upgrade newVersion. And so I can go into the newVersion of that database, create an object store, give it a name, and then set up the key a certain way. So this is the autoIncrement key, which also happens to be the approach that I've used for just about every object store we've used so far in the course. So let's take a look at the data that's placed into a store that's set up with autoIncrement. Here we can look at IndexedDB, and here we're looking at the KeysDB database. So as we look at the AutoKey store, what you'll notice is that the key for the object here is 1, but it's not found within any of the data points of the object that's stored within the object store. So if I'm extracting this data out of the database through some sort of index, or finding it any way other than by using this key, then I have no way to uniquely identify this object. The other way you can handle that is to create a keyPath. So here I've created an object store, called it EmailKey, and I'm saying that the keyPath is email. In other words, the email property of the object will define the uniqueness of the data. Let's take a look at it in the database. So here I have EmailKey, and you can see that the Key path is set to email. There's the value of the email address, and when I look in the object you'll notice that the value here matches what's over there.
So the advantage of this type of approach is that no matter how you get your data out of the database, if you know that the email is the item that creates uniqueness among your data, then you always have access to it. And you can extract the data out using the email. But what if there's no way to create uniqueness from the data within the objects that you store? Well, one of the ways you can handle that is by setting AutoKeyWithPath, and so you'll notice here I'm giving a keyPath of id and then I set autoIncrement to true. So this gives me the best of both worlds. I get that auto-incrementing nature that we get with just the regular autoIncrement, but then I can point it to a path by saying that this id will continue to auto-increment. Let's take a look at it in the store. So here I have the key of 1, and when I look at my object you see that I have an id with the value of 1; here a key of 2, id of 2. So every time you insert a new object into an object store that's set up like this, you get the key value, but it's also inserted into the object, because we've used both the keyPath and the autoIncrement option. Now you can go with something like this, or another approach you can use is to create a GUID that's generated on the client. And so here you could just set keyPath to something like clientId, and you see here that the key is that GUID value, but it's also available as part of the object through the clientId property. So whether you want to use an integer id or you want to create something like this, which gives you a chance to create uniqueness a different way, you can support either one of those options. In the end, though, what you want to do is find a way to have uniqueness that lives inside of your data, whether it's through a GUID, through an auto key with path, or some other means, so that no matter how you're extracting your data out of the database you always have a way to identify it again.
So these buttons up here just show what the data looks like depending on what type of key you set up. So here the key is 1, Richard Johnson, but this is just the auto key; that key is not a part of the data. Here the key comes back as the email address. AutoKeyWithPath looks the same in this view, but in this case the key is the key of the object and is also a member of the data. And for the GUID, here you can see those long GUID values coming back in the data as well. So it's important to pay attention to how you will key the data within your object stores.
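The four key strategies from this demo boil down to the options object passed to createObjectStore. A minimal sketch of the upgradeneeded handler; the first three store names come from the narration, while GuidKey and the wiring around the function are my own assumptions.

```javascript
// Runs during onupgradeneeded; newVersion is the database instance
// from e.target.result, as in the demo.
function createKeyStores(newVersion) {
  // 1. autoIncrement only: the key (1, 2, 3, ...) lives outside the stored object.
  newVersion.createObjectStore("AutoKey", { autoIncrement: true });

  // 2. keyPath only: the object's own email property defines uniqueness.
  newVersion.createObjectStore("EmailKey", { keyPath: "email" });

  // 3. Both: an auto-incrementing id that is also written into the object.
  newVersion.createObjectStore("AutoKeyWithPath", { keyPath: "id", autoIncrement: true });

  // 4. Client-generated GUID stored on the object's clientId property.
  newVersion.createObjectStore("GuidKey", { keyPath: "clientId" });
}

// Usage sketch:
// var openRequest = indexedDB.open("KeysDB", 1);
// openRequest.onupgradeneeded = function (e) { createKeyStores(e.target.result); };
```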
Demo: Loading 500k Objects into a Database
This next demo is less about the code and more about giving you an opportunity to see and experiment with a data store that has a lot of data in it. If you'll notice down here, what I've done is loaded 500,000 people objects into the people store. Now if you have a chance to download and work with the samples for this course, make sure you take this message up here to heart. It may take a while for that data to be loaded into your browser and for the database to become open and ready. After you get past that hurdle, though, the database actually responds pretty well. So let's start off by taking a look at the data so you can see for yourself what it's like. If we come in and take a look at the performance database, you'll notice that I have the people store and two indexes set up, one for lastName and one for age. So this is much like the type of store we were looking at when we looked at cursors and indexes. The only difference is that now this store has 500,000 people in it. So if we come in, you can see that it's a very basic object footprint; all I have in there is age, firstName, and lastName, and I've set it up to be an auto key. So again, it's not that complex of an object. But when we go and begin to work with the database, let's put in a range here. Let's say that we want to look at everybody in an age range from 50 to 51. It starts the query, and you'll notice that there are over 15,000 objects returned as the result, and then what I do is show you the first five and the last five in the set. There's no need to really print out all 15,000 to the screen. So that was arguably a pretty quick response from querying that many records. We could also take a look and see what it looks like to open it up a little bit more. So let's say 50-53. Here I'm up to almost 30,000 records being returned, and again, depending on your needs, I would say that was a pretty quick response. But that's a numeric value; let's take a look at a string. Let's use Walker once again, and Run that.
Again, about 15,000 objects as a result, and here are the first 10 Walkers. And let's try Anderson. Again about 15,000 results, but this time 15,800. So, I don't know, for my money I think that's a pretty fast response for half a million records, but ultimately you'll have to do some benchmarking and decide if that performs well enough for you to work with a database of that size. Now before I explain all the code here to you, and most of it is fairly familiar, there's one new thing I'm doing: allowing the script to know when it's done running through that cursor, in order to be able to return a message here that says there are 15,847 objects returned in the result. Remember, in earlier demos what I've done is written out to the RESULT pane within the cursor, within the success event of the cursor, but here what I want to do is make sure that the query is completely finished and then write out to the RESULT pane. There's a specific approach you need to take in order to be able to separate out what happens in the data access code and what happens in the UI code. So let's talk about how you detect when a cursor is done.
Detecting When a Cursor is 'Done'
One of the issues that you quickly run into when dealing with cursors is the fact that there's no way for the request to signal to you that the cursor is done returning results. This can be an issue because in some instances you may want to draw a hard line between your data access code, yes even though everything is running in the browser, and your UI code. In situations like this what you want to do is send the request to the database for the data that you need and then get a single notification back once all the data is returned. So while there's no built-in way to detect when data retrieval is done, there is a technique that you can use to request a set of data and get back a single response with all the requested data in it. The way you detect when a request is done is to first request a count of the data in the set you're requesting. Then as the cursor is firing you keep a running total of how many iterations have elapsed. Once the iteration count equals the total count of records in the set then the full set has been extracted from the database. At that point you can then run a custom callback which exposes the full set of data from the database. So done is when the iteration count equals the cursor count. Now with this concept in mind let me show you what it looks like in code.
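The count-then-iterate technique described above can be sketched as a single function. The store and index names and the countRequestEventArgs naming follow the course's narration; the empty-set short circuit and the function signature are my own assumptions.

```javascript
function queryAll(db, storeName, indexName, range, done) {
  var index = db.transaction(storeName, "readonly")
                .objectStore(storeName)
                .index(indexName);
  var data = [];

  // Step 1: request the total count of objects in the range.
  var countRequest = index.count(range);
  countRequest.onsuccess = function (countRequestEventArgs) {
    var totalCount = countRequestEventArgs.target.result;
    if (totalCount === 0) { done(data); return; }   // nothing to iterate (my addition)

    // Step 2: open a cursor and accumulate until the counts match.
    var cursorRequest = index.openCursor(range);
    cursorRequest.onsuccess = function (e) {
      var result = e.target.result;
      if (result !== null) {
        data.push(result.value);
        if (data.length === totalCount) {
          done(data);           // "done" = iteration count equals the cursor count
        } else {
          result.continue();
        }
      }
    };
  };
}
```

Once all the rows have been collected, done receives the full array, so the UI code gets a single notification instead of one callback per record.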
Demo: Working with Large Sets of Data
Okay, let's start back over at our numeric range. Again, a lot of this code will be familiar from what was implemented in the last demo about ranges, but the first thing I want to do is take a look at the value that's placed into the input box and split it based off of a hyphen to figure out whether or not I have a range. The startAge will be at the 0 index of that array and the endAge will be at the 1 position of the array. Now I've got some guard conditions in here to make sure I enter numeric values. This is important, because if you try to pass non-numeric values into a numeric index you can have some problems. And then I just want to make sure that the data entered matches the data that I know is in the system. So just a few guard clauses, and then you also want to make sure, when you're working with your bounds, that the startAge is definitely less than the endAge. So if you do a range of, say, 25-19, that doesn't make sense, and I want to be able to say something about it so I don't get any errors. Now what I want to be able to do is run this done function when the cursor is done extracting all the data out of the database. When it is, it'll pass in the set of data, and then I can log out to the RESULT pane the total number of items in that result set. So that's just the string saying that's the number of objects returned by my query. And then I'll show the first 5; here, for the numeric range, a standard for loop that increments through five items and then stops. So I'm looking at the data, passing in my iterator, and then I can access firstName, lastName, and age. For the last 5 I just set the starting position of my loop's index to data.length - 5, and from there I can go to the end of the list. And again, what I'm doing is looking at the data, passing in the iterator value, and looking at firstName, lastName, and age as well. So that's what happens when I'm done getting all the data out of the database.
So you can think of this as logically separated UI code, and what's down here is logically separated database code. Of course I get my transaction the same way we've done in the past; this is a readonly transaction. And I want to get at my object store by going to the transaction, calling out the object store of people, and I also need to take a look at the age index, so I can get that by calling the index function off of my object store. Then I'll build up my range. If I have an endAge, then I know I have a bound range that I'm working with, so I can pass in the startAge and the endAge. Otherwise I'll just work with an upperBound of startAge. Then I log out to the RESULT pane that I'm starting the query. And I'll begin by creating an empty array to act as the container for the data that comes out of the database. Now, if you remember from the slides, the first thing I want to do is request the count of all the items in the set I'm looking to get. So as I pass the range into index.count, that creates a count request for me. Now of course I've got to set those defaultErrorHandlers for the count. And then once I have a successful response from that count I can do something with it. Notice I'm calling this countRequestEventArgs, so I can get to the total count of items from the event args' target.result. At that point I can create a cursor request and open up another cursor off the index, passing in my range. So here I'm calling openCursor, and up here I called count in order to get the count. So now I'm opening the cursor in order to iterate through each of the items in my result set, setting that defaultErrorHandler, and then each time I have a successful response back through that cursor I can take the data out of the cursor and push it into the array that will hold the final results.
So here I create a placeholder for the person that's coming in from this firing of onsuccess, and I also want to take a look at my result. If result does not equal null then I can access its value, that value will be the person, and from there I can go to my array and push in a new instance of the person. And if the length of the array that I'm putting all the data into equals the total count that I got from the count request, then I know I'm done. At that point I can call the done function and pass in the data, and in that way I'm separating out my data access logic from my UI logic. Now it's not a clean separation at this point; of course the data access logic knows about this function that's being called, but it's a much better separation than what we've had in the past. And if we're not done, then we'll just call result.continue, which allows the iteration to continue in order to extract the rest of the data from the database until it hits the end, which is data.length === totalCount. Now the string demo over here is the exact same approach; the only difference is that I'm going against the lastName index, and I have an only range against the lastName, but the pattern is the same. I start off with a countRequest and get the totalCount, and then after I have the totalCount I open up the cursor and check whether the data length is equal to the total count. At that point I'm done; otherwise I can continue. So this gives you an effective way of being able to tell when a cursor is finished.
Demo: Managing Database Versions
Now as you work with your databases they'll begin to evolve. You'll need to add stores, you'll need to change indexes, you'll need to make basic, fundamental changes to the database. And in order to do that you'll need to maintain different versions of the database. So you'll notice here I have my db model object. I have the name of the database I'm working with, which I'm calling versionsDB, and then I'm setting my version value to null. I'm just starting off as null, and then we'll go through the progression of the different versions of the database here in a moment. Again, I have the place where I'll set the open instance of the database into this instance property, and then the defaultErrorHandler, which will just log out any errors to the RESULT pane for me. And then my setDefaultErrorHandlers, which you've seen a number of times before, just subscribes to each one of these events so that if something happens I can know about it. Alright, let's take a look at version 1 of the database. What I'm doing here is setting my db model's version = 1, so when I create an open request for the database I pass in the name and then the version. So in this case it's version 1. I set my defaultErrorHandlers for that request, and then onupgradeneeded will fire and I can work with the database at that point. So here I just have a little message saying that I'm upgrading to the newVersion. Let's Run this and you can see it working. There you see the message Upgrading to version: 1, Creating Employee store, and then a message saying that the database is open and ready. So I set the store name here to Employees, and just like you've seen in the past, I'm looking at the newVersion of the database. I want to make sure that the object store names do not include the store name that I'm trying to write, in this case Employees. If it's not there, then what I'll do is create a new object store.
At that point I pass in the store name and I give the keyPath of clientId for my store. So really all I'm doing at this point is just creating a new object store. And then of course onsuccess, when that happens, I can set the instance of my db model to e.target.result, and that's the current open instance of the database. So now when we go to version 2, I'll Run this and you'll see what it did: it created the Widgets store, it added an index named email to the Employees store, and then it has the open version of the database. Now anytime you're working with a database, if you delete it or you're going to reopen it, you want to make sure that the current instance is closed. So if this page had loaded for the first time and I wanted to open it under a different version number, I wouldn't need to worry about closing it, but since I want you to be able to run these consecutively I need to make sure to close that instance. So in order to work with version 2, here I'm setting db.version = 2, and I'm passing in the database name and the database version. And then I have an openRequest at this point. So now that I'm working with a new version of the database, I can look in and check for the Widgets store; I'm setting Widgets equal to storeName here, and if that is not in my current version then I can create that object store and just set autoIncrement to true on it. And then I'll look at Employees, and this time I need to make sure that the newVersion's object store names contains, instead of does not contain, the Employees store. And if it does, I can access the store within the current transaction scope, and I do that by taking a look at e.currentTarget.transaction.objectStore and passing in the name of the store, which gives me the object store within the transaction scope. Then from there I can take the store and create an index on it.
And again, here this is the name of the index, this is the path of the property that I want to index, and then I need to say whether or not I want it to be a unique index. So that sets up version 2, and of course onsuccess, and then I grab the instance of the database. Now if I go back to version 1 and try to open that, I get an openRequest error. Let's take a look at this in developer tools. If you take a look here, you'll see an error that says The requested version 1 is less than the existing version 2. So once you upgrade to a higher version you can't go back, and that's a good thing. Alright, so let's try going over to version 3. So I just upgraded to version 3, I deleted the Widgets store, and now the database is open and ready. Here I'm setting version 3; onupgradeneeded fired because I changed the version number from what it used to be, and what I did was just go in and delete an object store called Widgets. Once that was done and the upgradeneeded event completed, it ran onsuccess, and then I get that instance of the current version of the database. So now if I want to delete the database I can just go in and run indexedDB.deleteDatabase, and once that delRequest is successful I want to make sure I set my db instance to null so that I know it's been deleted. So I'll Run this, and now that it's deleted we can just go through the motions here, trying changing the versions, going back a version, upgrading to version 3, and running through the whole cycle again. So versioning a database is the way you manage change sets to the structure of the database, which is basically the stores and indexes that you have defined within a database.
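The version-2 upgrade described above can be sketched as a standalone handler. The database, store, and index names come from the demo's narration; pulling the handler out into a named function and the unique: false choice are illustrative assumptions.

```javascript
function upgradeToVersion2(e) {
  var newVersion = e.target.result;

  // Add the Widgets store if this upgrade hasn't created it yet.
  if (!newVersion.objectStoreNames.contains("Widgets")) {
    newVersion.createObjectStore("Widgets", { autoIncrement: true });
  }

  // Existing stores are reached through the upgrade transaction scope.
  if (newVersion.objectStoreNames.contains("Employees")) {
    var store = e.currentTarget.transaction.objectStore("Employees");
    store.createIndex("email", "email", { unique: false }); // name, keyPath, options
  }
}

// Usage sketch:
// var openRequest = indexedDB.open("versionsDB", 2);
// openRequest.onupgradeneeded = upgradeToVersion2;
// openRequest.onsuccess = function (e) { dbModel.instance = e.target.result; };
```

Requesting a version lower than the one already stored (say, reopening at 1 after upgrading to 2) fails the open request rather than downgrading the database.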
Demo: Capacity Capabilities
The last demo I have prepared for you in this module is the capacity demo. Now I'm not introducing any new code concepts here, so what you'll see used here you've seen before. Really, the value I want you to get out of this demonstration is to see how much data you can load into IndexedDB. So notice over here I've set up the demo to be prepared for use. I deleted the database that I'm using here, CapacityDB, and then once it was deleted I created a new version. And so now a blank instance of CapacityDB is ready. As I run this, it's requesting a very large data file, 57 to 58MB, through AJAX, and then attempting to insert it into the database. There it was successful, and you can see that that object was added to the database. Now I've opened up the developer tools so we can keep track of the count, and the other thing I did was keep that data file in memory, so that as I press Run once again I don't have to make that long request; I can just keep trying to add it into the database. So I can keep pressing this button and attempting inserts, and it'll keep adding those large data files into the database. Again, this is more of an exercise for you to see how much data you can pump into an IndexedDB database. Be responsible, though; just because you can put a lot in there doesn't mean you necessarily should. I just want you to see what's possible. On the slides I showed you how I was able to very easily add 10 instances of this 57MB object into the database. Now different browsers handle the quotas in different ways; in Chrome it just writes that data to the database, while in other browsers the user will be prompted to give permission to go over the 50MB quota. But in the end, the way this works is that I have an insert function, and I'm going to the local database and calling the function createPersistentObject.
And if you recall from the clip where we created the local database module, createPersistentObject creates a new object with a clientId, giving it a new universally unique identifier, as well as properties for insert and modified dates. From there I can take the largeDataFileContents, which is that huge blob of data that comes back from an AJAX call, and set it equal to the data property on my object. From there I can call localDatabase.insert and pass in the name of the database that I'm writing to; here I'm using a db model to keep track of that. Then I pass in my object, and then the success function runs if everything works out the way it should. At that point I can log out to the RESULT window that the object was added to the database. And of course I set my defaultErrorHandler to catch any problems that might arise. Now for the AJAX request: first I take a look at my largeDataFileContents, and if that's null then I'll make the request out to my huge-file.txt file. Once I get a response back, I can take that response, store it in largeDataFileContents, and then call insert. Otherwise, if largeDataFileContents is not null, I can just call insert directly. And so that's the code needed to run this demo. Again, the idea here is just to show you how you can experiment with IndexedDB and push a significant amount of data into the database.
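The fetch-once, insert-many pattern just described can be sketched like this. The localDatabase module and createPersistentObject are introduced in a later clip; the insert signature (db model, object, success callback) is inferred from the narration and may differ from the course's actual module.

```javascript
// Cache the big payload so repeated Runs skip the download.
var largeDataFileContents = null;

function insertLargeObject(localDatabase, dbModel, log) {
  var obj = localDatabase.createPersistentObject(); // clientId + insert/modified dates
  obj.data = largeDataFileContents;                 // attach the big blob of text
  localDatabase.insert(dbModel, obj, function () {
    log("Object added to the database");
  });
}

function run(localDatabase, dbModel, log) {
  if (largeDataFileContents === null) {
    var xhr = new XMLHttpRequest();                 // first Run: fetch huge-file.txt once
    xhr.open("GET", "huge-file.txt");
    xhr.onload = function () {
      largeDataFileContents = xhr.responseText;
      insertLargeObject(localDatabase, dbModel, log);
    };
    xhr.send();
  } else {
    insertLargeObject(localDatabase, dbModel, log); // later Runs: reuse the cached payload
  }
}
```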
In this module you've learned different strategies for uniquely identifying data on the client, as well as on the server. You saw how the database performs with a ridiculous amount of data in it, and saw how to test the capacity limits of the database. In the next module you'll learn to take what you know about IndexedDB and see it in action with an integration sample, which implements a full create, edit, list, and delete screen all while building on an abstraction layer on top of native IndexedDB code.
IndexedDB: Abstractions & Implementing an Edit Screen
Demo: Introduction to the Homes List Screen
Demo: Homes List Markup
Demo: Homes List db Model
Demo: Abstracting IndexedDB - Error Handling
The local database module wraps up all the underlying API calls to IndexedDB. The idea behind this is to create an abstraction layer on top of IndexedDB, in order to create a simpler interface for working with the database. So I have two nested objects here that give me a chance to expose a number of different pieces of functionality. I've got some error handling logic, object creation, and then finally the general data access pieces. Basically, what I want to be able to do is open a database; in some instances I want to get all of the data within a data store, insert data into a data store, delete an object, update an object, and even get an object by its id. I also want to make it easy to delete a database, and you'll see how I use the error handling and object creation functions as we go along. So let's start off by looking at the error handling. Here I've created a nested object called _err. Now the way this works, this module is set up for you to pass in the publisher, or the function that you want to run if an error should happen. So if you notice down here in this function for publishError, it'll look to see if there's an error publisher defined. If so, it'll publish the error message to that publisher, and if not it'll just write a warning out to the console. The way you set the publisher is by calling this function, which simply takes the publisherCallback and sets it as the publisher. So now I can define a function up in the calling code that gives instructions for what to do if there's an error, call setPublisher to pass in that function, and then when an error is encountered the publishError function is called and whatever implementation I decided on upfront will run if an error happens. Now if I don't explicitly pass in an error handler or a fail function, then what will happen is I'll run the defaultErrorHandler. And when the defaultErrorHandler runs, it'll simply publish that error.
And the way I decide whether I'm running my customFailHandler or the defaultErrorHandler is by running this getFailHandler function. So you'll see it working in context in a moment, but what happens is that I can pass in the fail function that's being used by a particular operation. If that fail is undefined then I'll use the defaultErrorHandler, otherwise it returns the same function that was passed into it. So all this does is say: if I don't have a fail function defined, use the defaultErrorHandler. So now as I'm performing data access operations I can call setDefaultErrorHandler and pass in the request. Notice what will happen, and you've seen this in a lot of the code in the demos before, is that it'll take the request and hook into the different events for onerror, onblocked, onabort and so on. What it will do is call getFailHandler and take a look at the customFailHandler that I'm passing in. So if this is null it'll return the defaultErrorHandler, otherwise it'll pass back the reference to the function that's passed in here. And so that way if there's any sort of error or blockage, or the request is aborted, it'll run the customFailHandler function. So all of this handles the basic plumbing code of being able to handle errors. So if you don't want to have to pass in a fail function for asynchronous requests, you don't have to, and errors as they occur will be caught by the default handlers defined here within this object. Alright, now that you see how error handling is happening, let's dive into the data access functions in this module.
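A minimal sketch of the error-handling object described above, assuming the member names mentioned in the narration (setPublisher, publishError, getFailHandler, setDefaultErrorHandler); the module's actual implementation may differ:

```javascript
var _err = {
  publisherCallback: null,

  // Store the function to run whenever an error occurs.
  setPublisher: function (publisher) {
    this.publisherCallback = publisher;
  },

  // Route the message to the publisher if one is set,
  // otherwise fall back to a console warning.
  publishError: function (message) {
    if (this.publisherCallback) {
      this.publisherCallback(message);
    } else {
      console.warn(message);
    }
  },

  // Default handler used when a caller passes no fail callback.
  defaultErrorHandler: function (e) {
    _err.publishError(e);
  },

  // Return the custom fail handler if defined, else the default.
  getFailHandler: function (fail) {
    return (fail === undefined || fail === null)
      ? _err.defaultErrorHandler
      : fail;
  },

  // Wire one handler to every failure event of an IndexedDB request.
  setDefaultErrorHandler: function (request, customFailHandler) {
    var handler = _err.getFailHandler(customFailHandler);
    request.onerror = handler;
    request.onblocked = handler;
    request.onabort = handler;
  }
};
```

The calling code can then set the publisher once up front, and every data access function inherits that error behavior without needing its own fail callback.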
Demo: Abstracting IndexedDB - Delete and Open Database
Now that you've seen how we'll handle errors in the local database module, let's take a look at the data access functions. So right off the bat the first thing that I'm doing is creating a placeholder for the open instance of the database. So as the database is opened and that onsuccess event fires, we'll take a reference to that open database and set it here into this instance property. Now anytime we're working with the database we have to work within a transaction scope. And there are two types of transactions, they can either be readonly or readwrite. Here what I'm doing is nesting these strings into an object so that it works more like an enumeration. So I can simply call transactionTypes, readonly, or readwrite and that returns these strings. Now as you saw me working with the clientIds I had the function that creates unique identifiers. And this is a code sample that I got off of Stack Overflow, which if you follow this link you can read all about it there. This link here is to the specification for generating globally unique identifiers and so this is the implementation used in order to generate those UUIDs. So I'll be using this strategy in order to create the UUIDs for each one of my objects. Again you can read more about the code here. Now I have a function for createPersistentObject and what it will do is create a new object, so it's returning an object with a clientId, and I'm generating the value for that clientId, and then creating insertDate and modifiedDate properties on the object. So this is the base level of what I need in order to save an object to the database. Obviously as this object is returned other code will add more properties to it in order to save it down into IndexedDB, but this just gives me the basic stubs that I need in order to work with the persistent object. Now for the rest of the functions that you see here you'll notice a fairly standard signature on each one of the functions. 
So I'm taking in a databaseModel or objectStoreName and usually have success and fail callbacks. Some of them like insert and delete take slightly different arguments, but for the most part it's either the databaseModel or the objectStoreName that you're passing in and then having callbacks for success and fail. Now if you remember, the database model has the database name, the version number, and also the implementation of what to do when an upgrade is needed. So we'll use that when we get to open, but first let's take a look at deleteDatabase. Now the first thing that we're doing is looking at the current instance that we have within this module and looking to see if there's an instance there and if the close method is a part of that instance. If so the database is being closed and then I create a deleteRequest. Now this is basically the exact same code that you've seen before, here I'm calling indexedDB.deleteDatabase going into my database model, grabbing the name out of it, so this gives me the name of the database, and then I have a deleteRequest. Then I can set the defaultErrorHandler which works out whether or not I'm passing in an implementation for a fail callback here. If I pass something into this fail callback it will use it, otherwise it will use the default function that I've created, and then subscribe to the onerror, onabort, and onblocked events of my request. So all I need to worry about is the success condition. Now we're deleting a database so if that is a success I want to set the current instance to null and then pass my event args up to the caller. So that's deleting a database. To open a database, here I create an open request by going to window.indexedDB and then open. I'm bringing in my database model so I can get to databaseModel.name and databaseModel.version in order to open whatever the latest version of the database is. 
I'll set up the defaultErrorHandler and then for onupgradeneeded I'm pointing to the upgrade function of my database model. So whatever needs to be done in order to upgrade to the current version of that database is handled within the database model definition. Here all I'm concerned about is the success condition and if that works then I can go to db model, which is this module, and set the current instance equal to the open instance of this database. Once that's done I can call the success callback passing in the event args.
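Pulling this clip together, here's a hedged sketch of the module's object-creation and database-management pieces. The UUID generator is the widely shared Math.random-based pattern from Stack Overflow that the narration references (an assumption about the exact snippet), and hookErrors is a stand-in for the module's setDefaultErrorHandler:

```javascript
// Stand-in for the module's setDefaultErrorHandler from the previous clip.
function hookErrors(request, fail) {
  var handler = fail || function (e) { console.warn(e); };
  request.onerror = handler;
  request.onblocked = handler;
  request.onabort = handler;
}

// Nesting the strings in an object makes them work like an enumeration.
var transactionTypes = {
  readonly: "readonly",
  readwrite: "readwrite"
};

// Version-4-style UUID built from random hex digits (the common
// Stack Overflow pattern referenced in the narration).
function createUUID() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0;
    var v = c === "x" ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}

// The base level of what's needed to save an object to the database;
// callers add their own properties before saving.
function createPersistentObject() {
  return {
    clientId: createUUID(),
    insertDate: new Date(),
    modifiedDate: new Date()
  };
}

var db = {
  instance: null, // holds the open database once open() succeeds

  // Close any open connection, then delete the database named in the model.
  deleteDatabase: function (databaseModel, success, fail) {
    if (db.instance && db.instance.close) {
      db.instance.close();
    }
    var deleteRequest = window.indexedDB.deleteDatabase(databaseModel.name);
    hookErrors(deleteRequest, fail);
    deleteRequest.onsuccess = function (e) {
      db.instance = null; // nothing is open any more
      success(e);
    };
  },

  // Open (or create) the database; the model supplies the name, the
  // version, and what to do when an upgrade is needed.
  open: function (databaseModel, success, fail) {
    var openRequest = window.indexedDB.open(databaseModel.name, databaseModel.version);
    hookErrors(openRequest, fail);
    openRequest.onupgradeneeded = databaseModel.upgrade;
    openRequest.onsuccess = function (e) {
      db.instance = e.target.result; // keep the connection for later calls
      success(e);
    };
  }
};
```
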
Demo: Abstracting IndexedDB - Get All
Now once the database is open, often one of the first things that you'll want to do is get all the data out of a store. So in the context of my homes page when I list everything in the table on the right, I'm calling getAll in order to list each one of those objects. So this function takes the objectStoreName, a success callback, and an optional fail callback. So on this first line here what I'm doing is taking a look at the fail callback and calling getFailHandler. So if I'm passing in null what this will do is return the defaultFailHandler, otherwise it will return the function that I'm pointing to here. So either way, even though this is an optional parameter, I'll have something to call should errors arise while I'm running the getAll function. So first I want to take a look at the db.instance and fail fast: if I don't have an instance of my database right now, I'll report that you can't read data from a store when the database is not open, and send back up the store name. So if the instance of the database is equal to null then I'll just return at this point. Then what I can do, if everything is in working order, is get the transaction by going to db.instance.transaction and again I have the array of objectStoreNames. Now this module is set up only to work with one store at a time, so I'll only be passing in that one objectStoreName. And then I'll look at the transactionTypes and pass in readonly. If you recall from before this is an object that basically encapsulates the string of readonly, so I have IntelliSense working on my side and I can also make sure that I don't run into any silly typing mistakes by passing in the information this way. Then I can set the defaultErrorHandler of the transaction and I'll send in my fail function to be hooked up to any errors that might happen on that transaction. From there I can extract out the objectStore within the transaction context by passing in the given objectStoreName. 
At that point I have an instance of the store, then what I can do is create a countRequest off of the store. Now if you recall from my discussion about figuring out when a cursor is done, what I first need to do is take a look at the count of the set that I'm looking for. Once I have that totalCount, then as I open up the cursor and iterate through each item, when the totalCount equals the iteration count I know that the cursor is complete and I can return a final set of data. So with that count request I'll set the defaultErrorHandler, passing in the countRequest and my fail function, then create an empty array that will hold all the data that I'll send back up to the calling code. So when I have a success of the countRequest then I can get the totalCount out of the event args by going to countEventArgs.target.result. Once I have the count then I can open up the cursor on the store, which creates a cursor request, and make sure to take care of any errors that might happen. And then when it succeeds, remember this is the event that fires each time an object is extracted out of the database. So what I can do at this point is grab the result, and I'll get that out of cursorEventArgs.target.result, and if I have an instance of that result then I can access its value and that's the data item. So then I can take this data item and push it into the data array. This is the array that eventually gets sent back up to the calling code of all the data that was requested from the data store. So I'll push in each one of those items and then I can take a look at data.length to see if it equals the totalCount. If it does then I know I'm done, I can call the success callback, and send the data up with it. If not I'll tell the result to continue, which will move on to the next item that's being extracted out of the database. 
Finally if that result does equal null then I know that the objectStore is empty, so I'll call the success callback and I'll just be sending up the empty array at this point. So then the calling code can look at the length of the array to make sure that there are objects in the array to process. So that's the logic required in order to have a single request and a single callback for getting all of the objects out of a data store.
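The count-then-cursor logic described in this clip can be sketched roughly like this; the names follow the narration, and the small db stub stands in for the rest of the module:

```javascript
var db = { instance: null }; // set by the module's open() elsewhere

// Get every object out of a store: count first, then walk a cursor
// until the collected array length matches the count.
function getAll(objectStoreName, success, fail) {
  fail = fail || function (e) { console.warn(e); }; // default fail handler

  // Fail fast when the database has not been opened yet.
  if (db.instance === null) {
    fail("You can't read data from a store when the database is not open: " + objectStoreName);
    return;
  }

  var transaction = db.instance.transaction([objectStoreName], "readonly");
  transaction.onerror = fail;

  var store = transaction.objectStore(objectStoreName);
  var countRequest = store.count();
  countRequest.onerror = fail;

  var data = []; // eventually sent back up to the calling code

  countRequest.onsuccess = function (countEventArgs) {
    var totalCount = countEventArgs.target.result;
    var cursorRequest = store.openCursor();
    cursorRequest.onerror = fail;

    // Fires once for each object extracted out of the database.
    cursorRequest.onsuccess = function (cursorEventArgs) {
      var result = cursorEventArgs.target.result;
      if (result) {
        data.push(result.value);         // collect this item
        if (data.length === totalCount) {
          success(data);                 // cursor is finished
        } else {
          result.continue();             // advance to the next item
        }
      } else {
        success(data);                   // the store was empty
      }
    };
  };
}
```
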
Demo: Abstracting IndexedDB - Insert, Update and Delete
Demo: Homes List View Model
Demo: Stepping Through the Code
So now that you've had a chance to see all the code in different blocks, what I'd like to do now is show you it running altogether in the browser. So here I've set a breakpoint to stop at the jQuery load function and the first thing that I'm looking to do is to see if IndexedDB is supported within the browser. Since it is, it'll go through and apply the bindings through Knockout and then we can take a look at the initialization function. So here you can see me setting the publisher and then attempting to open the database. So just a refresher, the DBModel will have the name of the database, the version, information about the stores, as well as the implementation for the upgrade function. Upgrade won't run here because the database already exists. So here I'll just be able to open the database. So this will create the openRequest and then once it's open it'll fire the onsuccess event. So here e.target.result is the open instance of the database. And then I'll just pass the event arguments up to the success callback. So now that the database is open what I can do is go to the database and get all items out of the home store. So I'll step into there and I'll set up the transaction, of course this is a readonly transaction, and go through, access the store, and create a countRequest. I also want to make sure I set up an empty array that acts as the container for the data. And so once that countRequest comes back as a success, then I'll have a count and here right now I just have one item in the data store, but either way now that I have the total count I can open up the cursor. Here's the success event of the cursor firing and I can go through and take a look at the data that's coming back from the cursor. So there's the result coming through the cursor. And if I look at the value you'll see the data object right there. So if result is not null then I have my data item and I can put that into the array. 
And then if data.length equals the totalCount then I know I'm all done. So what gets placed into the event args of the success function here is the array of all the data that's in that store. So I can take that array and put it into my observable array here in order to begin binding to the table. So that's all the initialization steps for the page, let's go ahead and select this home here. So this is the select function. I'm able to extract out of that button element the GUID id, because it's placed into a data-attribute of the button element. Now I can query the home store based off of this key and return the matching object. So after setting up the transaction, the error handling, getting the store, I'll call get based off of the key that's passed in, and then I'll call the success callback based off of the onsuccess event that happens here. So once I have a successful condition there, I can take a look at the home, and if it's not null, then I can take all of the information out of the data object and put it into my observables here, which will update the UI. And of course since I'm selecting it I'm now editing an item, so I want to switch the button label to Update. So there's all the information, so let's make a change here, Plumb Avenue, call Update. This goes into the save function. If the clientId length is greater than 0 then I know I'm doing an update. So here I'm stepping into the update function of localDatabase and to do an update first you want to do a get and then after that you want to do a put. So I'm creating that getRequest and if that's successful then I have the original data coming out of the database there. So I'll set aside the original insertDate and create a new modifiedDate and then I can put all of that data, with the latest information, down into the data store. If that succeeds then I can send up the data along with the original event args back up to the caller. 
And then I'll update the UI by replacing the old version of that object with the new version of the object. And that keeps this table up here in sync with any changes made in the form over here. Now if I add a home (typing) this goes back to the save function once again, although I won't have a clientId this time. So now I'll drop down to the logic needed for an insert, I'll generate that clientId, and then be able to do an insert into homes with the data object of the new home here. So here the important parts are going in and setting the insertDate and modifiedDate and then sending it down into the store using an insertRequest. Once that succeeds the callback that's passed into this function will be run based off of the onsuccess event. So here once I have a successful insert, I have that new home with all the latest information that I can now push into the homes array, again so that Knockout keeps the UI updated. So there's the table rebound again. Then to delete an item, the code goes into delete, passing in the store name of homes and the selected key that came out of a data-attribute of this button right here. And the important part here is going to the store and calling delete based off of the key and then running the success callback when the onsuccess event fires. So once I have a successful delete I can take that object and remove it out of the observable array. So that's the entire progression of creating, inserting, updating, selecting, and deleting items out of an object store using IndexedDB.
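The get-then-put update path walked through above looks roughly like this. This is a sketch based on the narration, not the course's exact localDatabase code; the db stub stands in for the module's open connection:

```javascript
var db = { instance: null }; // set by the module's open() elsewhere

// Read the existing record, keep its insertDate, stamp a new modifiedDate,
// and put the merged object back into the store.
function update(objectStoreName, data, success, fail) {
  fail = fail || function (e) { console.warn(e); };

  var transaction = db.instance.transaction([objectStoreName], "readwrite");
  var store = transaction.objectStore(objectStoreName);

  var getRequest = store.get(data.clientId);
  getRequest.onerror = fail;
  getRequest.onsuccess = function (e) {
    var original = e.target.result;
    data.insertDate = original.insertDate; // preserve when it was first saved
    data.modifiedDate = new Date();        // record this change

    var putRequest = store.put(data);
    putRequest.onerror = fail;
    putRequest.onsuccess = function (eventArgs) {
      success(data, eventArgs); // hand the updated object back to the caller
    };
  };
}
```
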
Congratulations, this has been a densely packed module and along the way you've learned a lot. You've learned the basics of IndexedDB, including object stores, indexes, transactions, cursors, and even ordering results. You've also seen how you can create a separation of concerns between data access and UI code, how to create an abstraction layer on top of the raw IndexedDB API, and how to bring it all together in a full create, read, update, and delete screen. Coming up in the next module you'll learn to use the HTML5 file system API to create directories and files directly in the browser.
File System: Introduction, Concepts & Initialization
Hello and welcome back to Pluralsight's HTML5 Web Storage, IndexedDB and File System. This is Craig Shoemaker, and in this module you'll soon be acquainted with the client-side file system available through HTML5. So here we are now at the third main section of the course. And if you're starting here I'm excited about what you're about to learn, but note that it may be a good idea to check out the introductory module where I cover some overarching concepts that will serve you well as we continue on with the rest of the modules in this section. Alright, well let's begin by discussing the file system as a whole and see how you can initialize it for use.
What is the HTML5 File System?
The HTML5 file system is much like how you might think of the low-level file system on your computer. Here you have the ability to create, copy, move, and rename files and folders. When dealing with files you can read their contents, replace entire files, or even just parts of them. So now let's talk about a few of the features of a file system.
So let's take a peek at what you can expect in terms of capacity when using the file system. If you look at the specification for the file system you soon see that the W3C makes no recommendation about how much space should be allocated either to the file system as a whole, specific directories, or even a single file. It's up to the browser makers to decide how to deal with storage quotas. Now I have a capacity demo later on in the module available for you that attempts to write relatively large files into the file system. Here you can see how Chrome allows me to blissfully write some pretty large files into the file system. Now this probably won't be the same for all browsers, more on browser support in a minute. As we saw with IndexedDB, some browsers require a confirmation from the user to write files over 50MB to the system, but for now you can see there are 10 files, each about 58MB in size, in the file system, and if we look to see how much space is consumed from writing these files it's a little over 560MB. So you can store a significant amount of data in the file system. And of course I have to say it, please be careful: just because you can write that much data doesn't necessarily mean that you should.
Now there's no way around it, at the time of recording here in late 2013, support for the file system is sparse on desktop and mobile browsers. Here you can see from caniuse.com that Chrome and Opera are the only desktop browsers that feature support, only the most recent versions of BlackBerry, Opera Mobile, and Chrome for Android support it today, and the native Android browser will soon enjoy support. To stay up to date with the latest browser support stats, make sure to visit caniuse.com. Okay, well if that's all there is in terms of native support, what about fallbacks and polyfills?
Fallbacks and Polyfills
There are a number of different libraries available which attempt to pick up the slack where browsers are found deficient in terms of the file system. One in particular is a library called idb.filesystem by Eric Bidelman. As you can probably surmise from its name, this library attempts to polyfill the file system API and uses IndexedDB as the fallback persistence medium. And that can be great, but only as long as your users' browsers also happen to support IndexedDB. So there are a number of different libraries here and you can find them all in the all-in-one entirely-not-alphabetical guide to HTML5 fallbacks, which is available here at this bit.ly address.
Now as you approach file system development you should be aware of the following issues. Like I've already said, for now the browser support landscape is spotty at best, but don't let this stop you from learning the technology now. This API could prove to be a valuable asset in your development arsenal. As you'll soon see the API is asynchronous and very callback heavy. This can make your code look a bit cluttered at times. You need to account for success and failure callbacks, and in order to get a file you first have to open the directory and so on, so the callbacks tend to stack up fairly rapidly. As opposed to IndexedDB where nearly everything you do is in a transaction context, here there's no guarantee that your changes persist without corruption to the file system. Now you do have success and failure callbacks, but that's not exactly the same as being able to rely on atomic sets of changes. Once you write data into the file system there's no indexing of the content or any way for you to search the names or the contents of the files. So be aware, if you're saving data that requires heavy searching then you're probably in need of something more like IndexedDB instead. File and folder names are case sensitive. So you need to be careful and consistent in how you name and attempt to retrieve files and folders from the file system. And lastly, as I pointed out in the IndexedDB module, as you begin to store significant amounts of data in the browser there's a risk of your web application not being able to find what you or the users thought was previously stored data, if they happen to have changed browsers. So this is a signal that you need to work into your development strategy how to recover from the instance where the data you or the user expected to be there isn't.
Now we're just about ready to dive into the code samples, but first I want you to have a clear understanding of the difference in the storage types as you use the file system. There are two storage types you can write into, temporary and persistent. Applications using temporary storage are automatically granted access to the file system without requiring explicit approval from the user. So you can go ahead and write into temporary storage, but the only problem is that the browser can elect to delete the files saved there at any time. So if you have a transitory need for the file system, then requesting temporary access might do you well. For example, the demos used here in this course largely use temporary storage, so that if you run the code in your machine, any space that's taken up by the examples can be recollected automatically by the browser if the space is required. Applications that use persistent storage must wait for approval from the user to grant permission to write to the file system, but any files saved using a persistent request remain intact until either the application or the user removes the files. Okay well, with this in mind let's take a look at what it takes to initialize the browser's file system.
Demo: Initialization (Temporary Storage)
So this is the demo that shows you what's required in order to initialize the file system. And I'll show you both the temporary and persistent approaches. Now one of the things you'll first notice here at the top is that this is the first time that we're using vendor prefixes anywhere in this course. So again because the browser support is quite a bit spotty at this point, this vendor prefix will come in handy. So it's looking to fill in requestFileSystem, and if there's a native implementation of requestFileSystem it'll go ahead and use that. Otherwise it'll fall back to webkitRequestFileSystem. Now getting started with the file system is a bit of a two-step process. First what you need to do is request the quota, and once you've been given a quota size, then you can go in and request the file system itself. And so that's what's happening here. The first thing that I need to do is ask for a quota of a certain size. So I have this function calculateBytesByMegabytes in order to convert the number of megabytes that I want to use into bytes. Because the function that we're using to request the quota is looking for the size in bytes, but I like thinking in megabytes, it's just a whole lot easier. So this function here, calculateBytesByMegabytes, takes the number of megabytes that you're looking to use and multiplies that by 1,048,576, the number of bytes in a megabyte. And that's what will give me the size of the quota that I'm looking to use with the file system. So in order to request that quota you need to go to window.navigator and, again, another vendor prefix, webkitTemporaryStorage, and then request the quota. You pass in the size for the number of bytes that you need for the file system and then there's a success callback that runs that exposes the granted number of bytes that the browser is giving you. 
So here what's happening at this point is I can log that out, I can say Attempting to request the file system and then grantedBytes are this. So for my purposes here I'm saying if the grantedBytes equal the size that I originally asked for then I'll go ahead and try to open up the file system. So to call requestFileSystem you need to pass in a number of different parameters. First I'm using a constant here off of window and this points to either temporary or persistent storage. So notice that these match, here I'm passing in window.TEMPORARY into requestFileSystem and I'm also using webkitTemporaryStorage.requestQuota. When we go to do it for persistent storage this will change and this will change, but as I request the file system, I pass in the granted number of bytes that I'm looking to open up for the file system, and then it'll run the fileSystemSuccessHandler or the errorHandler. And the requestQuota has an error handler as well. So this function right here is the success function and then I have the error handler being passed in down here. So first let's take a look at the success handler for requesting the file system. If the file system is opened up, then the request has been granted and I have access to the file system itself. So if I Run this you can see it attempts to request the 2MB worth of space in the file system and then once the file system is initialized it returns back this object. Now let's take a look at that object for a moment. It returns a DOM file system object and inside of that I have the name, which shows http_localhost 1505 temporary. So this is the origin that it'll use in order to enforce the same origin policy on the data that's written into the file system. It also has reference to a directory entry and we'll use directory entry a lot as we begin talking about directories specifically coming up in the next section. 
Each directory entry has a reference to the filesystem, so this filesystem here is the same reference as this filesystem down here. And then you can see that the fullPath is the root. Now in Windows we're used to seeing paths delimited with backslashes, but here since we're working within the browser paths are delimited with forward slashes. So the single forward slash is the root of the file system. And then there are some Boolean flags for whether this is a directory or a file, so true for directory and false for file. The DirectoryEntry class inherits from an entry class so you'll get some commonality between directories and files based off of the base entry class. So that's kind of the happy path, everything went well, we were able to open up the file system. Let's take a look at the error handler for a moment. And there's quite a bit here to be able to tell us what went wrong if an error should occur. So what's passed to the error handler is the fileError, so you'll notice I'm starting off just by saying there was an error in initializing the file system. And then we can take a look at the fileError.code. And that maps to FileError and these codes, which are basically enumeration values, so NOT_FOUND_ERR is equal to 1. So among the many things that can happen when trying to access something in the file system: the file or directory may not be found, there could be a security error, an abort error, the file or directory may not be readable, there could be an encoding error for the file. Or perhaps modification is not allowed, it could be in an invalid state, a syntax error, an invalid modification, perhaps you've exceeded the quota, an invalid file type, or the file or directory already exists, and of course everybody's favorite, the unknown error. So if there's an error I just build up that message and log it out to the RESULT window. 
Everything went well for us so that wasn't necessary, but the fact of the matter is all of this code is essentially required for you to open up the file system and even begin working with it. Okay that was temporary, now let's take a quick look at opening up a persistent storage request.
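The two-step temporary initialization shown in this demo can be condensed into a sketch like the following; webkitTemporaryStorage and webkitRequestFileSystem are the prefixed Chrome-era APIs the demo relies on, and the wrapping function is my own arrangement rather than the demo's exact code:

```javascript
// The quota API works in bytes, so convert megabytes first.
function calculateBytesByMegabytes(megabytes) {
  return megabytes * 1048576; // 1,048,576 bytes in a megabyte
}

function initTemporary(megabytes, fileSystemSuccessHandler, errorHandler) {
  var size = calculateBytesByMegabytes(megabytes);
  // Vendor prefix: fall back to the webkit implementation when needed.
  var requestFS = window.requestFileSystem || window.webkitRequestFileSystem;

  // Step one: ask for a quota of the desired size.
  window.navigator.webkitTemporaryStorage.requestQuota(size, function (grantedBytes) {
    if (grantedBytes === size) {
      // Step two: open the file system against the granted quota.
      requestFS.call(window, window.TEMPORARY, grantedBytes,
        fileSystemSuccessHandler, errorHandler);
    }
  }, errorHandler);
}
```
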
Demo: Initialization (Permanent Storage)
Now when you're making your persistent request the code is really much the same, there's a little bit of variation, but for the most part it's the same. So here at the top I'm working out requestFileSystem using the vendor prefix in order to get the webkit version. Then I'll calculate the number of bytes by megabytes and that's the same function as I had before. So I'll ask for 2MB worth of space. Now as I come down and go to navigator I'm going to webkitPersistentStorage instead of temporary storage and then calling requestQuota. I'll pass in the size and again I get a success callback that has the grantedBytes. So here I'll log out the fact that I'm attempting to request based off of the grantedBytes. Now if the grantedBytes does not equal the size that I requested then the request for file system access is denied. Now technically you could get fewer bytes back even though the user may have approved it, but for the purposes of this demo I'm trying to keep it simple by just saying if it's not equal then the access is denied. So if these are equal then I can go down and requestFileSystem, this time using the constant of window.PERSISTENT rather than temporary, passing in the grantedBytes and the success handler and the error handler. And of course the error handler is passed in as well to requestQuota. So I'm doing the same thing for the success handlers and even the error handler here. So let me run this and you'll notice up at the top you can see I have this dialog that comes up that says that this website wants to permanently store data on the computer. Let's hit Cancel first. So then it says Attempting to request 0 bytes of file system space, file system access was denied. So since I said cancel, or no, the grantedBytes turned out to be 0. Now let's try and Run it again. This time I'll say OK, now it granted the quota and also had a successful request of the file system. We can look at the object here as well. 
So here's the DOM file system and again I have the same thing, a DirectoryEntry at the root of the filesystem, but here the origin is localhost 1550 persistent. So if I put something at the root here in persistent storage and put another file at the root of temporary storage, even though it's running in the same browser that's two separate locations. And once you've granted access to the file system I can Run this again and you'll notice that it's not asking me to grant access once again, you only need to do that once, as long as you're within the same quota size. So this is quite a bit of code that's needed each time that we'll use the file system. So in the next clip I'll clean that up a little bit and make it easier, so we can just focus on the specific tasks at hand.
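The persistent request can be sketched like this; note the webkitPersistentStorage object, the window.PERSISTENT constant, and the demo's simplification of treating anything less than the full grant as a denial. The wrapping function is my own arrangement rather than the demo's exact code:

```javascript
function initPersistent(megabytes, fileSystemSuccessHandler, errorHandler) {
  var size = megabytes * 1048576; // megabytes to bytes
  var requestFS = window.requestFileSystem || window.webkitRequestFileSystem;

  // Persistent storage prompts the user; getting back 0 bytes means the
  // request was denied (this demo treats anything short of the full
  // size as a denial, as the narration notes).
  window.navigator.webkitPersistentStorage.requestQuota(size, function (grantedBytes) {
    if (grantedBytes !== size) {
      errorHandler("File system access was denied.");
      return;
    }
    requestFS.call(window, window.PERSISTENT, grantedBytes,
      fileSystemSuccessHandler, errorHandler);
  }, errorHandler);
}
```
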
Demo: Wrapping Up Initialization Code
HTML5 File System Explorer (Chrome Extension)
Now one of the difficulties in working with the file system is that there's no built-in way in the browser to inspect the file system. If you take a look at the developer tools we can look at frames, Web SQL, IndexedDB, on down the line, but there's nothing specific for the HTML5 file system. Now there's a Chrome plug-in that you can use called the HTML5 FileSystem Explorer. It's also sometimes known as peephole, but if you search Google for HTML5 FileSystem Explorer what you'll get is a browser plug-in that will let you explore temporary as well as persistent storage. And the other nice thing, especially in a development context, is you can click that button to delete all. So using this browser plug-in can be really useful, because when you're writing into the file system you'll know exactly what's happening.
In this module you were introduced to the HTML5 file system and learned to initialize the file system using both persistent and temporary storage. In the next module you'll begin to look at working with directories using the low-level API.
File System: Directories - Create, List, Delete, Move & Copy
Thanks once again for joining me here in Pluralsight's HTML5 Web Storage, IndexedDB and File System. In this module, you'll begin to learn about creating and managing directories. By using the raw API, you'll learn to create individual directories, as well as subdirectory hierarchies, list contents of directories, and move, copy, and rename directories. So let's go ahead and get started with some demos.
Demo: Create and Read Directory
So let's start things off by talking about directories. Now when you request the file system that gives you access to the root, and from the root of the file system you can begin to create directories and files. So here I'll add a name up here, or a name of a directory, and we'll just call this Documents. And as I Run the code you'll notice that Documents is created and it returns back the directory entry object. Let's take a look at it in the console. Here's the directory entry; you can see that the fullPath is /Documents, it is a directory, not a file, and its name is Documents. So let's take a look at the code required in order to create that directory. So first I'm extracting out the directory name just by using jQuery to get the value out of the input element. Then I'm using my module for localFileSystem in order to request the file system, and then it makes the file system available here in the callback. Now, from the file system I can go to the root, the root of the file system, and call getDirectory. In calling getDirectory I pass in the directory name, and then I have an options object that I pass in as the second argument. Here I'm saying create: true. So I want to create that directory. Now if I Run this again, even though that directory's already created, it still returns the directory. So if we look in the console you can see there's a second instance of it being returned. So even though you tell it to create, if it already exists it'll just return back the reference to that directory entry. Now you can also pass in a flag for exclusive, and this will return an error if you try to create it and it already exists. So depending on how you want your code to behave you can pass in a different type of options in the second argument. And then I have my success callback and then the fail callback.
Here, if you recall my discussion on the localFileSystem module, I have my defaultErrorHandler which can look for all the different error codes if something were to happen while talking to the file system. So now on the success callback it returns back the directory entry, and then I can just say that the directory, whose name was brought in from the text box, is created, and then log out the directory entry to the RESULT pane. So I know it seems a little odd, but in order to create a directory you call getDirectory and pass in the option of create equal to true. Now reading the directory takes pretty much the same form, because we're still calling the getDirectory function. So here, once I have an instance of the file system I can go to the root and call getDirectory. Now the root, as you saw before, is a type of directory entry. So you can call getDirectory from any other instance of directory entry; it just so happens that root is the directory entry for the root of the file system. So here I'm extracting out the directory name, and my options object is just blank. So I'm not telling it to try and create the directory; all I want to do is read it out. So at that point I have my success callback and my fail callback. And here again in the success callback it returns the directory entry object, and I can read out that directory and log out some of the details about that object.
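The create-versus-read pattern just described can be sketched roughly as follows. This is a hedged reconstruction: `createDirectory` and `readDirectory` are illustrative wrapper names, and `fs` is assumed to be a FileSystem whose `root` is a DirectoryEntry.

```javascript
// The same getDirectory call serves both cases; only the options differ.
function createDirectory(fs, name, onSuccess, onFail) {
  // { create: true } creates the directory, or simply returns the existing
  // DirectoryEntry if it is already there.
  fs.root.getDirectory(name, { create: true }, onSuccess, onFail);
}

function readDirectory(fs, name, onSuccess, onFail) {
  // An empty options object only reads; a missing directory triggers onFail.
  fs.root.getDirectory(name, {}, onSuccess, onFail);
}

// Passing { create: true, exclusive: true } instead would report an error
// when the directory already exists.
```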
Demo: Create Sub Directories
Demo: List Directory Contents
Now if you want to create a series of directories or subdirectories underneath a root directory, what you can't do is just come in to getDirectory and pass in a path giving the directory name that you want it to create. You have to create all the parent directories in order to create the hierarchy for subdirectories. So here let's try this. I'll create a path of Documents/Work/Drafts. So now I have this folder path, or this directory path, so I can Run this and you can see that first it created the Documents folder, then it created the Documents/Work folder, and then Documents/Work/Drafts. And in fact if we go into the explorer you can see that all the folders are right there. So we need a little bit of recursion in order to create the root directories and then work its way down the line until it finally creates the last subdirectory. So the first thing that I want to do is create a little alias or a shortcut here to my fail callback function. So I'm setting fail equal to the defaultErrorHandler. Then what I'll do is create an array of all the directory names that need to be created. So by splitting this string on the slash, Documents will be in index 0, Work will be in index 1, and Drafts will be in index 2. So here's a function that I've created called createDirectory. Like I said, this uses a little bit of recursion in order to create the directory hierarchy. So this takes in a list of directory names, and so this is the array, and also the root directory. So this will start off as the root of the file system, and then at other times when it's called it'll be a reference to the directory entry of the parent. So the first thing I want to do is look at the directory names and make sure that there are actual values in that array. And then I'll get the directory name that's in the index 0 position of the array and then start off with the root directory, which remember is passed in as the second parameter of the function that we're calling here.
So I go against rootDirectory, which is an instance of DirectoryEntry, and call getDirectory. Then I pass in the directoryName, set the create flag to true, and then I have a success callback, and then I pass in the fail function for the fail callback. So as this directory is created I can say that it's created and access the directory's fullPath. And then if there are still values in the directory names array I call createDirectory once again, except this time what I'm doing is removing the item at index 0 by calling directoryNames.slice. And then I pass in the current instance of the directory entry. So each time this runs it picks off one of the directory names from the list and it sends the reference of the parent directory into the next iteration of createDirectory. Then in order to get it started I'll request the file system and then call createDirectory, passing in
Demo: Delete and Recursive Delete
my names array, and start off at the file system root. After that it'll iterate through each directory name in the array, always passing in a different root in order to create the hierarchical structure for the subdirectories. Now listing the contents of a directory entry requires a little bit of recursion as well. So here as I press Run you'll notice that it's starting at the root of the file system. It's looking at the first folder that it comes to, which is Documents, goes inside of Documents, comes to the first folder that it finds, which is the Work folder, and then the first folder that it finds there, which is Drafts. Drafts is then empty. So the real value of this demo is to show you how you can use createReader and read entries out of a directory entry. So everything is done through this list function, and you'll notice it takes two parameters, the root directory and the directory name. Now I'm going to skip all of this code here in the middle for a moment and take you down here so you can see how it's called. So I start off by accessing the localFileSystem, and with that instance of the file system I pass the root of the file system into the list function. Up in the function I have this nested function inside, which we'll come back to in just a moment, but what it does is it looks to see if a directory name is passed in to the main list function. If there's no directory name then it just takes the root directory and passes that to _list. So this is basically saying if there's no given directory name then just start at the root directory and begin listing from there. If there is a given name we'll go to the root directory and call getDirectory and use the directory name. And again we're just reading information, so I don't have any options for creating the directories at this point. And then once I get a success callback for that then I can call the nested list function by passing in that instance of the directory entry.
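The two recursive walks just described, creating a Documents/Work/Drafts hierarchy one level at a time and listing a directory tree through a reader, can be sketched roughly like this. It's a reconstruction under assumptions: `fail` is whatever error handler you prefer, and `log` stands in for writing to the RESULT pane.

```javascript
// Create each directory in turn, passing the newly created DirectoryEntry
// as the parent for the next level.
function createDirectoryPath(directoryNames, rootDirectory, fail, log) {
  if (directoryNames.length === 0) return;
  const name = directoryNames[0];
  rootDirectory.getDirectory(name, { create: true }, (directoryEntry) => {
    log('Created ' + directoryEntry.fullPath);
    // Drop the name just handled and recurse with the new parent directory.
    createDirectoryPath(directoryNames.slice(1), directoryEntry, fail, log);
  }, fail);
}

// Walk a directory tree with createReader, recursing into subdirectories.
// (The real readEntries may return entries in batches; a production version
// should keep calling it until it returns an empty array.)
function listDirectory(directoryEntry, fail, log) {
  directoryEntry.createReader().readEntries((entries) => {
    if (entries.length === 0) {
      log(directoryEntry.fullPath + ' is empty');
      return;
    }
    entries.forEach((entry) => {
      log(entry.fullPath);
      if (entry.isDirectory) listDirectory(entry, fail, log);
    });
  }, fail);
}

// Kicked off with something like:
// createDirectoryPath('Documents/Work/Drafts'.split('/'), fs.root, fail, log);
```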
So now let's go up and take a look at the nested list function. So here this just takes in an instance of a directory entry, and from there you can call createReader. So by calling the readEntries function, what that does is it gives you a callback with all the entries that are within the given directory entry. So here if the entries length is 0 then I know it's empty and I can log that out onto the RESULT pane. Otherwise I can take the entries and iterate through each one. This iterator function will have as a parameter the current entry that we're looking at, and so I can find out whether it's a directory or if it's a file. Now if it's a directory I can log that out to the RESULT pane saying what the fullPath is, but then I also want to call list again by passing in the instance of the directory entry, and also the name of the directory, so that it can continue the recursion through the directories in the list. Now again the value here is for you to see how you can use a reader that you create off a directory entry, how you can read the entries out and figure out whether you are dealing with directories or files, and then continue your logic from there. Now up to this point you've seen how to create and read directories in different ways. So now I'd like to show you how to delete a directory. So we know that we have a directory created in there already called Documents. So I'll click on this and I get an invalid modification error. And the reason for that is because Documents has a subdirectory in it of Work, and inside that there's a subdirectory of Drafts right now. So you can't delete a directory by using the method that I'm using
Demo: Move, Copy and Rename
here if there's something in it. So let's back up a little bit and let's go down to Work and then Drafts. And now when I Run this code that will delete the Drafts directory, and then I can back up to delete Work and then finally delete Documents. So let's take a look at the code and you can see how I've done it. So the first thing that I did was set aside the defaultErrorHandler into a variable called fail. And then I have the path, which is basically either the path or the directory name that I'll be trying to delete. Once I have the file system, then I can go to the root of the file system and call getDirectory. There I can pass in the path. So here this could end up being Documents/Work/Drafts, and that will still return in the success callback the directory entry for that directory. Now I'm not trying to create the directory, so this is an empty object, and once the success callback fires then I have an instance of that directory entry and then I can call remove on that directory entry. When the success callback fires for remove then I can say that the directory at this fullPath is deleted. And for any of the asynchronous requests I need to make sure I pass in a fail handler, so that if something bad happens I'm notified of it. So that's how you delete a directory that's empty, but if you want to do more of a recursive delete, I'll show you in the next sample how to do a deeper delete. So I'll go back over to subdirectories and I'll create each one of those directories. And then we'll list out the contents so that you can see that they are in fact there. Now what I'll do is try to do a Deep Delete. Now last time when I tried doing a delete just against Documents I came back with an error. So now let's try running this code just against Documents, and now you can see that Documents and all of its contents are deleted. So if I go back and list the contents off the root you can see that the root is now empty.
So let's take a look at the code to see how this is being done. So I've got my fail handler and my path, I've got the file system, and then I'm doing some error checking to make sure that I'm not trying to delete the root by making sure there's something in the path. So you can't delete the root of a file system. If I have a path then I'll go to the root and call getDirectory, passing in that path. Then when getDirectory calls the success callback, the parameter here is a directory entry, and then I can call removeRecursively on the directory, rather than just remove, and that has a success callback. Of course you have to pass in the fail callbacks as well for each one of the asynchronous operations, but as that success callback runs then I can log out to the RESULT pane saying that the directory is deleted and all its contents are deleted as well. So basically the difference between the two is calling either remove or removeRecursively off of a directory entry instance. And so you can delete a directory that has files and folders in it by calling removeRecursively. In this module you learned to work with the file system's low-level API to create, read, delete, move, copy, and rename directories. In the next module you'll take what you learned about working with the raw API and wrap up the logic needed to work with directories into an abstraction layer that greatly simplifies the code required to work with the file system.
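The two delete styles can be sketched together like this. This is a hedged reconstruction: `deleteDirectory` is an illustrative wrapper name, and the error objects shown are simplified stand-ins for the real DOM errors.

```javascript
// remove fails when the directory is not empty (an InvalidModificationError
// in the browser); removeRecursively deletes the directory and everything
// inside it. The root of the file system can never be deleted.
function deleteDirectory(fs, path, recursive, onSuccess, onFail) {
  if (!path) {
    onFail({ message: 'The root of the file system cannot be deleted.' });
    return;
  }
  fs.root.getDirectory(path, {}, (directoryEntry) => {
    if (recursive) {
      directoryEntry.removeRecursively(() => onSuccess(directoryEntry.fullPath), onFail);
    } else {
      directoryEntry.remove(() => onSuccess(directoryEntry.fullPath), onFail);
    }
  }, onFail);
}
```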
File System: Building an Abstraction Layer over Directories
Thanks once again for joining me here in Pluralsight's HTML5 Web Storage, IndexedDB and File System. And in this module you'll learn to create an abstraction layer over the code needed to work with directories. By wrapping up the low-level operations which are required to work with directories, you're able to clean up your code in significant ways and, along with that, make it easy to deal with common ways to handle errors and success conditions.
Demo: localFileSystem Module - Error Handling
Demo: localFileSystem Module - Request File System
The next set of functions to look at are the functions that are required in order to interface with the file system at its lowest level. So a lot of this you've already seen. So the first function is calculateBytesByMegabytes and the implementation remains unchanged. Then I have a placeholder for the fileSystem. So once we've requested the fileSystem, I don't want to have to keep requesting it over and over again for multiple operations. So I'll set that aside and place it into this property on the object. So now let's take a look at requesting the file system itself. So here when I'm making a request I just want to be able to pass in a success handler and a denied handler. So the first thing I'll do is look to see if I have an instance of the file system already available. In other words, has it been requested before in the past? If there's an instance there and we've already asked for it, then I'll just call the success handler and pass back the instance of the file system. After that I can just stop execution because there's nothing else that needs to be done. Otherwise I'll come down in here, and yes, I have this hard coded to 2MB, but again this is more for illustrative purposes rather than something that you would move into production. If you want to create something that's more robust for everyday development you can expand out what I'm showing you here, but I'm trying to keep it fairly simple so that you can understand the concepts. So here I'm calculating the bytes by megabytes and then I have the total size of what I'm looking for, 2MB. Then I go into navigator.webkitTemporaryStorage and request the quota of the given size. At that point I'll have a success callback that includes the grantedBytes. Now technically speaking, since I'm using temporary storage, a little bit of this logic isn't necessary, but I do want to show you how to deal with being denied access to the file system, so I've included that logic here in this implementation.
So here I'm looking to see if my requested size is equal to my grantedBytes. If that's true then I can request the file system; I'm looking for a temporary file system at this point. The request includes the grantedBytes, and on the success callback I can set the fileSystem property equal to the file system that's passed back in the success function. Then I can call the success handler that was passed into this function and expose the file system through there. Now of course if this request fails then I want to run the defaultErrorHandler. Now if the requested size does not equal the grantedBytes then I get into the situation where I've been denied access to the file system. So then I get the denied handler, and again I'm calling getDeniedHandler, and what this does is it takes a look at what's being passed into this function; if it's not null it'll just return back what was passed into the function here at the top, but if nothing was passed in, in other words this is an optional parameter, then getDeniedHandler will return the defaultDeniedHandler. So at that point, if I have a deniedHandler then I can run that function and pass in an object giving a message saying that the amount of grantedBytes by the HTML5 file system is less than the requested size, and then I can say what the requested size and the grantedBytes are. And of course if the request for the quota fails then it will run the defaultErrorHandler as well. So this takes care of all the logic of needing to request a quota and the file system, so all I have to do is call the request function and provide a success handler and optionally a deniedHandler. Now there are two more functions that you can see here after request, createOptions and parsePathOrOptions. Instead of showing you all the code here within the editor outside of the context of how it's used, I'll explain these functions as we step through the code of the functions that use them.
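The caching behavior of the module's request function might look roughly like this. It's a sketch, not the course's exact module: the property and handler names mirror the demo's, but the bodies are reconstructions, and the quota calls assume Chrome's vendor-prefixed API.

```javascript
// Cache the FileSystem after the first successful request so later
// operations skip the quota dance entirely.
const localFileSystem = {
  fileSystem: null,
  calculateBytesByMegabytes: (mb) => mb * 1024 * 1024,
  defaultDeniedHandler(info) { console.log(info.message); },
  request(successHandler, deniedHandler) {
    // Already requested once? Hand back the cached instance and stop.
    if (this.fileSystem) {
      successHandler(this.fileSystem);
      return;
    }
    const size = this.calculateBytesByMegabytes(2); // hard-coded 2MB, as in the demo
    const denied = deniedHandler || this.defaultDeniedHandler;
    navigator.webkitTemporaryStorage.requestQuota(size, (grantedBytes) => {
      if (grantedBytes !== size) {
        denied({ message: 'Granted ' + grantedBytes + ' bytes, requested ' + size });
        return;
      }
      window.webkitRequestFileSystem(window.TEMPORARY, grantedBytes, (fs) => {
        this.fileSystem = fs; // cache for subsequent calls
        successHandler(fs);
      }, console.error);
    }, console.error);
  }
};
```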
Demo: localFileSystem Module - Create Directory
Demo: localFileSystem Module - Directory Exists
Now let's take a look at what it takes in order to simplify getting a directory. So the goal that we're trying to get to is localFileSystem.getDirectory: you pass in the name of the directory and you get back that directory entry. So let me pass in a name here like Documents. So I'll Run this, and this one is really straightforward; once I've requested the file system I go ahead and parse all those options. Let's take a look at the options as they've come in: my directory path is Documents and the root is the root of the file system. At that point all I need to do is call getDirectory passing in the path, and then I have a success and of course my fail handler. So once I've got that I get back the directory entry that I requested. So let's take a look at what it takes to wrap up finding out whether or not a directory exists. So I want to call directoryExists, pass in the name or the path of that directory, and I'll get back a result. At that point result.exists means that the directory is there; otherwise if it's false the directory does not exist. So let's try one that I know doesn't exist, so that directory does not exist, but Documents, that does exist. So let me Run that again and we'll step through that code. So here's the directoryExists function. I'm starting off by setting up the results object and defaulting exists to false, then I call getDirectory. Now if I get a successful directory entry from there, or a success callback, then I can set result.exists equal to true and send that up in the success callback. So let's do that, but then let's change this to hello again. Now what's happening at this point is I'm getting an error, and that error code is 1, which means the file or directory is not found. So if that code does not equal 1 then I'll call the fail handler; otherwise I'll call the success handler, but at this point I'll just return back the result, and that has exists set to false. So I just need to trap for that error code of number 1.
So I let that Run the rest of the way, and that directory does not exist.
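The trap for error code 1 might look roughly like this. As an assumption for illustration, `getDirectory` is passed in as a parameter here standing in for the module's own wrapper, and code 1 corresponds to the old NOT_FOUND_ERR numbering the demo relies on.

```javascript
// A "not found" error (code 1) is not a failure here; it just means the
// directory doesn't exist, so we report exists: false on the success path.
function directoryExists(getDirectory, path, success, fail) {
  const result = { path, exists: false };
  getDirectory(path, (directoryEntry) => {
    result.exists = true;
    success(result);
  }, (error) => {
    if (error.code !== 1) {
      fail(error);   // a real failure, not just "not found"
      return;
    }
    success(result); // trapped code 1: report exists = false
  });
}
```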
Demo: localFileSystem Module - Get Directory Entries
The next abstraction that we'll look at is getting the contents, or getting the entries, that are inside a directory. So here what we'll do is call getDirectoryEntries, pass in a path, and the callback to that will include the entries that are inside that directory. So if that array comes back as empty then we know that the directory is empty; otherwise we can iterate over each one of the entries and extract out the fullPath. So here if I do that for Documents you can see that inside Documents it has a Work directory and a Personal directory. Now let's bring up the development tools and step through the code. So here for this function, getDirectoryEntries, I bring in the path or options, and this is just the path of Documents at this point. And then I'm just calling my underlying getDirectory function, which makes things a whole lot cleaner. And at that point my success function includes the directory entry of the item that I'm getting. So here this is the Documents directory. Then I can create the reader, and from that reader call readEntries; then I call the success callback that includes the entries that are found within this directory entry. So as I let that run, that lists out each one of the directories that are found in the given directory.
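That wrapper might be sketched like this, with `getDirectory` again passed in as a stand-in for the module's own helper (an assumption for illustration):

```javascript
// Resolve the directory, then read its entries with a DirectoryReader.
// (The real readEntries may return results in batches; a production version
// keeps calling it until it returns an empty array.)
function getDirectoryEntries(getDirectory, path, success, fail) {
  getDirectory(path, (directoryEntry) => {
    directoryEntry.createReader().readEntries((entries) => {
      success(entries); // an empty array means the directory is empty
    }, fail);
  }, fail);
}
```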
Demo: localFileSystem Module - Delete
The implementation for deleting a directory here takes into account being able to do a recursive delete, as well as just a regular delete like we saw before. So what I'm starting off by doing is getting the path and then creating the options; that way I have an object that has a placeholder here for the recursive flag, and then I can set that to true. And then I can take those options and pass them down into deleteDirectory. So first let's take a look at Documents/Work. So as I run that, that folder's deleted, but also if I come back and take a look at the Documents directory you can see that it still includes Personal. So I'll come back here and now just try to delete Documents itself and hit Run, and now Documents is deleted. So that deleted that directory even though it already had something in it. I'm going to go back and create a few directories here. So now when we run delete again we'll be able to see this working as we step through the code. So now I can open the developer tools and Run this. And here, stopping at the breakpoint, let's take a look at pathOrOptions. So right now I'm sending in the options object, and normally we've been working with just strings at this point. So parsePathOrOptions here will take this, and what it will do is place a reference to the root of the file system here at this property. So I'll let this run and you notice what I'm getting back is that same object, but like I said, that's the root of the file system. So then we'll call getDirectory and then we'll take a look at the recursive flag off of our options object. Now if recursive is true we'll remove recursively; otherwise we'll just call the regular remove, which will return an error if that directory isn't empty. So as I let that run we see that Documents/Work is deleted.
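The string-or-object normalization that parsePathOrOptions performs might look roughly like this. This is a simplified reconstruction; the real module also attaches a reference to the file system root onto the options, which is omitted here.

```javascript
// Callers may pass either a bare path string or an options object;
// normalize both shapes to an options object with a recursive flag.
function parsePathOrOptions(pathOrOptions) {
  if (typeof pathOrOptions === 'string') {
    return { path: pathOrOptions, recursive: false };
  }
  return Object.assign({ recursive: false }, pathOrOptions);
}
```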
Demo: localFileSystem Module - Move, Rename and Copy
So now let's take a look at move, rename, and copy all together. The concept behind each is basically the same. We'll call the function, give the source path and the destination path, and then we'll get a success callback telling us that, in this case, the move happened. For rename, we'll call renameDirectory, give the path of the existing item, and then the new name we want it called, and the success callback. And then for copy we've got the path and the destinationPath giving us a success callback. So I'll open up the developer tools. Let's make sure that we have the directories that we need in order to move things around. So I'll start with the Documents directory and I'll create that, and then also create another directory called Test. So now we've got those two to work with. So now we'll go over to move and I'll move the Documents directory into Test. So as we run that, the moveDirectory function executes and you can see that we have two options objects that are being used at this point. So this is the options for the source directory, and then we'll have another set of options for the destination directory. So I'm moving Documents into Test. So now here at this point I'm calling createDirectory; just in case the destination directory doesn't exist, I want to make sure it does before moving into it. Now the nice thing about createDirectory is that if that directory already exists it still continues to operate successfully and won't throw any errors. So I'll create the destination directory and then I'll get the source directory, and then once both of those tasks are done, I can take the source and move it to the destination. And once that's done I can simply call the success callback. So there, the directory is moved. So now let's go back and create the Documents directory again. And now we'll rename it from Documents to Documents-Old.
So here I just have one options object for the Documents directory, and then I can get the directory. Now the interesting thing about renaming a directory is that what you're really doing is moving it; once I get my directory, what I want to do is get its parent. So I can call getParent, the success callback of that will expose the parent, and then I can take the directory and move it to the parent so it stays in its original location with the new name. So here's my parent; you can see that's the root of the file system. I take the existing directory, which is Documents, give it the new name of Documents-Old, and then call the success callback. So now the directory's renamed. And we can check by coming over here and calling Get Contents on the root, and there I have Test and I have Documents-Old. Alright, now let's do Copy. So once again I'll create Documents and then I'll copy Documents into Test. And just to be sure everything's working right we'll make sure to delete Test/Documents first, because we moved that in there originally. So now what we'll do is copy the Documents directory into the Test directory, passing in the path and the destinationPath and then getting a success callback. So here copyDirectory is run. Again I have two different options, one for the source and one for the destination. Here's the source, and the destination is Test. So once again I'll call createDirectory just to make sure the destination directory is there, and then I'll get the directory of the source item. And then I can copy the source to the destination. And once that's done I can call the success handler, and the directory is successfully copied into its destination. So with that, you've now seen how we've been able to make many of the common tasks in dealing with directories much simpler by creating a layer of abstraction on top of the file system API. Next let's turn our attention to dealing with files.
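The move/rename/copy trio can be sketched together like this, built on the DirectoryEntry methods moveTo, copyTo and getParent. As an assumption for illustration, `createDirectory` and `getDirectory` are passed in as parameters standing in for the module's own wrappers.

```javascript
// Ensure the destination exists (createDirectory is harmless if it does),
// then move the source DirectoryEntry into it.
function moveDirectory(createDirectory, getDirectory, path, destinationPath, success, fail) {
  createDirectory(destinationPath, (destination) => {
    getDirectory(path, (source) => {
      source.moveTo(destination, source.name, success, fail);
    }, fail);
  }, fail);
}

// A rename is really a move to the same parent under a new name.
function renameDirectory(getDirectory, path, newName, success, fail) {
  getDirectory(path, (directory) => {
    directory.getParent((parent) => {
      directory.moveTo(parent, newName, success, fail);
    }, fail);
  }, fail);
}

// Copy follows the same shape as move, with copyTo in place of moveTo.
function copyDirectory(createDirectory, getDirectory, path, destinationPath, success, fail) {
  createDirectory(destinationPath, (destination) => {
    getDirectory(path, (source) => {
      source.copyTo(destination, source.name, success, fail);
    }, fail);
  }, fail);
}
```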
In this module you learned to simplify the code needed to work with directories in HTML5. By wrapping up the native calls into the file system API, you're able to make your code more expressive and provide a consistent strategy for handling errors. Well, now that we've tackled directories, let's turn our attention to individual files. In the next module you'll learn the basics of working with files, as well as how to build an abstraction layer over the file API.
File System: Files - Create, Read, Write, Delete, Move & Copy
Hello again, I'm still Craig Shoemaker, and in this module you'll learn to work with files. Here we'll start picking up some speed so we'll work with both the low-level API, as well as abstractions together in this module. By the end of this module you'll have all you need to know to work with the file system as a whole. Alright let's go ahead and get started.
Demo: Create and Get File
Now all the work that we just did in order to abstract away the complexities of working with directories is really going to pay off now that we'll be working with files. So you'll notice here what I'm doing is creating an alias for the localFileSystem called lfs and then I can call getDirectory and any of the other functions that we've now created in order to make it easier to get to those directories. Now the first thing that you'll notice about this sample is the little message over here in the RESULT pane that says the empty directories of Files/Drafts are created for this demo. And so we'll use those directories in order to place files, rename, copy, move, do all the fun stuff with files that we just did with directories. So let's take a look at what this does, if I Run this code you'll notice that what it did was create a file under the Files directory called notes.txt. So in order to place a file you need a directory first, that could be at the root of the file system or it could be some other instance of directory entry. Once you have that directory entry then you can call getFile. Now getFile works the same way as getDirectory. You pass in the name of the file you want to create and you set the create flag to true if you indeed want to create it. And this also has the exclusive flag. So if you try to create it and it already exists, and you want an error back from that, you can set exclusive to true. Then there's a success callback and that includes the file entry as a parameter. So from there I can just log the file entry and then also make sure you pass in a fail handler. So that's creating a file. Now getting a file is much the same type of operation, the only real difference being that when I call getFile, instead of passing in an object with the flag to create, I'm just passing in an empty object, but other than that all the code is exactly the same. So now as I run this I can get that same file even when it already exists on the file system.
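The create and get patterns described above can be sketched as a pair of small helpers. This is a hedged sketch, not the course's exact code: `directoryEntry` is assumed to come from an earlier getDirectory call, and the callback names are placeholders.

```javascript
// Sketch of the getFile pattern discussed above. `directoryEntry` is assumed
// to come from a prior getDirectory call; getFile is a browser file system API.
function createFile(directoryEntry, name, onSuccess, onFail) {
  // create: true makes the file if it's missing;
  // exclusive: true would instead raise an error when it already exists
  directoryEntry.getFile(name, { create: true, exclusive: false }, onSuccess, onFail);
}

function getExistingFile(directoryEntry, name, onSuccess, onFail) {
  // An empty options object means "just look it up, don't create"
  directoryEntry.getFile(name, {}, onSuccess, onFail);
}
```

The only difference between the two calls is the options object, which is exactly the point made in the demo.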
Demo: Read, Write and Update File
So now let's take a look at what it takes to read, write, and update a file. Now I'm going to do it in the write, read, update order because I need to write something into the file before we can read it and update it. So here once again I'm getting a directory with the given directoryName and at this point we're just looking at the Files directory. Then once I have that directory entry I can call getFile. Now at this point I'm just getting the existing file, so that returns an instance of file entry. Now in order to write to the file what I first need to do is create a writer. Now this works as an asynchronous call. So here I call createWriter and the callback for that returns a writer in the parameter. Once I have that writer I need to hook into a couple of events. So here with onwriteend I'll know when the write operation is complete, and I also want to subscribe to the error event so I can be aware if something goes wrong. And then in order to write into the writer I need to create a new Blob. Now the way that you work with a Blob is the first parameter is an array of the file contents. Here I'm just working with a single item, so this is just an array with one item in it. And then you pass in the content type of what you're writing. So here I'm just working with plain text. So the type is text/plain. Once I create that Blob I can write the Blob into the writer and once it's done writing it'll fire onwriteend. So here let's type something and I'll just call this prepended text. Now as I Run that the write to notes.txt is complete. Now let's try to read that text out of the file. So here I'll switch over to read and when I Run it I get prepended text. Again it's the same process, I have to get the directory, I have to get the file, and once I have access to the file entry, then I need to call the file function. Again this returns a callback which gives me an instance of the file, now this is different than the file entry.
The file entry is a reference to information about the file, like its path with a flag that says whether or not it's a file or a directory, but it's not the file itself. If you happen to be from a .NET background it's kind of the difference between working with a stream and working with a file info class. If that doesn't mean anything to you it's okay, basically the difference is that the file entry is metadata, while the instance of file right here is the file in and of itself. So now in order to read out the contents of the file I need to create a new file reader. I want to subscribe to the load end event and this way I know when everything's done being loaded out of the file. And also onerror just for error logging. And then I can call readAsText because I know I'm working with a text file, so rdr.readAsText, and as I pass in the file, when load end is complete I can take a look at this.result, which will have the value that gets printed up into the RESULT window. And when the load end event fires, the this pointer is pointing to the context of the executing function. So this.result will contain the data that's read out of the file. Alright well, what about updating? So now I'll change this to postpended text and Run that, you'll see that notes.txt is updated. If I read it again now I have prepended text and postpended text. Now say that five times fast. Alright so let's take a look at what it takes to update that file. So getting the directory, getting the file, here I'm creating a writer. Once I have an instance of the file entry I'm creating a writer. Now the difference between what I'm doing here on update versus what I'm doing over in write, is that write is just writing to the beginning position of the file. With this update and creating the writer, what I'm going to do is seek to the length that's already there. So I'm making sure that I only add bytes to the end of the file at this point.
So I go into fileEntry, createWriter, that creates a callback for me which includes writer as the parameter, then I call writer.seek. And if I want to go to the end of the file I pass in writer.length. At that point I can create a new Blob which includes the file contents in an array and the content type, which at this point is text/plain. And then just like I did before, when it's done writing it'll fire onwriteend and I can log out to the RESULT pane that that operation is done. Now in order to initiate the write, so that the events will eventually fire, I call writer.write and pass in the Blob. So this effectively writes bytes to the end of the file.
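The write and append flows described above can be folded into one sketch. This is an illustrative helper, not the course's code; it assumes a browser that supports the file system API's FileWriter, and the function name is mine.

```javascript
// Sketch of the write/append flow described above. `fileEntry` is assumed to
// be a browser FileEntry; createWriter, seek, and FileWriter events are
// browser-only APIs.
function writeText(fileEntry, text, onDone, onError, appendToEnd) {
  fileEntry.createWriter(function (writer) {
    writer.onwriteend = onDone; // fires when the write operation completes
    writer.onerror = onError;
    if (appendToEnd) {
      // Seek to the current length so existing bytes are kept and new
      // content lands at the end of the file (the "update" case above)
      writer.seek(writer.length);
    }
    // A Blob takes an array of contents plus a content type
    writer.write(new Blob([text], { type: 'text/plain' }));
  }, onError);
}
```

Calling `writeText(entry, 'postpended text', done, fail, true)` appends; passing `false` overwrites from the start of the file, which is the difference between the write and update demos above.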
Demo: Delete, Move, Rename and Copy File
Now since the file entry and directory entry classes share the same base class, much of what we'll do for delete, move, rename, and copy is very similar to what we did with the directories. So let's take a look at delete first. So I have to access the file by first getting to the directory and then calling getFile off of the directory. Then the success callback there for the file will return a file entry and then I can just call remove on that file. So when I Run the code it says Files/notes.txt is deleted. And I can verify that by trying to read it and so then I get a file or directory not found error because it's deleted. So let me create it again and this time what I'll do is move it from the Files folder into Files/Drafts. So first I'm calling getDirectory against Files. Then I get the file that I'm working with, so this is notes.txt, and once I have that file entry then I'm going to get the directory for Files/Drafts. So this is the destination folder. With that directory entry now available I can call fileEntry.moveTo and pass it the destination directory which is Files/Drafts. So as I run this notes.txt is moved to /Files/Drafts. And again I can verify by trying to read again and it's not there. Alright so let's create the file and now we'll rename it. So here the file name starts off as notes.txt and I'm going to rename it to notes2.txt. So I'm getting the directory of Files, getting the original file of notes.txt, and once I have that fileEntry I can call moveTo, and again renaming is moving to its original location under a newFileName. So I Run this, notes.txt is renamed to notes2.txt, I can try and read notes.txt to make sure it has been renamed, and that's not found. So the last operation is to copy. So let me create the original file one more time, and we'll go into Copy. Notes.txt starts off in the Files directory, so I'm calling getDirectory, then getFile, then getDirectory again so that I can get the subfolder or the destination folder that it's going into.
So once I have that fileEntry I can call copyTo and pass in the destination directory entry that I want it to go to. And then I just have this message that fileName is copied to and then I pass in the fullPath of the directory entry. And so one more time let's read notes.txt and it's empty, in fact I can call Get to show that it's still there, since we copied it, it shows up in both places. And even to verify further we can come and take a look at the file explorer and so in Files I have notes.txt. We started out with that as the original file and then copied it into Drafts and then we also at one point renamed it to notes2.txt and then in Drafts I have notes.txt. So again working with the fileEntry object is much like working with a directory entry.
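Because file and directory entries share the same base class, the four operations walked through above reduce to a few one-line helpers. This is a sketch with illustrative names; `entry` is assumed to be a FileEntry or DirectoryEntry from the browser file system API.

```javascript
// Sketch of the four entry operations discussed above. remove, moveTo, and
// copyTo are shared by file and directory entries, so these helpers work
// for either kind.
function deleteEntry(entry, ok, fail) {
  entry.remove(ok, fail);
}
function moveEntry(entry, destDir, ok, fail) {
  // Moving keeps the name, but changes the parent directory
  entry.moveTo(destDir, entry.name, ok, fail);
}
function renameEntry(entry, parentDir, newName, ok, fail) {
  // Renaming is just moving to the same parent under a new name
  entry.moveTo(parentDir, newName, ok, fail);
}
function copyEntry(entry, destDir, ok, fail) {
  entry.copyTo(destDir, entry.name, ok, fail);
}
```

Note how rename and move are the same call, which is exactly the point the demo makes about renaming being a move to the original location with a new name.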
Demo: File Abstractions Overview
Now just as we did in working with directories, in this next section what I'd like to do is show you how to wrap up the complexity of working with files. So there's a number of functions that we'll implement here. So we have createFile, getFile, looking to see if it exists, reading the contents out of it, prepend, append, delete the file, replace it, move, rename, and copy. Now if you take a look at the parameter list for each one of these functions, again just like with directories they're largely the same. So we'll take a look at path or options and you can see how that object gets filled out based off the fact that we're working with a file. So for each one of these functions the first parameter will be pathOrOptions and then each will feature success and optional fail callbacks. Sometimes there's other parameters that need to be passed in, like data that we're saving down and the contentType, but for the most part we're working with very similar parameters as we did with directories. So once again instead of showing it to you here within the editor, what I'd like to do is step through the code so you can see it working and you can see how each one of these functions helps make working with files a whole lot easier.
Demo: localFileSystem Module - Create File
So let's begin by wrapping up the creation of a file. And again as I step through this code there are some common operations that I'll explain here up front, but then just skip over later because just about every function is going to do them. So let's open up the developer tools. And create the Projects/todo.txt. So here I am in the createFile function. Now pathOrOptions as it comes into the function has the value Projects/todo.txt. I have a success callback and I have an undefined fail callback. So it's requesting the file system and then I'm looking to create my options object. So let's step into parsePathOrOptions and see what this looks like when we're dealing with a file. So again I have my test variable here, so I'm looking to see if I'm dealing with a string, and of course I am at this point. So then we'll go into createOptions and here it's passing in the root of the file system as the root that will be used in createOptions. So recursive and exclusive have been defaulted to false and so now the options object as it comes back has the path of Projects/todo and then it has a reference to the root that it's working with. Now when working with directories most of this code didn't get a chance to evaluate, but at this point what I'll do is take a look at the path parts. So here I have the directory name of Projects and the file name of todo. What the code that executes next will do is look at this array, look at the last item in the array, and decide whether or not there's a period in the file name. If it finds a period it takes that for the file name and then removes the file name from the array and places it in a file name property. So here by looking at parts.length-1 and calling indexOf on a period, if that does not equal -1, or in other words if a period exists, then I take the array and I pop off the last item and set that equal to file name.
Then what's left in the array, so after calling pop, that file name is gone, so I can join what's left, and here there's only one item, but otherwise it would recreate the path with the slashes in it. And so now when I take a look at options I have the directory path of Projects, exclusive is set to false, the file name is todo.txt, the fullPath is Projects/todo.txt, and recursive is false. And there again is the directory entry of the root of the file system. So that's how it deals with building up options for a file. And I'll get my fail handler, and then in order to create a file, if that directory doesn't exist, well, I want to go ahead and create it to make sure the file has a place to go. So here I'm calling createDirectory and once again createDirectory works just as well even if that directory already exists. So I'll let this run to the callback and now I need to take a look at the result to make sure that I'm dealing with the lastDirectory. So if you're working with a directory hierarchy, let's say you have a number of different subfolders, you need to make sure that you're not creating this file within each one of the directories. Because the success callback will run each time a directory is created. And again getting directories and creating directories is basically the same thing at this point, but that success callback will fire each time it encounters a directory. So I need to make sure that when I'm dealing with the last directory in the list, then I go in and create the file. Now as you saw before creating a file is done by calling getFile, passing in the fileName, and setting the create flag to true. Exclusive at this point is driven by the options, based on whether or not you set up earlier that you want it to be an exclusive create. And so once that's done the success or fail callback is called. So I'll let this run. And now the file in the Projects folder called todo.txt is created.
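The path-parsing step walked through above is pure string logic, so it can be sketched on its own. The function name and return shape here are illustrative, not the course's exact code.

```javascript
// Illustrative sketch of the path-parsing step described above: split the
// path on slashes, and if the last segment contains a period, treat it as
// the file name and pop it off; what remains joins back into the directory path.
function parsePath(path) {
  var parts = path.split('/');
  var fileName = null;
  if (parts[parts.length - 1].indexOf('.') !== -1) {
    fileName = parts.pop(); // remove the file name from the array
  }
  return {
    fileName: fileName,
    directoryPath: parts.join('/'), // what's left is the directory path
    fullPath: path
  };
}
```

So `parsePath('Projects/todo.txt')` separates the directory path `Projects` from the file name `todo.txt`, while a path with no period in its last segment is treated as all directory.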
Demo: localFileSystem Module - Get and File Exists
Now that the todo.txt file is created let's try to get it. And this is again much the same as what we did before, the main difference being that we're not passing in the create flag in the options that we send in to getFile. So from the top: getFile requests the file system, figures out the options, sets the fail handler, gets the parent directory by going to options.directoryPath, and that has a success callback which gives the directory entry. Then inside that directory I can call getFile based off the fileName, passing in an empty options object, and then running the success or fail callbacks. So I'll let this run. And here I get access to that fileEntry. Now let's take a look at exists. So by calling fileExists I'm passing in the path to the file, and after parsing the options you can see that I have the fileName and the path separated out. Just like I did for the directory, the first thing that I'll do is take the result and default exists to false, and then I really want to find out if the parent directory exists first. So I'll run directoryExists based off of the directoryPath and then I'll get a result from there. So if that exists then I'll call getFile. If getFile returns a fileEntry to me then I can set exists to true and then call the success callback passing in the result. If it doesn't exist then the error handler will be called and then I'll take a look at the code to make sure that it equals 1. So if it does equal 1 I'll call success passing in the result. Here this will have a setting to say that exists equals false, otherwise if there's actually an error then I'll send that error back up through the fail handler, but remember first I check to see if the directory exists. So if it doesn't exist, meaning the check of whether or not that directory exists returns false, then I call the success handler, passing in the result.
And here this will have exists set to false, so if the directory doesn't exist it returns false, if the directory exists and the file doesn't exist it returns false, otherwise it returns true. So I'll let this finish running. And then you can see that the file exists.
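The decision chain just described can be sketched with the two dependencies injected, so the control flow stands on its own. This is a hedged sketch, not the module's actual code: `directoryExists` and `getFile` are passed in as parameters, and error code 1 corresponds to the not-found error mentioned above.

```javascript
// Sketch of the fileExists decision chain described above. directoryExists
// and getFile are injected so the branching logic can be shown in isolation:
// no parent directory -> false; file lookup fails with code 1 -> false;
// any other error -> fail; file found -> true.
function fileExists(opts, directoryExists, getFile, success, fail) {
  var result = { exists: false };
  directoryExists(opts.directoryPath, function (dirResult) {
    if (!dirResult.exists) {
      return success(result); // no parent directory, so no file
    }
    getFile(opts.fullPath,
      function () { result.exists = true; success(result); },
      function (err) {
        // code 1 (not found) just means "doesn't exist"; anything else is a real error
        if (err.code === 1) { success(result); } else { fail(err); }
      });
  });
}
```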
Demo: localFileSystem Module - Read, Prepend and Append
Now let's take a look at Prepend, Read, and Append all together. So here what I'd like to do is go into the local file system and call prependFile. Here I could have chosen just to pass in the path itself, but I did it with the options just to show you how you can start off using an options object as well. Then I pass in the text that I want to be saved to the file, so here it's prepended text. I've set up the function with a little bit of shorthand so the default value for the content type is text/plain. So if you pass in null it'll automatically set that for you, otherwise you can pass in a custom content type. And when the success handler runs I know that the file's been prepended. So I'll run that, the file's been prepended, and if I want to read it, here I just call readFile, I can pass in the path, and then I get the result of what's in the file. So there we've got prepended text. To append text to the file I'll call appendFile, again using options, or I could've used the path at this point, passing in the string of text that I want to append. Again content type is null but it gets set to text/plain in the function and then I have a success callback. So then the file's been appended and if I read it out again there I have both sets of text for prepended and appended text. Now let's reset everything so we can see this happening as we step through the code. So I'll open up the developer tools. And then refresh the page and create a blank version of the file that we're working with here. Okay, so there it is. So now when we go to prepend the text we're in the function for prependFile. So starting off I want to make sure I have some data to work with and that the length is greater than 0. I'll request the file system and then work out my options. I'll also get the fail handler that I need to use and then like I said if contentType doesn't exist then I'll default it to text/plain.
So now I have contentType set correctly, then I need to get the file, and once I have that fileEntry I can create the writer. For the writer I'll subscribe to writeEnd and the error handler. And at that point if there's an error I'll just pass it into fail. Now in order to write the data in I need to create a new blob, passing the data in as an array, and then setting the contentType to whatever was decided earlier. I pass the blob into the write method of the writer class and then once it's done writing it'll fire writeEnd, and then I can call the success handler. So there the file's been prepended. Now let's read it. So once I have my fileEntry available, from there I can call file which gives me a callback that gives me access to the actual file itself. Then I call FileReader, provide a handler for load end, and onerror, and then on the reader I call readAsText passing in the file. So the result of that I can access through this.result and pass that into the success handler. So the result comes in here and I can log that out to the RESULT pane. Lastly to append to the file here I have appendFile, pathOrOptions is coming in as an options object, and I do the same thing here. After I get the fail handler I default contentType to text/plain if it's not provided. And then once I have my instance of fileEntry then I can create a writer. The callback for that will expose the writer and then I can seek to the end of the length of the writer. So this finds the end of the stream that I'm working with in order to write to the file. When the writing is done then I can call the success callback and I'm dealing with errors there as well. So I need to do the same thing here with the blob. The data goes into an array, I set the contentType, call writer.write passing in the blob, and that will fire writeEnd once the write operation has ended. So here I go, the file's appended.
And now just to make sure, I can read out the contents once again and there's the text that we wrote into the file.
Demo: localFileSystem Module - Delete and Replace File
Now the implementations for delete and replace are somewhat related. So let's take a look at both of those here now. So for delete what I want to be able to do is call a function called deleteFile, pass in the path, and then get a notification of when that file is deleted. So let's take a look at it in the developer tools. So in order to delete that file I need an instance of fileEntry. So here I'm going through all the same motions, calling getFile, and once I have access to that fileEntry then I can simply call remove. And then I'll pass in my existing success and fail callbacks that the remove function can eventually call. So doing that deletes the file from the file system. Now let's say I want to replace the contents of a file. Now I just deleted that so I'll create it and then I'll come in and add some text to that file. So you can see now it has the text of prepended text in it, but let's say I want to replace the entire contents of that file with some data that I'll pass into it. So the difference between doing a replace and a prepend or append is that prepend and append add to the data that's within an existing file. Here what I want to be able to do is disregard what's already there and only save the file with the content that I pass into it. So here for replaceFile I'll pass in the options, again it could just be a string, but I'm showing you how you can do it both ways by calling createOptions ahead of time. Then I have that options object. Again the same thing here, I'm passing in null, which will allow it to default to text/plain, and then I have the success callback where I can say that the file's been replaced. What replaceFile does is it chains together a number of the different functions that we've already created within this module. So the first thing that it does is delete the file, then it creates the file once again, and then saves the contents by calling prependFile.
So as I let this run the rest of the way the file's been replaced and so now if we go and read out the contents of the file it has replaced content in it instead of the text that it had before.
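The delete-then-create-then-prepend chain described above can be sketched with the three module functions injected as parameters. This is an illustrative sketch of the chaining idea, not the module's exact signature.

```javascript
// Sketch of how replaceFile chains the module's existing functions, as
// described above: delete the file, recreate it, then write the new
// contents with prependFile. The three functions are injected here so the
// chain can be shown on its own.
function replaceFile(path, data, deleteFile, createFile, prependFile, success, fail) {
  deleteFile(path, function () {
    createFile(path, function () {
      // A prepend into a freshly created, empty file is a plain write
      prependFile(path, data, 'text/plain', success, fail);
    }, fail);
  }, fail);
}
```

This is the payoff of the abstraction layer: replace needs no new file system code at all, just composition of functions that already exist.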
Demo: localFileSystem Module - Move, Rename and Copy File
Now for the last bit of functionality we'll take a look at wrapping up move, rename, and copy. So I'll open up the developer tools. And the function called is moveFile, passing in the source path and the destination path, and that will move the file. So here I'll Run the code. And this is the same functionality as you saw before, except it's wrapped up with all the information that we need to be able to handle errors or deal with a path or an options object being passed into the function. So again I'm creating the directory ahead of time. If I try to move it to a directory that doesn't exist that won't work out so well, so I'm calling createDirectory to make sure that the fullPath of where the ultimate destination of the file will be is there. So once I create that directory then I can just call files.getFile. And once I have that entry I just call moveTo with the destination. So there it's been moved. Now let's create it one more time so that we can rename it. Here in the rename function, at this point I'm passing in the path of the file I want to rename and the new file name that it'll be renamed to. So todo-old.txt, and I just need to get the file and then getDirectory based off the directory path that comes out of options. So if we take a look at options, the fullPath is Projects/todo and the directoryPath is just Projects, so it uses that path. Once the request succeeds I have the directoryEntry and then I can go to the fileEntry and tell it to moveTo the directoryEntry, which is its parent entry right now, and give it the new file name. I'll let that run. And create the file one last time in order to copy it. So here I'll call copyFile, pass in the source path and the destination path, and then I'll have a copied file. So here's the copyFile function, and of course the same logic exists in order to create the directory. I want to copy it to something that I know is there, so I'll call createDirectory in order for that to happen.
Make sure I'm working with the last directory in the series in case it's a hierarchical list of directories. And then I'll call getFile, once I have an instance of that file I'll copyTo and pass in the directoryEntry. So now as this runs a file is copied to its destination. And with that we've completed implementing the local file system module. So now you have full capability to manage directories and files in order to simplify and streamline your code as you're using the HTML5 file system in your applications.
In this module you learned to work with the files at a very low level, as well as how to wrap up the complex parts into a module that makes working with the file system much more manageable. In the next module we'll look at testing the capacity limits in the file system, as well as implementing a file editor which takes advantage of all the goodness we've created by abstracting away all the complexities of the file system.
File System: Testing Capacity Limits & Implementing a File Editor
Demo: File System Capacity Limits
Now that we've created a handy little abstraction layer on top of the file system we can put it to good use. So here with this capacity demo what I have prepared for you is a similar demonstration to what you've seen in working with web storage and IndexedDB. What I'm going to do is click on the Run button here and I'm requesting a very large data file using an AJAX request. So as you can see at the top it's about 57/58MB. So I'll request that file and then it attempts to save it into the directory Capacity-Test. And then it gives it a file name starting at 1.txt. Now I can Run this again and it saves the file as 2.txt. And I can just keep going on and on. So this is the code sample which demonstrates to you how you can save large amounts of data into the file system. In fact let's take a look at the file system inspector and see how it looks. So here's my Capacity-Test folder and inside I have three different files, each about 58MB in size. So you can use this demo or something like it to test what type of capacity you have available while you're developing your applications. So let's take a look at the code so you see how I did it. So here I'm aliasing localFileSystem straight up at the top and then I have a function for getCounter. So I have this counter variable declared outside of the scope of the function that runs all of this code so I can keep track of a new file name. Then I have my save function. Here I'm creating my file path and name, so I'm writing into Capacity-Test and the file name is the value from getCounter plus .txt. So each time this runs I get a brand new file name. And then I can log out to the RESULT pane that I'm creating the file name at this path in the localFileSystem. Then I'll begin to create the file, I can pass in the path and name, and once it's created I get a callback that includes the file entry that was created. So first I'm creating the file and then I'll attempt to save into the file.
So once I have that file created I can prepend into the file with a given file path and name. I'll pass in my largeDataFileContents and that gets its value from the AJAX request that's done in the code below. And then here I'll pass in my content type, and as you've seen before it's not necessary to do it, but here I'm showing you how to do it explicitly. And then once it's done writing into the file I get the success callback where I can say that at that filePathAndName the content is saved. So the save function creates the file and saves data into it. Now as this code runs the first time, largeDataFileContents will equal null. So what I want to do is go in and make that AJAX request in order to request that very large file. And when I get a response back from the request then I can set that equal to largeDataFileContents and then call save. Otherwise if there is data in that variable then I'll just call save again so it can continue to write new files within the file system. So feel free to use this demo to figure out where your limits are, and it'll also help you see how different browsers react to trying to write into the file system when you need to write a large amount of data through your application.
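The counter-based naming described above is simple enough to sketch in a few lines. The names here are illustrative, modeled on the demo's description of a counter declared outside the function's scope.

```javascript
// Sketch of the capacity demo's file naming described above. The counter
// lives outside the functions so each run of the demo yields a fresh name:
// Capacity-Test/1.txt, Capacity-Test/2.txt, and so on.
var counter = 0;

function getCounter() {
  counter += 1;
  return counter;
}

function nextFilePath() {
  // Build the path and name for the next large test file
  return 'Capacity-Test/' + getCounter() + '.txt';
}
```

Each call to `nextFilePath` produces the path for the next ~58MB test file, so repeated runs keep filling the file system until the browser pushes back.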
Demo: File Editor Demonstration
Now at the end of each module I like to leave you with something that's a little more real-world than some of the isolated demos that we've been working with. So here is a file editor which will show you how to manage a to-do list within the browser. So let's go to the directory Documents and go to todo.txt. Now open the file up and it's blank at this point. So now I can add in some things that I need to do today. Finish recording course, that's always a good one, check my email, and eat some sushi, yes. Okay so once I have my to-do list here I can Save that, you can see that the file saved over there on the right-hand side, and now I can refresh my browser. And open the file back up again. There's the contents of my file and I can add some more to it, drive home, Save that. Refresh, and you can see it all continues to persist there. So let me show you how to build this really simple file editor.
Demo: File Editor Markup
Demo: File Editor View Model
The viewModel here is an immediately invoked function expression, so everything's wrapped up into the module here for this viewModel. At the top I just like to alias localFileSystem because it's a little bit long, so I have lfs pointing to the localFileSystem module and then setErrorPublisher. So I don't really have anything on this page that works as a UI to show any error messages, so I'm just logging them out to the console. And by using the warn method that will show up as a warning in the console if anything does happen. Now as the page loads here I'm going to the jQuery ready function and I'm looking for requestFileSystem, if that's unsupported then I'll show the unsupported message. Otherwise I can just apply bindings from Knockout into the viewModel. Now let's take a look at the nested viewModel object. So like I said I have some observables: the directoryName, fileName, and content. So as any changes are made in the input elements that these are bound to, those changes will make their way back to this object. Now I have a little helper method here for getPath and all it does is take the directoryName and the fileName and glues them together with a slash and returns that back up from getPath. And both openFile and saveFile will use this function. So first let's open up the file. So I'll find out the path that I'm working with and I want to find out if that file exists. If it does exist, if the result returns true, then I can read the contents of the file by passing in the path, and then with the callback it'll return the contents of the file and I can set that into my observable item for the content. So at this point the value will be bound into the text area. Otherwise if the file doesn't exist I'll just clear out the content within the text area.
So take a second to look at this code, and what I want you to realize is how much easier it is to read and work with code like this rather than all the nested callbacks you would otherwise have to write in order to open the file system and work at the really raw level. Whether you use a module like the one we've implemented in this course or something else, I hope this shows you it works to your advantage to deal with abstractions rather than coding directly against the file system API on each and every page. All right, let's take a look at saveFile now. In order to save the file I need to get the path, and then I have this nested function, showSavedMessage, which we'll come back to in just a moment. Again, the first thing I'd like to do is find out if the file exists, so I pass in the path and get back a result. Now if the file does exist, then I'd just like to replace that file, so I take the path and pass in the content; I'm dealing with text, so I just left this argument as null for now. Once I get a success callback I show the saved message. If the file doesn't exist, then I first create it and then write to the file by calling prependFile, passing in the path and the content, and once that's successful I can show the saved message. showSavedMessage is very straightforward: I use a jQuery selector to get at the DOM object for the saved message container, fade it in, and then create a setTimeout; two seconds after that fires, I fade the message out. So this is certainly a very simple implementation, but ultimately what I hope you come away with is that even though there are a lot of callbacks and asynchronous operations required to work with the file system, with just a little bit of work you can wrap it up and make it fairly painless to use in your applications.
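The saveFile branching described above looks roughly like this. Again the wrapper names (fileExists, writeFile, createFile, prependFile) simply mirror the narration and are assumptions about the course's module, the in-memory `fakeLfs` is a stand-in so the sketch runs anywhere, and showSavedMessage is reduced to a console message instead of the jQuery fade-in/fade-out used in the demo.

```javascript
function saveFile(lfs, path, content, showSavedMessage) {
  // First find out whether the file already exists.
  lfs.fileExists(path, function (exists) {
    if (exists) {
      // Replace the existing file's contents; the third argument is the
      // content type, left as null since we're dealing with plain text.
      lfs.writeFile(path, content, null, showSavedMessage);
    } else {
      // Create the file first, then write the content into it.
      lfs.createFile(path, function () {
        lfs.prependFile(path, content, showSavedMessage);
      });
    }
  });
}

// In-memory stand-in for the course's localFileSystem wrapper (assumption).
var fakeLfs = {
  files: {},
  fileExists: function (path, cb) { cb(path in this.files); },
  createFile: function (path, cb) { this.files[path] = ''; cb(); },
  writeFile: function (path, content, type, cb) { this.files[path] = content; cb(); },
  prependFile: function (path, content, cb) {
    this.files[path] = content + this.files[path];
    cb();
  }
};

saveFile(fakeLfs, 'Documents/todo.txt', 'drive home', function () {
  console.log('Saved!'); // in the demo: fade the message in, then out
});
```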
In this module you've learned how to use the HTML5 file system to create directories and files, learned about the storage capacity of the file system, and on top of that learned to abstract away the complexity of using the raw API. Next up I'll leave you with a few brief introductions to some libraries that can help you use what you've learned in this course to make your client-side persistent applications easier to develop and ultimately more flexible to maintain.
Well welcome to the final module in the course. Congratulations, you've come a long way. In our final time together, you'll get to see a few third-party libraries in action that can help illustrate important concepts in extending the browser reach of your client-side persistent applications. So let's start off by taking a look at store.js.
What is store.js?
When using store.js the first thing you want to do is make sure that it's enabled. What this does is report back that you're working in a browser that will be able to persist the data you're trying to save. This should only return false if you're working with something fairly old, because of the different fallbacks available through store.js. In order to persist some data, you call store.set, pass in a key, and then the value that you want to persist. Here I'm creating a JSON object with a name of craigshoemaker. Once that's been persisted you can get the value back by calling store.get and passing in the same key you saved with, and then you get that user back. Between the calls to set and get, if there's any serialization required, store.js will handle that, so what I'm getting back here is a fully qualified object of the user, just like I saved it originally. So if I Run this you can see that what I'm getting back is that object. You can set objects, here I'm creating a slightly more complex object of some preferences, and you can even set a string value against a key. Once I've set a number of different items through store.js, in order to get at all the items I can call store.getAll, and what that returns is a hash of all the values that have been stored. You'll notice what's being printed up in the RESULT pane shows gender as the string male, and then my preferences and the user. Let's look in the developer tools, and this is the object right here that has all the values stored by store.js. And finally, if you want to iterate through each one of the items that have been stored, you can do that by calling forEach, which iterates over the items and gives you access to the key and the value. So here you can see I'm going through each one, getting back that string in the case of gender, and getting back the objects in the case of preferences and user.
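The store.js calls narrated above look roughly like this. Because the real library persists through localStorage (with fallbacks) and needs a browser, a minimal in-memory stand-in with the same set/get/getAll/forEach surface is sketched below so the snippet runs anywhere; the stand-in also shows the key point that serialization happens between set and get.

```javascript
var backing = {};
var store = {
  enabled: true, // real store.js reports whether any storage fallback works
  set: function (key, value) {
    // store.js serializes values on the way in...
    backing[key] = JSON.stringify(value);
  },
  get: function (key) {
    // ...and deserializes them on the way out.
    return backing[key] === undefined ? undefined : JSON.parse(backing[key]);
  },
  getAll: function () {
    // Returns a hash of every stored value.
    var all = {};
    for (var key in backing) { all[key] = JSON.parse(backing[key]); }
    return all;
  },
  forEach: function (callback) {
    for (var key in backing) { callback(key, JSON.parse(backing[key])); }
  }
};

// Usage as in the demo:
store.set('user', { name: 'craigshoemaker' });
store.set('preferences', { theme: 'dark', fontSize: 14 }); // hypothetical values
store.set('gender', 'male');

var user = store.get('user');   // fully rehydrated object
console.log(user.name);         // "craigshoemaker"

console.log(store.getAll());    // hash of everything that's been stored

store.forEach(function (key, value) {
  console.log(key, value);      // gender as a string, the others as objects
});
```

With the real library, you would include store.js on the page and check `store.enabled` first; the calls themselves read the same.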
So store.js makes it very simple to persist data into local storage, while knowing that your code will still work if you're dealing with older browsers.
What is amplify.js?
Amplify's store component handles persistent client-side storage using standards like localStorage and sessionStorage, but also features a series of fallbacks for nonstandard implementations in older browsers. Within those fallbacks it will even use proprietary persistence strategies as far back as IE 5 and Firefox 2 just to make sure that data is saved successfully. If no persistence strategy is available in the browser, then it stores data in memory for as long as that tab is open, which basically makes it work like sessionStorage. Some of the features Amplify has beyond store.js are time-based expirations and an extensibility model, which allows you to do things like store data in Flash if you wanted to build that out. It also does other cool things like abstracting away AJAX requests and interfacing with UI components, but for now we'll just discuss the storage capabilities. Now like I said, Amplify uses a number of different fallbacks and categorizes them as storage types, and it goes through each of these storage types in succession until it finds one it can persist data in. The first one is web storage. It defaults to localStorage, but you can explicitly set data in sessionStorage if you want using Amplify's API. All versions of Chrome support this; Firefox supports it from version 3.5 and up, with session support available as far back as version 2; Internet Explorer support starts at version 8, Safari at version 4, Opera at 10.5, and in mobile environments Safari on iOS supports it from version 2, as does the Android browser. If it doesn't find an implementation for web storage, then it begins to look for globalStorage. Now globalStorage is only implemented in Firefox 2, so probably not very many browsers will end up using this strategy. Finally, to round things off, in very old versions of Internet Explorer, versions 5-7, it will use the userData API.
And then for anything else that's left it stores that data in memory, which again makes it work as if it were sessionStorage in that the data will persist as long as that tab is open, but as soon as you navigate away from that page or close the browser the data will no longer be there. Alright let's take a look at a simple code example using amplify.
So here's a demo with the basic syntax for Amplify. You'll notice that once you include amplify in your page, you have the amplify object available, and you can simply call store, passing in a key and then an object. Here I'm just passing in information about this course: the title and the author name. Then, in order to get back the value you've stored, you call store once again with just the key, and the fully qualified object is returned back from Amplify. On top of passing in object literals, here I'm showing how you can pass in an array; of course you could pass in strings, Booleans, integers, or any other type you have available and Amplify will be able to store it for you. Now if you'd like access to everything that's been stored using Amplify, you call amplify.store without any parameters and you get back a hash of all the values. So now you can see that the value printed up in the RESULT pane, from logging the course, is this object right here. And then when I log all the items in the store, here's the one object with a key of course that points to this object up here, and a key of libraries which points to the array, with each of the array's values printed out. We can take a look at it in the developer tools window as well: here's the object for everything, and you can see there's the course and there's the array with each one of its values. So Amplify features a very simple interface, but quite a lot of flexibility for working with a number of different browsers going back to some pretty early versions.
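The three call shapes of amplify.store (save, get one, get all) are distinguished purely by argument count, which the following in-memory stand-in makes explicit. The stand-in is a sketch of the interface only; the real library persists through web storage and its fallbacks, and the course/libraries values echo the demo.

```javascript
var data = {};
var amplify = {
  store: function (key, value) {
    if (arguments.length === 0) {
      // amplify.store() -> a hash of everything that's been stored
      var all = {};
      for (var k in data) { all[k] = data[k]; }
      return all;
    }
    if (arguments.length === 1) {
      return data[key];          // amplify.store(key) -> the stored value
    }
    data[key] = value;           // amplify.store(key, value) -> save it
    return value;
  }
};

// Usage as in the demo:
amplify.store('course', {
  title: 'HTML5 Web Storage, IndexedDB and File System',
  author: 'Craig Shoemaker'
});
amplify.store('libraries', ['store.js', 'amplify.js', 'lawnchair.js']);

console.log(amplify.store('course').title); // the fully qualified object comes back
console.log(amplify.store());               // { course: {...}, libraries: [...] }
```

The real amplify.store also accepts an options object for things like time-based expiration, which the stand-in above doesn't model.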
What is lawnchair.js?
Lawnchair.js features an adaptive persistence mechanism where the underlying store is abstracted away behind an interface. This means that you have a number of different persistence strategies to choose from when using Lawnchair, and they're all made available through what are called adapters. You must use Lawnchair with an adapter. Now Lawnchair comes with a stock default adapter, but you can always plug in any of the adapters I'll describe here in order to change where the data is stored. In the end the adapter implements the logic to work with the persistence medium, all against a known interface. So what are your options? The blackberry-persistent-store obviously works with BlackBerry devices and, as the Lawnchair site says, may be a viable option for PhoneGap applications. The DOM adapter is the default adapter; it writes to local storage, so if you just spin up Lawnchair you'll be writing into local storage. Otherwise you'll have to explicitly include another adapter in order to save data in a different location. The window-name adapter writes data into the window.name property of the window and is really only an option for very old browsers that have none of the other options available. The gears-sqlite strategy is best used for Android devices where the browser version is less than 2. Also, for supporting older versions of IE, versions 5-7, Lawnchair is able to use the userData API. And as we talked about Web SQL earlier in the course, the standard for SQLite or Web SQL has been deprecated, so depending on the browsers you're targeting this may or may not be useful to you, but it is an option through Lawnchair. On the other side of the coin for client-side databases, if you wish to store your data in a database, you can choose to use IndexedDB.
And finally, if nothing else works, Lawnchair will fall back to saving the data in memory, but of course there's no long-term persistence happening there: if you refresh the page, close the browser, or move on to another page you'll lose the data stored in memory. Okay, now that you've had a brief introduction to Lawnchair, let's take a look at an introductory code sample.
So here are a few simple examples of working with lawnchair.js. You'll notice the first thing I'm doing is passing in an options object telling Lawnchair which strategy I want to use. Here I'm being explicit about it, but this is the default implementation of DOM storage, which is local storage. Once that runs I get a callback that exposes the store, the instance of the store I'll be using for persistence. Now when you're testing, this next one will probably come in handy every once in a while, and that is the nuke method; when you call nuke, it deletes all the data saved within that store. Next I call save, and I like the name of this function because it doesn't matter if you're going to a database or DOM storage or wherever, it's just saving the data. You can pass in an object, and notice here I'm explicitly giving it a key called user, and then data: a name and my Twitter handle. Once that's successfully saved within the browser I get a callback which exposes the object being saved. Now that's one way to go about it. Another way is to call save against an object that has no defined key; you'll notice here I just have name and author. Once this is saved it returns the course information, but with a generated key. So as I Run this you can see the user account information has all the data, and the object returns with the key I provided for it; but for the course there's the name of the course, the author, and an auto-generated key, which is a GUID value. If you want to see all the keys stored within Lawnchair, you can get the instance of the store the same way I did previously, by calling Lawnchair, sending in that options object, and getting the callback. Then from the store you can call keys and it returns an array of all the keys available in Lawnchair.
Now getting data is just as easy too. I'm not going to work with that long GUID, but here I'm getting Lawnchair in exactly the same way as before, and this time I'm calling store.get. I pass in the key of user, which was the first one I created when we were saving data. That has a callback with the returned user, and there I have the data coming out of Lawnchair. Again, the benefit here is that you can choose where you want to save the data; it doesn't have to be in local storage. You can go all the way back to userData for older implementations of IE, you could choose Web SQL if you have mobile applications with that available, or even IndexedDB. It's totally up to you, based on the adapters you bring into Lawnchair.
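The save/keys/get round trip from these two demos can be sketched as follows. The callback-based interface (nuke, save, get, keys) mirrors Lawnchair's, but the constructor below is a minimal in-memory stand-in rather than the real library, and the generated key here is a simple counter where real Lawnchair generates a GUID; the user and course objects echo the demo.

```javascript
// In-memory stand-in for Lawnchair (assumption, for demonstration only):
// the options object is accepted but ignored, and every call is synchronous.
function Lawnchair(options, callback) {
  var records = {};
  var nextId = 0;
  var store = {
    nuke: function (cb) { records = {}; if (cb) { cb(); } },
    save: function (obj, cb) {
      // Use the caller's key, or generate one (real Lawnchair uses a GUID).
      if (!obj.key) { obj.key = 'generated-' + (nextId++); }
      records[obj.key] = obj;
      if (cb) { cb(obj); }
    },
    get: function (key, cb) { cb(records[key]); },
    keys: function (cb) { cb(Object.keys(records)); }
  };
  callback.call(store, store);
}

// Usage as in the demos:
new Lawnchair({ adapter: 'dom' }, function (store) {
  store.nuke(); // handy while testing: wipe everything in this store

  // Save with an explicit key...
  store.save({ key: 'user', name: 'craigshoemaker' }, function (saved) {
    console.log(saved.key); // "user"
  });

  // ...or let a key be generated for the object.
  store.save({ name: 'HTML5 Web Storage', author: 'Craig Shoemaker' },
    function (saved) {
      console.log(saved.key); // the auto-generated key
    });

  store.keys(function (keys) { console.log(keys); });

  store.get('user', function (user) { console.log(user.name); });
});
```

With the real library, swapping the adapter in the options object (dom, window-name, webkit-sqlite, indexed-db, and so on) changes where the data lands while this calling code stays the same.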
Craig Shoemaker is a developer, instructor, writer, podcaster, and technical evangelist of all things awesome.
Released 3 Dec 2013