diff --git a/Documentation/power/tuxonice-internals.txt b/Documentation/power/tuxonice-internals.txt
new file mode 100644
index 0000000..2247939
--- /dev/null
+++ b/Documentation/power/tuxonice-internals.txt
@@ -0,0 +1,469 @@
+		TuxOnIce 2.2 Internal Documentation.
+			Updated to 18 September 2007
+
+1. Introduction.
+
+ TuxOnIce 2.2 is an addition to the Linux Kernel, designed to
+ allow the user to quickly shut down and quickly boot a computer, without
+ needing to close documents or programs. It is equivalent to the
+ hibernate facility in some laptops. This implementation, however,
+ requires no special BIOS or hardware support.
+
+ The code in these files is based upon the original implementation
+ prepared by Gabor Kuti and additional work by Pavel Machek and a
+ host of others. This code has been substantially reworked by Nigel
+ Cunningham, again with the help and testing of many others, not the
+ least of whom is Michael Frank. At its heart, however, the operation is
+ essentially the same as Gabor's version.
+
+2. Overview of operation.
+
+ The basic sequence of operations is as follows:
+
+ a. Quiesce all other activity.
+ b. Ensure enough memory and storage space are available, and attempt
+    to free memory/storage if necessary.
+ c. Allocate the required memory and storage space.
+ d. Write the image.
+ e. Power down.
+
+ There are a number of complicating factors which mean that things are
+ not as simple as the above would imply, however...
+
+ o The activity of each process must be stopped at a point where it will
+ not be holding locks necessary for saving the image, or unexpectedly
+ restart operations due to something like a timeout and thereby make
+ our image inconsistent.
+
+ o It is desirable that we sync outstanding I/O to disk before calculating
+ image statistics. This reduces the risk of corruption if the user should
+ suspend but then not resume, and also makes later parts of the operation
+ safer (see below).
+
+ o We need to get as close as we can to an atomic copy of the data.
+ Inconsistencies in the image will result in inconsistent memory contents at
+ resume time, and thus in instability of the system and/or file system
+ corruption. This would appear to imply a maximum image size of one half of
+ the amount of RAM, but we have a solution... (again, below).
+
+ o In 2.6, we choose to play nicely with the other suspend-to-disk
+ implementations.
+
+3. Detailed description of internals.
+
+ a. Quiescing activity.
+
+ Safely quiescing the system is achieved using three separate but related
+ aspects.
+
+ First, we note that the vast majority of processes don't need to run during
+ suspend. They can be 'frozen'. We therefore implement a refrigerator
+ routine, which processes enter and in which they remain until the cycle is
+ complete. Processes enter the refrigerator via try_to_freeze() invocations
+ at appropriate places. A process cannot be frozen at just any point. It
+ must not be holding locks that will be needed for writing the image or
+ freezing other processes. For this reason, userspace processes generally
+ enter the refrigerator via the signal handling code, and kernel threads at
+ the place in their event loops where they drop locks and yield to other
+ processes or sleep.
+
+ The task of freezing processes is complicated by the fact that there can be
+ interdependencies between processes. Freezing process A before process B may
+ mean that process B cannot be frozen, because it blocks waiting for
+ process A rather than stopping in the refrigerator.
+ This issue is seen where userspace waits on freezeable kernel threads
+ or fuse filesystem threads. To address this issue, we implement the
+ following algorithm for quiescing activity:
+
+ - Freeze filesystems (including fuse - userspace programs starting
+   new requests are immediately frozen; programs already running
+   requests complete their work before being frozen in the next
+   step)
+ - Freeze userspace
+ - Thaw filesystems (this is safe now that userspace is frozen and no
+   fuse requests are outstanding).
+ - Invoke sys_sync (noop on fuse).
+ - Freeze filesystems
+ - Freeze kernel threads
+
+ If we need to free memory, we thaw kernel threads and filesystems, but not
+ userspace. We can then free caches without worrying about deadlocks due to
+ swap files being on frozen filesystems, or the like.
+
+ b. Ensure enough memory & storage are available.
+
+ We have a number of constraints to meet in order to be able to successfully
+ suspend and resume.
+
+ First, the image will be written in two parts, described below. One of these
+ parts needs to have an atomic copy made, which of course implies a maximum
+ size of one half of the amount of system memory. The other part ('pageset')
+ is not atomically copied, and can therefore be as large or small as desired.
+
+ Second, we have constraints on the amount of storage available. In these
+ calculations, we may also consider any compression that will be done. The
+ cryptoapi module allows the user to configure an expected compression ratio.
+
+ Third, the user can specify an arbitrary limit on the image size, in
+ megabytes. This limit is treated as a soft limit, so that we don't fail the
+ attempt to suspend if we cannot meet this constraint.
+
+ c. Allocate the required memory and storage space.
+
+ Having done the initial freeze, we determine whether the above constraints
+ are met, and seek to allocate the metadata for the image. If the constraints
+ are not met, or we fail to allocate the required space for the metadata, we
+ seek to free the amount of memory that we calculate is needed and try again.
+ We allow up to four iterations of this loop before aborting the cycle. If we
+ do fail, it should only be because of a bug in TuxOnIce's calculations.
+
+ These steps are merged together in the prepare_image function, found in
+ prepare_image.c. The functions are merged because of the cyclical nature
+ of the problem of calculating how much memory and storage is needed. Since
+ the data structures containing the information about the image must
+ themselves take memory and use storage, the amount of memory and storage
+ required changes as we prepare the image. Since the changes are not large,
+ only one or two iterations will be required to achieve a solution.
+
+ The recursive nature of the algorithm is minimised by keeping user space
+ frozen while preparing the image, and by the fact that our records of which
+ pages are to be saved and which pageset they are saved in use bitmaps (so
+ that changes in number or fragmentation of the pages to be saved don't
+ feed back via changes in the amount of memory needed for metadata). The
+ recursiveness is thus limited to any extra slab pages allocated to store the
+ extents that record storage used, and the effects of seeking to free memory.
+
+ d. Write the image.
+
+ We previously mentioned the need to create an atomic copy of the data, and
+ the half-of-memory limitation that is implied in this.
+ This limitation is circumvented by dividing the memory to be saved into
+ two parts, called pagesets.
+
+ Pageset2 contains the page cache - the pages on the active and inactive
+ lists. These pages aren't needed or modified while TuxOnIce is running, so
+ they can be safely written without an atomic copy. They are therefore
+ saved first and reloaded last. While saving these pages, TuxOnIce carefully
+ ensures that the work of writing the pages doesn't make the image
+ inconsistent.
+
+ Once pageset2 has been saved, we prepare to do the atomic copy of remaining
+ memory. As part of the preparation, we power down drivers, thereby providing
+ them with the opportunity to have their state recorded in the image. The
+ amount of memory allocated by drivers for this is usually negligible, but if
+ DRI is in use, video drivers may require significant amounts. Ideally we
+ would be able to query drivers while preparing the image as to the amount of
+ memory they will need. Unfortunately no such mechanism exists at the time of
+ writing. For this reason, TuxOnIce allows the user to set an
+ 'extra_pages_allowance', which is used to try to ensure sufficient memory
+ is available for drivers at this point. TuxOnIce also lets the user set this
+ value to 0. In this case, a test driver suspend is done while preparing the
+ image, and the difference (plus a margin) is used instead.
+
+ Having suspended the drivers, we save the CPU context before making an
+ atomic copy of pageset1, resuming the drivers and saving the atomic copy.
+ After saving the two pagesets, we just need to save our metadata before
+ powering down.
+
+ As we mentioned earlier, the contents of pageset2 pages aren't needed once
+ they've been saved. We therefore use them as the destination of our atomic
+ copy. In the unlikely event that pageset1 is larger, extra pages are
+ allocated while the image is being prepared. This is normally only a real
+ possibility when the system has just been booted and the page cache is
+ small.
+
+ This is where we need to be careful about syncing, however. Pageset2 will
+ probably contain filesystem metadata. If this is overwritten with pageset1
+ and then a sync occurs, the filesystem will be corrupted - at least until
+ resume time and another sync of the restored data. Since there is a
+ possibility that the user might not resume or (may it never be!) that
+ suspend might oops, we do our utmost to avoid syncing filesystems after
+ copying pageset1.
+
+ e. Power down.
+
+ Powering down uses standard kernel routines. TuxOnIce supports powering down
+ using the ACPI S3, S4 and S5 methods or the kernel's non-ACPI power-off.
+ Supporting suspend to ram (S3) as a power off option might sound strange,
+ but it allows the user to quickly get their system up and running again if
+ the battery doesn't run out (we just need to re-read the overwritten pages)
+ and if the battery does run out (or the user removes power), they can still
+ resume.
+
+4. Data Structures.
+
+ TuxOnIce uses three main structures to store its metadata and configuration
+ information:
+
+ a) Pageflags bitmaps.
+
+ TuxOnIce records which pages will be in pageset1, pageset2, the destination
+ of the atomic copy and the source of the atomically restored image using
+ bitmaps. These bitmaps are created from order zero allocations to maximise
+ reliability. The individual pages are combined together with pointers to
+ form per-zone bitmaps, which are in turn combined with another layer of
+ pointers to construct the overall bitmap.
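+
+ As an illustration only - the names below are invented for this
+ document and are not the actual TuxOnIce symbols - the layered
+ structure can be modelled in userspace C along these lines:
+
+	#include <limits.h>
+
+	#define PAGE_SIZE	4096UL
+	#define BITS_PER_PAGE	(PAGE_SIZE * CHAR_BIT)
+	#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
+
+	/* bitmap[zone][bit_page] is one order-zero page's worth of bits. */
+	typedef unsigned long ***dyn_pageflags_t;
+
+	/* Test the bit for page frame number 'pfn' (assumed here, for
+	 * simplicity, to be relative to the start of 'zone'). */
+	static int page_bit_is_set(dyn_pageflags_t bitmap, int zone,
+				   unsigned long pfn)
+	{
+		unsigned long *bits = bitmap[zone][pfn / BITS_PER_PAGE];
+		unsigned long bit = pfn % BITS_PER_PAGE;
+
+		return (bits[bit / BITS_PER_LONG] >>
+			(bit % BITS_PER_LONG)) & 1;
+	}
+
+ Only the leaf pages hold bits; the two layers of pointers are what keep
+ every allocation order zero.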
+
+ The pageset1 bitmap is thus easily stored in the image header for use at
+ resume time.
+
+ As mentioned above, using bitmaps also means that the amount of memory and
+ storage required for recording the above information is constant. This
+ greatly simplifies the work of preparing the image. In earlier versions of
+ TuxOnIce, extents were used to record which pages would be stored. In that
+ case, however, eating memory could result in greater fragmentation of the
+ lists of pages, which in turn required more memory to store the extents and
+ more storage in the image header. These could in turn require further
+ freeing of memory, and another iteration. All of this complexity is removed
+ by having bitmaps.
+
+ Bitmaps also make a lot of sense because TuxOnIce only ever iterates
+ through the lists. There is therefore no cost to not being able to find the
+ nth page in constant time. We only need to worry about the cost of finding
+ the n+1th page, given the location of the nth page. Bitwise optimisations
+ help here.
+
+ The data structure is: unsigned long ***.
+
+ b) Extents for block data.
+
+ TuxOnIce supports writing the image to multiple block devices. In the case
+ of swap, multiple partitions and/or files may be in use, and we happily use
+ them all. This is accomplished as follows:
+
+ Whatever the actual source of the allocated storage, the destination of the
+ image can be viewed in terms of one or more block devices, and on each
+ device, a list of sectors. To simplify matters, we only use contiguous,
+ PAGE_SIZE aligned sectors, like the swap code does.
+
+ Since sector numbers on each bdev may well not start at 0, it makes much
+ more sense to use extents here. Contiguous ranges of pages can thus be
+ represented in the extents by contiguous values.
+
+ Variations in block size are taken into account in transforming this data
+ into the parameters for bio submission.
+
+ We can thus implement a layer of abstraction wherein the core of TuxOnIce
+ doesn't have to worry about which device we're currently writing to or
+ where in the device we are. It simply requests that the next page in the
+ pageset or header be written, leaving the details to this lower layer.
+ The lower layer remembers where in the sequence of devices and blocks each
+ pageset starts. The header always starts at the beginning of the allocated
+ storage.
+
+ So extents are:
+
+ struct extent {
+   unsigned long minimum, maximum;
+   struct extent *next;
+ };
+
+ These are combined into chains of extents for a device:
+
+ struct extent_chain {
+   int size; /* size of the chain, i.e. sum of (max - min + 1) */
+   int allocs, frees;
+   char *name;
+   struct extent *first, *last_touched;
+ };
+
+ For each bdev, we need to store a little more info:
+
+ struct suspend_bdev_info {
+   struct block_device *bdev;
+   dev_t dev_t;
+   int bmap_shift;
+   int blocks_per_page;
+ };
+
+ The dev_t is used to identify the device in the stored image. As a result,
+ we expect devices at resume time to have the same major and minor numbers
+ as they had while suspending. This is primarily a concern where the user
+ utilises LVM for storage, as they will need to dmsetup their partitions in
+ such a way as to maintain this consistency at resume time.
+
+ bmap_shift and blocks_per_page record the effects of variations in
+ blocks per page settings for the filesystem and underlying bdev. For most
+ filesystems, these are the same, but for xfs, they can have independent
+ values.
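+
+ As a purely illustrative aside, here is a userspace sketch of appending
+ a block number to a chain, showing how contiguous values simply extend
+ the last extent. It assumes blocks arrive in ascending order and omits
+ the allocs, frees and name fields; the real TuxOnIce code differs:
+
+	#include <stdlib.h>
+
+	struct extent {
+		unsigned long minimum, maximum;
+		struct extent *next;
+	};
+
+	struct extent_chain {
+		int size;			/* pages covered so far */
+		struct extent *first, *last_touched;
+	};
+
+	/* Append one block number; adjacent values extend the last
+	 * extent, anything else starts a new one. 0 on success. */
+	static int extent_chain_add(struct extent_chain *chain,
+				    unsigned long value)
+	{
+		struct extent *new;
+
+		if (chain->last_touched &&
+		    chain->last_touched->maximum + 1 == value) {
+			chain->last_touched->maximum = value;
+			chain->size++;
+			return 0;
+		}
+
+		new = malloc(sizeof(*new));
+		if (!new)
+			return -1;
+		new->minimum = new->maximum = value;
+		new->next = NULL;
+
+		if (chain->last_touched)
+			chain->last_touched->next = new;
+		else
+			chain->first = new;
+		chain->last_touched = new;
+		chain->size++;
+		return 0;
+	}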
+
+ Combining these two structures (the extent chains and the bdev info), we
+ have everything we need to record what devices and what blocks on each
+ device are being used to store the image, and to submit I/O using
+ submit_bio.
+
+ The last elements in the picture are a means of recording how the storage
+ is being used.
+
+ We do this first and foremost by implementing a layer of abstraction on
+ top of the devices and extent chains which allows us to view however many
+ devices there might be as one long storage tape, with a single 'head' that
+ tracks a 'current position' on the tape:
+
+ struct extent_iterate_state {
+   struct extent_chain *chains;
+   int num_chains;
+   int current_chain;
+   struct extent *current_extent;
+   unsigned long current_offset;
+ };
+
+ That is, *chains points to an array of size num_chains of extent chains.
+ For the filewriter, this is always a single chain. For the swapwriter, the
+ array is of size MAX_SWAPFILES.
+
+ current_chain, current_extent and current_offset thus point to the current
+ index in the chains array (and into a matching array of struct
+ suspend_bdev_info), the current extent in that chain (to optimise access),
+ and the current offset within that extent.
+
+ The image is divided into three parts:
+ - The header
+ - Pageset 1
+ - Pageset 2
+
+ The header always starts at the first device and first block. We know its
+ size before we begin to save the image because we carefully account for
+ everything that will be stored in it.
+
+ The second pageset (LRU) is stored first. It begins on the next page after
+ the end of the header.
+
+ The first pageset is stored second. Its start location is only known once
+ pageset2 has been saved, since pageset2 may be compressed as it is written.
+ This location is thus recorded at the end of saving pageset2. It is also
+ page aligned.
+
+ Since this information is needed at resume time, and the location of extents
+ in memory will differ at resume time, this needs to be stored in a portable
+ way:
+
+ struct extent_iterate_saved_state {
+   int chain_num;
+   int extent_num;
+   unsigned long offset;
+ };
+
+ We can thus implement a layer of abstraction wherein the core of TuxOnIce
+ doesn't have to worry about which device we're currently writing to or
+ where in the device we are. It simply requests that the next page in the
+ pageset or header be written, leaving the details to this layer, and
+ invokes the routines to remember and restore the position, without having
+ to worry about the details of how the data is arranged on disk, or the
+ like.
+
+ c) Modules
+
+ One aim in designing TuxOnIce was to make it flexible. We wanted to allow
+ for the implementation of different methods of transforming a page to be
+ written to disk and different methods of getting the pages stored.
+
+ In early versions (the betas and perhaps Suspend1), compression support was
+ inlined in the image writing code, and the data structures and code for
+ managing swap were intertwined with the rest of the code. A number of people
+ had expressed interest in implementing image encryption, and alternative
+ methods of storing the image.
+
+ In order to achieve this, TuxOnIce was given a modular design.
+
+ A module is a single file which encapsulates the functionality needed
+ to transform a pageset of data (encryption or compression, for example),
+ or to write the pageset to a device. The former type of module is called
+ a 'page-transformer', the latter a 'writer'.
+
+ Modules are linked together in pipeline fashion.
+ There may be zero or more page transformers in a pipeline, and there is
+ always exactly one writer. The pipeline follows this pattern:
+
+		---------------------------------
+		|          TuxOnIce Core        |
+		---------------------------------
+				|
+				|
+		---------------------------------
+		|      Page transformer 1      |
+		---------------------------------
+				|
+				|
+		---------------------------------
+		|      Page transformer 2      |
+		---------------------------------
+				|
+				|
+		---------------------------------
+		|            Writer            |
+		---------------------------------
+
+ During the writing of an image, the core code feeds pages one at a time
+ to the first module. This module performs whatever transformations it
+ implements on the incoming data, completely consuming the incoming data and
+ feeding output in a similar manner to the next module. A module may buffer
+ its output.
+
+ During reading, the pipeline works in the reverse direction. The core code
+ calls the first module with the address of a buffer which should be filled.
+ (Note that the buffer size is always PAGE_SIZE at this time). This module
+ will in turn request data from the next module and so on down until the
+ writer is made to read from the stored image.
+
+ Part of the definition of the structure of a module thus looks like this:
+
+        int (*rw_init) (int rw, int stream_number);
+        int (*rw_cleanup) (int rw);
+        int (*write_chunk) (struct page *buffer_page);
+        int (*read_chunk) (struct page *buffer_page, int sync);
+
+ It should be noted that the _cleanup routine may be called before the
+ full stream of data has been read or written. While writing the image,
+ the user may (depending upon settings) choose to abort suspending, and
+ if we are in the midst of writing the last portion of the image, a portion
+ of the second pageset may be reread. This may also happen if an error
+ occurs and we seek to abort the process of writing the image.
+
+ The modular design is also useful in a number of other ways. It provides
+ a means whereby we can add support for:
+
+ - providing overall initialisation and cleanup routines;
+ - serialising configuration information in the image header;
+ - providing debugging information to the user;
+ - determining memory and image storage requirements;
+ - dis/enabling components at run-time;
+ - configuring the module (see below);
+
+ ...and routines for writers specific to their work:
+ - Parsing a resume= location;
+ - Determining whether an image exists;
+ - Marking a resume as having been attempted;
+ - Invalidating an image;
+
+ Since some parts of the core - the user interface and storage manager
+ support - have use for some of these functions, they are registered as
+ 'miscellaneous' modules as well.
+
+ d) Sysfs data structures.
+
+ This brings us naturally to support for configuring TuxOnIce. We desired to
+ provide a way to make TuxOnIce as flexible and configurable as possible.
+ The user shouldn't have to reboot just because they want to now suspend to
+ a file instead of a partition, for example.
+
+ To accomplish this, TuxOnIce implements a very generic means whereby the
+ core and modules can register new sysfs entries. All TuxOnIce entries use
+ a single _store and _show routine, both of which are found in sysfs.c in
+ the kernel/power directory. These routines handle the most common operations
+ - getting and setting the values of bits, integers, longs, unsigned longs
+ and strings in one place, and allow overrides for customised get and set
+ options as well as side-effect routines for all reads and writes.
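+
+ For illustration, here is a pared-down userspace model of how one shared
+ store/show pair can cover every integer entry. The names are invented
+ for this document; the real routines in kernel/power/sysfs.c are more
+ general (bits, longs, strings and custom handlers):
+
+	#include <stdio.h>
+	#include <stdlib.h>
+
+	/* One registered integer entry: where it lives and its bounds. */
+	struct sysfs_int_entry {
+		const char *name;
+		int *variable;
+		int minimum, maximum;
+		void (*write_side_effect)(void);	/* optional hook */
+	};
+
+	/* Shared store routine: parse, bound-check, assign, run hook. */
+	static int generic_int_store(struct sysfs_int_entry *e,
+				     const char *buf)
+	{
+		long val = strtol(buf, NULL, 0);
+
+		if (val < e->minimum || val > e->maximum)
+			return -1;	/* the kernel would return -EINVAL */
+
+		*e->variable = (int)val;
+		if (e->write_side_effect)
+			e->write_side_effect();
+		return 0;
+	}
+
+	/* Shared show routine: print the current value. */
+	static int generic_int_show(struct sysfs_int_entry *e, char *buf,
+				    size_t len)
+	{
+		return snprintf(buf, len, "%d\n", *e->variable);
+	}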
+
+ When combined with some simple macros, a new sysfs entry can then be defined
+ in just a couple of lines:
+
+ { TOI_ATTR("progress_granularity", SYSFS_RW),
+   SYSFS_INT(&progress_granularity, 1, 2048)
+ },
+
+ This defines a sysfs entry named "progress_granularity" which is rw and
+ allows the user to access an integer stored at &progress_granularity, giving
+ it a value between 1 and 2048 inclusive.
+
+ Sysfs entries are registered under /sys/power/tuxonice, and entries for
+ modules are located in a subdirectory named after the module.
+
diff --git a/Documentation/power/tuxonice.txt b/Documentation/power/tuxonice.txt
new file mode 100644
index 0000000..aa2a486
--- /dev/null
+++ b/Documentation/power/tuxonice.txt
@@ -0,0 +1,709 @@
+	--- TuxOnIce, version 2.2 ---
+
+1.  What is it?
+2.  Why would you want it?
+3.  What do you need to use it?
+4.  Why not just use the version already in the kernel?
+5.  How do you use it?
+6.  What do all those entries in /sys/power/tuxonice do?
+7.  How do you get support?
+8.  I think I've found a bug. What should I do?
+9.  When will XXX be supported?
+10. How does it work?
+11. Who wrote TuxOnIce?
+
+1. What is it?
+
+ Imagine you're sitting at your computer, working away. For some reason, you
+ need to turn off your computer for a while - perhaps it's time to go home
+ for the day. When you come back to your computer next, you're going to want
+ to carry on where you left off. Now imagine that you could push a button and
+ have your computer store the contents of its memory to disk and power down.
+ Then, when you next start up your computer, it loads that image back into
+ memory and you can carry on from where you were, just as if you'd never
+ turned the computer off. Far less time to start up, no reopening
+ applications and finding what directory you put that file in yesterday.
+ That's what TuxOnIce does.
+
+ TuxOnIce has a long heritage. It began life as work by Gabor Kuti, who,
+ with some help from Pavel Machek, got an early version going in 1999. The
+ project was then taken over by Florent Chabaud while still in alpha version
+ numbers. Nigel Cunningham came on the scene when Florent was unable to
+ continue, moving the project into betas, then 1.0, 2.0 and so on up to
+ the present series. During the 2.0 series, the name was contracted to
+ Suspend2 and the website suspend2.net created. Beginning around July 2007,
+ a transition to calling the software TuxOnIce was made, to help make it
+ clear that TuxOnIce is more concerned with hibernation than suspend to
+ ram.
+
+ Pavel Machek's swsusp code, which was merged around 2.5.17, retains the
+ original name, and was essentially a fork of the beta code until Rafael
+ Wysocki came on the scene in 2005 and began to improve it further.
+
+2. Why would you want it?
+
+ Why wouldn't you want it?
+
+ Being able to save the state of your system and quickly restore it improves
+ your productivity - you get a useful system in far less time than through
+ the normal boot process.
+
+3. What do you need to use it?
+
+ a. Kernel Support.
+
+ i) The TuxOnIce patch.
+
+ TuxOnIce is part of the Linux Kernel. This version is not part of Linus's
+ 2.6 tree at the moment, so you will need to download the kernel source and
+ apply the latest patch. Having done that, enable the appropriate options in
+ make [menu|x]config (under Power Management Options), compile and install
+ your kernel. TuxOnIce works with SMP, Highmem, preemption, fuse filesystems,
+ x86-32, PPC and x86_64.
+
+ TuxOnIce patches are available from http://tuxonice.net.
+
+ ii) Compression support.
+
+ Compression support is implemented via the cryptoapi. You will therefore want
+ to select any Cryptoapi transforms that you want to use on your image from
+ the Cryptoapi menu while configuring your kernel.
+
+ You can also tell TuxOnIce to write its image to an encrypted and/or
+ compressed filesystem/swap partition. In that case, you don't need to do
+ anything special for TuxOnIce when it comes to kernel configuration.
+
+ iii) Configuring other options.
+
+ While you're configuring your kernel, try to configure as much as possible
+ to build as modules. We recommend this because there are a number of drivers
+ that are still in the process of implementing proper power management
+ support. In those cases, the best way to work around their current lack is
+ to build them as modules and remove the modules while suspending. You might
+ also bug the driver authors to get their support up to speed, or even help!
+
+ b. Storage.
+
+ i) Swap.
+
+ TuxOnIce can store the suspend image in your swap partition, a swap file or
+ a combination thereof. Whichever combination you choose, you will probably
+ want to create enough swap space to store the largest image you could have,
+ plus the space you'd normally use for swap. A good rule of thumb would be
+ to calculate the amount of swap you'd want without using TuxOnIce, and then
+ add the amount of memory you have. This swapspace can be arranged in any way
+ you'd like. It can be in one partition or file, or spread over a number. The
+ only requirement is that they be active when you start a suspend cycle.
+
+ There is one exception to this requirement. TuxOnIce has the ability to turn
+ on one swap file or partition at the start of suspending and turn it back off
+ at the end. If you want to ensure you have enough memory to store an image
+ when your memory is fully used, you might want to make one swap partition or
+ file for 'normal' use, and another for TuxOnIce to activate & deactivate
+ automatically. (Further details below).
+
+ ii) Normal files.
+
+ TuxOnIce includes a 'file allocator'. The file allocator can store your
+ image in a simple file. Since Linux has the concept of everything being a
+ file, this is more powerful than it initially sounds. If, for example, you
+ were to set up a network block device file, you could suspend to a network
+ server. This has been tested and works to a point, but nbd itself isn't
+ stateless enough for our purposes.
+
+ Take extra care when setting up the file allocator. If you just type
+ commands without thinking and then try to suspend, you could cause
+ irreversible corruption on your filesystems! Make sure you have backups.
+
+ Most people will only want to suspend to a local file. To achieve that, do
+ something along the lines of:
+
+ echo "TuxOnIce" > /suspend-file
+ dd if=/dev/zero bs=1M count=512 >> /suspend-file
+
+ This will create a 512MB file called /suspend-file.
To get TuxOnIce to use
+ it:
+
+ echo /suspend-file > /sys/power/tuxonice/file/target
+
+ Then
+
+ cat /sys/power/tuxonice/resume
+
+ Put the results of this into your bootloader's configuration (see also
+ section c, below):
+
+ ---EXAMPLE-ONLY-DON'T-COPY-AND-PASTE---
+ # cat /sys/power/tuxonice/resume
+ file:/dev/hda2:0x1e001
+
+ In this example, we would edit the append= line of our lilo.conf|menu.lst
+ so that it included:
+
+ resume=file:/dev/hda2:0x1e001
+ ---EXAMPLE-ONLY-DON'T-COPY-AND-PASTE---
+
+ For those who are thinking 'Could I make the file sparse?', the answer is
+ 'No!'. At the moment, there is no way for TuxOnIce to fill in the holes in
+ a sparse file while suspending. In the longer term (post merge!), I'd like
+ to change things so that the file could be dynamically resized as needed.
+ Right now, however, that's not possible and not a priority.
+
+ c. Bootloader configuration.
+
+ Using TuxOnIce also requires that you add an extra parameter to
+ your lilo.conf or equivalent. Here's an example for a swap partition:
+
+ append="resume=swap:/dev/hda1"
+
+ This would tell TuxOnIce that /dev/hda1 is a swap partition you
+ have. TuxOnIce will use the swap signature of this partition as a
+ pointer to your data when you suspend. This means that (in this example)
+ /dev/hda1 doesn't need to be _the_ swap partition where all of your data
+ is actually stored. It just needs to be a swap partition that has a
+ valid signature.
+
+ You don't need to have a swap partition for this purpose. TuxOnIce
+ can also use a swap file, but usage is a little more complex. Having made
+ your swap file, turn it on and do
+
+ cat /sys/power/tuxonice/swap/headerlocations
+
+ (this assumes you've already compiled your kernel with TuxOnIce
+ support and booted it). The results of the cat command will tell you
+ what you need to put in lilo.conf:
+
+ For swap partitions like /dev/hda1, simply use resume=/dev/hda1.
+ For swapfile `swapfile`, use resume=swap:/dev/hda2:0x242d.
+
+ If the swapfile changes for any reason (it is moved to a different
+ location, it is deleted and recreated, or the filesystem is
+ defragmented) then you will have to check
+ /sys/power/tuxonice/swap/headerlocations for a new resume_block value.
+
+ Once you've compiled and installed the kernel and adjusted your bootloader
+ configuration, you should only need to reboot for the most basic part
+ of TuxOnIce to be ready.
+
+ If you only compile in the swap allocator, or only compile in the file
+ allocator, you don't need to add the "swap:" part of the resume=
+ parameters above. resume=/dev/hda2:0x242d will work just as well.
+
+ d. The hibernate script.
+
+ Since the driver model in 2.6 kernels is still being developed, you may need
+ to do more, however. Users of TuxOnIce usually start the process via a script
+ which prepares for the suspend, tells the kernel to do its stuff and then
+ restores things afterwards. This script might involve:
+
+ - Switching to a text console and back if X doesn't like the video card
+   status on resume.
+ - Un/reloading PCMCIA support since it doesn't play well with suspend.
+
+ Note that you might not be able to unload some drivers if there are
+ processes using them. You might have to kill off processes that hold
+ devices open. Hint: if your X server accesses a USB mouse, doing a
+ 'chvt' to a text console releases the device and you can unload the
+ module.
+
+ Check out the latest script (available on tuxonice.net).
+
+4. Why not just use the version already in the kernel?
+
+ The version in the vanilla kernel has a number of drawbacks. Among these:
+ - it has a maximum image size of 1/2 total memory.
+ - it doesn't allocate storage until after it has snapshotted memory.
+   This means that you can't be sure suspending will work until you
+   see it start to write the image.
+ - it performs all of its I/O synchronously.
+ - it does not allow you to press escape to cancel a cycle.
+ - it does not allow you to automatically swapon a file when
+   starting a cycle.
+ - it does not allow you to use multiple swap partitions.
+ - it does not allow you to use swapfiles.
+ - it does not allow you to use ordinary files.
+ - it just invalidates an image and continues to boot if you
+   accidentally boot the wrong kernel after suspending.
+ - it doesn't support any sort of nice display while suspending.
+ - it is moving toward requiring that you have an initrd/initramfs
+   to ever have a hope of resuming (uswsusp). While uswsusp will
+   address some of the concerns above, it won't address all, and
+   will be more complicated to get set up.
+
+5. How do you use it?
+
+ A suspend cycle can be started directly by doing:
+
+ echo > /sys/power/tuxonice/do_hibernate
+
+ In practice, though, you'll probably want to use the hibernate script
+ to unload modules, configure the kernel the way you like it and so on.
+ In that case, you'd do (as root):
+
+ hibernate
+
+ See the hibernate script's man page for more details on the options it
+ takes.
+
+ If you're using the text or splash user interface modules, one neat feature
+ of TuxOnIce that you might find useful is that you can press Escape at any
+ time during suspending, and the process will be aborted.
+
+ Due to the way suspend works, this means you'll have your system back and
+ perfectly usable almost instantly. The only exception is when it's at the
+ very end of writing the image. Then it will need to reload a small
+ (usually 4-50MB, depending upon the image characteristics) portion first.
+
+ If you run into problems with resuming, adding the "noresume" option to
+ the kernel command line will let you skip the resume step and recover your
+ system.
+
+6. What do all those entries in /sys/power/tuxonice do?
+
+ /sys/power/tuxonice is the directory which contains files you can use to
+ tune and configure TuxOnIce to your liking. The exact contents of
+ the directory will depend upon the version of TuxOnIce you're
+ running and the options you selected at compile time. In the following
+ descriptions, names in brackets refer to compile time options.
+ (Note that they're all dependent upon you having selected CONFIG_SUSPEND2
+ in the first place!).
+
+ Since some of these settings can open potential security risks, the files
+ are usually accessible only to the root user. You can, however, enable a
+ compile time option which makes all of these files world-accessible. This
+ should only be done if you trust everyone with shell access to this
+ computer!
+
+ - checksum/enabled
+
+ Use cryptoapi hashing routines to verify that Pageset2 pages don't change
+ while we're saving the first part of the image, and to get any pages that
+ do change resaved in the atomic copy. This should normally not be needed,
+ but if you're seeing issues, please enable this. If your issues prevent you
+ from resuming, enable this option, suspend and cancel the cycle
+ after the atomic copy is done. If the debugging info shows a non-zero
+ number of pages resaved, please report this to Nigel.
+
+ - compression/algorithm
+
+ Set the cryptoapi algorithm used for compressing the image.
+
+ - compression/expected_compression
+
+ These values allow you to set an expected compression ratio, which TuxOnIce
+ will use in calculating whether it meets constraints on the image
+ size. If this expected compression ratio is not attained, the suspend will
+ abort, so it is wise to allow some spare. You can see what compression
+ ratio is achieved in the logs after suspending.
+
+ - debug_info:
+
+ This file returns information about your configuration that may be helpful
+ in diagnosing problems with suspending.
+
+ - do_resume:
+
+ When anything is written to this file, TuxOnIce will attempt to read and
+ restore an image. If there is no image, it will return almost immediately.
+ If an image exists, the echo > will never return. Instead, the original
+ kernel context will be restored and the original echo > do_suspend will
+ return.
+
+ - do_suspend:
+
+ When anything is written to this file, the kernel side of TuxOnIce will
+ begin to attempt to write an image to disk and power down. You'll normally
+ want to run the hibernate script instead, to get modules unloaded first.
+
+ - driver_model_beeping
+
+ Enable beeping when suspending and resuming the drivers. Might help with
+ determining where a problem in resuming occurs.
+
+ - */enabled
+
+ These options can be used to temporarily disable various parts of TuxOnIce.
+
+ - extra_pages_allowance
+
+ When TuxOnIce does its atomic copy, it calls the driver model suspend
+ and resume methods. If you have DRI enabled with a driver such as fglrx,
+ this can result in the driver allocating a substantial amount of memory
+ for storing its state. extra_pages_allowance tells TuxOnIce how much
+ extra memory it should ensure is available for those allocations. If
+ your attempts at suspending end with a message in dmesg indicating that
+ insufficient extra pages were allowed, you need to increase this value.
+
+ - file/target:
+
+ Read this value to get the current setting. Write to it to point TuxOnIce
+ at a new storage location for the file allocator. See above for details of
+ how to set up the file allocator.
+
+ - freezer_test
+
+ This entry can be used to get TuxOnIce to just test the freezer without
+ actually doing a suspend cycle. It is useful for diagnosing freezing
+ issues.
+
+ - image_exists:
+
+ Can be used in a script to determine whether a valid image exists at the
+ location currently pointed to by resume=. Returns up to three lines.
+ The first is whether an image exists (-1 for unsure, otherwise 0 or 1).
+ If an image exists, additional lines will return the machine and version.
+ Echoing anything to this entry removes any current image.
+
+ - image_size_limit:
+
+ The maximum size of the suspend image written to disk, measured in
+ megabytes (1024*1024).
+
+ - interface_version:
+
+ The value returned by this file can be used by scripts and configuration
+ tools to determine what entries should be looked for. The value is
+ incremented whenever an entry in /sys/power/tuxonice is obsoleted or
+ added.
+
+ - last_result:
+
+ The result of the last suspend, as defined in
+ include/linux/suspend-debug.h with the values SUSPEND_ABORTED to
+ SUSPEND_KEPT_IMAGE. This is a bitmask.
+
+ - log_everything (CONFIG_PM_DEBUG):
+
+ Setting this option results in all messages printed being logged. Normally,
+ only a subset are logged, so as to not slow the process and not clutter the
+ logs. Useful for debugging.
It can be toggled during a cycle by pressing
+ 'L'.
+
+ - pause_between_steps (CONFIG_PM_DEBUG):
+
+ This option is used during debugging, to make TuxOnIce pause between
+ each step of the process. It is ignored when the nice display is on.
+
+ - powerdown_method:
+
+ Used to select a method by which TuxOnIce should power down after writing
+ the image. Currently:
+
+ 0: Don't use ACPI to power off.
+ 3: Attempt to enter Suspend-to-ram.
+ 4: Attempt to enter ACPI S4 mode.
+ 5: Attempt to power down via ACPI S5 mode.
+
+ Note that these options are highly dependent upon your hardware & software:
+
+ 3: When successful, your machine suspends-to-ram instead of powering off.
+    The advantage of using this mode is that it doesn't matter whether your
+    battery has enough charge to make it through to your next resume. If it
+    lasts, you will simply resume from suspend to ram (and the image on disk
+    will be discarded). If the battery runs out, you will resume from disk
+    instead. The disadvantage is that it takes longer than a normal
+    suspend-to-ram to enter the state, since the suspend-to-disk image needs
+    to be written first.
+ 4/5: When successful, your machine will be off and consume (almost) no
+    power. But it might still react to some external events like opening the
+    lid or traffic on a network or USB device. For the BIOS, resume is then
+    the same as a warm boot, similar to a situation where you used the
+    command `reboot' to reboot your machine. If your machine has problems on
+    warm boot or if you want to protect your machine with the BIOS password,
+    this is probably not the right choice. Mode 4 may be necessary on some
+    machines where ACPI wake up methods need to be run to properly
+    reinitialise hardware after a suspend-to-disk cycle.
+ 0: Switch the machine completely off. The only possible wakeup is the power
+    button. For the BIOS, resume is then the same as a cold boot, in
+    particular you would have to provide your BIOS boot password if your
+    machine uses that feature for booting.
+
+ - progressbar_granularity_limit:
+
+ This option can be used to limit the granularity of the progress bar
+ displayed with a bootsplash screen. The value is the maximum number of
+ steps. That is, 10 will make the progress bar jump in 10% increments.
+
+ - reboot:
+
+ This option causes TuxOnIce to reboot rather than powering down
+ at the end of saving an image. It can be toggled during a cycle by pressing
+ 'R'.
+
+ - resume_commandline:
+
+ This entry can be read after resuming to see the commandline that was used
+ when resuming began. You might use this to set up two bootloader entries
+ that are the same apart from the fact that one includes an extra append=
+ argument "at_work=1". You could then grep resume_commandline in your
+ post-resume scripts and configure networking (for example) differently
+ depending upon whether you're at home or work. resume_commandline can be
+ set to arbitrary text if you wish to remove sensitive contents.
+
+ - swap/swapfilename:
+
+ This entry is used to specify the swapfile or partition that
+ TuxOnIce will attempt to swapon/swapoff automatically. Thus, if
+ I normally use /dev/hda1 for swap, and want to use /dev/hda2 specifically
+ for my suspend image, I would
+
+ echo /dev/hda2 > /sys/power/tuxonice/swap/swapfile
+
+ /dev/hda2 would then be automatically swapon'd and swapoff'd. Note that the
+ swapon and swapoff occur while other processes are frozen (including kswapd)
+ so this swap file will not be used up when attempting to free memory.
The
+ partition/file is also given the highest priority, so other
+ swapfiles/partitions will only be used to save the image when this one is
+ filled.
+
+ The value of this file is used by headerlocations along with any currently
+ activated swapfiles/partitions.
+
+ - swap/headerlocations:
+
+ This option tells you the resume= options to use for swap devices you
+ currently have activated. It is particularly useful when you only want to
+ use a swap file to store your image. See above for further details.
+
+ - toggle_process_nofreeze
+
+ This entry can be used to toggle the NOFREEZE flag on a process, to allow it
+ to run during suspending. It should be used with extreme caution. There are
+ strict limitations on what a process running during suspend can do. This is
+ really only intended for use by TuxOnIce's helpers (userui in particular).
+
+ - userui_program
+
+ This entry is used to tell TuxOnIce what userspace program to use for
+ providing a user interface while suspending. The program uses a netlink
+ socket to pass messages back and forward to the kernel, allowing it to
+ provide all of the functions formerly implemented in the kernel user
+ interface components.
+
+ - user_interface/debug_sections (CONFIG_PM_DEBUG):
+
+ This value, together with the console log level, controls what debugging
+ information is displayed. The console log level determines the level of
+ detail, and this value determines what detail is displayed. This value is
+ a bit vector, and the meaning of the bits can be found in the kernel tree
+ in include/linux/tuxonice.h. It can be overridden using the kernel's
+ command line option suspend_dbg.
+
+ - user_interface/default_console_level (CONFIG_PM_DEBUG):
+
+ This determines the value of the console log level at the start of a
+ suspend cycle. If debugging is compiled in, the console log level can be
+ changed during a cycle by pressing the digit keys. Meanings are:
+
+ 0: Nice display.
+ 1: Nice display plus numerical progress.
+ 2: Errors only.
+ 3: Low level debugging info.
+ 4: Medium level debugging info.
+ 5: High level debugging info.
+ 6: Verbose debugging info.
+
+ - user_interface/enable_escape:
+
+ Setting this to "1" will enable you to abort a suspend by
+ pressing escape, "0" (default) disables this feature. Note that enabling
+ this option means that you cannot initiate a suspend and then walk away
+ from your computer, expecting it to be secure. With this feature disabled,
+ you can validly have this expectation once TuxOnIce begins to write the
+ image to disk. (Prior to this point, it is possible that TuxOnIce might
+ abort because of failure to freeze all processes or because constraints
+ on its ability to save the image are not met.)
+
+ - version:
+
+ The version of TuxOnIce you have compiled into the currently running
+ kernel.
+
+7. How do you get support?
+
+ Glad you asked. TuxOnIce is being actively maintained and supported
+ by Nigel (the guy doing most of the kernel coding at the moment), Bernard
+ (who maintains the hibernate script and userspace user interface components)
+ and its users.
+
+ Resources available include HowTos, FAQs and a Wiki, all available via
+ tuxonice.net. You can find the mailing lists there.
+
+8. I think I've found a bug. What should I do?
+
+ By far and away, the most common problems people have with TuxOnIce
+ relate to drivers not having adequate power management support. In this
+ case, it is not a bug with TuxOnIce, but we can still help you.
As we
+ mentioned above, such issues can usually be worked around by building the
+ functionality as modules and unloading them while suspending. Please visit
+ the Wiki for up-to-date lists of known issues and workarounds.
+
+ If this information doesn't help, try running:
+
+ hibernate --bug-report
+
+ ...and sending the output to the users mailing list.
+
+ Good information on how to provide us with useful information from an
+ oops is found in the file REPORTING-BUGS, in the top level directory
+ of the kernel tree. If you get an oops, please especially note the
+ information about running what is printed on the screen through ksymoops.
+ The raw information is useless.
+
+9. When will XXX be supported?
+
+ If there's a feature missing from TuxOnIce that you'd like, feel free to
+ ask. We try to be obliging, within reason.
+
+ Patches are welcome. Please send to the list.
+
+10. How does it work?
+
+ TuxOnIce does its work in a number of steps.
+
+ a. Freezing system activity.
+
+ The first main stage in suspending is to stop all other activity. This is
+ achieved in stages. Processes are considered in four groups, which we will
+ describe in reverse order for clarity's sake: Threads with the PF_NOFREEZE
+ flag, kernel threads without this flag, userspace processes with the
+ PF_SYNCTHREAD flag and all other processes. The first set (PF_NOFREEZE) are
+ untouched by the refrigerator code. They are allowed to run during suspending
+ and resuming, and are used to support user interaction, storage access or the
+ like. Other kernel threads (those unneeded while suspending) are frozen last.
+ This leaves us with userspace processes that need to be frozen. When a
+ process enters one of the *_sync system calls, we set a PF_SYNCTHREAD flag on
+ that process for the duration of that call. Processes that have this flag are
+ frozen after processes without it, so that we can seek to ensure that dirty
+ data is synced to disk as quickly as possible in a situation where other
+ processes may be submitting writes at the same time. Freezing the processes
+ that are submitting data stops new I/O from being submitted. Syncthreads can
+ then cleanly finish their work. So the order is:
+
+ - Userspace processes without PF_SYNCTHREAD or PF_NOFREEZE;
+ - Userspace processes with PF_SYNCTHREAD (they won't have NOFREEZE);
+ - Kernel processes without PF_NOFREEZE.
+
+ b. Eating memory.
+
+ For a successful suspend, you need to have enough disk space to store the
+ image and enough memory to satisfy the various constraints of TuxOnIce's
+ algorithm. You can also specify a maximum image size. In order to meet
+ those constraints, TuxOnIce may 'eat' memory. If, after freezing
+ processes, the constraints aren't met, TuxOnIce will thaw all the
+ other processes and begin to eat memory until its calculations indicate
+ the constraints are met. It will then freeze processes again and recheck
+ its calculations.
+
+ c. Allocation of storage.
+
+ Next, TuxOnIce allocates the storage that will be used to save
+ the image.
+
+ The core of TuxOnIce knows nothing about how or where pages are stored. We
+ therefore request the active allocator (remember you might have compiled in
+ more than one!) to allocate enough storage for our expected image size. If
+ this request cannot be fulfilled, we eat more memory and try again. If it
+ is fulfilled, we seek to allocate additional storage, just in case our
+ expected compression ratio (if any) isn't achieved.
+ This time, however, we just continue if we can't allocate enough storage.
+
+ If these calls to our allocator change the characteristics of the image
+ such that we haven't allocated enough memory, we also loop. (The allocator
+ may well need to allocate space for its storage information).
+
+ d. Write the first part of the image.
+
+ TuxOnIce stores the image in two sets of pages called 'pagesets'.
+ Pageset 2 contains pages on the active and inactive lists; essentially
+ the page cache. Pageset 1 contains all other pages, including the kernel.
+ We use two pagesets for one important reason: We need to make an atomic copy
+ of the kernel to ensure consistency of the image. Without a second pageset,
+ that would limit us to an image that was at most half the amount of memory
+ available. Using two pagesets allows us to store a full image. Since pageset
+ 2 pages won't be needed in saving pageset 1, we first save pageset 2 pages.
+ We can then make our atomic copy of the remaining pages using both pageset 2
+ pages and any other pages that are free. While saving both pagesets, we are
+ careful not to corrupt the image. Among other things, we use low-level block
+ I/O routines that don't change the pagecache contents.
+
+ The next step, then, is writing pageset 2.
+
+ e. Suspending drivers and storing processor context.
+
+ Having written pageset2, TuxOnIce calls the power management functions to
+ notify drivers of the suspend, and saves the processor state in preparation
+ for the atomic copy of memory we are about to make.
+
+ f. Atomic copy.
+
+ At this stage, everything else but the TuxOnIce code is halted. Processes
+ are frozen or idling, drivers are quiesced and have stored (ideally and where
+ necessary) their configuration in memory we are about to atomically copy.
+ In our low-level architecture-specific code, we have saved the CPU state.
+ We can therefore now do our atomic copy before resuming drivers etc.
+
+ g. Save the atomic copy (pageset 1).
+
+ TuxOnIce can then write the atomic copy of the remaining pages. Since we
+ have copied the pages into other locations, we can continue to use the
+ normal block I/O routines without fear of corrupting our image.
+
+ h. Save the suspend header.
+
+ Nearly there! We save our settings and other parameters needed for
+ reloading pageset 1 in a 'suspend header'. We also tell our allocator to
+ serialise its data at this stage, so that it can reread the image at resume
+ time.
+
+ i. Set the image header.
+
+ Finally, we edit the header at our resume= location. The signature is
+ changed by the allocator to reflect the fact that an image exists, and to
+ point to the start of that data if necessary (swap allocator).
+
+ j. Power down.
+
+ Or reboot if we're debugging and the appropriate option is selected.
+
+ Whew!
+
+ Reloading the image.
+ --------------------
+
+ Reloading the image is essentially the reverse of all the above. We load
+ our copy of pageset 1, being careful to choose locations that aren't going
+ to be overwritten as we copy it back (We start very early in the boot
+ process, so there are no other processes to quiesce here). We then copy
+ pageset 1 back to its original location in memory and restore the process
+ context. We are now running with the original kernel. Next, we reload the
+ pageset 2 pages, free the memory and swap used by TuxOnIce, restore
+ the pageset header and restart processes. Sounds easy in comparison to
+ suspending, doesn't it!
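+
+ To summarise steps a. to j. above in one place, the write side of a
+ cycle can be sketched as below. Every identifier here is a stand-in
+ invented for this document - none are real kernel symbols - and the
+ stubs do nothing; the sketch only fixes the ordering:
+
+	/* Illustrative outline of a suspend cycle's write side. */
+	static int constraints_met(void) { return 1; }	 /* b */
+	static void freeze_system(void) { }		 /* a */
+	static void eat_memory(void) { }		 /* b */
+	static void allocate_storage(void) { }		 /* c */
+	static void write_pageset2(void) { }		 /* d */
+	static void suspend_drivers_save_cpu(void) { }	 /* e */
+	static void atomic_copy_pageset1(void) { }	 /* f */
+	static void write_atomic_copy(void) { }		 /* g */
+	static void write_header_and_signature(void) { } /* h, i */
+	static void power_down(void) { }		 /* j */
+
+	int main(void)
+	{
+		freeze_system();
+		while (!constraints_met())
+			eat_memory();	/* really: thaw, eat, refreeze */
+		allocate_storage();
+		write_pageset2();
+		suspend_drivers_save_cpu();
+		atomic_copy_pageset1();
+		write_atomic_copy();	/* drivers resumed first */
+		write_header_and_signature();
+		power_down();
+		return 0;
+	}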
+ + There is of course more to TuxOnIce than this, but this explanation + should be a good start. If there's interest, I'll write further + documentation on range pages and the low level I/O. + +11. Who wrote TuxOnIce? + + (Answer based on the writings of Florent Chabaud, credits in files and + Nigel's limited knowledge; apologies to anyone missed out!) + + The main developers of TuxOnIce have been... + + Gabor Kuti + Pavel Machek + Florent Chabaud + Bernard Blackham + Nigel Cunningham + + Significant portions of swsusp, the code in the vanilla kernel which + TuxOnIce enhances, have been worked on by Rafael Wysocki. Thanks should + also be expressed to him. + + The above mentioned developers have been aided in their efforts by a host + of hundreds, if not thousands of testers and people who have submitted bug + fixes & suggestions. Of special note are the efforts of Michael Frank, who + had his computers repetitively suspend and resume for literally tens of + thousands of cycles and developed scripts to stress the system and test + TuxOnIce far beyond the point most of us (Nigel included!) would consider + testing. His efforts have contributed as much to TuxOnIce as any of the + names above. diff --git a/MAINTAINERS b/MAINTAINERS index 2340cfb..3472ff1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3793,6 +3793,13 @@ P: Maciej W. Rozycki M: macro@linux-mips.org S: Maintained +TUXONICE (ENHANCED HIBERNATION) +P: Nigel Cunningham +M: nigel@tuxonice.net +L: suspend2-devel@tuxonice.net +W: http://tuxonice.net +S: Maintained + U14-34F SCSI DRIVER P: Dario Ballabio M: ballabio_dario@emc.com diff --git a/arch/x86/mm/fault_32.c b/arch/x86/mm/fault_32.c index a2273d4..fa69b1d 100644 --- a/arch/x86/mm/fault_32.c +++ b/arch/x86/mm/fault_32.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -33,6 +34,9 @@ extern void die(const char *,struct pt_regs *,long); +int toi_faulted; +EXPORT_SYMBOL(toi_faulted); + #ifdef CONFIG_KPROBES static inline int notify_page_fault(struct pt_regs *regs) { @@ -315,6 +319,22 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs, si_code = SEGV_MAPERR; + /* During a TuxOnIce atomic copy, with DEBUG_SLAB, we will + * get page faults where slab has been unmapped. Map them + * temporarily and set the variable that tells TuxOnIce to + * unmap afterwards. + */ + +#ifdef CONFIG_DEBUG_PAGEALLOC + if (unlikely(toi_running && !toi_faulted)) { + struct page *page = NULL; + toi_faulted = 1; + page = virt_to_page(address); + kernel_map_pages(page, 1, 1); + return; + } +#endif + /* * We fault-in kernel-space virtual memory on-demand. The * 'reference' page table is init_mm.pgd. diff --git a/arch/x86/mm/pageattr_32.c b/arch/x86/mm/pageattr_32.c index 260073c..ff6d57e 100644 --- a/arch/x86/mm/pageattr_32.c +++ b/arch/x86/mm/pageattr_32.c @@ -272,6 +272,7 @@ void kernel_map_pages(struct page *page, int numpages, int enable) */ __flush_tlb_all(); } +EXPORT_SYMBOL(kernel_map_pages); #endif EXPORT_SYMBOL(change_page_attr); diff --git a/crypto/Kconfig b/crypto/Kconfig index 083d2e1..e066d63 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -453,6 +453,14 @@ config CRYPTO_DEFLATE You will most probably want this if using IPSec. +config CRYPTO_LZF + tristate "LZF compression algorithm" + default y + select CRYPTO_ALGAPI + help + This is the LZF algorithm. It is especially useful for TuxOnIce, + because it achieves good compression quickly. 
+ config CRYPTO_MICHAEL_MIC tristate "Michael MIC keyed digest algorithm" select CRYPTO_ALGAPI diff --git a/crypto/Makefile b/crypto/Makefile index 43c2a0d..d58f128 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -51,6 +51,7 @@ obj-$(CONFIG_CRYPTO_SEED) += seed.o obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o obj-$(CONFIG_CRYPTO_CRC32C) += crc32c.o +obj-$(CONFIG_CRYPTO_LZF) += lzf.o obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o diff --git a/crypto/lzf.c b/crypto/lzf.c new file mode 100644 index 0000000..a472649 --- /dev/null +++ b/crypto/lzf.c @@ -0,0 +1,327 @@ +/* + * Cryptoapi LZF compression module. + * + * Copyright (c) 2004-2005 Nigel Cunningham + * + * based on the deflate.c file: + * + * Copyright (c) 2003 James Morris + * + * and upon the LZF compression module donated to the TuxOnIce project with + * the following copyright: + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the Free + * Software Foundation; either version 2 of the License, or (at your option) + * any later version. + * Copyright (c) 2000-2003 Marc Alexander Lehmann + * + * Redistribution and use in source and binary forms, with or without modifica- + * tion, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, + * this list of conditions and the following disclaimer. + * + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * 3. The name of the author may not be used to endorse or promote products + * derived from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- + * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO + * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- + * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; + * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- + * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + * OF THE POSSIBILITY OF SUCH DAMAGE. + * + * Alternatively, the contents of this file may be used under the terms of + * the GNU General Public License version 2 (the "GPL"), in which case the + * provisions of the GPL are applicable instead of the above. If you wish to + * allow the use of your version of this file only under the terms of the + * GPL and not to allow others to use your version of this file under the + * BSD license, indicate your decision by deleting the provisions above and + * replace them with the notice and other provisions required by the GPL. If + * you do not delete the provisions above, a recipient may use your version + * of this file under either the BSD or the GPL. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +struct lzf_ctx { + void *hbuf; + unsigned int bufofs; +}; + +/* + * size of hashtable is (1 << hlog) * sizeof (char *) + * decompression is independent of the hash table size + * the difference between 15 and 14 is very small + * for small blocks (and 14 is also faster). + * For a low-memory configuration, use hlog == 13; + * For best compression, use 15 or 16. + */ +static const int hlog = 13; + +/* + * don't play with this unless you benchmark! + * decompression is not dependent on the hash function + * the hashing function might seem strange, just believe me + * it works ;) + */ +static inline u16 first(const u8 *p) +{ + return ((p[0]) << 8) + p[1]; +} + +static inline u16 next(u8 v, const u8 *p) +{ + return ((v) << 8) + p[2]; +} + +static inline u32 idx(unsigned int h) +{ + return (((h ^ (h << 5)) >> (3*8 - hlog)) + h*3) & ((1 << hlog) - 1); +} + +/* + * IDX works because it is very similar to a multiplicative hash, e.g. + * (h * 57321 >> (3*8 - hlog)) + * the next one is also quite good, albeit slow ;) + * (int)(cos(h & 0xffffff) * 1e6) + */ + +static const int max_lit = (1 << 5); +static const int max_off = (1 << 13); +static const int max_ref = ((1 << 8) + (1 << 3)); + +/* + * compressed format + * + * 000LLLLL ; literal + * LLLOOOOO oooooooo ; backref L + * 111OOOOO LLLLLLLL oooooooo ; backref L+7 + * + */ + +static void lzf_compress_exit(struct crypto_tfm *tfm) +{ + struct lzf_ctx *ctx = crypto_tfm_ctx(tfm); + + if (!ctx->hbuf) + return; + + vfree(ctx->hbuf); + ctx->hbuf = NULL; +} + +static int lzf_compress_init(struct crypto_tfm *tfm) +{ + struct lzf_ctx *ctx = crypto_tfm_ctx(tfm); + + /* Get LZF ready to go */ + ctx->hbuf = vmalloc_32((1 << hlog) * sizeof(char *)); + if (ctx->hbuf) + return 0; + + printk(KERN_WARNING "Failed to allocate %ld bytes for lzf workspace\n", + (long) ((1 << hlog) * sizeof(char *))); + return -ENOMEM; +} + +static int lzf_compress(struct crypto_tfm *tfm, const u8 *in_data, + unsigned int in_len, u8 *out_data, unsigned int *out_len) +{ + struct lzf_ctx *ctx = crypto_tfm_ctx(tfm); + const u8 **htab = ctx->hbuf; + const u8 **hslot; + const u8 *ip = in_data; + u8 *op = out_data; + const u8 *in_end = ip + in_len; + u8 *out_end = op + *out_len - 3; + const u8 *ref; + + unsigned int hval = first(ip); + unsigned long off; + int lit = 0; + + /* Start each call with an empty hash table. */ + memset(htab, 0, (1 << hlog) * sizeof(char *)); + + for (;;) { + if (ip < in_end - 2) { + hval = next(hval, ip); + hslot = htab + idx(hval); + ref = *hslot; + *hslot = ip; + + off = ip - ref - 1; + if (off < max_off + && ip + 4 < in_end && ref > in_data + && *(u16 *) ref == *(u16 *) ip && ref[2] == ip[2] + ) { + /* match found at *ref++ */ + unsigned int len = 2; + unsigned int maxlen = in_end - ip - len; + maxlen = maxlen > max_ref ?
max_ref : maxlen; + + do + len++; + while (len < maxlen && ref[len] == ip[len]); + + if (op + lit + 1 + 3 >= out_end) { + *out_len = PAGE_SIZE; + return 0; + } + + if (lit) { + *op++ = lit - 1; + lit = -lit; + do { + *op++ = ip[lit]; + } while (++lit); + } + + len -= 2; + ip++; + + if (len < 7) { + *op++ = (off >> 8) + (len << 5); + } else { + *op++ = (off >> 8) + (7 << 5); + *op++ = len - 7; + } + + *op++ = off; + + ip += len; + hval = first(ip); + hval = next(hval, ip); + htab[idx(hval)] = ip; + ip++; + continue; + } + } else if (ip == in_end) + break; + + /* one more literal byte we must copy */ + lit++; + ip++; + + if (lit == max_lit) { + if (op + 1 + max_lit >= out_end) { + *out_len = PAGE_SIZE; + return 0; + } + + *op++ = max_lit - 1; + memcpy(op, ip - max_lit, max_lit); + op += max_lit; + lit = 0; + } + } + + if (lit) { + if (op + lit + 1 >= out_end) { + *out_len = PAGE_SIZE; + return 0; + } + + *op++ = lit - 1; + lit = -lit; + do { + *op++ = ip[lit]; + } while (++lit); + } + + *out_len = op - out_data; + return 0; +} + +static int lzf_decompress(struct crypto_tfm *tfm, const u8 *src, + unsigned int slen, u8 *dst, unsigned int *dlen) +{ + u8 const *ip = src; + u8 *op = dst; + u8 const *const in_end = ip + slen; + u8 *const out_end = op + *dlen; + + *dlen = PAGE_SIZE; + do { + unsigned int ctrl = *ip++; + + if (ctrl < (1 << 5)) { + /* literal run */ + ctrl++; + + if (op + ctrl > out_end) + return 0; + memcpy(op, ip, ctrl); + op += ctrl; + ip += ctrl; + } else { /* back reference */ + + unsigned int len = ctrl >> 5; + + u8 *ref = op - ((ctrl & 0x1f) << 8) - 1; + + if (len == 7) + len += *ip++; + + ref -= *ip++; + len += 2; + + if (op + len > out_end || ref < (u8 *) dst) + return 0; + + do { + *op++ = *ref++; + } while (--len); + } + } + while (op < out_end && ip < in_end); + + *dlen = op - (u8 *) dst; + return 0; +} + +static struct crypto_alg alg = { + .cra_name = "lzf", + .cra_flags = CRYPTO_ALG_TYPE_COMPRESS, + .cra_ctxsize = sizeof(struct lzf_ctx), + .cra_module = THIS_MODULE, + .cra_list = LIST_HEAD_INIT(alg.cra_list), + .cra_init = lzf_compress_init, + .cra_exit = lzf_compress_exit, + .cra_u = { .compress = { + .coa_compress = lzf_compress, + .coa_decompress = lzf_decompress } } +}; + +static int __init init(void) +{ + return crypto_register_alg(&alg); +} + +static void __exit fini(void) +{ + crypto_unregister_alg(&alg); +} + +module_init(init); +module_exit(fini); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("LZF Compression Algorithm"); +MODULE_AUTHOR("Marc Alexander Lehmann & Nigel Cunningham"); diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c index 6123c70..55f0afc 100644 --- a/drivers/macintosh/via-pmu.c +++ b/drivers/macintosh/via-pmu.c @@ -42,7 +42,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/md/md.c b/drivers/md/md.c index cef9ebd..97e638e 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -5426,6 +5426,8 @@ void md_do_sync(mddev_t *mddev) last_mark = next; } + while (freezer_is_on()) + yield(); if (kthread_should_stop()) { /* diff --git a/fs/buffer.c b/fs/buffer.c index 7249e01..6b8393a 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -247,6 +247,93 @@ void thaw_bdev(struct block_device *bdev, struct super_block *sb) } EXPORT_SYMBOL(thaw_bdev); +/* #define DEBUG_FS_FREEZING */ + +/** + * freeze_filesystems - lock all filesystems and force them into a consistent + * state + */ +void freeze_filesystems(int which) +{ + struct super_block *sb; + + lockdep_off(); + + /* + * Freeze in reverse order so 
filesystems dependent upon others are + * frozen in the right order (e.g. loopback on ext3). + */ + list_for_each_entry_reverse(sb, &super_blocks, s_list) { +#ifdef DEBUG_FS_FREEZING + printk(KERN_INFO "Considering %s.%s: (root %p, bdev %x)", + sb->s_type->name ? sb->s_type->name : "?", + sb->s_subtype ? sb->s_subtype : "", sb->s_root, + sb->s_bdev ? sb->s_bdev->bd_dev : 0); +#endif + + if (sb->s_type->fs_flags & FS_IS_FUSE && + sb->s_frozen == SB_UNFROZEN && + which & FS_FREEZER_FUSE) { + sb->s_frozen = SB_FREEZE_TRANS; + sb->s_flags |= MS_FROZEN; + printk("Fuse filesystem done.\n"); + continue; + } + + if (!sb->s_root || !sb->s_bdev || + (sb->s_frozen == SB_FREEZE_TRANS) || + (sb->s_flags & MS_RDONLY) || + (sb->s_flags & MS_FROZEN) || + !(which & FS_FREEZER_NORMAL)) { +#ifdef DEBUG_FS_FREEZING + printk(KERN_INFO "Nope.\n"); +#endif + continue; + } + +#ifdef DEBUG_FS_FREEZING + printk(KERN_INFO "Freezing %x... ", sb->s_bdev->bd_dev); +#endif + freeze_bdev(sb->s_bdev); + sb->s_flags |= MS_FROZEN; +#ifdef DEBUG_FS_FREEZING + printk(KERN_INFO "Done.\n"); +#endif + } + + lockdep_on(); +} + +/** + * thaw_filesystems - unlock all filesystems + */ +void thaw_filesystems(int which) +{ + struct super_block *sb; + + lockdep_off(); + + list_for_each_entry(sb, &super_blocks, s_list) { + if (!(sb->s_flags & MS_FROZEN)) + continue; + + if (sb->s_type->fs_flags & FS_IS_FUSE) { + if (!(which & FS_FREEZER_FUSE)) + continue; + + sb->s_frozen = SB_UNFROZEN; + } else { + if (!(which & FS_FREEZER_NORMAL)) + continue; + + thaw_bdev(sb->s_bdev, sb); + } + sb->s_flags &= ~MS_FROZEN; + } + + lockdep_on(); +} + /* * Various filesystems appear to want __find_get_block to be non-blocking. But it's the page lock which protects the buffers. To get around this, diff --git a/fs/fuse/control.c b/fs/fuse/control.c index 105d4a2..57eeca4 100644 --- a/fs/fuse/control.c +++ b/fs/fuse/control.c @@ -207,6 +207,7 @@ static void fuse_ctl_kill_sb(struct super_block *sb) static struct file_system_type fuse_ctl_fs_type = { .owner = THIS_MODULE, .name = "fusectl", + .fs_flags = FS_IS_FUSE, .get_sb = fuse_ctl_get_sb, .kill_sb = fuse_ctl_kill_sb, }; diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index db534bc..56a5923 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -7,6 +7,7 @@ */ #include "fuse_i.h" +#include "fuse.h" #include #include @@ -16,6 +17,7 @@ #include #include #include +#include MODULE_ALIAS_MISCDEV(FUSE_MINOR); @@ -702,6 +704,8 @@ static ssize_t fuse_dev_read(struct kiocb *iocb, const struct iovec *iov, if (!fc) return -EPERM; + FUSE_MIGHT_FREEZE(file->f_mapping->host->i_sb, "fuse_dev_read"); + restart: spin_lock(&fc->lock); err = -EAGAIN; @@ -828,6 +832,9 @@ static ssize_t fuse_dev_write(struct kiocb *iocb, const struct iovec *iov, if (!fc) return -EPERM; + FUSE_MIGHT_FREEZE(iocb->ki_filp->f_mapping->host->i_sb, + "fuse_dev_write"); + fuse_copy_init(&cs, fc, 0, NULL, iov, nr_segs); if (nbytes < sizeof(struct fuse_out_header)) return -EINVAL; diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index 80d2f52..a50a2b7 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -7,12 +7,14 @@ */ #include "fuse_i.h" +#include "fuse.h" #include #include #include #include #include +#include #if BITS_PER_LONG >= 64 static inline void fuse_dentry_settime(struct dentry *entry, u64 time) @@ -176,6 +178,9 @@ static int fuse_dentry_revalidate(struct dentry *entry, struct nameidata *nd) return 0; fc = get_fuse_conn(inode); + + FUSE_MIGHT_FREEZE(inode->i_sb, "fuse_dentry_revalidate"); + req = fuse_get_req(fc); if (IS_ERR(req)) return 0; @@ -271,6 +276,8
@@ static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry, if (IS_ERR(req)) return ERR_PTR(PTR_ERR(req)); + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_lookup"); + forget_req = fuse_get_req(fc); if (IS_ERR(forget_req)) { fuse_put_request(fc, req); @@ -361,6 +368,8 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry, int mode, if (IS_ERR(forget_req)) return PTR_ERR(forget_req); + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_create_open"); + req = fuse_get_req(fc); err = PTR_ERR(req); if (IS_ERR(req)) @@ -446,6 +455,8 @@ static int create_new_entry(struct fuse_conn *fc, struct fuse_req *req, int err; struct fuse_req *forget_req; + FUSE_MIGHT_FREEZE(dir->i_sb, "create_new_entry"); + forget_req = fuse_get_req(fc); if (IS_ERR(forget_req)) { fuse_put_request(fc, req); @@ -543,7 +554,11 @@ static int fuse_mkdir(struct inode *dir, struct dentry *entry, int mode) { struct fuse_mkdir_in inarg; struct fuse_conn *fc = get_fuse_conn(dir); - struct fuse_req *req = fuse_get_req(fc); + struct fuse_req *req; + + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_mkdir"); + + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -563,7 +578,11 @@ static int fuse_symlink(struct inode *dir, struct dentry *entry, { struct fuse_conn *fc = get_fuse_conn(dir); unsigned len = strlen(link) + 1; - struct fuse_req *req = fuse_get_req(fc); + struct fuse_req *req; + + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_symlink"); + + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -580,7 +599,11 @@ static int fuse_unlink(struct inode *dir, struct dentry *entry) { int err; struct fuse_conn *fc = get_fuse_conn(dir); - struct fuse_req *req = fuse_get_req(fc); + struct fuse_req *req; + + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_unlink"); + + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -611,7 +634,11 @@ static int fuse_rmdir(struct inode *dir, struct dentry *entry) { int err; struct fuse_conn *fc = get_fuse_conn(dir); - struct fuse_req *req = fuse_get_req(fc); + struct fuse_req *req; + + FUSE_MIGHT_FREEZE(dir->i_sb, "fuse_rmdir"); + + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); diff --git a/fs/fuse/file.c b/fs/fuse/file.c index bb05d22..a641288 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -7,11 +7,13 @@ */ #include "fuse_i.h" +#include "fuse.h" #include #include #include #include +#include static const struct file_operations fuse_direct_io_file_operations; @@ -23,6 +25,8 @@ static int fuse_send_open(struct inode *inode, struct file *file, int isdir, struct fuse_req *req; int err; + FUSE_MIGHT_FREEZE(inode->i_sb, "fuse_send_open"); + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -544,6 +548,8 @@ static int fuse_buffered_write(struct file *file, struct inode *inode, if (is_bad_inode(inode)) return -EIO; + FUSE_MIGHT_FREEZE(inode->i_sb, "fuse_commit_write"); + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -637,6 +643,8 @@ static ssize_t fuse_direct_io(struct file *file, const char __user *buf, if (is_bad_inode(inode)) return -EIO; + FUSE_MIGHT_FREEZE(file->f_mapping->host->i_sb, "fuse_direct_io"); + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -789,6 +797,8 @@ static int fuse_getlk(struct file *file, struct file_lock *fl) struct fuse_lk_out outarg; int err; + FUSE_MIGHT_FREEZE(file->f_mapping->host->i_sb, "fuse_getlk"); + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -819,6 +829,8 @@ static int fuse_setlk(struct file *file, struct file_lock *fl, int flock) if (fl->fl_flags & FL_CLOSE) return 0; + 
FUSE_MIGHT_FREEZE(file->f_mapping->host->i_sb, "fuse_setlk"); + req = fuse_get_req(fc); if (IS_ERR(req)) return PTR_ERR(req); @@ -883,6 +895,8 @@ static sector_t fuse_bmap(struct address_space *mapping, sector_t block) if (!inode->i_sb->s_bdev || fc->no_bmap) return 0; + FUSE_MIGHT_FREEZE(inode->i_sb, "fuse_bmap"); + req = fuse_get_req(fc); if (IS_ERR(req)) return 0; diff --git a/fs/fuse/fuse.h b/fs/fuse/fuse.h new file mode 100644 index 0000000..170e49a --- /dev/null +++ b/fs/fuse/fuse.h @@ -0,0 +1,13 @@ +#define FUSE_MIGHT_FREEZE(superblock, desc) \ +do { \ + int printed = 0; \ + while (superblock->s_frozen != SB_UNFROZEN) { \ + if (!printed) { \ + printk(KERN_INFO "%d frozen in " desc ".\n", \ + current->pid); \ + printed = 1; \ + } \ + try_to_freeze(); \ + yield(); \ + } \ +} while (0) diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index 84f9f7d..7fd35d3 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -702,7 +702,7 @@ static int fuse_get_sb(struct file_system_type *fs_type, static struct file_system_type fuse_fs_type = { .owner = THIS_MODULE, .name = "fuse", - .fs_flags = FS_HAS_SUBTYPE, + .fs_flags = FS_HAS_SUBTYPE | FS_IS_FUSE, .get_sb = fuse_get_sb, .kill_sb = kill_anon_super, }; @@ -721,7 +721,7 @@ static struct file_system_type fuseblk_fs_type = { .name = "fuseblk", .get_sb = fuse_get_sb_blk, .kill_sb = kill_block_super, - .fs_flags = FS_REQUIRES_DEV | FS_HAS_SUBTYPE, + .fs_flags = FS_REQUIRES_DEV | FS_HAS_SUBTYPE | FS_IS_FUSE, }; static inline int register_fuseblk(void) diff --git a/fs/ioctl.c b/fs/ioctl.c index c2a773e..d83b362 100644 --- a/fs/ioctl.c +++ b/fs/ioctl.c @@ -174,3 +174,4 @@ asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg) out: return error; } +EXPORT_SYMBOL(sys_ioctl); diff --git a/fs/namei.c b/fs/namei.c index 73e2e66..2fae2c1 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -2174,6 +2174,8 @@ int vfs_unlink(struct inode *dir, struct dentry *dentry) if (!dir->i_op || !dir->i_op->unlink) return -EPERM; + vfs_check_frozen(dir->i_sb, SB_FREEZE_WRITE); + DQUOT_INIT(dir); mutex_lock(&dentry->d_inode->i_mutex); diff --git a/include/asm-powerpc/suspend.h b/include/asm-powerpc/suspend.h index cbf2c94..e0756c2 100644 --- a/include/asm-powerpc/suspend.h +++ b/include/asm-powerpc/suspend.h @@ -6,4 +6,7 @@ static inline int arch_prepare_suspend(void) { return 0; } void save_processor_state(void); void restore_processor_state(void); +#define toi_faulted (0) +#define clear_toi_fault() do { } while (0) + #endif /* __ASM_POWERPC_SUSPEND_H */ diff --git a/include/asm-ppc/suspend.h b/include/asm-ppc/suspend.h index 3df9f32..1e2e73d 100644 --- a/include/asm-ppc/suspend.h +++ b/include/asm-ppc/suspend.h @@ -10,3 +10,6 @@ static inline void save_processor_state(void) static inline void restore_processor_state(void) { } + +#define toi_faulted (0) +#define clear_toi_fault() do { } while (0) diff --git a/include/asm-x86/suspend_32.h b/include/asm-x86/suspend_32.h index a252073..17bc8f8 100644 --- a/include/asm-x86/suspend_32.h +++ b/include/asm-x86/suspend_32.h @@ -8,6 +8,9 @@ static inline int arch_prepare_suspend(void) { return 0; } +extern int toi_faulted; +#define clear_toi_fault() do { toi_faulted = 0; } while (0) + /* image of the saved processor state */ struct saved_context { u16 es, fs, gs, ss; diff --git a/include/asm-x86/suspend_64.h b/include/asm-x86/suspend_64.h index c505a76..3be930f 100644 --- a/include/asm-x86/suspend_64.h +++ b/include/asm-x86/suspend_64.h @@ -15,6 +15,9 @@ arch_prepare_suspend(void) return 0; } +#define toi_faulted (0) 
+#define clear_toi_fault() do { } while (0) + /* Image of the saved processor state. If you touch this, fix acpi/wakeup.S. */ struct saved_context { struct pt_regs regs; diff --git a/include/linux/Kbuild b/include/linux/Kbuild index f30fa92..560974c 100644 --- a/include/linux/Kbuild +++ b/include/linux/Kbuild @@ -202,6 +202,7 @@ unifdef-y += filter.h unifdef-y += flat.h unifdef-y += futex.h unifdef-y += fs.h +unifdef-y += freezer.h unifdef-y += gameport.h unifdef-y += generic_serial.h unifdef-y += genhd.h diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index da0d83f..e4da509 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -172,6 +172,11 @@ wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); int fsync_bdev(struct block_device *); struct super_block *freeze_bdev(struct block_device *); void thaw_bdev(struct block_device *, struct super_block *); +#define FS_FREEZER_FUSE 1 +#define FS_FREEZER_NORMAL 2 +#define FS_FREEZER_ALL (FS_FREEZER_FUSE | FS_FREEZER_NORMAL) +void freeze_filesystems(int which); +void thaw_filesystems(int which); int fsync_super(struct super_block *); int fsync_no_super(struct block_device *); struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block, diff --git a/include/linux/dyn_pageflags.h b/include/linux/dyn_pageflags.h new file mode 100644 index 0000000..e85c3ee --- /dev/null +++ b/include/linux/dyn_pageflags.h @@ -0,0 +1,66 @@ +/* + * include/linux/dyn_pageflags.h + * + * Copyright (C) 2004-2007 Nigel Cunningham + * + * This file is released under the GPLv2. + * + * It implements support for dynamically allocated bitmaps that are + * used for temporary or infrequently used pageflags, in lieu of + * bits in the struct page flags entry. + */ + +#ifndef DYN_PAGEFLAGS_H +#define DYN_PAGEFLAGS_H + +#include + +struct dyn_pageflags { + unsigned long ****bitmap; /* [pg_dat][zone][page_num] */ + int sparse, initialised; + struct list_head list; + spinlock_t struct_lock; +}; + +#define DYN_PAGEFLAGS_INIT(name) { \ + .list = LIST_HEAD_INIT(name.list), \ + .struct_lock = __SPIN_LOCK_UNLOCKED(name.lock) \ +} + +#define DECLARE_DYN_PAGEFLAGS(name) \ + struct dyn_pageflags name = DYN_PAGEFLAGS_INIT(name); + +#define BITMAP_FOR_EACH_SET(BITMAP, CTR) \ + for (CTR = get_next_bit_on(BITMAP, max_pfn + 1); CTR <= max_pfn; \ + CTR = get_next_bit_on(BITMAP, CTR)) + +extern void clear_dyn_pageflags(struct dyn_pageflags *pagemap); +extern int allocate_dyn_pageflags(struct dyn_pageflags *pagemap, int sparse); +extern void free_dyn_pageflags(struct dyn_pageflags *pagemap); +extern unsigned long get_next_bit_on(struct dyn_pageflags *bitmap, + unsigned long counter); + +extern int test_dynpageflag(struct dyn_pageflags *bitmap, struct page *page); +/* + * In sparse bitmaps, setting a flag can fail (we can fail to allocate + * the page to store the bit. If this happens, we will BUG(). If you don't + * want this behaviour, don't allocate sparse pageflags. + */ +extern void set_dynpageflag(struct dyn_pageflags *bitmap, struct page *page); +extern void clear_dynpageflag(struct dyn_pageflags *bitmap, struct page *page); +extern void dump_pagemap(struct dyn_pageflags *pagemap); + +/* + * With the above macros defined, you can do... 
+ * #define PagePageset1(page) (test_dynpageflag(&pageset1_map, page)) + * #define SetPagePageset1(page) (set_dynpageflag(&pageset1_map, page)) + * #define ClearPagePageset1(page) (clear_dynpageflag(&pageset1_map, page)) + */ + +extern void __init dyn_pageflags_init(void); +extern void __init dyn_pageflags_use_kzalloc(void); + +#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE +extern void dyn_pageflags_hotplug(struct zone *zone); +#endif +#endif diff --git a/include/linux/freezer.h b/include/linux/freezer.h index 0893499..01e9dc6 100644 --- a/include/linux/freezer.h +++ b/include/linux/freezer.h @@ -127,6 +127,19 @@ static inline void set_freezable(void) current->flags &= ~PF_NOFREEZE; } +extern int freezer_state; +#define FREEZER_OFF 0 +#define FREEZER_FILESYSTEMS_FROZEN 1 +#define FREEZER_USERSPACE_FROZEN 2 +#define FREEZER_FULLY_ON 3 + +static inline int freezer_is_on(void) +{ + return (freezer_state == FREEZER_FULLY_ON); +} + +extern void thaw_kernel_threads(void); + /* * Freezer-friendly wrappers around wait_event_interruptible() and * wait_event_interruptible_timeout(), originally defined in @@ -169,6 +182,8 @@ static inline int freeze_processes(void) { BUG(); return 0; } static inline void thaw_processes(void) {} static inline int try_to_freeze(void) { return 0; } +static inline int freezer_is_on(void) { return 0; } +static inline void thaw_kernel_threads(void) { } static inline void freezer_do_not_count(void) {} static inline void freezer_count(void) {} diff --git a/include/linux/fs.h b/include/linux/fs.h index b3ec4a4..a6b5a5c 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -8,6 +8,7 @@ #include #include +#include /* * It's silly to have NR_OPEN bigger than NR_FILE, but you can change @@ -93,6 +94,7 @@ extern int dir_notify_enable; #define FS_REQUIRES_DEV 1 #define FS_BINARY_MOUNTDATA 2 #define FS_HAS_SUBTYPE 4 +#define FS_IS_FUSE 8 /* Fuse filesystem - bdev freeze these too */ #define FS_REVAL_DOT 16384 /* Check the paths ".", ".." for staleness */ #define FS_RENAME_DOES_D_MOVE 32768 /* FS will handle d_move() * during rename() internally. @@ -124,6 +126,7 @@ extern int dir_notify_enable; #define MS_SHARED (1<<20) /* change to shared */ #define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. */ #define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */ +#define MS_FROZEN (1<<23) /* Frozen by freeze_filesystems() */ #define MS_ACTIVE (1<<30) #define MS_NOUSER (1<<31) @@ -1049,8 +1052,11 @@ enum { SB_FREEZE_TRANS = 2, }; -#define vfs_check_frozen(sb, level) \ - wait_event((sb)->s_wait_unfrozen, ((sb)->s_frozen < (level))) +#define vfs_check_frozen(sb, level) do { \ + freezer_do_not_count(); \ + wait_event((sb)->s_wait_unfrozen, ((sb)->s_frozen < (level))); \ + freezer_count(); \ +} while (0) #define get_fs_excl() atomic_inc(¤t->fs_excl) #define put_fs_excl() atomic_dec(¤t->fs_excl) diff --git a/include/linux/kernel.h b/include/linux/kernel.h index 94bc996..1ba477a 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -147,6 +147,8 @@ extern int vsprintf(char *buf, const char *, va_list) __attribute__ ((format (printf, 2, 0))); extern int snprintf(char * buf, size_t size, const char * fmt, ...) __attribute__ ((format (printf, 3, 4))); +extern int snprintf_used(char *buffer, int buffer_size, + const char *fmt, ...); extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args) __attribute__ ((format (printf, 3, 0))); extern int scnprintf(char * buf, size_t size, const char * fmt, ...) 
diff --git a/include/linux/netlink.h b/include/linux/netlink.h index d5bfaba..c347585 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -24,6 +24,8 @@ /* leave room for NETLINK_DM (DM Events) */ #define NETLINK_SCSITRANSPORT 18 /* SCSI Transports */ #define NETLINK_ECRYPTFS 19 +#define NETLINK_TOI_USERUI 20 /* TuxOnIce's userui */ +#define NETLINK_TOI_USM 21 /* Userspace storage manager */ #define MAX_LINKS 32 diff --git a/include/linux/suspend.h b/include/linux/suspend.h index 4360e08..f77e90b 100644 --- a/include/linux/suspend.h +++ b/include/linux/suspend.h @@ -257,4 +257,69 @@ static inline void register_nosave_region_late(unsigned long b, unsigned long e) } #endif +enum { + TOI_CAN_HIBERNATE, + TOI_CAN_RESUME, + TOI_RESUME_DEVICE_OK, + TOI_NORESUME_SPECIFIED, + TOI_SANITY_CHECK_PROMPT, + TOI_CONTINUE_REQ, + TOI_RESUMED_BEFORE, + TOI_BOOT_TIME, + TOI_NOW_RESUMING, + TOI_IGNORE_LOGLEVEL, + TOI_TRYING_TO_RESUME, + TOI_LOADING_ALT_IMAGE, + TOI_STOP_RESUME, + TOI_IO_STOPPED, + TOI_NOTIFIERS_PREPARE, + TOI_CLUSTER_MODE, +}; + +#ifdef CONFIG_TOI + +/* Used in init dir files */ +extern unsigned long toi_state; +#define set_toi_state(bit) (set_bit(bit, &toi_state)) +#define clear_toi_state(bit) (clear_bit(bit, &toi_state)) +#define test_toi_state(bit) (test_bit(bit, &toi_state)) +extern int toi_running; + +#else /* !CONFIG_TOI */ + +#define toi_state (0) +#define set_toi_state(bit) do { } while (0) +#define clear_toi_state(bit) do { } while (0) +#define test_toi_state(bit) (0) +#define toi_running (0) +#endif /* CONFIG_TOI */ + +#ifdef CONFIG_HIBERNATION +#ifdef CONFIG_TOI +extern void toi_try_resume(void); +#else +#define toi_try_resume() do { } while (0) +#endif + +extern int resume_attempted; +extern int software_resume(void); + +static inline void check_resume_attempted(void) +{ + if (resume_attempted) + return; + + software_resume(); +} +#else +#define check_resume_attempted() do { } while (0) +#define resume_attempted (0) +#endif + +#ifdef CONFIG_PRINTK_NOSAVE +#define POSS_NOSAVE __nosavedata +#else +#define POSS_NOSAVE +#endif + #endif /* _LINUX_SUSPEND_H */ diff --git a/include/linux/swap.h b/include/linux/swap.h index 4f3838a..59a58fc 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -166,6 +166,7 @@ extern unsigned long totalram_pages; extern unsigned long totalreserve_pages; extern long nr_swap_pages; extern unsigned int nr_free_buffer_pages(void); +extern unsigned int nr_unallocated_buffer_pages(void); extern unsigned int nr_free_pagecache_pages(void); /* Definition of global_page_state not available yet */ @@ -186,6 +187,8 @@ extern void swap_setup(void); extern unsigned long try_to_free_pages(struct zone **zones, int order, gfp_t gfp_mask); extern unsigned long shrink_all_memory(unsigned long nr_pages); +extern void shrink_one_zone(struct zone *zone, unsigned long desired_size, + int ps_wanted); extern int vm_swappiness; extern int remove_mapping(struct address_space *mapping, struct page *page); extern long vm_total_pages; @@ -364,5 +367,10 @@ static inline swp_entry_t get_swap_page(void) #define disable_swap_token() do { } while(0) #endif /* CONFIG_SWAP */ + +/* For TuxOnIce - unlink LRU pages while saving separately */ +void unlink_lru_lists(void); +void relink_lru_lists(void); + #endif /* __KERNEL__*/ #endif /* _LINUX_SWAP_H */ diff --git a/init/do_mounts.c b/init/do_mounts.c index 4efa1e5..55af713 100644 --- a/init/do_mounts.c +++ b/init/do_mounts.c @@ -142,11 +142,16 @@ dev_t name_to_dev_t(char *name) char s[32]; char *p; dev_t res = 0; - 
int part; + int part, mount_result; #ifdef CONFIG_SYSFS int mkdir_err = sys_mkdir("/sys", 0700); - if (sys_mount("sysfs", "/sys", "sysfs", 0, NULL) < 0) + /* + * When changing resume parameter for TuxOnIce, sysfs may + * already be mounted. + */ + mount_result = sys_mount("sysfs", "/sys", "sysfs", 0, NULL); + if (mount_result < 0 && mount_result != -EBUSY) goto out; #endif @@ -198,7 +203,8 @@ dev_t name_to_dev_t(char *name) res = try_name(s, part); done: #ifdef CONFIG_SYSFS - sys_umount("/sys", 0); + if (mount_result >= 0) + sys_umount("/sys", 0); out: if (!mkdir_err) sys_rmdir("/sys"); @@ -466,6 +472,8 @@ void __init prepare_namespace(void) if (is_floppy && rd_doload && rd_load_disk(0)) ROOT_DEV = Root_RAM0; + check_resume_attempted(); + mount_root(); out: sys_mount(".", "/", NULL, MS_MOVE, NULL); diff --git a/init/do_mounts_initrd.c b/init/do_mounts_initrd.c index 614241b..f3ea292 100644 --- a/init/do_mounts_initrd.c +++ b/init/do_mounts_initrd.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include "do_mounts.h" @@ -68,6 +69,11 @@ static void __init handle_initrd(void) current->flags &= ~PF_FREEZER_SKIP; + if (!resume_attempted) + printk(KERN_ERR "TuxOnIce: No attempt was made to resume from " + "any image that might exist.\n"); + clear_toi_state(TOI_BOOT_TIME); + /* move initrd to rootfs' /old */ sys_fchdir(old_fd); sys_mount("/", ".", NULL, MS_MOVE, NULL); diff --git a/init/main.c b/init/main.c index 80b04b6..e3c8396 100644 --- a/init/main.c +++ b/init/main.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include @@ -572,6 +573,7 @@ asmlinkage void __init start_kernel(void) softirq_init(); timekeeping_init(); time_init(); + dyn_pageflags_init(); profile_init(); if (!irqs_disabled()) printk("start_kernel(): bug: interrupts were enabled early\n"); @@ -608,6 +610,7 @@ asmlinkage void __init start_kernel(void) cpuset_init_early(); mem_init(); kmem_cache_init(); + dyn_pageflags_use_kzalloc(); setup_per_cpu_pageset(); numa_policy_init(); if (late_time_init) diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 8e186c6..6c8ad71 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -44,6 +44,18 @@ config PM_VERBOSE ---help--- This option enables verbose messages from the Power Management code. +config PRINTK_NOSAVE + depends on PM && PM_DEBUG + bool "Preserve printk data from boot kernel when resuming." + default n + ---help--- + This option gives printk data and the associated variables the + attribute __nosave, which means that they will not be saved as + part of the image. The net effect is that after resuming, your + dmesg will show the messages from prior to the atomic restore, + instead of the messages from the resumed kernel. This may be + useful for debugging hibernation. + config PM_TRACE bool "Suspend/resume event tracing" depends on PM_DEBUG && X86 && PM_SLEEP && EXPERIMENTAL @@ -170,6 +182,255 @@ config PM_STD_PARTITION suspended image to. It will simply pick the first available swap device. +menuconfig TOI_CORE + tristate "Enhanced Hibernation (TuxOnIce)" + depends on HIBERNATION + default y + ---help--- + TuxOnIce is the 'new and improved' suspend support. + + See the TuxOnIce home page (tuxonice.net) + for FAQs, HOWTOs and other documentation. + + comment "Image Storage (you need at least one allocator)" + depends on TOI_CORE + + config TOI_FILE + tristate "File Allocator" + depends on TOI_CORE + default y + ---help--- + This option enables support for storing an image in a + simple file. 
This should be possible, but we're still + testing it. + + config TOI_SWAP + tristate "Swap Allocator" + depends on TOI_CORE && SWAP + default y + ---help--- + This option enables support for storing an image in your + swap space. + + comment "General Options" + depends on TOI_CORE + + config TOI_DEFAULT_PRE_HIBERNATE + string "Default pre-hibernate command" + depends on TOI_CORE + ---help--- + This entry allows you to specify a command to be run prior + to starting a hibernation cycle. If this command returns + a non-zero result code, hibernating will be aborted. + + config TOI_DEFAULT_POST_HIBERNATE + string "Default post-resume command" + depends on TOI_CORE + ---help--- + This entry allows you to specify a command to be run after + completing a hibernation cycle. The return code of this + command is ignored. + + config TOI_CRYPTO + tristate "Compression support" + depends on TOI_CORE && CRYPTO + default y + ---help--- + This option adds support for using cryptoapi compression + algorithms. Compression is particularly useful, as + the LZF support that comes with the TuxOnIce patch can double + your suspend and resume speed. + + You probably want this, so say Y here. + + comment "No compression support available without Cryptoapi support." + depends on TOI_CORE && !CRYPTO + + config TOI_USERUI + tristate "Userspace User Interface support" + depends on TOI_CORE && NET && (VT || SERIAL_CONSOLE) + default y + ---help--- + This option enables support for a userspace-based user interface + to TuxOnIce, which allows you to have a nice display while suspending + and resuming, and also enables features such as pressing escape to + cancel a cycle or interactive debugging. + + config TOI_USERUI_DEFAULT_PATH + string "Default userui program location" + default "/usr/local/sbin/tuxonice_fbsplash" + depends on TOI_USERUI + ---help--- + This entry allows you to specify a default path to the userui binary. + + config TOI_KEEP_IMAGE + bool "Allow Keep Image Mode" + depends on TOI_CORE + ---help--- + This option allows you to keep an image and reuse it. It is intended + __ONLY__ for use with systems where all filesystems are mounted read- + only (kiosks, for example). To use it, compile this option in and boot + normally. Set the KEEP_IMAGE flag in /sys/power/tuxonice and suspend. + When you resume, the image will not be removed. You will be unable to turn + off swap partitions (assuming you are using the swap allocator), but future + suspends simply do a power-down. The image can be updated using the + kernel command line parameter suspend_act= to turn off the keep image + bit. Keep image mode is a little less user-friendly on purpose - it + should not be used without thought! + + config TOI_REPLACE_SWSUSP + bool "Replace swsusp by default" + default y + depends on TOI_CORE + ---help--- + TuxOnIce can replace swsusp. This option makes that the default state, + requiring you to echo 0 > /sys/power/tuxonice/replace_swsusp if you want + to use the vanilla kernel functionality. Note that your initrd/ramfs will + need to do this before trying to resume, too. + With overriding swsusp enabled, echoing disk to /sys/power/state will + start a TuxOnIce cycle. If resume= doesn't specify an allocator and both + the swap and file allocators are compiled in, the swap allocator will be + used by default. + + menuconfig TOI_CLUSTER + tristate "Cluster support" + default n + depends on TOI_CORE && NET + ---help--- + Support for linking multiple machines in a cluster so that they suspend + and resume together.
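To make the behaviour described for TOI_REPLACE_SWSUSP above concrete, the sketch below condenses the kernel/power/disk.c change made later in this patch; only the CONFIG_TOI block is real, the rest stands in for the unchanged swsusp path.

	/* Condensed sketch of hibernate() with the TuxOnIce hook. */
	int hibernate(void)
	{
	#ifdef CONFIG_TOI
		/*
		 * The action bit is set by default when TOI_REPLACE_SWSUSP=y
		 * and cleared via /sys/power/tuxonice/replace_swsusp; with it
		 * set, "echo disk > /sys/power/state" enters TuxOnIce.
		 */
		if (test_action_state(TOI_REPLACE_SWSUSP))
			return toi_try_hibernate(1);
	#endif

		/* ... the vanilla swsusp implementation continues here ... */
		return 0;	/* placeholder for the elided body */
	}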
+ + config TOI_DEFAULT_CLUSTER_INTERFACE + string "Default cluster interface" + depends on TOI_CLUSTER + ---help--- + The default interface on which to communicate with other nodes in + the cluster. + + If no value is set here, cluster support will be disabled by default. + + config TOI_DEFAULT_CLUSTER_KEY + string "Default cluster key" + default "Default" + depends on TOI_CLUSTER + ---help--- + The default key used by this node. All nodes in the same cluster + have the same key. Multiple clusters may coexist on the same LAN + by using different values for this key. + + config TOI_CLUSTER_IMAGE_TIMEOUT + int "Timeout when checking for image" + default 15 + depends on TOI_CLUSTER + ---help--- + Timeout (seconds) before continuing to boot when waiting to see + whether other nodes might have an image. Set to -1 to wait + indefinitely. If WAIT_UNTIL_NODES is non-zero, we might continue + booting sooner than this timeout. + + config TOI_CLUSTER_WAIT_UNTIL_NODES + int "Nodes without image before continuing" + default 0 + depends on TOI_CLUSTER + ---help--- + When booting and no image is found, we wait to see if other nodes + have an image before continuing to boot. This value lets us + continue after seeing a certain number of nodes without an image, + instead of continuing to wait for the timeout. Set to 0 to only + use the timeout. + + config TOI_DEFAULT_CLUSTER_PRE_HIBERNATE + string "Default pre-hibernate script" + depends on TOI_CLUSTER + ---help--- + The default script to be called when starting to hibernate. + + config TOI_DEFAULT_CLUSTER_POST_HIBERNATE + string "Default post-hibernate script" + depends on TOI_CLUSTER + ---help--- + The default script to be called after resuming from hibernation. + + config TOI_CHECKSUM + bool "Checksum pageset2" + default y + depends on TOI_CORE + select CRYPTO + select CRYPTO_ALGAPI + select CRYPTO_MD4 + ---help--- + Adds support for checksumming pageset2 pages, to ensure you really get an + atomic copy. Since some filesystems (XFS especially) change metadata even + when there's no other activity, we need this to check for pages that have + been changed while we were saving the page cache. If your debugging output + always says no pages were resaved, you may be able to safely disable this + option. + + config TOI_DEFAULT_WAIT + int "Default waiting time for emergency boot messages" + default "25" + range -1 32768 + depends on TOI_CORE + help + TuxOnIce can display warnings very early in the process of resuming, + if (for example) it appears that you have booted a kernel that doesn't + match an image on disk. It can then give you the opportunity to either + continue booting that kernel, or reboot the machine. This option can be + used to control how long to wait in such circumstances. -1 means wait + forever. 0 means don't wait at all (do the default action, which will + generally be to continue booting and remove the image). Values of 1 or + more indicate a number of seconds (up to 32768) to wait before doing the + default. + + config TOI_PAGEFLAGS_TEST + tristate "Test pageflags" + default n + depends on TOI_CORE + help + Build a simple tester for the dynamic pageflags code.
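The tester just enabled appears later in the patch as kernel/power/toi_pageflags_test.c and exercises the dyn_pageflags API declared in include/linux/dyn_pageflags.h earlier in this patch. As a quick orientation, a minimal (hypothetical) client looks like the following; demo_map and demo_dyn_pageflags() are invented names, and treating a non-zero return from allocate_dyn_pageflags() as failure is an assumption drawn from its int return type.

	#include <linux/dyn_pageflags.h>
	#include <linux/bootmem.h>	/* max_pfn */
	#include <linux/errno.h>
	#include <linux/kernel.h>
	#include <linux/mm.h>

	DECLARE_DYN_PAGEFLAGS(demo_map);

	static int demo_dyn_pageflags(void)
	{
		unsigned long pfn;

		if (allocate_dyn_pageflags(&demo_map, 0))	/* 0 => dense */
			return -ENOMEM;

		/* Mark two page frames... */
		set_dynpageflag(&demo_map, pfn_to_page(0));
		set_dynpageflag(&demo_map, pfn_to_page(1));

		/* ...then walk every set bit in pfn order. */
		BITMAP_FOR_EACH_SET(&demo_map, pfn)
			printk(KERN_INFO "pfn %lu is marked\n", pfn);

		free_dyn_pageflags(&demo_map);
		return 0;
	}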
+ +config TOI_PAGEFLAGS_EXPORTS + bool + depends on TOI_PAGEFLAGS_TEST=m + default y + +config TOI_USERUI_EXPORTS + bool + depends on TOI_USERUI=m + default y + +config TOI_SWAP_EXPORTS + bool + depends on TOI_SWAP=m + default y + +config TOI_FILE_EXPORTS + bool + depends on TOI_FILE=m + default y + +config TOI_CRYPTO_EXPORTS + bool + depends on TOI_CRYPTO=m + default y + +config TOI_CORE_EXPORTS + bool + depends on TOI_CORE=m + default y + +config TOI_EXPORTS + bool + depends on TOI_SWAP_EXPORTS || TOI_FILE_EXPORTS || \ + TOI_CRYPTO_EXPORTS || TOI_CLUSTER=m || \ + TOI_USERUI_EXPORTS || TOI_PAGEFLAGS_EXPORTS + default y + +config TOI + bool + depends on TOI_CORE!=n + default y + config APM_EMULATION tristate "Advanced Power Management Emulation" depends on PM && SYS_SUPPORTS_APM_EMULATION diff --git a/kernel/power/Makefile b/kernel/power/Makefile index f7dfff2..8ea53fa 100644 --- a/kernel/power/Makefile +++ b/kernel/power/Makefile @@ -5,6 +5,37 @@ endif obj-y := main.o obj-$(CONFIG_PM_LEGACY) += pm.o + +tuxonice_core-objs := tuxonice_modules.o tuxonice_sysfs.o tuxonice_highlevel.o \ + tuxonice_io.o tuxonice_pagedir.o tuxonice_prepare_image.o \ + tuxonice_extent.o tuxonice_pageflags.o tuxonice_ui.o \ + tuxonice_power_off.o tuxonice_atomic_copy.o + +obj-$(CONFIG_TOI) += tuxonice_builtin.o + +ifdef CONFIG_PM_DEBUG +tuxonice_core-objs += tuxonice_alloc.o +endif + +ifdef CONFIG_TOI_CHECKSUM +tuxonice_core-objs += tuxonice_checksum.o +endif + +ifdef CONFIG_NET +tuxonice_core-objs += tuxonice_storage.o tuxonice_netlink.o +endif + +obj-$(CONFIG_TOI_CORE) += tuxonice_core.o +obj-$(CONFIG_TOI_CRYPTO) += tuxonice_compress.o + +obj-$(CONFIG_TOI_SWAP) += tuxonice_block_io.o tuxonice_swap.o +obj-$(CONFIG_TOI_FILE) += tuxonice_block_io.o tuxonice_file.o +obj-$(CONFIG_TOI_CLUSTER) += tuxonice_cluster.o + +obj-$(CONFIG_TOI_USERUI) += tuxonice_userui.o + +obj-$(CONFIG_TOI_PAGEFLAGS_TEST) += toi_pageflags_test.o + obj-$(CONFIG_PM_SLEEP) += process.o console.o obj-$(CONFIG_HIBERNATION) += swsusp.o disk.o snapshot.o swap.o user.o diff --git a/kernel/power/disk.c b/kernel/power/disk.c index 05b6479..bdc1af1 100644 --- a/kernel/power/disk.c +++ b/kernel/power/disk.c @@ -24,6 +24,8 @@ #include "power.h" +#include "tuxonice.h" +#include "tuxonice_builtin.h" static int noresume = 0; char resume_file[256] = CONFIG_PM_STD_PARTITION; @@ -75,7 +77,7 @@ void hibernation_set_ops(struct platform_hibernation_ops *ops) * hibernation */ -static int platform_start(int platform_mode) +int platform_start(int platform_mode) { return (platform_mode && hibernation_ops) ? hibernation_ops->start() : 0; @@ -86,7 +88,7 @@ static int platform_start(int platform_mode) * platform driver if so configured and return an error code if it fails */ -static int platform_pre_snapshot(int platform_mode) +int platform_pre_snapshot(int platform_mode) { return (platform_mode && hibernation_ops) ? 
hibernation_ops->pre_snapshot() : 0; @@ -97,7 +99,7 @@ static int platform_pre_snapshot(int platform_mode) * of operation using the platform driver (called with interrupts disabled) */ -static void platform_leave(int platform_mode) +void platform_leave(int platform_mode) { if (platform_mode && hibernation_ops) hibernation_ops->leave(); @@ -108,7 +110,7 @@ static void platform_leave(int platform_mode) * using the platform driver (must be called after platform_prepare()) */ -static void platform_finish(int platform_mode) +void platform_finish(int platform_mode) { if (platform_mode && hibernation_ops) hibernation_ops->finish(); @@ -120,7 +122,7 @@ static void platform_finish(int platform_mode) * called, platform_restore_cleanup() must be called. */ -static int platform_pre_restore(int platform_mode) +int platform_pre_restore(int platform_mode) { return (platform_mode && hibernation_ops) ? hibernation_ops->pre_restore() : 0; @@ -133,7 +135,7 @@ static int platform_pre_restore(int platform_mode) * regardless of the result of platform_pre_restore(). */ -static void platform_restore_cleanup(int platform_mode) +void platform_restore_cleanup(int platform_mode) { if (platform_mode && hibernation_ops) hibernation_ops->restore_cleanup(); @@ -382,6 +384,11 @@ int hibernate(void) { int error; +#ifdef CONFIG_TOI + if (test_action_state(TOI_REPLACE_SWSUSP)) + return toi_try_hibernate(1); +#endif + mutex_lock(&pm_mutex); /* The snapshot device should not be opened while we're running */ if (!atomic_add_unless(&snapshot_device_available, -1, 0)) { @@ -451,10 +458,21 @@ int hibernate(void) * */ -static int software_resume(void) +int software_resume(void) { int error; unsigned int flags; + resume_attempted = 1; + +#ifdef CONFIG_TOI + /* + * We can't know (until an image header - if any - is loaded), whether + * we did override swsusp. We therefore ensure that both are tried. + */ + if (test_action_state(TOI_REPLACE_SWSUSP)) + printk(KERN_INFO "Replacing swsusp.\n"); + toi_try_resume(); +#endif /* * name_to_dev_t() below takes a sysfs buffer mutex when sysfs @@ -467,6 +485,7 @@ static int software_resume(void) * here to avoid lockdep complaining. 
*/ mutex_lock_nested(&pm_mutex, SINGLE_DEPTH_NESTING); + if (!swsusp_resume_device) { if (!strlen(resume_file)) { mutex_unlock(&pm_mutex); @@ -530,9 +549,6 @@ static int software_resume(void) return error; } -late_initcall(software_resume); - - static const char * const hibernation_modes[] = { [HIBERNATION_PLATFORM] = "platform", [HIBERNATION_SHUTDOWN] = "shutdown", @@ -739,6 +755,7 @@ static int __init resume_offset_setup(char *str) static int __init noresume_setup(char *str) { noresume = 1; + set_toi_state(TOI_NORESUME_SPECIFIED); return 1; } diff --git a/kernel/power/power.h b/kernel/power/power.h index 195dc46..2c38ae8 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -1,5 +1,14 @@ +/* + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + */ + +#ifndef KERNEL_POWER_POWER_H +#define KERNEL_POWER_POWER_H + #include #include +#include "tuxonice.h" +#include "tuxonice_builtin.h" struct swsusp_info { struct new_utsname uts; @@ -19,18 +28,22 @@ struct swsusp_info { extern int arch_hibernation_header_save(void *addr, unsigned int max_size); extern int arch_hibernation_header_restore(void *addr); -static inline int init_header_complete(struct swsusp_info *info) +static inline int init_swsusp_header_complete(struct swsusp_info *info) { return arch_hibernation_header_save(info, MAX_ARCH_HEADER_SIZE); } -static inline char *check_image_kernel(struct swsusp_info *info) +static inline char *check_swsusp_image_kernel(struct swsusp_info *info) { return arch_hibernation_header_restore(info) ? "architecture specific data" : NULL; } +#else +extern char *check_swsusp_image_kernel(struct swsusp_info *info); #endif /* CONFIG_ARCH_HIBERNATION_HEADER */ +extern int init_swsusp_header(struct swsusp_info *info); +extern char resume_file[256]; /* * Keep some memory free so that I/O operations can succeed without paging * [Might this be more than 4 MB?] @@ -65,6 +78,8 @@ static struct subsys_attribute _name##_attr = { \ extern struct kset power_subsys; +extern struct pbe *restore_pblist; + /* Preferred image size in bytes (default 500 MB) */ extern unsigned long image_size; extern int in_suspend; @@ -211,3 +226,26 @@ static inline int pm_notifier_call_chain(unsigned long val) return (blocking_notifier_call_chain(&pm_chain_head, val, NULL) == NOTIFY_BAD) ? -EINVAL : 0; } + +extern struct page *saveable_page(unsigned long pfn); +#ifdef CONFIG_HIGHMEM +extern struct page *saveable_highmem_page(unsigned long pfn); +#else +static inline void *saveable_highmem_page(unsigned long pfn) { return NULL; } +#endif + +#define PBES_PER_PAGE (PAGE_SIZE / sizeof(struct pbe)) +extern struct list_head nosave_regions; + +/** + * This structure represents a range of page frames the contents of which + * should not be saved during the suspend. 
+ */ + +struct nosave_region { + struct list_head list; + unsigned long start_pfn; + unsigned long end_pfn; +}; + +#endif diff --git a/kernel/power/process.c b/kernel/power/process.c index 6533923..6728fe1 100644 --- a/kernel/power/process.c +++ b/kernel/power/process.c @@ -13,6 +13,10 @@ #include #include #include +#include + +int freezer_state; +EXPORT_SYMBOL(freezer_state); /* * Timeout for stopping processes @@ -74,6 +78,7 @@ void refrigerator(void) pr_debug("%s left refrigerator\n", current->comm); __set_current_state(save); } +EXPORT_SYMBOL(refrigerator); static void fake_signal_wake_up(struct task_struct *p, int resume) { @@ -219,7 +224,8 @@ static int try_to_freeze_tasks(int freeze_user_space) do_each_thread(g, p) { task_lock(p); if (freezing(p) && !freezer_should_skip(p)) - printk(KERN_ERR " %s\n", p->comm); + printk(KERN_ERR " %s (%d) failed to freeze.\n", + p->comm, p->pid); cancel_freezing(p); task_unlock(p); } while_each_thread(g, p); @@ -239,17 +245,25 @@ int freeze_processes(void) { int error; - printk("Freezing user space processes ... "); + printk(KERN_INFO "Stopping fuse filesystems.\n"); + freeze_filesystems(FS_FREEZER_FUSE); + freezer_state = FREEZER_FILESYSTEMS_FROZEN; + printk(KERN_INFO "Freezing user space processes ... "); error = try_to_freeze_tasks(FREEZER_USER_SPACE); if (error) goto Exit; - printk("done.\n"); + printk(KERN_INFO "done.\n"); - printk("Freezing remaining freezable tasks ... "); + sys_sync(); + printk(KERN_INFO "Stopping normal filesystems.\n"); + freeze_filesystems(FS_FREEZER_NORMAL); + freezer_state = FREEZER_USERSPACE_FROZEN; + printk(KERN_INFO "Freezing remaining freezable tasks ... "); error = try_to_freeze_tasks(FREEZER_KERNEL_THREADS); if (error) goto Exit; - printk("done."); + printk(KERN_INFO "done."); + freezer_state = FREEZER_FULLY_ON; Exit: BUG_ON(in_atomic()); printk("\n"); @@ -275,11 +289,33 @@ static void thaw_tasks(int thaw_user_space) void thaw_processes(void) { - printk("Restarting tasks ... "); - thaw_tasks(FREEZER_KERNEL_THREADS); + int old_state = freezer_state; + + if (old_state == FREEZER_OFF) + return; + + /* + * Change state beforehand because thawed tasks might submit I/O + * immediately. + */ + freezer_state = FREEZER_OFF; + + printk(KERN_INFO "Restarting all filesystems ...\n"); + thaw_filesystems(FS_FREEZER_ALL); + + printk(KERN_INFO "Restarting tasks ... "); + + if (old_state == FREEZER_FULLY_ON) + thaw_tasks(FREEZER_KERNEL_THREADS); thaw_tasks(FREEZER_USER_SPACE); schedule(); printk("done.\n"); } -EXPORT_SYMBOL(refrigerator); +void thaw_kernel_threads(void) +{ + freezer_state = FREEZER_USERSPACE_FROZEN; + printk(KERN_INFO "Restarting normal filesystems.\n"); + thaw_filesystems(FS_FREEZER_NORMAL); + thaw_tasks(FREEZER_KERNEL_THREADS); +} diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c index 78039b4..6b9b942 100644 --- a/kernel/power/snapshot.c +++ b/kernel/power/snapshot.c @@ -33,6 +33,7 @@ #include #include "power.h" +#include "tuxonice_builtin.h" static int swsusp_page_is_free(struct page *); static void swsusp_set_page_forbidden(struct page *); @@ -44,6 +45,13 @@ static void swsusp_unset_page_forbidden(struct page *); * directly to their "original" page frames. 
*/ struct pbe *restore_pblist; +int resume_attempted; +EXPORT_SYMBOL_GPL(resume_attempted); + +#ifdef CONFIG_TOI +#include "tuxonice_pagedir.h" +int toi_post_context_save(void); +#endif /* Pointer to an auxiliary buffer (1 page) */ static void *buffer; @@ -86,6 +94,11 @@ static void *get_image_page(gfp_t gfp_mask, int safe_needed) unsigned long get_safe_page(gfp_t gfp_mask) { +#ifdef CONFIG_TOI + if (toi_running) + return toi_get_nonconflicting_page(); +#endif + return (unsigned long)get_image_page(gfp_mask, PG_SAFE); } @@ -587,18 +600,8 @@ static unsigned long memory_bm_next_pfn(struct memory_bitmap *bm) return bb->start_pfn + chunk * BM_BITS_PER_CHUNK + bit; } -/** - * This structure represents a range of page frames the contents of which - * should not be saved during the suspend. - */ - -struct nosave_region { - struct list_head list; - unsigned long start_pfn; - unsigned long end_pfn; -}; - -static LIST_HEAD(nosave_regions); +LIST_HEAD(nosave_regions); +EXPORT_SYMBOL_GPL(nosave_regions); /** * register_nosave_region - register a range of page frames the contents @@ -828,7 +831,7 @@ static unsigned int count_free_highmem_pages(void) * and it isn't a part of a free chunk of pages. */ -static struct page *saveable_highmem_page(unsigned long pfn) +struct page *saveable_highmem_page(unsigned long pfn) { struct page *page; @@ -871,7 +874,6 @@ unsigned int count_highmem_pages(void) return n; } #else -static inline void *saveable_highmem_page(unsigned long pfn) { return NULL; } static inline unsigned int count_highmem_pages(void) { return 0; } #endif /* CONFIG_HIGHMEM */ @@ -884,7 +886,7 @@ static inline unsigned int count_highmem_pages(void) { return 0; } * a free chunk of pages. */ -static struct page *saveable_page(unsigned long pfn) +struct page *saveable_page(unsigned long pfn) { struct page *page; @@ -1202,6 +1204,11 @@ asmlinkage int swsusp_save(void) { unsigned int nr_pages, nr_highmem; +#ifdef CONFIG_TOI + if (toi_running) + return toi_post_context_save(); +#endif + printk("swsusp: critical section: \n"); drain_local_pages(); @@ -1241,14 +1248,14 @@ asmlinkage int swsusp_save(void) } #ifndef CONFIG_ARCH_HIBERNATION_HEADER -static int init_header_complete(struct swsusp_info *info) +int init_swsusp_header_complete(struct swsusp_info *info) { memcpy(&info->uts, init_utsname(), sizeof(struct new_utsname)); info->version_code = LINUX_VERSION_CODE; return 0; } -static char *check_image_kernel(struct swsusp_info *info) +char *check_swsusp_image_kernel(struct swsusp_info *info) { if (info->version_code != LINUX_VERSION_CODE) return "kernel version"; @@ -1262,9 +1269,10 @@ static char *check_image_kernel(struct swsusp_info *info) return "machine"; return NULL; } +EXPORT_SYMBOL_GPL(check_swsusp_image_kernel); #endif /* CONFIG_ARCH_HIBERNATION_HEADER */ -static int init_header(struct swsusp_info *info) +int init_swsusp_header(struct swsusp_info *info) { memset(info, 0, sizeof(struct swsusp_info)); info->num_physpages = num_physpages; @@ -1272,7 +1280,7 @@ static int init_header(struct swsusp_info *info) info->pages = nr_copy_pages + nr_meta_pages + 1; info->size = info->pages; info->size <<= PAGE_SHIFT; - return init_header_complete(info); + return init_swsusp_header_complete(info); } /** @@ -1328,7 +1336,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count) if (!handle->offset) { int error; - error = init_header((struct swsusp_info *)buffer); + error = init_swsusp_header((struct swsusp_info *)buffer); if (error) return error; handle->buffer = buffer; @@ -1425,7 +1433,7 @@ 
static int check_header(struct swsusp_info *info) { char *reason; - reason = check_image_kernel(info); + reason = check_swsusp_image_kernel(info); if (!reason && info->num_physpages != num_physpages) reason = "memory size"; if (reason) { diff --git a/kernel/power/toi_pageflags_test.c b/kernel/power/toi_pageflags_test.c new file mode 100644 index 0000000..381f05b --- /dev/null +++ b/kernel/power/toi_pageflags_test.c @@ -0,0 +1,80 @@ +/* + * TuxOnIce pageflags tester. + */ + +#include "linux/module.h" +#include "linux/bootmem.h" +#include "linux/sched.h" +#include "linux/dyn_pageflags.h" + +DECLARE_DYN_PAGEFLAGS(test_map); + +static char *bits_on(void) +{ + char *page = (char *) get_zeroed_page(GFP_KERNEL); + unsigned long index = get_next_bit_on(&test_map, max_pfn + 1); + int pos = 0; + + while (index <= max_pfn) { + pos += snprintf_used(page + pos, PAGE_SIZE - pos - 1, "%lu ", + index); + index = get_next_bit_on(&test_map, index); + } + + return page; +} + +static __init int do_check(void) +{ + unsigned long index; + int step = 1, steps = 100; + + allocate_dyn_pageflags(&test_map, 0); + + for (index = 1; index < max_pfn; index++) { + char *result; + char compare[100]; + + if (index > (max_pfn / steps * step)) { + printk(KERN_INFO "%d/%d\r", step, steps); + step++; + } + + + if (!pfn_valid(index)) + continue; + + clear_dyn_pageflags(&test_map); + set_dynpageflag(&test_map, pfn_to_page(0)); + set_dynpageflag(&test_map, pfn_to_page(index)); + + sprintf(compare, "0 %lu ", index); + + result = bits_on(); + + if (strcmp(result, compare)) { + printk(KERN_INFO "Expected \"%s\", got \"%s\"\n", + compare, result); + } + + free_page((unsigned long) result); + schedule(); + } + + free_dyn_pageflags(&test_map); + return 0; +} + +#ifdef MODULE +static __exit void check_unload(void) +{ +} + +module_init(do_check); +module_exit(check_unload); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("Pageflags testing"); +MODULE_LICENSE("GPL"); +#else +late_initcall(do_check); +#endif diff --git a/kernel/power/tuxonice.h b/kernel/power/tuxonice.h new file mode 100644 index 0000000..dee791a --- /dev/null +++ b/kernel/power/tuxonice.h @@ -0,0 +1,207 @@ +/* + * kernel/power/tuxonice.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * It contains declarations used throughout swsusp.
+ * + */ + +#ifndef KERNEL_POWER_TOI_H +#define KERNEL_POWER_TOI_H + +#include +#include +#include +#include +#include +#include +#include +#include "tuxonice_pageflags.h" + +#define TOI_CORE_VERSION "3.0-rc5" + +#define MY_BOOT_KERNEL_DATA_VERSION 1 + +struct toi_boot_kernel_data { + int version; + int size; + unsigned long toi_action; + unsigned long toi_debug_state; + int toi_default_console_level; + int toi_io_time[2][2]; + char toi_nosave_commandline[COMMAND_LINE_SIZE]; +}; + +extern struct toi_boot_kernel_data toi_bkd; + +/* Location of boot kernel data struct in kernel being resumed */ +extern unsigned long boot_kernel_data_buffer; + +/* == Action states == */ + +enum { + TOI_REBOOT, + TOI_PAUSE, + TOI_SLOW, + TOI_LOGALL, + TOI_CAN_CANCEL, + TOI_KEEP_IMAGE, + TOI_FREEZER_TEST, + TOI_SINGLESTEP, + TOI_PAUSE_NEAR_PAGESET_END, + TOI_TEST_FILTER_SPEED, + TOI_TEST_BIO, + TOI_NO_PAGESET2, + TOI_PM_PREPARE_CONSOLE, + TOI_IGNORE_ROOTFS, + TOI_REPLACE_SWSUSP, + TOI_PAGESET2_FULL, + TOI_ABORT_ON_RESAVE_NEEDED, + TOI_NO_MULTITHREADED_IO, + TOI_NO_DIRECT_LOAD, + TOI_LATE_CPU_HOTPLUG, + TOI_GET_MAX_MEM_ALLOCD +}; + +#define clear_action_state(bit) (test_and_clear_bit(bit, &toi_bkd.toi_action)) +#define test_action_state(bit) (test_bit(bit, &toi_bkd.toi_action)) + +/* == Result states == */ + +enum { + TOI_ABORTED, + TOI_ABORT_REQUESTED, + TOI_NOSTORAGE_AVAILABLE, + TOI_INSUFFICIENT_STORAGE, + TOI_FREEZING_FAILED, + TOI_KEPT_IMAGE, + TOI_WOULD_EAT_MEMORY, + TOI_UNABLE_TO_FREE_ENOUGH_MEMORY, + TOI_PM_SEM, + TOI_DEVICE_REFUSED, + TOI_EXTRA_PAGES_ALLOW_TOO_SMALL, + TOI_UNABLE_TO_PREPARE_IMAGE, + TOI_FAILED_MODULE_INIT, + TOI_FAILED_MODULE_CLEANUP, + TOI_FAILED_IO, + TOI_OUT_OF_MEMORY, + TOI_IMAGE_ERROR, + TOI_PLATFORM_PREP_FAILED, + TOI_CPU_HOTPLUG_FAILED, + TOI_ARCH_PREPARE_FAILED, + TOI_RESAVE_NEEDED, + TOI_CANT_SUSPEND, + TOI_NOTIFIERS_PREPARE_FAILED, + TOI_PRE_SNAPSHOT_FAILED, + TOI_PRE_RESTORE_FAILED, +}; + +extern unsigned long toi_result; + +#define set_result_state(bit) (test_and_set_bit(bit, &toi_result)) +#define set_abort_result(bit) (test_and_set_bit(TOI_ABORTED, &toi_result), \ + test_and_set_bit(bit, &toi_result)) +#define clear_result_state(bit) (test_and_clear_bit(bit, &toi_result)) +#define test_result_state(bit) (test_bit(bit, &toi_result)) + +/* == Debug sections and levels == */ + +/* debugging levels.
*/ +enum { + TOI_STATUS = 0, + TOI_ERROR = 2, + TOI_LOW, + TOI_MEDIUM, + TOI_HIGH, + TOI_VERBOSE, +}; + +enum { + TOI_ANY_SECTION, + TOI_EAT_MEMORY, + TOI_IO, + TOI_HEADER, + TOI_WRITER, + TOI_MEMORY, +}; + +#define set_debug_state(bit) (test_and_set_bit(bit, &toi_bkd.toi_debug_state)) +#define clear_debug_state(bit) (test_and_clear_bit(bit, &toi_bkd.toi_debug_state)) +#define test_debug_state(bit) (test_bit(bit, &toi_bkd.toi_debug_state)) + +/* == Steps in hibernating == */ + +enum { + STEP_HIBERNATE_PREPARE_IMAGE, + STEP_HIBERNATE_SAVE_IMAGE, + STEP_HIBERNATE_POWERDOWN, + STEP_RESUME_CAN_RESUME, + STEP_RESUME_LOAD_PS1, + STEP_RESUME_DO_RESTORE, + STEP_RESUME_READ_PS2, + STEP_RESUME_GO, + STEP_RESUME_ALT_IMAGE, + STEP_CLEANUP, + STEP_QUIET_CLEANUP +}; + +/* == TuxOnIce states == + (see also include/linux/suspend.h) */ + +#define get_toi_state() (toi_state) +#define restore_toi_state(saved_state) \ + do { toi_state = saved_state; } while (0) + +/* == Module support == */ + +struct toi_core_fns { + int (*post_context_save)(void); + unsigned long (*get_nonconflicting_page)(void); + int (*try_hibernate)(int have_pmsem); + void (*try_resume)(void); +}; + +extern struct toi_core_fns *toi_core_fns; + +/* == All else == */ +#define KB(x) ((x) << (PAGE_SHIFT - 10)) +#define MB(x) ((x) >> (20 - PAGE_SHIFT)) + +extern int toi_start_anything(int toi_or_resume); +extern void toi_finish_anything(int toi_or_resume); + +extern int save_image_part1(void); +extern int toi_atomic_restore(void); + +extern int _toi_try_hibernate(int have_pmsem); +extern void __toi_try_resume(void); + +extern int __toi_post_context_save(void); + +extern unsigned int nr_hibernates; +extern char alt_resume_param[256]; + +extern void copyback_post(void); +extern int toi_hibernate(void); +extern int extra_pd1_pages_used; + +#define SECTOR_SIZE 512 + +extern void toi_early_boot_message(int can_erase_image, int default_answer, + char *warning_reason, ...); + +static inline int load_direct(struct page *page) +{ + return test_action_state(TOI_NO_DIRECT_LOAD) ? 0 : + PagePageset1Copy(page); +} + +extern int pre_resume_freeze(void); +extern int do_check_can_resume(void); +extern int do_toi_step(int step); +extern int toi_launch_userspace_program(char *command, int channel_no, + enum umh_wait wait); +#endif diff --git a/kernel/power/tuxonice_alloc.c b/kernel/power/tuxonice_alloc.c new file mode 100644 index 0000000..5616de4 --- /dev/null +++ b/kernel/power/tuxonice_alloc.c @@ -0,0 +1,267 @@ +/* + * kernel/power/tuxonice_alloc.c + * + * Copyright (C) 2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. 
+ * + */ + +#ifdef CONFIG_PM_DEBUG +#include +#include +#include "tuxonice_modules.h" +#include "tuxonice_sysfs.h" + +#define TOI_ALLOC_PATHS 38 + +DEFINE_MUTEX(toi_alloc_mutex); + +static int toi_fail_num; +static atomic_t toi_alloc_count[TOI_ALLOC_PATHS], + toi_free_count[TOI_ALLOC_PATHS], + toi_test_count[TOI_ALLOC_PATHS], + toi_fail_count[TOI_ALLOC_PATHS]; +int toi_cur_allocd[TOI_ALLOC_PATHS], toi_max_allocd[TOI_ALLOC_PATHS]; +int cur_allocd, max_allocd; + +static char *toi_alloc_desc[TOI_ALLOC_PATHS] = { + "", /* 0 */ + "get_io_info_struct", + "extent", + "extent (loading chain)", + "userui channel", + "userui arg", /* 5 */ + "attention list metadata", + "extra pagedir memory metadata", + "bdev metadata", + "extra pagedir memory", + "header_locations_read", /* 10 */ + "bio queue", + "prepare_readahead", + "i/o buffer", + "writer buffer in bio_init", + "checksum buffer", /* 15 */ + "compression buffer", + "filewriter signature op", + "set resume param alloc1", + "set resume param alloc2", + "debugging info buffer", /* 20 */ + "check can resume buffer", + "write module config buffer", + "read module config buffer", + "write image header buffer", + "read pageset1 buffer", /* 25 */ + "get_have_image_data buffer", + "checksum page", + "worker rw loop", + "get nonconflicting page", + "ps1 load addresses", /* 30 */ + "remove swap image", + "swap image exists", + "swap parse sig location", + "sysfs kobj", + "swap mark resume attempted buffer", /* 35 */ + "cluster member", + "boot kernel data buffer" +}; + +#define MIGHT_FAIL(FAIL_NUM, FAIL_VAL) \ + do { \ + BUG_ON(FAIL_NUM >= TOI_ALLOC_PATHS); \ + \ + if (FAIL_NUM == toi_fail_num) { \ + atomic_inc(&toi_test_count[FAIL_NUM]); \ + toi_fail_num = 0; \ + return FAIL_VAL; \ + } \ + } while (0) + +static void alloc_update_stats(int fail_num, void *result) +{ + if (!result) { + atomic_inc(&toi_fail_count[fail_num]); + return; + } + + atomic_inc(&toi_alloc_count[fail_num]); + if (unlikely(test_action_state(TOI_GET_MAX_MEM_ALLOCD))) { + mutex_lock(&toi_alloc_mutex); + toi_cur_allocd[fail_num]++; + cur_allocd++; + if (unlikely(cur_allocd > max_allocd)) { + int i; + + for (i = 0; i < TOI_ALLOC_PATHS; i++) + toi_max_allocd[i] = toi_cur_allocd[i]; + max_allocd = cur_allocd; + } + mutex_unlock(&toi_alloc_mutex); + } +} + +static void free_update_stats(int fail_num) +{ + atomic_inc(&toi_free_count[fail_num]); + if (unlikely(test_action_state(TOI_GET_MAX_MEM_ALLOCD))) { + mutex_lock(&toi_alloc_mutex); + cur_allocd--; + toi_cur_allocd[fail_num]--; + mutex_unlock(&toi_alloc_mutex); + } +} + +void *toi_kzalloc(int fail_num, size_t size, gfp_t flags) +{ + void *result; + + MIGHT_FAIL(fail_num, NULL); + result = kzalloc(size, flags); + alloc_update_stats(fail_num, result); + return result; +} + +unsigned long toi_get_free_pages(int fail_num, gfp_t mask, + unsigned int order) +{ + unsigned long result; + + MIGHT_FAIL(fail_num, 0); + result = __get_free_pages(mask, order); + alloc_update_stats(fail_num, (void *) result); + return result; +} + +struct page *toi_alloc_page(int fail_num, gfp_t mask) +{ + struct page *result; + + MIGHT_FAIL(fail_num, 0); + result = alloc_page(mask); + alloc_update_stats(fail_num, (void *) result); + return result; +} + +unsigned long toi_get_zeroed_page(int fail_num, gfp_t mask) +{ + unsigned long result; + + MIGHT_FAIL(fail_num, 0); + result = get_zeroed_page(mask); + alloc_update_stats(fail_num, (void *) result); + return result; +} + +void toi_kfree(int fail_num, const void *arg) +{ + if (arg) + free_update_stats(fail_num); + + 
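+	/* kfree(NULL) is a no-op, so stats are only updated for real frees. */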
kfree(arg); +} + +void toi_free_page(int fail_num, unsigned long virt) +{ + if (virt) + free_update_stats(fail_num); + + free_page(virt); +} + +void toi__free_page(int fail_num, struct page *page) +{ + if (page) + free_update_stats(fail_num); + + __free_page(page); +} + +void toi_free_pages(int fail_num, struct page *page, int order) +{ + if (page) + free_update_stats(fail_num); + + __free_pages(page, order); +} + +void toi_alloc_print_debug_stats(void) +{ + int i; + + printk(KERN_INFO "Idx Allocs Frees Tests Fails Max " + "Description\n"); + + for (i = 0; i < TOI_ALLOC_PATHS; i++) + if (atomic_read(&toi_alloc_count[i]) || + atomic_read(&toi_free_count[i])) + printk(KERN_INFO "%3d %7d %7d %7d %7d %7d %s\n", i, + atomic_read(&toi_alloc_count[i]), + atomic_read(&toi_free_count[i]), + atomic_read(&toi_test_count[i]), + atomic_read(&toi_fail_count[i]), + toi_max_allocd[i], + toi_alloc_desc[i]); +} +EXPORT_SYMBOL_GPL(toi_alloc_print_debug_stats); + +static int toi_alloc_initialise(int starting_cycle) +{ + int i; + + if (starting_cycle) { + for (i = 0; i < TOI_ALLOC_PATHS; i++) { + atomic_set(&toi_alloc_count[i], 0); + atomic_set(&toi_free_count[i], 0); + atomic_set(&toi_test_count[i], 0); + atomic_set(&toi_fail_count[i], 0); + toi_cur_allocd[i] = 0; + toi_max_allocd[i] = 0; + }; + max_allocd = 0; + cur_allocd = 0; + } + + return 0; +} + +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("failure_test", SYSFS_RW), + SYSFS_INT(&toi_fail_num, 0, 99, 0) + }, + + { TOI_ATTR("find_max_mem_allocated", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_GET_MAX_MEM_ALLOCD, 0) + } +}; + +static struct toi_module_ops toi_alloc_ops = { + .type = MISC_HIDDEN_MODULE, + .name = "allocation debugging", + .directory = "alloc", + .module = THIS_MODULE, + .early = 1, + .initialise = toi_alloc_initialise, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +int toi_alloc_init(void) +{ + return toi_register_module(&toi_alloc_ops); +} + +void toi_alloc_exit(void) +{ + toi_unregister_module(&toi_alloc_ops); +} +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(toi_kzalloc); +EXPORT_SYMBOL_GPL(toi_get_free_pages); +EXPORT_SYMBOL_GPL(toi_get_zeroed_page); +EXPORT_SYMBOL_GPL(toi_kfree); +EXPORT_SYMBOL_GPL(toi_free_page); +EXPORT_SYMBOL_GPL(toi__free_page); +#endif +#endif diff --git a/kernel/power/tuxonice_alloc.h b/kernel/power/tuxonice_alloc.h new file mode 100644 index 0000000..146c2bd --- /dev/null +++ b/kernel/power/tuxonice_alloc.h @@ -0,0 +1,51 @@ +/* + * kernel/power/tuxonice_alloc.h + * + * Copyright (C) 2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. 
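+ *
+ * Declarations for the allocation-debugging wrappers in tuxonice_alloc.c.
+ * When CONFIG_PM_DEBUG is unset, the toi_* allocation calls below compile
+ * straight down to the plain kernel allocators.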
+ * + */ + +#define TOI_WAIT_GFP (GFP_KERNEL | __GFP_NOWARN) +#define TOI_ATOMIC_GFP (GFP_ATOMIC | __GFP_NOWARN) + +#ifdef CONFIG_PM_DEBUG +extern void *toi_kzalloc(int fail_num, size_t size, gfp_t flags); +extern void toi_kfree(int fail_num, const void *arg); + +extern unsigned long toi_get_free_pages(int fail_num, gfp_t mask, + unsigned int order); +#define toi_get_free_page(FAIL_NUM, MASK) toi_get_free_pages(FAIL_NUM, MASK, 0) +extern unsigned long toi_get_zeroed_page(int fail_num, gfp_t mask); +extern void toi_free_page(int fail_num, unsigned long buf); +extern void toi__free_page(int fail_num, struct page *page); +extern void toi_free_pages(int fail_num, struct page *page, int order); +extern struct page *toi_alloc_page(int fail_num, gfp_t mask); +extern int toi_alloc_init(void); +extern void toi_alloc_exit(void); + +extern void toi_alloc_print_debug_stats(void); + +#else /* CONFIG_PM_DEBUG */ + +#define toi_kzalloc(FAIL, SIZE, FLAGS) (kzalloc(SIZE, FLAGS)) +#define toi_kfree(FAIL, ALLOCN) (kfree(ALLOCN)) + +#define toi_get_free_pages(FAIL, FLAGS, ORDER) __get_free_pages(FLAGS, ORDER) +#define toi_get_free_page(FAIL, FLAGS) __get_free_page(FLAGS) +#define toi_get_zeroed_page(FAIL, FLAGS) get_zeroed_page(FLAGS) +#define toi_free_page(FAIL, ALLOCN) do { free_page(ALLOCN); } while (0) +#define toi__free_page(FAIL, PAGE) __free_page(PAGE) +#define toi_free_pages(FAIL, PAGE, ORDER) __free_pages(PAGE, ORDER) +#define toi_alloc_page(FAIL, MASK) alloc_page(MASK) +static inline int toi_alloc_init(void) +{ + return 0; +} + +static inline void toi_alloc_exit(void) { } + +static inline void toi_alloc_print_debug_stats(void) { } + +#endif diff --git a/kernel/power/tuxonice_atomic_copy.c b/kernel/power/tuxonice_atomic_copy.c new file mode 100644 index 0000000..8a9562c --- /dev/null +++ b/kernel/power/tuxonice_atomic_copy.c @@ -0,0 +1,382 @@ +/* + * kernel/power/tuxonice_atomic_copy.c + * + * Copyright 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * Copyright (C) 2006 Red Hat, inc. + * + * Distributed under GPLv2. + * + * Routines for doing the atomic save/restore. + */ + +#include +#include +#include +#include +#include +#include "tuxonice.h" +#include "tuxonice_storage.h" +#include "tuxonice_power_off.h" +#include "tuxonice_ui.h" +#include "power.h" +#include "tuxonice_io.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice_pageflags.h" +#include "tuxonice_checksum.h" +#include "tuxonice_builtin.h" +#include "tuxonice_atomic_copy.h" +#include "tuxonice_alloc.h" + +int extra_pd1_pages_used; + +/** + * free_pbe_list: Free page backup entries used by the atomic copy code. + * + * Normally, this function isn't used. If, however, we need to abort before + * doing the atomic copy, we use this to free the pbes previously allocated. 
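+ *
+ * @list: Head of the pbe list to free.
+ * @highmem: Whether the pbes (and the pages they point to) are in highmem,
+ * in which case they must be kmapped before use.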
+ **/ +static void free_pbe_list(struct pbe **list, int highmem) +{ + while (*list) { + int i; + struct pbe *free_pbe, *next_page = NULL; + struct page *page; + + if (highmem) { + page = (struct page *) *list; + free_pbe = (struct pbe *) kmap(page); + } else { + page = virt_to_page(*list); + free_pbe = *list; + } + + for (i = 0; i < PBES_PER_PAGE; i++) { + if (!free_pbe) + break; + if (highmem) + toi__free_page(29, free_pbe->address); + else + toi_free_page(29, + (unsigned long) free_pbe->address); + free_pbe = free_pbe->next; + } + + if (highmem) { + if (free_pbe) + next_page = free_pbe; + kunmap(page); + } else { + if (free_pbe) + next_page = free_pbe; + } + + toi__free_page(29, page); + *list = (struct pbe *) next_page; + }; +} + +/** + * copyback_post: Post atomic-restore actions. + * + * After doing the atomic restore, we have a few more things to do: + * 1) We want to retain some values across the restore, so we now copy + * these from the nosave variables to the normal ones. + * 2) Set the status flags. + * 3) Resume devices. + * 4) Tell userui so it can redraw & restore settings. + * 5) Reread the page cache. + **/ + +void copyback_post(void) +{ + struct toi_boot_kernel_data *bkd = + (struct toi_boot_kernel_data *) boot_kernel_data_buffer; + + /* + * The boot kernel's data may be larger (newer version) or + * smaller (older version) than ours. Copy the minimum + * of the two sizes, so that we don't overwrite valid values + * from pre-atomic copy. + */ + + memcpy(&toi_bkd, (char *) boot_kernel_data_buffer, + min_t(int, sizeof(struct toi_boot_kernel_data), + bkd->size)); + + if (toi_activate_storage(1)) + panic("Failed to reactivate our storage."); + + toi_ui_post_atomic_restore(); + + toi_cond_pause(1, "About to reload secondary pagedir."); + + if (read_pageset2(0)) + panic("Unable to successfully reread the page cache."); + + /* + * If the user wants to sleep again after resuming from full-off, + * it's most likely to be in order to suspend to ram, so we'll + * do this check after loading pageset2, to give them the fastest + * wakeup when they are ready to use the computer again. + */ + toi_check_resleep(); +} + +/** + * toi_copy_pageset1: Do the atomic copy of pageset1. + * + * Make the atomic copy of pageset1. We can't use copy_page (as we once did) + * because we can't be sure what side effects it has. On my old Duron, with + * 3DNOW, kernel_fpu_begin increments preempt count, making our preempt + * count at resume time 4 instead of 3. + * + * We don't want to call kmap_atomic unconditionally because it has the side + * effect of incrementing the preempt count, which will leave it one too high + * post resume (the page containing the preempt count will be copied after + * its incremented. This is essentially the same problem. + **/ + +void toi_copy_pageset1(void) +{ + int i; + unsigned long source_index, dest_index; + + source_index = get_next_bit_on(&pageset1_map, max_pfn + 1); + dest_index = get_next_bit_on(&pageset1_copy_map, max_pfn + 1); + + for (i = 0; i < pagedir1.size; i++) { + unsigned long *origvirt, *copyvirt; + struct page *origpage, *copypage; + int loop = (PAGE_SIZE / sizeof(unsigned long)) - 1; + + origpage = pfn_to_page(source_index); + copypage = pfn_to_page(dest_index); + + origvirt = PageHighMem(origpage) ? + kmap_atomic(origpage, KM_USER0) : + page_address(origpage); + + copyvirt = PageHighMem(copypage) ? 
+ kmap_atomic(copypage, KM_USER1) : + page_address(copypage); + + while (loop >= 0) { + *(copyvirt + loop) = *(origvirt + loop); + loop--; + } + + if (PageHighMem(origpage)) + kunmap_atomic(origvirt, KM_USER0); + else if (toi_faulted) { + printk(KERN_INFO "%p (%lu) being unmapped after " + "faulting during atomic copy.\n", origpage, + source_index); + kernel_map_pages(origpage, 1, 0); + clear_toi_fault(); + } + + if (PageHighMem(copypage)) + kunmap_atomic(copyvirt, KM_USER1); + + source_index = get_next_bit_on(&pageset1_map, source_index); + dest_index = get_next_bit_on(&pageset1_copy_map, dest_index); + } +} + +/** + * __toi_post_context_save: Steps after saving the cpu context. + * + * Steps taken after saving the CPU state to make the actual + * atomic copy. + * + * Called from swsusp_save in snapshot.c via toi_post_context_save. + **/ + +int __toi_post_context_save(void) +{ + int old_ps1_size = pagedir1.size; + + check_checksums(); + + free_checksum_pages(); + + toi_recalculate_image_contents(1); + + extra_pd1_pages_used = pagedir1.size - old_ps1_size; + + if (extra_pd1_pages_used > extra_pd1_pages_allowance) { + printk(KERN_INFO "Pageset1 has grown by %d pages. " + "extra_pages_allowance is currently only %d.\n", + pagedir1.size - old_ps1_size, + extra_pd1_pages_allowance); + set_abort_result(TOI_EXTRA_PAGES_ALLOW_TOO_SMALL); + return -1; + } + + if (!test_action_state(TOI_TEST_FILTER_SPEED) && + !test_action_state(TOI_TEST_BIO)) + toi_copy_pageset1(); + + return 0; +} + +/** + * toi_hibernate: High level code for doing the atomic copy. + * + * High-level code which prepares to do the atomic copy. Loosely based + * on the swsusp version, but with the following twists: + * - We set toi_running so the swsusp code uses our code paths. + * - We give better feedback regarding what goes wrong if there is a problem. + * - We use an extra function to call the assembly, just in case this code + * is in a module (return address). + **/ + +int toi_hibernate(void) +{ + int error; + + toi_running = 1; /* For the swsusp code we use :< */ + + error = toi_lowlevel_builtin(); + + toi_running = 0; + return error; +} + +/** + * toi_atomic_restore: Prepare to do the atomic restore. + * + * Get ready to do the atomic restore. This part gets us into the same + * state we are in prior to do calling do_toi_lowlevel while + * hibernating: hot-unplugging secondary cpus and freeze processes, + * before starting the thread that will do the restore. + **/ + +int toi_atomic_restore(void) +{ + int error; + + toi_running = 1; + + toi_prepare_status(DONT_CLEAR_BAR, "Atomic restore."); + + if (add_boot_kernel_data_pbe()) + goto Failed; + + if (toi_go_atomic(PMSG_PRETHAW, 0)) + goto Failed; + + /* We'll ignore saved state, but this gets preempt count (etc) right */ + save_processor_state(); + + error = swsusp_arch_resume(); + /* + * Code below is only ever reached in case of failure. Otherwise + * execution continues at place where swsusp_arch_suspend was called. + * + * We don't know whether it's safe to continue (this shouldn't happen), + * so lets err on the side of caution. 
+ */ + BUG(); + +Failed: + free_pbe_list(&restore_pblist, 0); +#ifdef CONFIG_HIGHMEM + free_pbe_list(&restore_highmem_pblist, 1); +#endif + if (test_action_state(TOI_PM_PREPARE_CONSOLE)) + pm_restore_console(); + toi_running = 0; + return 1; +} + +int toi_go_atomic(pm_message_t state, int suspend_time) +{ + toi_prepare_status(DONT_CLEAR_BAR, "Going atomic."); + + if (test_action_state(TOI_PM_PREPARE_CONSOLE)) + pm_prepare_console(); + + if (suspend_time && toi_platform_start()) { + set_abort_result(TOI_PLATFORM_PREP_FAILED); + toi_end_atomic(ATOMIC_STEP_RESTORE_CONSOLE, suspend_time); + return 1; + } + + suspend_console(); + + if (device_suspend(state)) { + set_abort_result(TOI_DEVICE_REFUSED); + toi_end_atomic(ATOMIC_STEP_RESUME_CONSOLE, suspend_time); + return 1; + } + + if (suspend_time && toi_platform_pre_snapshot()) { + set_abort_result(TOI_PRE_SNAPSHOT_FAILED); + toi_end_atomic(ATOMIC_STEP_RESUME_CONSOLE, suspend_time); + return 1; + } + + if (!suspend_time && toi_platform_pre_restore()) { + set_abort_result(TOI_PRE_RESTORE_FAILED); + toi_end_atomic(ATOMIC_STEP_RESUME_CONSOLE, suspend_time); + return 1; + } + + if (test_action_state(TOI_LATE_CPU_HOTPLUG)) { + if (disable_nonboot_cpus()) { + set_abort_result(TOI_CPU_HOTPLUG_FAILED); + toi_end_atomic(ATOMIC_STEP_DEVICE_RESUME, + suspend_time); + return 1; + } + } + + if (suspend_time && arch_prepare_suspend()) { + set_abort_result(TOI_ARCH_PREPARE_FAILED); + toi_end_atomic(ATOMIC_STEP_CPU_HOTPLUG, suspend_time); + return 1; + } + + local_irq_disable(); + + /* At this point, device_suspend() has been called, but *not* + * device_power_down(). We *must* device_power_down() now. + * Otherwise, drivers for some devices (e.g. interrupt controllers) + * become desynchronized with the actual state of the hardware + * at resume time, and evil weirdness ensues. + */ + + if (device_power_down(state)) { + set_abort_result(TOI_DEVICE_REFUSED); + toi_end_atomic(ATOMIC_STEP_IRQS, suspend_time); + return 1; + } + + return 0; +} + +void toi_end_atomic(int stage, int suspend_time) +{ + switch (stage) { + case ATOMIC_ALL_STEPS: + if (!suspend_time) + toi_platform_leave(); + device_power_up(); + case ATOMIC_STEP_IRQS: + local_irq_enable(); + case ATOMIC_STEP_CPU_HOTPLUG: + if (test_action_state(TOI_LATE_CPU_HOTPLUG)) + enable_nonboot_cpus(); + case ATOMIC_STEP_DEVICE_RESUME: + toi_platform_finish(); + device_resume(); + case ATOMIC_STEP_RESUME_CONSOLE: + resume_console(); + case ATOMIC_STEP_RESTORE_CONSOLE: + if (test_action_state(TOI_PM_PREPARE_CONSOLE)) + pm_restore_console(); + + toi_prepare_status(DONT_CLEAR_BAR, "Post atomic."); + } +} diff --git a/kernel/power/tuxonice_atomic_copy.h b/kernel/power/tuxonice_atomic_copy.h new file mode 100644 index 0000000..8d26f8e --- /dev/null +++ b/kernel/power/tuxonice_atomic_copy.h @@ -0,0 +1,21 @@ +/* + * kernel/power/tuxonice_atomic_copy.h + * + * Copyright 2007 Nigel Cunningham (nigel at tuxonice net) + * + * Distributed under GPLv2. + * + * Routines for doing the atomic save/restore. 
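+ *
+ * The ATOMIC_STEP_* values below name the unwind stages used by
+ * toi_end_atomic(); its switch statement deliberately falls through from
+ * the stage given to every later one. A caller therefore pairs the two
+ * along these lines (a sketch; the hibernate-side message argument is
+ * assumed):
+ *
+ *	if (toi_go_atomic(PMSG_FREEZE, 1))
+ *		return 1;
+ *	...
+ *	toi_end_atomic(ATOMIC_ALL_STEPS, 1);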
+ */ + +enum { + ATOMIC_ALL_STEPS, + ATOMIC_STEP_IRQS, + ATOMIC_STEP_CPU_HOTPLUG, + ATOMIC_STEP_DEVICE_RESUME, + ATOMIC_STEP_RESUME_CONSOLE, + ATOMIC_STEP_RESTORE_CONSOLE +}; + +int toi_go_atomic(pm_message_t state, int toi_time); +void toi_end_atomic(int stage, int toi_time); diff --git a/kernel/power/tuxonice_block_io.c b/kernel/power/tuxonice_block_io.c new file mode 100644 index 0000000..cf7d1fe --- /dev/null +++ b/kernel/power/tuxonice_block_io.c @@ -0,0 +1,1238 @@ +/* + * kernel/power/tuxonice_block_io.c + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * Distributed under GPLv2. + * + * This file contains block io functions for TuxOnIce. These are + * used by the swapwriter and it is planned that they will also + * be used by the NFSwriter. + * + */ + +#include +#include +#include + +#include "tuxonice.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice_block_io.h" +#include "tuxonice_ui.h" +#include "tuxonice_alloc.h" + +static int pr_index; + +#if 0 +#define PR_DEBUG(a, b...) do { \ + if (pr_index < 20) \ + printk(a, ##b); \ +} while (0) +#else +#define PR_DEBUG(a, b...) do { } while (0) +#endif + +#define TARGET_OUTSTANDING_IO 16384 +#define MAX_READAHEAD 2048 +#define CLEANUP_BATCH_SIZE 16 + +static int target_outstanding_io = 2048; +static atomic_t current_outstanding_io; +static int max_outstanding_io; +static int max_readahead = 2048; + +struct io_info { + struct bio *sys_struct; + sector_t first_block; + struct page *bio_page, *dest_page; + int writing, readahead_index, cleaned; + struct block_device *dev; + struct list_head list; +}; + +static struct page *bio_queue_head, *bio_queue_tail; +static DEFINE_SPINLOCK(bio_queue_lock); +static atomic_t toi_io_queue_length; +static int toi_io_max_queue_length; +static int queue_trigger = 25; +static int free_mem_throttle; + +static LIST_HEAD(ioinfo_ready_for_cleanup); +static DEFINE_SPINLOCK(ioinfo_ready_lock); + +static LIST_HEAD(ioinfo_busy); +static DEFINE_SPINLOCK(ioinfo_busy_lock); + +static struct page *waiting_on; + +static atomic_t toi_io_in_progress; +static atomic_t toi_io_to_cleanup; +static DECLARE_WAIT_QUEUE_HEAD(num_in_progress_wait); + +static int extra_page_forward; + +static unsigned long toi_readahead_flags[ + DIV_ROUND_UP(MAX_READAHEAD, BITS_PER_LONG)]; +static DEFINE_SPINLOCK(toi_readahead_flags_lock); +static struct page *toi_ra_pages[MAX_READAHEAD]; +static int readahead_index, ra_submit_index; + +static int current_stream; +/* 0 = Header, 1 = Pageset1, 2 = Pageset2 */ +struct extent_iterate_saved_state toi_writer_posn_save[3]; + +/* Pointer to current entry being loaded/saved. */ +struct extent_iterate_state toi_writer_posn; + +/* Not static, so that the allocators can setup and complete + * writing the header */ +char *toi_writer_buffer; +int toi_writer_buffer_posn; + +static struct toi_bdev_info *toi_devinfo; + +DEFINE_MUTEX(toi_bio_queue_mutex); +DEFINE_MUTEX(toi_bio_mutex); + +/** + * set_throttle: Set the point where we pause to avoid oom. + * + * Initially, this value is zero, but when we first fail to allocate memory, + * we set it (plus a buffer) and thereafter throttle i/o once that limit is + * reached. + */ + +static void set_throttle(void) +{ + free_mem_throttle = nr_unallocated_buffer_pages() + 50; +} + +/** + * toi_bio_cleanup_one: Cleanup one bio. + * @io_info : Struct io_info to be cleaned up. + * + * Cleanup the bio pointed to by io_info and record as appropriate that the + * cleanup is done. 
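+ *
+ * For a completed read that was not readahead (readahead_index == -1), the
+ * data is also copied from the bio page to its final destination here; for
+ * readahead, the page is instead marked ready in toi_readahead_flags.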
+ */
+static void toi_bio_cleanup_one(struct io_info *io_info)
+{
+	int readahead_index = io_info->readahead_index;
+	unsigned long flags;
+
+	BUG_ON(io_info->cleaned);
+	io_info->cleaned = 1;
+
+	if (!io_info->writing && readahead_index == -1) {
+		char *to = (char *) kmap(io_info->dest_page);
+		char *from = (char *) kmap(io_info->bio_page);
+		memcpy(to, from, PAGE_SIZE);
+		kunmap(io_info->dest_page);
+		kunmap(io_info->bio_page);
+	}
+
+	put_page(io_info->bio_page);
+	if (io_info->writing || readahead_index == -1)
+		toi__free_page(13, io_info->bio_page);
+
+	bio_put(io_info->sys_struct);
+
+	if (readahead_index > -1) {
+		int index = readahead_index / BITS_PER_LONG;
+		int bit = readahead_index - (index * BITS_PER_LONG);
+		spin_lock_irqsave(&toi_readahead_flags_lock, flags);
+		set_bit(bit, &toi_readahead_flags[index]);
+		spin_unlock_irqrestore(&toi_readahead_flags_lock, flags);
+
+		/* Ensure we don't try to clean this up twice */
+		toi_ra_pages[readahead_index]->private = 0;
+	}
+
+	toi_kfree(1, io_info);
+	atomic_dec(&toi_io_to_cleanup);
+	atomic_dec(&current_outstanding_io);
+}
+
+/**
+ * toi_cleanup_completed_io: Cleanup completed TuxOnIce i/o.
+ *
+ * Cleanup i/o that has been completed. In the end_bio routine (below), we
+ * only move the associated io_info struct from the busy list to the
+ * ready_for_cleanup list. Now (no longer in an interrupt context), we can
+ * do the real work.
+ *
+ * No locking is needed because we're under toi_bio_mutex. List items can be
+ * added from the bio_end routine, but we're the only one removing them.
+ */
+static void toi_cleanup_completed_io(int all)
+{
+	int num_cleaned = 0;
+	struct io_info *this;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioinfo_ready_lock, flags);
+	while (!list_empty(&ioinfo_ready_for_cleanup)) {
+		this = list_first_entry(&ioinfo_ready_for_cleanup,
+				struct io_info, list);
+		list_del_init(&this->list);
+
+		if (waiting_on == this->bio_page)
+			waiting_on = NULL;
+
+		spin_unlock_irqrestore(&ioinfo_ready_lock, flags);
+		toi_bio_cleanup_one(this);
+		spin_lock_irqsave(&ioinfo_ready_lock, flags);
+
+		num_cleaned++;
+		if (!all && num_cleaned == CLEANUP_BATCH_SIZE)
+			break;
+	}
+	spin_unlock_irqrestore(&ioinfo_ready_lock, flags);
+}
+
+#define NUM_REASONS 9
+static atomic_t reasons[NUM_REASONS];
+static char *reason_name[NUM_REASONS] = {
+	"readahead not ready",
+	"bio allocation",
+	"io_struct allocation",
+	"submit buffer",
+	"synchronous I/O",
+	"bio mutex when reading",
+	"bio mutex when writing",
+	"toi_bio_queue_page_write",
+	"memory low"
+};
+
+/**
+ * do_bio_wait: Wait for some TuxOnIce i/o to complete.
+ *
+ * Submit any I/O that's batched up (if we're not already doing that),
+ * schedule, and clean up whatever we can.
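+ *
+ * @reason: Index into the reasons[] array above, recording why we waited.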
+ */ +static void do_bio_wait(int reason) +{ + unsigned long flags; + struct io_info *mine = NULL; + struct page *was_waiting_on = waiting_on; + + /* On SMP, waiting_on can be reset, so we make a copy */ + if (was_waiting_on) { + if (PageLocked(was_waiting_on)) { + wait_on_page_bit(was_waiting_on, PG_locked); + atomic_inc(&reasons[reason]); + } + spin_lock_irqsave(&ioinfo_ready_lock, flags); + if (waiting_on) { + mine = (struct io_info *) waiting_on->private; + list_del_init(&mine->list); + waiting_on = NULL; + } + spin_unlock_irqrestore(&ioinfo_ready_lock, flags); + if (mine) + toi_bio_cleanup_one(mine); + } else { + atomic_inc(&reasons[reason]); + + /* Wait for something to cleanup */ + wait_event(num_in_progress_wait, + atomic_read(&toi_io_to_cleanup)); + toi_cleanup_completed_io(0); + } +} + +/** + * toi_finish_all_io: Complete all outstanding i/o. + */ +static void toi_finish_all_io(void) +{ + wait_event(num_in_progress_wait, !atomic_read(&toi_io_in_progress)); + toi_cleanup_completed_io(1); + BUG_ON(atomic_read(&toi_io_to_cleanup)); +} + +/** + * toi_readahead_ready: Is this readahead finished? + * + * Returns whether the readahead requested is ready. + */ +static int toi_readahead_ready(int readahead_index) +{ + int index = readahead_index / BITS_PER_LONG; + int bit = readahead_index - (index * BITS_PER_LONG); + + return test_bit(bit, &toi_readahead_flags[index]); +} + +/** + * toi_wait_on_readahead: Wait on a particular page. + * + * @readahead_index: Index of the readahead to wait for. + */ +static void toi_wait_on_readahead(int readahead_index) +{ + if (!toi_readahead_ready(readahead_index)) { + waiting_on = toi_ra_pages[readahead_index]; + do_bio_wait(0); + } +} + +static int toi_prepare_readahead(int index) +{ + unsigned long new_page; + + if (toi_ra_pages[index]) + return 0; + + new_page = toi_get_zeroed_page(12, TOI_ATOMIC_GFP); + + if (!new_page) + return -ENOMEM; + + toi_ra_pages[index] = virt_to_page(new_page); + return 0; +} + +/* toi_readahead_cleanup + * Clean up structures used for readahead */ +static void toi_cleanup_readahead(int page) +{ + if (toi_ra_pages[page]) { + toi__free_page(12, toi_ra_pages[page]); + toi_ra_pages[page] = 0; + } +} + +/** + * toi_end_bio: bio completion function. + * + * @bio: bio that has completed. + * @err: Error value. Yes, like end_swap_bio_read, we ignore it. + * + * Function called by block driver from interrupt context when I/O is completed. + * This is the reason we use spinlocks in manipulating the io_info lists. Nearly + * the fs/buffer.c version, but we want to mark the page as done in our own + * structures too. + */ +static void toi_end_bio(struct bio *bio, int err) +{ + struct page *page = bio->bi_io_vec[0].bv_page; + struct io_info *io_info = bio->bi_private; + unsigned long flags; + + BUG_ON(!test_bit(BIO_UPTODATE, &bio->bi_flags)); + + spin_lock_irqsave(&ioinfo_busy_lock, flags); + list_del_init(&io_info->list); + spin_unlock_irqrestore(&ioinfo_busy_lock, flags); + + spin_lock_irqsave(&ioinfo_ready_lock, flags); + list_add_tail(&io_info->list, &ioinfo_ready_for_cleanup); + spin_unlock_irqrestore(&ioinfo_ready_lock, flags); + + unlock_page(page); + bio_put(bio); + + atomic_dec(&toi_io_in_progress); + atomic_inc(&toi_io_to_cleanup); + + wake_up(&num_in_progress_wait); +} + +/** + * submit - submit BIO request. + * @writing: READ or WRITE. + * @io_info: IO info structure. + * + * Based on Patrick's pmdisk code from long ago: + * "Straight from the textbook - allocate and initialize the bio. 
+ * If we're writing, make sure the page is marked as dirty.
+ * Then submit it and carry on."
+ *
+ * With a twist, though - we handle block_size != PAGE_SIZE.
+ * Caller has already checked that our page is not fragmented.
+ */
+static int submit(struct io_info *io_info)
+{
+	struct bio *bio = NULL;
+	unsigned long flags;
+
+	while (!bio) {
+		bio = bio_alloc(TOI_ATOMIC_GFP, 1);
+		if (!bio) {
+			set_throttle();
+			do_bio_wait(1);
+		}
+	}
+
+	bio->bi_bdev = io_info->dev;
+	bio->bi_sector = io_info->first_block;
+	bio->bi_private = io_info;
+	bio->bi_end_io = toi_end_bio;
+	io_info->sys_struct = bio;
+
+	if (bio_add_page(bio, io_info->bio_page, PAGE_SIZE, 0) < PAGE_SIZE) {
+		printk(KERN_INFO "ERROR: adding page to bio at %lld\n",
+			(unsigned long long) io_info->first_block);
+		bio_put(bio);
+		return -EFAULT;
+	}
+
+	io_info->bio_page->private = (unsigned long) io_info;
+	lock_page(io_info->bio_page);
+	bio_get(bio);
+
+	spin_lock_irqsave(&ioinfo_busy_lock, flags);
+	list_add_tail(&io_info->list, &ioinfo_busy);
+	spin_unlock_irqrestore(&ioinfo_busy_lock, flags);
+
+	atomic_inc(&toi_io_in_progress);
+
+	if (unlikely(test_action_state(TOI_TEST_FILTER_SPEED))) {
+		/* Fake having done the hard work */
+		set_bit(BIO_UPTODATE, &bio->bi_flags);
+		toi_end_bio(bio, 0);
+	} else
+		submit_bio(io_info->writing | (1 << BIO_RW_SYNC), bio);
+
+	return 0;
+}
+
+/**
+ * get_io_info_struct: Allocate a struct for recording info on i/o submitted.
+ */
+static struct io_info *get_io_info_struct(void)
+{
+	struct io_info *this = NULL;
+	int cur_outstanding_io;
+	int free_pages = nr_unallocated_buffer_pages();
+
+	/* Getting low on memory and I/O is in progress? */
+	while (unlikely(free_pages < free_mem_throttle) &&
+			atomic_read(&current_outstanding_io)) {
+		do_bio_wait(8);
+		free_pages = nr_unallocated_buffer_pages();
+	}
+
+	do {
+		this = toi_kzalloc(1, sizeof(struct io_info), TOI_ATOMIC_GFP);
+
+		if (this)
+			break;
+
+		set_throttle();
+		do_bio_wait(2);
+	} while (!this);
+
+	memset(this, 0, sizeof(struct io_info));
+	INIT_LIST_HEAD(&this->list);
+	cur_outstanding_io = atomic_add_return(1, &current_outstanding_io);
+	if (cur_outstanding_io > max_outstanding_io)
+		max_outstanding_io = cur_outstanding_io;
+	return this;
+}
+
+/**
+ * toi_do_io: Prepare to do some i/o on a page and submit or batch it.
+ *
+ * @writing: Whether reading or writing.
+ * @bdev: The block device which we're using.
+ * @block0: The first sector we're reading or writing.
+ * @page: The page on which I/O is being done.
+ * @readahead_index: If doing readahead, the index (reset this flag when done).
+ * @syncio: Whether the i/o is being done synchronously.
+ *
+ * Prepare and start a read or write operation.
+ *
+ * Note that we always work with our own page. If writing, we might be given a
+ * compression buffer that will immediately be used to start compressing the
+ * next page. For reading, we do readahead and therefore don't know the final
+ * address where the data needs to go.
+ *
+ * Failure? What's that?
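+ *
+ * When writing, the data is first copied into a freshly allocated buffer
+ * page, so the caller may reuse @page (e.g. as the next compression
+ * buffer) as soon as we return.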
+ */ +static void toi_do_io(int writing, struct block_device *bdev, long block0, + struct page *page, int readahead_index, int syncio) +{ + struct io_info *io_info = get_io_info_struct(); + unsigned long buffer_virt = 0; + char *to, *from; + + /* Copy settings to the io_info struct */ + io_info->writing = writing; + io_info->dev = bdev; + io_info->first_block = block0; + io_info->dest_page = page; + io_info->readahead_index = readahead_index; + + if (io_info->readahead_index == -1) { + while (!(buffer_virt = toi_get_zeroed_page(13, TOI_ATOMIC_GFP))) { + set_throttle(); + do_bio_wait(3); + } + + io_info->bio_page = virt_to_page(buffer_virt); + } else { + unsigned long flags; + int index = io_info->readahead_index / BITS_PER_LONG; + int bit = io_info->readahead_index - index * BITS_PER_LONG; + + spin_lock_irqsave(&toi_readahead_flags_lock, flags); + clear_bit(bit, &toi_readahead_flags[index]); + spin_unlock_irqrestore(&toi_readahead_flags_lock, flags); + + io_info->bio_page = page; + } + + /* Done before submitting to avoid races. */ + if (syncio) + waiting_on = io_info->bio_page; + + /* + * If writing, copy our data. The data is probably in lowmem, but we + * cannot be certain. If there is no compression, we might be passed + * the actual source page's address. + */ + if (writing) { + to = (char *) buffer_virt; + from = kmap_atomic(page, KM_USER1); + memcpy(to, from, PAGE_SIZE); + kunmap_atomic(from, KM_USER1); + } + + /* Submit the page */ + get_page(io_info->bio_page); + + submit(io_info); + + if (syncio) + do_bio_wait(4); +} + +/** + * toi_bdev_page_io: Simpler interface to do directly i/o on a single page. + * + * @writing: Whether reading or writing. + * @bdev: Block device on which we're operating. + * @pos: Sector at which page to read starts. + * @page: Page to be read/written. + * + * We used to use bread here, but it doesn't correctly handle + * blocksize != PAGE_SIZE. Now we create a submit_info to get the data we + * want and use our normal routines (synchronously). + */ +static void toi_bdev_page_io(int writing, struct block_device *bdev, + long pos, struct page *page) +{ + toi_do_io(writing, bdev, pos, page, -1, 1); +} + +/** + * toi_bio_memory_needed: Report amount of memory needed for block i/o. + * + * We want to have at least enough memory so as to have target_outstanding_io + * or more transactions on the fly at once. If we can do more, fine. + */ +static int toi_bio_memory_needed(void) +{ + return (max(target_outstanding_io, max_readahead) * + (PAGE_SIZE + sizeof(struct request) + + sizeof(struct bio) + sizeof(struct io_info))); +} + +/* + * toi_bio_print_debug_stats + * + * Description: + */ +static int toi_bio_print_debug_stats(char *buffer, int size) +{ + int len = 0; + + len = snprintf_used(buffer, size, "- Max readahead %d. Max " + "outstanding io %d.\n", max_readahead, + max_outstanding_io); + + len += snprintf_used(buffer + len, size - len, + " Memory_needed: %d x (%lu + %u + %u + %u) = %d bytes.\n", + max(target_outstanding_io, max_readahead), + PAGE_SIZE, (unsigned int) sizeof(struct request), + (unsigned int) sizeof(struct bio), + (unsigned int) sizeof(struct io_info), toi_bio_memory_needed()); + + return len; +} + +/** + * toi_set_devinfo: Set the bdev info used for i/o. + * + * @info: Pointer to array of struct toi_bdev_info - the list of + * bdevs and blocks on them in which the image is stored. + * + * Set the list of bdevs and blocks in which the image will be stored. + * Sort of like putting a tape in the cassette player. 
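+ *
+ * The array is indexed by the extent chain number kept in
+ * toi_writer_posn.current_chain.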
+ */ +static void toi_set_devinfo(struct toi_bdev_info *info) +{ + toi_devinfo = info; +} + +/** + * dump_block_chains: Print the contents of the bdev info array. + */ +static void dump_block_chains(void) +{ + int i; + + for (i = 0; i < toi_writer_posn.num_chains; i++) { + struct extent *this; + + this = (toi_writer_posn.chains + i)->first; + + if (!this) + continue; + + printk(KERN_INFO "Chain %d:", i); + + while (this) { + printk(" [%lu-%lu]%s", this->minimum, + this->maximum, this->next ? "," : ""); + this = this->next; + } + + printk("\n"); + } + + for (i = 0; i < 3; i++) + printk(KERN_INFO "Posn %d: Chain %d, extent %d, offset %lu.\n", + i, toi_writer_posn_save[i].chain_num, + toi_writer_posn_save[i].extent_num, + toi_writer_posn_save[i].offset); +} + +/** + * go_next_page: Skip blocks to the start of the next page. + * + * Go forward one page, or two if extra_page_forward is set. It only gets + * set at the start of reading the image header, to skip the first page + * of the header, which is read without using the extent chains. + */ +static int go_next_page(int writing) +{ + int i, max = (toi_writer_posn.current_chain == -1) ? 1 : + toi_devinfo[toi_writer_posn.current_chain].blocks_per_page; + + for (i = 0; i < max; i++) + toi_extent_state_next(&toi_writer_posn); + + if (toi_extent_state_eof(&toi_writer_posn)) { + /* Don't complain if readahead falls off the end */ + if (writing) { + printk(KERN_INFO "Extent state eof. " + "Expected compression ratio too optimistic?\n"); + dump_block_chains(); + } + return -ENODATA; + } + + if (extra_page_forward) { + extra_page_forward = 0; + return go_next_page(writing); + } + + return 0; +} + +/** + * set_extra_page_forward: Make us skip an extra page on next go_next_page. + * + * Used in reading header, to jump to 2nd page after getting 1st page + * direct from image header. + */ +static void set_extra_page_forward(void) +{ + extra_page_forward = 1; +} + +/** + * toi_bio_rw_page: Do i/o on the next disk page in the image. + * + * @writing: Whether reading or writing. + * @page: Page to do i/o on. + * @readahead_index: -1 or the index in the readahead ring. + * + * Submit a page for reading or writing, possibly readahead. + */ +static int toi_bio_rw_page(int writing, struct page *page, + int readahead_index) +{ + struct toi_bdev_info *dev_info; + + if (go_next_page(writing)) { + printk(KERN_INFO "Failed to advance a page in the extent " + "data.\n"); + return -ENODATA; + } + + if (current_stream == 0 && writing && + toi_writer_posn.current_chain == + toi_writer_posn_save[2].chain_num && + toi_writer_posn.current_offset == + toi_writer_posn_save[2].offset) { + dump_block_chains(); + BUG(); + } + + dev_info = &toi_devinfo[toi_writer_posn.current_chain]; + + toi_do_io(writing, dev_info->bdev, + toi_writer_posn.current_offset << + dev_info->bmap_shift, + page, readahead_index, 0); + + return 0; +} + +/** + * toi_rw_init: Prepare to read or write a stream in the image. + * + * @writing: Whether reading or writing. + * @stream number: Section of the image being processed. + */ +static int toi_rw_init(int writing, int stream_number) +{ + toi_extent_state_restore(&toi_writer_posn, + &toi_writer_posn_save[stream_number]); + + toi_writer_buffer_posn = writing ? 0 : PAGE_SIZE; + + current_stream = stream_number; + + readahead_index = ra_submit_index = -1; + + pr_index = 0; + + return 0; +} + +/** + * toi_read_header_init: Prepare to read the image header. + * + * Reset readahead indices prior to starting to read a section of the image. 
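+ *
+ * Exposed as toi_bio_ops.read_header_init, and expected to be called
+ * before each rereading of the image header.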
+ */ +static void toi_read_header_init(void) +{ + readahead_index = ra_submit_index = -1; +} + +static int toi_bio_queue_flush_pages(int finish); +static void toi_bio_queue_page_write(char **full_buffer); + +/** + * toi_rw_cleanup: Cleanup after i/o. + * + * @writing: Whether we were reading or writing. + */ +static int toi_rw_cleanup(int writing) +{ + int i; + + if (writing) { + if (toi_writer_buffer_posn) + toi_bio_queue_page_write(&toi_writer_buffer); + toi_bio_queue_flush_pages(1); + } + + if (writing && current_stream == 2) + toi_extent_state_save(&toi_writer_posn, + &toi_writer_posn_save[1]); + + toi_finish_all_io(); + + if (!writing) + for (i = 0; i < max_readahead; i++) + toi_cleanup_readahead(i); + + current_stream = 0; + + for (i = 0; i < NUM_REASONS; i++) { + if (!atomic_read(&reasons[i])) + continue; + printk(KERN_INFO "Waited for i/o due to %s %d times.\n", + reason_name[i], atomic_read(&reasons[i])); + atomic_set(&reasons[i], 0); + } + return 0; +} + +/** + * toi_bio_read_page_with_readahead: Read a disk page with readahead. + * + * Read a page from disk, submitting readahead and cleaning up finished i/o + * while we wait for the page we're after. + */ +static int toi_bio_read_page_with_readahead(void) +{ + static int last_result; + unsigned long *virt; + + if (readahead_index == -1) { + last_result = 0; + readahead_index = ra_submit_index = 0; + } + + /* Start a new readahead? */ + if (last_result) { + /* We failed to submit a read, and have cleaned up + * all the readahead previously submitted */ + if (ra_submit_index == readahead_index) { + abort_hibernate(TOI_FAILED_IO, "Failed to submit" + " a read and no readahead left."); + return -EIO; + } + goto wait; + } + + do { + if (toi_prepare_readahead(ra_submit_index)) { + /* We are supposed to have enough memory. */ + printk(KERN_INFO "Failed to get readahead buffer page " + "%d.\n", ra_submit_index); + toi_alloc_print_debug_stats(); + toi_message(TOI_ANY_SECTION, TOI_LOW, 1, + " - Free memory is %d.\n", + real_nr_free_pages(all_zones_mask)); + + BUG(); + } + + last_result = toi_bio_rw_page(READ, + toi_ra_pages[ra_submit_index], + ra_submit_index); + + if (last_result) { + /* + * Don't complain about failing to do readahead past + * the end of storage. 
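+			 * (-61 is -ENODATA, the error go_next_page()
+			 * returns once the extent chains are exhausted.)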
+ */ + if (last_result != -61) + printk(KERN_INFO "Begin read chunk for page %d " + "returned %d.\n", + ra_submit_index, last_result); + break; + } + + ra_submit_index++; + + if (ra_submit_index == max_readahead) + ra_submit_index = 0; + + } while ((!last_result) && (ra_submit_index != readahead_index) && + (!toi_readahead_ready(readahead_index))); + +wait: + toi_wait_on_readahead(readahead_index); + + virt = kmap_atomic(toi_ra_pages[readahead_index], KM_USER1); + memcpy(toi_writer_buffer, virt, PAGE_SIZE); + kunmap_atomic(virt, KM_USER1); + + readahead_index++; + if (readahead_index == max_readahead) + readahead_index = 0; + + return 0; +} + +/* + * toi_bio_queue_flush_pages + */ + +static int toi_bio_queue_flush_pages(int finish) +{ + unsigned long flags; + int result = 0; + + if (!finish && atomic_read(&toi_io_queue_length) < queue_trigger) + return 0; + + if (!mutex_trylock(&toi_bio_queue_mutex)) + return 0; + + spin_lock_irqsave(&bio_queue_lock, flags); + while (bio_queue_head) { + struct page *page = bio_queue_head; + bio_queue_head = (struct page *) page->private; + if (bio_queue_tail == page) + bio_queue_tail = NULL; + atomic_dec(&toi_io_queue_length); + spin_unlock_irqrestore(&bio_queue_lock, flags); + result = toi_bio_rw_page(WRITE, page, -1); + toi__free_page(11, page); + if (result) + goto out; + spin_lock_irqsave(&bio_queue_lock, flags); + } + spin_unlock_irqrestore(&bio_queue_lock, flags); +out: + mutex_unlock(&toi_bio_queue_mutex); + return result; +} + +/* + * toi_bio_queue_page_write + */ +static void toi_bio_queue_page_write(char **full_buffer) +{ + struct page *page = virt_to_page(*full_buffer); + unsigned long flags; + int new_length; + + page->private = 0; + + spin_lock_irqsave(&bio_queue_lock, flags); + if (!bio_queue_head) + bio_queue_head = page; + else + bio_queue_tail->private = (unsigned long) page; + + bio_queue_tail = page; + + atomic_inc(&toi_io_queue_length); + + new_length = atomic_read(&toi_io_queue_length); + + if (new_length > toi_io_max_queue_length) + toi_io_max_queue_length++; + + spin_unlock_irqrestore(&bio_queue_lock, flags); + + *full_buffer = NULL; + + while (!*full_buffer) { + *full_buffer = (char *) toi_get_zeroed_page(11, TOI_ATOMIC_GFP); + if (!*full_buffer) { + set_throttle(); + do_bio_wait(7); + } + } +} + +/* + * toi_rw_buffer: Combine smaller buffers into PAGE_SIZE I/O. + * + * @writing: Bool - whether writing (or reading). + * @buffer: The start of the buffer to write or fill. + * @buffer_size: The size of the buffer to write or fill. + */ +static int toi_rw_buffer(int writing, char *buffer, int buffer_size) +{ + int bytes_left = buffer_size; + + while (bytes_left) { + char *source_start = buffer + buffer_size - bytes_left; + char *dest_start = toi_writer_buffer + toi_writer_buffer_posn; + int capacity = PAGE_SIZE - toi_writer_buffer_posn; + char *to = writing ? dest_start : source_start; + char *from = writing ? source_start : dest_start; + + if (bytes_left <= capacity) { + memcpy(to, from, bytes_left); + toi_writer_buffer_posn += bytes_left; + return 0; + } + + /* Complete this page and start a new one */ + memcpy(to, from, capacity); + bytes_left -= capacity; + + if (!writing) { + if (toi_bio_read_page_with_readahead()) + return -EIO; + } else + toi_bio_queue_page_write(&toi_writer_buffer); + + toi_writer_buffer_posn = 0; + toi_cond_pause(0, NULL); + } + + return 0; +} + +/** + * toi_bio_read_page - read a page of the image. + * + * @pfn: The pfn where the data belongs. 
+ * @buffer_page: The page containing the (possibly compressed) data. + * @buf_size: The number of bytes on @buffer_page used. + * + * Read a (possibly compressed) page from the image, into buffer_page, + * returning its pfn and the buffer size. + */ +static int toi_bio_read_page(unsigned long *pfn, struct page *buffer_page, + unsigned int *buf_size) +{ + int result = 0; + char *buffer_virt = kmap(buffer_page); + + pr_index++; + + mutex_lock(&toi_bio_mutex); + + if (toi_rw_buffer(READ, (char *) pfn, sizeof(unsigned long)) || + toi_rw_buffer(READ, (char *) buf_size, sizeof(int)) || + toi_rw_buffer(READ, buffer_virt, *buf_size)) { + abort_hibernate(TOI_FAILED_IO, "Read of data failed."); + result = 1; + } else + PR_DEBUG("%d: PFN %ld, %d bytes.\n", pr_index, *pfn, *buf_size); + + mutex_unlock(&toi_bio_mutex); + kunmap(buffer_page); + return result; +} + +/** + * toi_bio_write_page - Write a page of the image. + * + * @pfn: The pfn where the data belongs. + * @buffer_page: The page containing the (possibly compressed) data. + * @buf_size: The number of bytes on @buffer_page used. + * + * Write a (possibly compressed) page to the image from the buffer, together + * with it's index and buffer size. + */ +static int toi_bio_write_page(unsigned long pfn, struct page *buffer_page, + unsigned int buf_size) +{ + char *buffer_virt; + int result = 0; + + pr_index++; + + if (unlikely(test_action_state(TOI_TEST_FILTER_SPEED))) + return 0; + + mutex_lock(&toi_bio_mutex); + buffer_virt = kmap(buffer_page); + + if (toi_rw_buffer(WRITE, (char *) &pfn, sizeof(unsigned long)) || + toi_rw_buffer(WRITE, (char *) &buf_size, sizeof(int)) || + toi_rw_buffer(WRITE, buffer_virt, buf_size)) + result = -EIO; + + PR_DEBUG("%d: Index %ld, %d bytes. Result %d.\n", pr_index, pfn, + buf_size, result); + + kunmap(buffer_page); + mutex_unlock(&toi_bio_mutex); + toi_bio_queue_flush_pages(0); + return result; +} + +/** + * toi_rw_header_chunk: Read or write a portion of the image header. + * + * @writing: Whether reading or writing. + * @owner: The module for which we're writing. Used for confirming that modules + * don't use more header space than they asked for. + * @buffer: Address of the data to write. + * @buffer_size: Size of the data buffer. + */ +static int toi_rw_header_chunk(int writing, + struct toi_module_ops *owner, + char *buffer, int buffer_size) +{ + int result; + + if (owner) { + owner->header_used += buffer_size; + toi_message(TOI_HEADER, TOI_LOW, 1, + "Header: %s : %d bytes (%d/%d).\n", + buffer_size, owner->header_used, + owner->header_requested); + if (owner->header_used > owner->header_requested) { + printk(KERN_EMERG "TuxOnIce module %s is using more" + "header space (%u) than it requested (%u).\n", + owner->name, + owner->header_used, + owner->header_requested); + return buffer_size; + } + } else + toi_message(TOI_HEADER, TOI_LOW, 1, + "Header: (No owner): %d bytes.\n", buffer_size); + + result = toi_rw_buffer(writing, buffer, buffer_size); + if (writing) { + int flush_result = toi_bio_queue_flush_pages(0); + if (!result) + result = flush_result; + } + return result; +} + +/** + * write_header_chunk_finish: Flush any buffered header data. + */ +static int write_header_chunk_finish(void) +{ + int result = 0; + + toi_bio_queue_flush_pages(1); + + if (toi_writer_buffer_posn) { + result = toi_bio_rw_page(WRITE, + virt_to_page(toi_writer_buffer), -1) ? -EIO : 0; + } + + toi_finish_all_io(); + + return result; +} + +/** + * toi_bio_storage_needed: Get the amount of storage needed for my fns. 
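+ *
+ * Two ints: target_outstanding_io and max_readahead, as saved by
+ * toi_bio_save_config_info() below.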
+ */ +static int toi_bio_storage_needed(void) +{ + return 2 * sizeof(int); +} + +/** + * toi_bio_save_config_info: Save block i/o config to image header. + * + * @buf: PAGE_SIZE'd buffer into which data should be saved. + */ +static int toi_bio_save_config_info(char *buf) +{ + int *ints = (int *) buf; + ints[0] = target_outstanding_io; + ints[1] = max_readahead; + return 2 * sizeof(int); +} + +/** + * toi_bio_load_config_info: Restore block i/o config. + * + * @buf: Data to be reloaded. + * @size: Size of the buffer saved. + */ +static void toi_bio_load_config_info(char *buf, int size) +{ + int *ints = (int *) buf; + target_outstanding_io = ints[0]; + max_readahead = ints[1]; +} + +/** + * toi_bio_initialise: Initialise bio code at start of some action. + * + * @starting_cycle: Whether starting a hibernation cycle, or just reading or + * writing a sysfs value. + */ +static int toi_bio_initialise(int starting_cycle) +{ + toi_writer_buffer = (char *) toi_get_zeroed_page(14, TOI_ATOMIC_GFP); + + if (starting_cycle) + max_outstanding_io = 0; + + return toi_writer_buffer ? 0 : -ENOMEM; +} + +/** + * toi_bio_cleanup: Cleanup after some action. + * + * @finishing_cycle: Whether completing a cycle. + */ +static void toi_bio_cleanup(int finishing_cycle) +{ + if (toi_writer_buffer) { + toi_free_page(14, (unsigned long) toi_writer_buffer); + toi_writer_buffer = NULL; + } + + atomic_set(&toi_io_queue_length, 0); +} + +struct toi_bio_ops toi_bio_ops = { + .bdev_page_io = toi_bdev_page_io, + .finish_all_io = toi_finish_all_io, + .forward_one_page = go_next_page, + .set_extra_page_forward = set_extra_page_forward, + .set_devinfo = toi_set_devinfo, + .read_page = toi_bio_read_page, + .write_page = toi_bio_write_page, + .rw_init = toi_rw_init, + .rw_cleanup = toi_rw_cleanup, + .read_header_init = toi_read_header_init, + .rw_header_chunk = toi_rw_header_chunk, + .write_header_chunk_finish = write_header_chunk_finish, +}; + +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("target_outstanding_io", SYSFS_RW), + SYSFS_INT(&target_outstanding_io, 0, TARGET_OUTSTANDING_IO, 0), + }, + + { TOI_ATTR("queue_trigger", SYSFS_RW), + SYSFS_INT(&queue_trigger, 1, 4096, 0), + }, + + { TOI_ATTR("max_readahead", SYSFS_RW), + SYSFS_INT(&max_readahead, 1, MAX_READAHEAD, 0), + }, +}; + +static struct toi_module_ops toi_blockwriter_ops = { + .name = "lowlevel i/o", + .type = MISC_HIDDEN_MODULE, + .directory = "block_io", + .module = THIS_MODULE, + .print_debug_info = toi_bio_print_debug_stats, + .memory_needed = toi_bio_memory_needed, + .storage_needed = toi_bio_storage_needed, + .save_config_info = toi_bio_save_config_info, + .load_config_info = toi_bio_load_config_info, + .initialise = toi_bio_initialise, + .cleanup = toi_bio_cleanup, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/** + * toi_block_io_load: Load time routine for block i/o module. + * + * Register block i/o ops and sysfs entries. 
+ */ +static __init int toi_block_io_load(void) +{ + return toi_register_module(&toi_blockwriter_ops); +} + +#if defined(CONFIG_TOI_FILE_EXPORTS) || defined(CONFIG_TOI_SWAP_EXPORTS) +EXPORT_SYMBOL_GPL(toi_writer_posn); +EXPORT_SYMBOL_GPL(toi_writer_posn_save); +EXPORT_SYMBOL_GPL(toi_writer_buffer); +EXPORT_SYMBOL_GPL(toi_writer_buffer_posn); +EXPORT_SYMBOL_GPL(toi_bio_ops); +#endif +#ifdef MODULE +static __exit void toi_block_io_unload(void) +{ + toi_unregister_module(&toi_blockwriter_ops); +} + +module_init(toi_block_io_load); +module_exit(toi_block_io_unload); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("TuxOnIce block io functions"); +#else +late_initcall(toi_block_io_load); +#endif diff --git a/kernel/power/tuxonice_block_io.h b/kernel/power/tuxonice_block_io.h new file mode 100644 index 0000000..14cc8bd --- /dev/null +++ b/kernel/power/tuxonice_block_io.h @@ -0,0 +1,53 @@ +/* + * kernel/power/tuxonice_block_io.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * Copyright (C) 2006 Red Hat, inc. + * + * Distributed under GPLv2. + * + * This file contains declarations for functions exported from + * tuxonice_block_io.c, which contains low level io functions. + */ + +#include +#include "tuxonice_extent.h" + +struct toi_bdev_info { + struct block_device *bdev; + dev_t dev_t; + int bmap_shift; + int blocks_per_page; +}; + +/* + * Our exported interface so the swapwriter and filewriter don't + * need these functions duplicated. + */ +struct toi_bio_ops { + void (*bdev_page_io) (int rw, struct block_device *bdev, long pos, + struct page *page); + void (*check_io_stats) (void); + void (*reset_io_stats) (void); + void (*finish_all_io) (void); + int (*forward_one_page) (int writing); + void (*set_extra_page_forward) (void); + void (*set_devinfo) (struct toi_bdev_info *info); + int (*read_page) (unsigned long *index, struct page *buffer_page, + unsigned int *buf_size); + int (*write_page) (unsigned long index, struct page *buffer_page, + unsigned int buf_size); + void (*read_header_init) (void); + int (*rw_header_chunk) (int rw, struct toi_module_ops *owner, + char *buffer, int buffer_size); + int (*write_header_chunk_finish) (void); + int (*rw_init) (int rw, int stream_number); + int (*rw_cleanup) (int rw); +}; + +extern struct toi_bio_ops toi_bio_ops; + +extern char *toi_writer_buffer; +extern int toi_writer_buffer_posn; +extern struct extent_iterate_saved_state toi_writer_posn_save[3]; +extern struct extent_iterate_state toi_writer_posn; diff --git a/kernel/power/tuxonice_builtin.c b/kernel/power/tuxonice_builtin.c new file mode 100644 index 0000000..f6edb14 --- /dev/null +++ b/kernel/power/tuxonice_builtin.c @@ -0,0 +1,399 @@ +/* + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "tuxonice_io.h" +#include "tuxonice.h" +#include "tuxonice_extent.h" +#include "tuxonice_block_io.h" +#include "tuxonice_netlink.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice_ui.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_pagedir.h" +#include "tuxonice_modules.h" +#include "tuxonice_builtin.h" +#include "tuxonice_power_off.h" + +/* + * Highmem related functions (x86 only). + */ + +#ifdef CONFIG_HIGHMEM + +/** + * copyback_high: Restore highmem pages. 
+ * + * Highmem data and pbe lists are/can be stored in highmem. + * The format is slightly different to the lowmem pbe lists + * used for the assembly code: the last pbe in each page is + * a struct page * instead of struct pbe *, pointing to the + * next page where pbes are stored (or NULL if happens to be + * the end of the list). Since we don't want to generate + * unnecessary deltas against swsusp code, we use a cast + * instead of a union. + **/ + +static void copyback_high(void) +{ + struct page *pbe_page = (struct page *) restore_highmem_pblist; + struct pbe *this_pbe, *first_pbe; + unsigned long *origpage, *copypage; + int pbe_index = 1; + + if (!pbe_page) + return; + + this_pbe = (struct pbe *) kmap_atomic(pbe_page, KM_BOUNCE_READ); + first_pbe = this_pbe; + + while (this_pbe) { + int loop = (PAGE_SIZE / sizeof(unsigned long)) - 1; + + origpage = kmap_atomic((struct page *) this_pbe->orig_address, + KM_BIO_DST_IRQ); + copypage = kmap_atomic((struct page *) this_pbe->address, + KM_BIO_SRC_IRQ); + + while (loop >= 0) { + *(origpage + loop) = *(copypage + loop); + loop--; + } + + kunmap_atomic(origpage, KM_BIO_DST_IRQ); + kunmap_atomic(copypage, KM_BIO_SRC_IRQ); + + if (!this_pbe->next) + break; + + if (pbe_index < PBES_PER_PAGE) { + this_pbe++; + pbe_index++; + } else { + pbe_page = (struct page *) this_pbe->next; + kunmap_atomic(first_pbe, KM_BOUNCE_READ); + if (!pbe_page) + return; + this_pbe = (struct pbe *) kmap_atomic(pbe_page, + KM_BOUNCE_READ); + first_pbe = this_pbe; + pbe_index = 1; + } + } + kunmap_atomic(first_pbe, KM_BOUNCE_READ); +} + +#else /* CONFIG_HIGHMEM */ +void copyback_high(void) { } +#endif + +char toi_wait_for_keypress_dev_console(int timeout) +{ + int fd, this_timeout = 255; + char key = '\0'; + struct termios t, t_backup; + + /* We should be guaranteed /dev/console exists after populate_rootfs() + * in init/main.c. + */ + fd = sys_open("/dev/console", O_RDONLY, 0); + if (fd < 0) { + printk(KERN_INFO "Couldn't open /dev/console.\n"); + return key; + } + + if (sys_ioctl(fd, TCGETS, (long)&t) < 0) + goto out_close; + + memcpy(&t_backup, &t, sizeof(t)); + + t.c_lflag &= ~(ISIG|ICANON|ECHO); + t.c_cc[VMIN] = 0; + +new_timeout: + if (timeout > 0) { + this_timeout = timeout < 26 ? timeout : 25; + timeout -= this_timeout; + this_timeout *= 10; + } + + t.c_cc[VTIME] = this_timeout; + + if (sys_ioctl(fd, TCSETS, (long)&t) < 0) + goto out_restore; + + while (1) { + if (sys_read(fd, &key, 1) <= 0) { + if (timeout) + goto new_timeout; + key = '\0'; + break; + } + key = tolower(key); + if (test_toi_state(TOI_SANITY_CHECK_PROMPT)) { + if (key == 'c') { + set_toi_state(TOI_CONTINUE_REQ); + break; + } else if (key == ' ') + break; + } else + break; + } + +out_restore: + sys_ioctl(fd, TCSETS, (long)&t_backup); +out_close: + sys_close(fd); + + return key; +} + +struct toi_boot_kernel_data toi_bkd __nosavedata + __attribute__((aligned(PAGE_SIZE))) = { + MY_BOOT_KERNEL_DATA_VERSION, + 0, +#ifdef CONFIG_TOI_REPLACE_SWSUSP + (1 << TOI_REPLACE_SWSUSP) | +#endif + (1 << TOI_PAGESET2_FULL) | (1 << TOI_LATE_CPU_HOTPLUG), +}; +EXPORT_SYMBOL_GPL(toi_bkd); + +struct block_device *toi_open_by_devnum(dev_t dev, unsigned mode) +{ + struct block_device *bdev = bdget(dev); + int err = -ENOMEM; + int flags = mode & FMODE_WRITE ? O_RDWR : O_RDONLY; + flags |= O_NONBLOCK; + if (bdev) + err = blkdev_get(bdev, mode, flags); + return err ? 
ERR_PTR(err) : bdev; +} +EXPORT_SYMBOL_GPL(toi_open_by_devnum); + +EXPORT_SYMBOL_GPL(toi_wait_for_keypress_dev_console); +EXPORT_SYMBOL_GPL(hibernation_platform_enter); +EXPORT_SYMBOL_GPL(platform_start); +EXPORT_SYMBOL_GPL(platform_pre_snapshot); +EXPORT_SYMBOL_GPL(platform_leave); +EXPORT_SYMBOL_GPL(platform_finish); +EXPORT_SYMBOL_GPL(platform_pre_restore); +EXPORT_SYMBOL_GPL(platform_restore_cleanup); + +#ifdef CONFIG_ARCH_HIBERNATION_HEADER +EXPORT_SYMBOL_GPL(arch_hibernation_header_save); +EXPORT_SYMBOL_GPL(arch_hibernation_header_restore); +#endif +EXPORT_SYMBOL_GPL(init_swsusp_header); + +#ifdef CONFIG_TOI_CORE_EXPORTS +#ifdef CONFIG_X86_64 +EXPORT_SYMBOL_GPL(restore_processor_state); +EXPORT_SYMBOL_GPL(save_processor_state); +#endif + +EXPORT_SYMBOL_GPL(pm_chain_head); +EXPORT_SYMBOL_GPL(kernel_shutdown_prepare); +EXPORT_SYMBOL_GPL(drop_pagecache); +EXPORT_SYMBOL_GPL(restore_pblist); +EXPORT_SYMBOL_GPL(pm_mutex); +EXPORT_SYMBOL_GPL(pm_restore_console); +EXPORT_SYMBOL_GPL(super_blocks); +EXPORT_SYMBOL_GPL(next_zone); + +EXPORT_SYMBOL_GPL(freeze_processes); +EXPORT_SYMBOL_GPL(thaw_processes); +EXPORT_SYMBOL_GPL(thaw_kernel_threads); +EXPORT_SYMBOL_GPL(shrink_all_memory); +EXPORT_SYMBOL_GPL(shrink_one_zone); +EXPORT_SYMBOL_GPL(saveable_page); +EXPORT_SYMBOL_GPL(swsusp_arch_suspend); +EXPORT_SYMBOL_GPL(swsusp_arch_resume); +EXPORT_SYMBOL_GPL(pm_prepare_console); +EXPORT_SYMBOL_GPL(follow_page); +EXPORT_SYMBOL_GPL(machine_halt); +EXPORT_SYMBOL_GPL(block_dump); +EXPORT_SYMBOL_GPL(unlink_lru_lists); +EXPORT_SYMBOL_GPL(relink_lru_lists); +EXPORT_SYMBOL_GPL(power_subsys); +EXPORT_SYMBOL_GPL(machine_power_off); +EXPORT_SYMBOL_GPL(suspend_devices_and_enter); +EXPORT_SYMBOL_GPL(first_online_pgdat); +EXPORT_SYMBOL_GPL(next_online_pgdat); +EXPORT_SYMBOL_GPL(machine_restart); +EXPORT_SYMBOL_GPL(saved_command_line); +EXPORT_SYMBOL_GPL(tasklist_lock); +#ifdef CONFIG_PM_SLEEP_SMP +EXPORT_SYMBOL_GPL(disable_nonboot_cpus); +EXPORT_SYMBOL_GPL(enable_nonboot_cpus); +#endif +#endif + +int toi_wait = CONFIG_TOI_DEFAULT_WAIT; + +#ifdef CONFIG_TOI_USERUI_EXPORTS +EXPORT_SYMBOL_GPL(kmsg_redirect); +#endif +EXPORT_SYMBOL_GPL(toi_wait); + +#if defined(CONFIG_TOI_USERUI_EXPORTS) || defined(CONFIG_TOI_CORE_EXPORTS) +EXPORT_SYMBOL_GPL(console_printk); +#endif +#ifdef CONFIG_TOI_SWAP_EXPORTS /* TuxOnIce swap specific */ +EXPORT_SYMBOL_GPL(sys_swapon); +EXPORT_SYMBOL_GPL(sys_swapoff); +EXPORT_SYMBOL_GPL(si_swapinfo); +EXPORT_SYMBOL_GPL(map_swap_page); +EXPORT_SYMBOL_GPL(get_swap_page); +EXPORT_SYMBOL_GPL(swap_free); +EXPORT_SYMBOL_GPL(get_swap_info_struct); +#endif + +#ifdef CONFIG_TOI_FILE_EXPORTS +/* TuxOnice file allocator specific support */ +EXPORT_SYMBOL_GPL(sys_unlink); +EXPORT_SYMBOL_GPL(sys_mknod); +#endif + +/* Swap or file */ +#if defined(CONFIG_TOI_FILE_EXPORTS) || defined(CONFIG_TOI_SWAP_EXPORTS) +EXPORT_SYMBOL_GPL(bio_set_pages_dirty); +EXPORT_SYMBOL_GPL(name_to_dev_t); +#endif + +#if defined(CONFIG_TOI_FILE_EXPORTS) || defined(CONFIG_TOI_SWAP_EXPORTS) || \ + defined(CONFIG_TOI_CORE_EXPORTS) +EXPORT_SYMBOL_GPL(resume_file); +#endif +struct toi_core_fns *toi_core_fns; +EXPORT_SYMBOL_GPL(toi_core_fns); + +DECLARE_DYN_PAGEFLAGS(pageset1_map); +DECLARE_DYN_PAGEFLAGS(pageset1_copy_map); +EXPORT_SYMBOL_GPL(pageset1_map); +EXPORT_SYMBOL_GPL(pageset1_copy_map); + +unsigned long toi_result; +struct pagedir pagedir1 = {1}; + +EXPORT_SYMBOL_GPL(toi_result); +EXPORT_SYMBOL_GPL(pagedir1); + +unsigned long toi_get_nonconflicting_page(void) +{ + return toi_core_fns->get_nonconflicting_page(); +} + +int 
toi_post_context_save(void) +{ + return toi_core_fns->post_context_save(); +} + +int toi_try_hibernate(int have_pmsem) +{ + if (!toi_core_fns) + return -ENODEV; + + return toi_core_fns->try_hibernate(have_pmsem); +} + +void toi_try_resume(void) +{ + if (toi_core_fns) + toi_core_fns->try_resume(); + else + printk(KERN_INFO "TuxOnIce core not loaded yet.\n"); +} + +int toi_lowlevel_builtin(void) +{ + int error = 0; + + save_processor_state(); + error = swsusp_arch_suspend(); + if (error) + printk(KERN_ERR "Error %d hibernating\n", error); + + /* Restore control flow appears here */ + if (!toi_in_hibernate) { + copyback_high(); + set_toi_state(TOI_NOW_RESUMING); + } + + restore_processor_state(); + + return error; +} + +EXPORT_SYMBOL_GPL(toi_lowlevel_builtin); + +unsigned long toi_compress_bytes_in, toi_compress_bytes_out; +EXPORT_SYMBOL_GPL(toi_compress_bytes_in); +EXPORT_SYMBOL_GPL(toi_compress_bytes_out); + +unsigned long toi_state = ((1 << TOI_BOOT_TIME) | + (1 << TOI_IGNORE_LOGLEVEL) | + (1 << TOI_IO_STOPPED)); +EXPORT_SYMBOL_GPL(toi_state); + +/* The number of hibernates we have started (some may have been cancelled) */ +unsigned int nr_hibernates; +EXPORT_SYMBOL_GPL(nr_hibernates); + +int toi_running; +EXPORT_SYMBOL_GPL(toi_running); + +int toi_in_hibernate __nosavedata; +EXPORT_SYMBOL_GPL(toi_in_hibernate); + +__nosavedata struct pbe *restore_highmem_pblist; + +#ifdef CONFIG_TOI_CORE_EXPORTS +#ifdef CONFIG_HIGHMEM +EXPORT_SYMBOL_GPL(nr_free_highpages); +EXPORT_SYMBOL_GPL(saveable_highmem_page); +EXPORT_SYMBOL_GPL(restore_highmem_pblist); +#endif +#endif + +#if defined(CONFIG_TOI_CORE_EXPORTS) || defined(CONFIG_TOI_PAGEFLAGS_EXPORTS) +EXPORT_SYMBOL_GPL(max_pfn); +#endif + +#if defined(CONFIG_TOI_EXPORTS) || defined(CONFIG_TOI_CORE_EXPORTS) +EXPORT_SYMBOL_GPL(snprintf_used); +#endif + +static int __init toi_wait_setup(char *str) +{ + int value; + + if (sscanf(str, "=%d", &value)) { + if (value < -1 || value > 255) + printk(KERN_INFO "TuxOnIce_wait outside range -1 to " + "255.\n"); + else + toi_wait = value; + } + + return 1; +} + +__setup("toi_wait", toi_wait_setup); diff --git a/kernel/power/tuxonice_builtin.h b/kernel/power/tuxonice_builtin.h new file mode 100644 index 0000000..bf6083a --- /dev/null +++ b/kernel/power/tuxonice_builtin.h @@ -0,0 +1,30 @@ +/* + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. 
+ */ +#include +#include + +extern struct toi_core_fns *toi_core_fns; +extern unsigned long toi_compress_bytes_in, toi_compress_bytes_out; +extern unsigned int nr_hibernates; +extern int toi_in_hibernate; + +extern __nosavedata struct pbe *restore_highmem_pblist; + +int toi_lowlevel_builtin(void); + +extern struct dyn_pageflags __nosavedata toi_nosave_origmap; +extern struct dyn_pageflags __nosavedata toi_nosave_copymap; + +#ifdef CONFIG_HIGHMEM +extern __nosavedata struct zone_data *toi_nosave_zone_list; +extern __nosavedata unsigned long toi_nosave_max_pfn; +#endif + +extern unsigned long toi_get_nonconflicting_page(void); +extern int toi_post_context_save(void); +extern int toi_try_hibernate(int have_pmsem); +extern char toi_wait_for_keypress_dev_console(int timeout); +extern struct block_device *toi_open_by_devnum(dev_t dev, unsigned mode); diff --git a/kernel/power/tuxonice_checksum.c b/kernel/power/tuxonice_checksum.c new file mode 100644 index 0000000..eea3029 --- /dev/null +++ b/kernel/power/tuxonice_checksum.c @@ -0,0 +1,389 @@ +/* + * kernel/power/tuxonice_checksum.c + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * Copyright (C) 2006 Red Hat, inc. + * + * This file is released under the GPLv2. + * + * This file contains data checksum routines for TuxOnIce, + * using cryptoapi. They are used to locate any modifications + * made to pageset 2 while we're saving it. + */ + +#include +#include +#include +#include +#include +#include + +#include "tuxonice.h" +#include "tuxonice_modules.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_io.h" +#include "tuxonice_pageflags.h" +#include "tuxonice_checksum.h" +#include "tuxonice_pagedir.h" +#include "tuxonice_alloc.h" + +static struct toi_module_ops toi_checksum_ops; + +/* Constant at the mo, but I might allow tuning later */ +static char toi_checksum_name[32] = "md4"; +/* Bytes per checksum */ +#define CHECKSUM_SIZE (16) + +#define CHECKSUMS_PER_PAGE ((PAGE_SIZE - sizeof(void *)) / CHECKSUM_SIZE) + +struct cpu_context { + struct crypto_hash *transform; + struct hash_desc desc; + struct scatterlist sg[2]; + char *buf; +}; + +static DEFINE_PER_CPU(struct cpu_context, contexts); +static int pages_allocated; +static unsigned long page_list; + +static int toi_num_resaved; + +static unsigned long this_checksum, next_page; +static int checksum_index; + +static inline int checksum_pages_needed(void) +{ + return DIV_ROUND_UP(pagedir2.size, CHECKSUMS_PER_PAGE); +} + +/* ---- Local buffer management ---- */ + +/* + * toi_checksum_cleanup + * + * Frees memory allocated for our labours. + */ +static void toi_checksum_cleanup(int ending_cycle) +{ + int cpu; + + if (ending_cycle) { + for_each_online_cpu(cpu) { + struct cpu_context *this = &per_cpu(contexts, cpu); + if (this->transform) { + crypto_free_hash(this->transform); + this->transform = NULL; + this->desc.tfm = NULL; + } + + if (this->buf) { + toi_free_page(27, (unsigned long) this->buf); + this->buf = NULL; + } + } + } +} + +/* + * toi_crypto_initialise + * + * Prepare to do some work by allocating buffers and transforms. + * Returns: Int: Zero. Even if we can't set up checksum, we still + * seek to hibernate. 
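+ *
+ * One transform and one bounce page are allocated per online cpu so
+ * checksumming can proceed concurrently. A minimal sketch of the
+ * per-cpu setup performed below (error handling elided):
+ *
+ *	struct cpu_context *this = &per_cpu(contexts, cpu);
+ *
+ *	this->transform = crypto_alloc_hash(toi_checksum_name, 0, 0);
+ *	this->desc.tfm = this->transform;
+ *	this->buf = page_address(toi_alloc_page(27, GFP_KERNEL));
+ *	sg_set_buf(&this->sg[0], this->buf, PAGE_SIZE);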
+ */
+static int toi_checksum_initialise(int starting_cycle)
+{
+	int cpu;
+
+	if (!(starting_cycle & SYSFS_HIBERNATE) || !toi_checksum_ops.enabled)
+		return 0;
+
+	if (!*toi_checksum_name) {
+		printk(KERN_INFO "TuxOnIce: No checksum algorithm name set.\n");
+		return 1;
+	}
+
+	for_each_online_cpu(cpu) {
+		struct cpu_context *this = &per_cpu(contexts, cpu);
+		struct page *page;
+
+		this->transform = crypto_alloc_hash(toi_checksum_name, 0, 0);
+		if (IS_ERR(this->transform)) {
+			printk(KERN_INFO "TuxOnIce: Failed to initialise the "
+					"%s checksum algorithm: %ld.\n",
+					toi_checksum_name,
+					PTR_ERR(this->transform));
+			this->transform = NULL;
+			return 1;
+		}
+
+		this->desc.tfm = this->transform;
+		this->desc.flags = 0;
+
+		page = toi_alloc_page(27, GFP_KERNEL);
+		if (!page)
+			return 1;
+		this->buf = page_address(page);
+		sg_set_buf(&this->sg[0], this->buf, PAGE_SIZE);
+	}
+	return 0;
+}
+
+/*
+ * toi_checksum_print_debug_stats
+ * @buffer: Pointer to a buffer into which the debug info will be printed.
+ * @size: Size of the buffer.
+ *
+ * Print information to be recorded for debugging purposes into a buffer.
+ * Returns: Number of characters written to the buffer.
+ */
+
+static int toi_checksum_print_debug_stats(char *buffer, int size)
+{
+	int len;
+
+	if (!toi_checksum_ops.enabled)
+		return snprintf_used(buffer, size,
+				"- Checksumming disabled.\n");
+
+	len = snprintf_used(buffer, size, "- Checksum method is '%s'.\n",
+			toi_checksum_name);
+	len += snprintf_used(buffer + len, size - len,
+			"  %d pages resaved in atomic copy.\n",
+			toi_num_resaved);
+	return len;
+}
+
+static int toi_checksum_memory_needed(void)
+{
+	return toi_checksum_ops.enabled ?
+		checksum_pages_needed() << PAGE_SHIFT : 0;
+}
+
+static int toi_checksum_storage_needed(void)
+{
+	if (toi_checksum_ops.enabled)
+		return strlen(toi_checksum_name) + sizeof(int) + 1;
+	else
+		return 0;
+}
+
+/*
+ * toi_checksum_save_config_info
+ * @buffer: Pointer to a buffer of size PAGE_SIZE.
+ *
+ * Save information needed when reloading the image at resume time.
+ * Returns: Number of bytes used for saving our data.
+ */
+static int toi_checksum_save_config_info(char *buffer)
+{
+	int namelen = strlen(toi_checksum_name) + 1;
+	int total_len;
+
+	*((unsigned int *) buffer) = namelen;
+	strncpy(buffer + sizeof(unsigned int), toi_checksum_name, namelen);
+	total_len = sizeof(unsigned int) + namelen;
+	return total_len;
+}
+
+/* toi_checksum_load_config_info
+ * @buffer: Pointer to the start of the data.
+ * @size: Number of bytes that were saved.
+ *
+ * Description: Reload information needed for dechecksumming the image at
+ * resume time.
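+ *
+ * The layout matches toi_checksum_save_config_info above: an unsigned
+ * int length, then the NUL-terminated algorithm name. For example, with
+ * the default "md4" (illustrative bytes, little-endian):
+ *
+ *	| 04 00 00 00 | 'm' 'd' '4' '\0' |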
+ */ +static void toi_checksum_load_config_info(char *buffer, int size) +{ + int namelen; + + namelen = *((unsigned int *) (buffer)); + strncpy(toi_checksum_name, buffer + sizeof(unsigned int), + namelen); + return; +} + +/* + * Free Checksum Memory + */ + +void free_checksum_pages(void) +{ + while (pages_allocated) { + unsigned long next = *((unsigned long *) page_list); + ClearPageNosave(virt_to_page(page_list)); + toi_free_page(15, (unsigned long) page_list); + page_list = next; + pages_allocated--; + } +} + +/* + * Allocate Checksum Memory + */ + +int allocate_checksum_pages(void) +{ + int pages_needed = checksum_pages_needed(); + + if (!toi_checksum_ops.enabled) + return 0; + + while (pages_allocated < pages_needed) { + unsigned long *new_page = + (unsigned long *) toi_get_zeroed_page(15, TOI_ATOMIC_GFP); + if (!new_page) { + printk("Unable to allocate checksum pages.\n"); + return -ENOMEM; + } + SetPageNosave(virt_to_page(new_page)); + (*new_page) = page_list; + page_list = (unsigned long) new_page; + pages_allocated++; + } + + next_page = (unsigned long) page_list; + checksum_index = 0; + + return 0; +} + +#if 0 +static void print_checksum(char *buf, int size) +{ + int index; + + for (index = 0; index < size; index++) + printk(KERN_INFO "%x ", buf[index]); + + printk("\n"); +} +#endif + +char *tuxonice_get_next_checksum(void) +{ + if (!toi_checksum_ops.enabled) + return NULL; + + if (checksum_index % CHECKSUMS_PER_PAGE) + this_checksum += CHECKSUM_SIZE; + else { + this_checksum = next_page + sizeof(void *); + next_page = *((unsigned long *) next_page); + } + + checksum_index++; + return (char *) this_checksum; +} + +int tuxonice_calc_checksum(struct page *page, char *checksum_locn) +{ + char *pa; + int result, cpu = smp_processor_id(); + struct cpu_context *ctx = &per_cpu(contexts, cpu); + + if (!toi_checksum_ops.enabled) + return 0; + + pa = kmap(page); + memcpy(ctx->buf, pa, PAGE_SIZE); + kunmap(page); + result = crypto_hash_digest(&ctx->desc, ctx->sg, PAGE_SIZE, + checksum_locn); + return result; +} +/* + * Calculate checksums + */ + +void check_checksums(void) +{ + int pfn, index = 0, cpu = smp_processor_id(); + unsigned long next_page, this_checksum = 0; + char current_checksum[CHECKSUM_SIZE]; + struct cpu_context *ctx = &per_cpu(contexts, cpu); + + if (!toi_checksum_ops.enabled) + return; + + next_page = (unsigned long) page_list; + + toi_num_resaved = 0; + + BITMAP_FOR_EACH_SET(&pageset2_map, pfn) { + int ret; + char *pa; + struct page *page = pfn_to_page(pfn); + + if (index % CHECKSUMS_PER_PAGE) { + this_checksum += CHECKSUM_SIZE; + } else { + this_checksum = next_page + sizeof(void *); + next_page = *((unsigned long *) next_page); + } + + /* Done when IRQs disabled so must be atomic */ + pa = kmap_atomic(page, KM_USER1); + memcpy(ctx->buf, pa, PAGE_SIZE); + kunmap_atomic(pa, KM_USER1); + ret = crypto_hash_digest(&ctx->desc, ctx->sg, PAGE_SIZE, + current_checksum); + + if (ret) { + printk(KERN_INFO "Digest failed. 
Returned %d.\n", ret); + return; + } + + if (memcmp(current_checksum, (char *) this_checksum, + CHECKSUM_SIZE)) { + SetPageResave(pfn_to_page(pfn)); + toi_num_resaved++; + if (test_action_state(TOI_ABORT_ON_RESAVE_NEEDED)) + set_abort_result(TOI_RESAVE_NEEDED); + } + + index++; + } +} + +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&toi_checksum_ops.enabled, 0, 1, 0) + }, + + { TOI_ATTR("abort_if_resave_needed", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_ABORT_ON_RESAVE_NEEDED, 0) + } +}; + +/* + * Ops structure. + */ +static struct toi_module_ops toi_checksum_ops = { + .type = MISC_MODULE, + .name = "checksumming", + .directory = "checksum", + .module = THIS_MODULE, + .initialise = toi_checksum_initialise, + .cleanup = toi_checksum_cleanup, + .print_debug_info = toi_checksum_print_debug_stats, + .save_config_info = toi_checksum_save_config_info, + .load_config_info = toi_checksum_load_config_info, + .memory_needed = toi_checksum_memory_needed, + .storage_needed = toi_checksum_storage_needed, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/* ---- Registration ---- */ +int toi_checksum_init(void) +{ + int result = toi_register_module(&toi_checksum_ops); + return result; +} + +void toi_checksum_exit(void) +{ + toi_unregister_module(&toi_checksum_ops); +} diff --git a/kernel/power/tuxonice_checksum.h b/kernel/power/tuxonice_checksum.h new file mode 100644 index 0000000..81b928d --- /dev/null +++ b/kernel/power/tuxonice_checksum.h @@ -0,0 +1,32 @@ +/* + * kernel/power/tuxonice_checksum.h + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * Copyright (C) 2006 Red Hat, inc. + * + * This file is released under the GPLv2. + * + * This file contains data checksum routines for TuxOnIce, + * using cryptoapi. They are used to locate any modifications + * made to pageset 2 while we're saving it. + */ + +#if defined(CONFIG_TOI_CHECKSUM) +extern int toi_checksum_init(void); +extern void toi_checksum_exit(void); +void check_checksums(void); +int allocate_checksum_pages(void); +void free_checksum_pages(void); +char *tuxonice_get_next_checksum(void); +int tuxonice_calc_checksum(struct page *page, char *checksum_locn); +#else +static inline int toi_checksum_init(void) { return 0; } +static inline void toi_checksum_exit(void) { } +static inline void check_checksums(void) { }; +static inline int allocate_checksum_pages(void) { return 0; }; +static inline void free_checksum_pages(void) { }; +static inline char *tuxonice_get_next_checksum(void) { return NULL; }; +static inline int tuxonice_calc_checksum(struct page *page, char *checksum_locn) + { return 0; } +#endif + diff --git a/kernel/power/tuxonice_cluster.c b/kernel/power/tuxonice_cluster.c new file mode 100644 index 0000000..4e74624 --- /dev/null +++ b/kernel/power/tuxonice_cluster.c @@ -0,0 +1,1086 @@ +/* + * kernel/power/tuxonice_cluster.c + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * This file contains routines for cluster hibernation support. + * + * Based on ip autoconfiguration code in net/ipv4/ipconfig.c. + * + * How does it work? + * + * There is no 'master' node that tells everyone else what to do. All nodes + * send messages to the broadcast address/port, maintain a list of peers + * and figure out when to progress to the next step in hibernating or resuming. 
+ * This makes us more fault tolerant when it comes to nodes coming and going
+ * (which may be more of an issue if we're hibernating when power supplies
+ * are unreliable).
+ *
+ * At boot time, we start a ktoiclusterd thread that handles communication
+ * with other nodes. This node maintains a state machine that controls our
+ * progress through hibernating and resuming, keeping us in step with other
+ * nodes. Nodes are identified by their hw address.
+ *
+ * On startup, the node sends CLUSTER_PING on the configured interface's
+ * broadcast address, port $toi_cluster_port (see below) and begins to listen
+ * for other broadcast messages. CLUSTER_PING messages are repeated at
+ * intervals of 5 minutes, with a random offset to spread traffic out.
+ *
+ * A hibernation cycle is initiated from any node via
+ *
+ * echo > /sys/power/tuxonice/do_hibernate
+ *
+ * and (possibly) the hibernate script. At each step of the process, the node
+ * completes its work, and waits for all other nodes to signal completion of
+ * their work (or timeout) before progressing to the next step.
+ *
+ * Request/state	Action before reply	Possible reply	Next state
+ * HIBERNATE		capable, pre-script	HIBERNATE|ACK	NODE_PREP
+ *						HIBERNATE|NACK	INIT_0
+ *
+ * PREP			prepare_image		PREP|ACK	IMAGE_WRITE
+ *						PREP|NACK	INIT_0
+ *						ABORT		RUNNING
+ *
+ * IO			write image		IO|ACK		power off
+ *						ABORT		POST_RESUME
+ *
+ * (Boot time)		check for image		IMAGE|ACK	RESUME_PREP
+ *							(Note 1)
+ *						IMAGE|NACK	(Note 2)
+ *
+ * PREP			prepare read image	PREP|ACK	IMAGE_READ
+ *						PREP|NACK	(As NACK_IMAGE)
+ *
+ * IO			read image		IO|ACK		POST_RESUME
+ *
+ * POST_RESUME		thaw, post-script			RUNNING
+ *
+ * INIT_0		init 0
+ *
+ * Other messages:
+ *
+ * - PING: Request for all other live nodes to send a PONG. Used at startup to
+ *   announce presence, when a node is suspected dead and periodically, in case
+ *   segments of the network are [un]plugged.
+ *
+ * - PONG: Response to a PING.
+ *
+ * - ABORT: Request to cancel writing an image.
+ *
+ * - BYE: Notification that this node is shutting down.
+ *
+ * Note 1: Repeated at 3s intervals until we continue to boot/resume, so that
+ * nodes which are slower to start up can get state synchronised. If a node
+ * starting up sees other nodes sending RESUME_PREP or IMAGE_READ, it may send
+ * ACK_IMAGE and they will wait for it to catch up. If it sees ACK_READ, it
+ * must invalidate its image (if any) and boot normally.
+ *
+ * Note 2: May occur when one node lost power or powered off while others
+ * hibernated. This node waits for others to complete resuming (ACK_READ)
+ * before completing its boot, so that it appears as a failed node restarting.
+ *
+ * If any node has an image, then it also has a list of nodes that hibernated
+ * in synchronisation with it. The node will wait for other nodes to appear
+ * or timeout before beginning its restoration.
+ *
+ * If a node has no image, it needs to wait, in case other nodes which do have
+ * an image are going to resume, but are taking longer to announce their
+ * presence. For this reason, the user can specify a timeout value and a number
+ * of nodes detected before we just continue. (We might want to assume in a
+ * cluster of, say, 15 nodes, if 8 others have booted without finding an image,
+ * the remaining nodes will too. This might help in situations where some nodes
+ * are much slower to boot, or more subject to hardware failures or such like).
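+ *
+ * For illustration, a cycle might be set up as follows (sysfs file
+ * names as registered below; the /sys/power/tuxonice prefix follows
+ * the do_hibernate example above):
+ *
+ *	echo eth0 > /sys/power/tuxonice/cluster/interface
+ *	echo 1 > /sys/power/tuxonice/cluster/enabled
+ *	echo > /sys/power/tuxonice/do_hibernate
+ *
+ * State messages are then rebroadcast every cluster_message_timeout
+ * (3 * HZ by default) and each node waits up to continue_delay
+ * (5 * HZ by default) for stragglers before moving on.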
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tuxonice.h" +#include "tuxonice_modules.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_io.h" + +#if 1 +#define PRINTK(a, b...) do { printk(a, ##b); } while (0) +#else +#define PRINTK(a, b...) do { } while (0) +#endif + +static int loopback_mode; +static int num_local_nodes = 1; +#define MAX_LOCAL_NODES 8 +#define SADDR (loopback_mode ? b->sid : h->saddr) + +#define MYNAME "TuxOnIce Clustering" + +enum cluster_message { + MSG_ACK = 1, + MSG_NACK = 2, + MSG_PING = 4, + MSG_ABORT = 8, + MSG_BYE = 16, + MSG_HIBERNATE = 32, + MSG_IMAGE = 64, + MSG_IO = 128, + MSG_RUNNING = 256 +}; + +static char *str_message(int message) +{ + switch (message) { + case 4: + return "Ping"; + case 8: + return "Abort"; + case 9: + return "Abort acked"; + case 10: + return "Abort nacked"; + case 16: + return "Bye"; + case 17: + return "Bye acked"; + case 18: + return "Bye nacked"; + case 32: + return "Hibernate request"; + case 33: + return "Hibernate ack"; + case 34: + return "Hibernate nack"; + case 64: + return "Image exists?"; + case 65: + return "Image does exist"; + case 66: + return "No image here"; + case 128: + return "I/O"; + case 129: + return "I/O okay"; + case 130: + return "I/O failed"; + case 256: + return "Running"; + default: + printk("Unrecognised message %d.\n", message); + return "Unrecognised message (see dmesg)"; + } +} + +#define MSG_ACK_MASK (MSG_ACK | MSG_NACK) +#define MSG_STATE_MASK (~MSG_ACK_MASK) + +struct node_info { + struct list_head member_list; + wait_queue_head_t member_events; + spinlock_t member_list_lock; + spinlock_t receive_lock; + int peer_count, ignored_peer_count; + struct toi_sysfs_data sysfs_data; + enum cluster_message current_message; +}; + +struct node_info node_array[MAX_LOCAL_NODES]; + +struct cluster_member { + __be32 addr; + enum cluster_message message; + struct list_head list; + int ignore; +}; + +#define toi_cluster_port_send 3501 +#define toi_cluster_port_recv 3502 + +static struct net_device *net_dev; +static struct toi_module_ops toi_cluster_ops; + +static int toi_recv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *pt, struct net_device *orig_dev); + +static struct packet_type toi_cluster_packet_type = { + .type = __constant_htons(ETH_P_IP), + .func = toi_recv, +}; + +struct toi_pkt { /* BOOTP packet format */ + struct iphdr iph; /* IP header */ + struct udphdr udph; /* UDP header */ + u8 htype; /* HW address type */ + u8 hlen; /* HW address length */ + __be32 xid; /* Transaction ID */ + __be16 secs; /* Seconds since we started */ + __be16 flags; /* Just what it says */ + u8 hw_addr[16]; /* Sender's HW address */ + u16 message; /* Message */ + unsigned long sid; /* Source ID for loopback testing */ +}; + +static char toi_cluster_iface[IFNAMSIZ] = CONFIG_TOI_DEFAULT_CLUSTER_INTERFACE; + +static int added_pack; + +static int others_have_image, num_others; + +/* Key used to allow multiple clusters on the same lan */ +static char toi_cluster_key[32] = CONFIG_TOI_DEFAULT_CLUSTER_KEY; +static char pre_hibernate_script[255] = + CONFIG_TOI_DEFAULT_CLUSTER_PRE_HIBERNATE; +static char post_hibernate_script[255] = + CONFIG_TOI_DEFAULT_CLUSTER_POST_HIBERNATE; + +/* List of cluster members */ +static unsigned long continue_delay = 5 * HZ; +static unsigned long cluster_message_timeout = 3 * HZ; + +/* === Membership list === */ + +static void print_member_info(int index) +{ + struct cluster_member 
*this; + + printk(KERN_INFO "==> Dumping node %d.\n", index); + + list_for_each_entry(this, &node_array[index].member_list, list) + printk(KERN_INFO "%d.%d.%d.%d last message %s. %s\n", + NIPQUAD(this->addr), + str_message(this->message), + this->ignore ? "(Ignored)" : ""); + printk(KERN_INFO "== Done ==\n"); +} + +static struct cluster_member *__find_member(int index, __be32 addr) +{ + struct cluster_member *this; + + list_for_each_entry(this, &node_array[index].member_list, list) { + if (this->addr != addr) + continue; + + return this; + } + + return NULL; +} + +static void set_ignore(int index, __be32 addr, struct cluster_member *this) +{ + if (this->ignore) { + PRINTK("Node %d already ignoring %d.%d.%d.%d.\n", + index, NIPQUAD(addr)); + return; + } + + PRINTK("Node %d sees node %d.%d.%d.%d now being ignored.\n", + index, NIPQUAD(addr)); + this->ignore = 1; + node_array[index].ignored_peer_count++; +} + +static int __add_update_member(int index, __be32 addr, int message) +{ + struct cluster_member *this; + + this = __find_member(index, addr); + if (this) { + if (this->message != message) { + this->message = message; + if ((message & MSG_NACK) && + (message & (MSG_HIBERNATE | MSG_IMAGE | MSG_IO))) + set_ignore(index, addr, this); + PRINTK("Node %d sees node %d.%d.%d.%d now sending " + "%s.\n", index, NIPQUAD(addr), + str_message(message)); + wake_up(&node_array[index].member_events); + } + return 0; + } + + this = (struct cluster_member *) toi_kzalloc(36, + sizeof(struct cluster_member), GFP_KERNEL); + + if (!this) + return -1; + + this->addr = addr; + this->message = message; + this->ignore = 0; + INIT_LIST_HEAD(&this->list); + + node_array[index].peer_count++; + + PRINTK("Node %d sees node %d.%d.%d.%d sending %s.\n", index, + NIPQUAD(addr), str_message(message)); + + if ((message & MSG_NACK) && + (message & (MSG_HIBERNATE | MSG_IMAGE | MSG_IO))) + set_ignore(index, addr, this); + list_add_tail(&this->list, &node_array[index].member_list); + return 1; +} + +static int add_update_member(int index, __be32 addr, int message) +{ + int result; + unsigned long flags; + spin_lock_irqsave(&node_array[index].member_list_lock, flags); + result = __add_update_member(index, addr, message); + spin_unlock_irqrestore(&node_array[index].member_list_lock, flags); + + print_member_info(index); + + wake_up(&node_array[index].member_events); + + return result; +} + +static void del_member(int index, __be32 addr) +{ + struct cluster_member *this; + unsigned long flags; + + spin_lock_irqsave(&node_array[index].member_list_lock, flags); + this = __find_member(index, addr); + + if (this) { + list_del_init(&this->list); + toi_kfree(36, this); + node_array[index].peer_count--; + } + + spin_unlock_irqrestore(&node_array[index].member_list_lock, flags); +} + +/* === Message transmission === */ + +static void toi_send_if(int message, unsigned long my_id); + +/* + * Process received TOI packet. + */ +static int toi_recv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *pt, struct net_device *orig_dev) +{ + struct toi_pkt *b; + struct iphdr *h; + int len, result, index; + unsigned long addr, message, ack; + + /* Perform verifications before taking the lock. 
*/
+	if (skb->pkt_type == PACKET_OTHERHOST)
+		goto drop;
+
+	if (dev != net_dev)
+		goto drop;
+
+	skb = skb_share_check(skb, GFP_ATOMIC);
+	if (!skb)
+		return NET_RX_DROP;
+
+	if (!pskb_may_pull(skb,
+			sizeof(struct iphdr) +
+			sizeof(struct udphdr)))
+		goto drop;
+
+	b = (struct toi_pkt *)skb_network_header(skb);
+	h = &b->iph;
+
+	if (h->ihl != 5 || h->version != 4 || h->protocol != IPPROTO_UDP)
+		goto drop;
+
+	/* Fragments are not supported */
+	if (h->frag_off & htons(IP_OFFSET | IP_MF)) {
+		if (net_ratelimit())
+			printk(KERN_ERR "TuxOnIce: Ignoring fragmented "
+					"cluster message.\n");
+		goto drop;
+	}
+
+	if (skb->len < ntohs(h->tot_len))
+		goto drop;
+
+	if (ip_fast_csum((char *) h, h->ihl))
+		goto drop;
+
+	if (b->udph.source != htons(toi_cluster_port_send) ||
+	    b->udph.dest != htons(toi_cluster_port_recv))
+		goto drop;
+
+	if (ntohs(h->tot_len) < ntohs(b->udph.len) + sizeof(struct iphdr))
+		goto drop;
+
+	len = ntohs(b->udph.len) - sizeof(struct udphdr);
+
+	/* Ok the front looks good, make sure we can get at the rest. */
+	if (!pskb_may_pull(skb, skb->len))
+		goto drop;
+
+	b = (struct toi_pkt *)skb_network_header(skb);
+	h = &b->iph;
+
+	addr = SADDR;
+	PRINTK(">>> Message %s received from " NIPQUAD_FMT ".\n",
+			str_message(b->message), NIPQUAD(addr));
+
+	message = b->message & MSG_STATE_MASK;
+	ack = b->message & MSG_ACK_MASK;
+
+	for (index = 0; index < num_local_nodes; index++) {
+		int new_message = node_array[index].current_message,
+			old_message = new_message;
+
+		if (index == SADDR || !old_message) {
+			PRINTK("Ignoring node %d (offline or self).\n", index);
+			continue;
+		}
+
+		/* One message at a time, please. */
+		spin_lock(&node_array[index].receive_lock);
+
+		result = add_update_member(index, SADDR, b->message);
+		if (result == -1) {
+			printk(KERN_INFO "Failed to add new cluster member "
+					NIPQUAD_FMT ".\n",
+					NIPQUAD(addr));
+			goto drop_unlock;
+		}
+
+		switch (b->message & MSG_STATE_MASK) {
+		case MSG_PING:
+			break;
+		case MSG_ABORT:
+			break;
+		case MSG_BYE:
+			break;
+		case MSG_HIBERNATE:
+			/* Can I hibernate? */
+			new_message = MSG_HIBERNATE |
+				((index & 1) ? MSG_NACK : MSG_ACK);
+			break;
+		case MSG_IMAGE:
+			/* Can I resume? */
+			new_message = MSG_IMAGE |
+				((index & 1) ? MSG_NACK : MSG_ACK);
+			if (new_message != old_message)
+				printk(KERN_INFO "Setting whether I can resume "
+						"to %d.\n", new_message);
+			break;
+		case MSG_IO:
+			new_message = MSG_IO | MSG_ACK;
+			break;
+		case MSG_RUNNING:
+			break;
+		default:
+			if (net_ratelimit())
+				printk(KERN_ERR "Unrecognised TuxOnIce cluster"
+					" message %d from " NIPQUAD_FMT ".\n",
+					b->message, NIPQUAD(addr));
+		}
+
+		if (old_message != new_message) {
+			node_array[index].current_message = new_message;
+			printk(KERN_INFO ">>> Sending new message for node "
+					"%d.\n", index);
+			toi_send_if(new_message, index);
+		} else if (!ack) {
+			printk(KERN_INFO ">>> Resending message for node %d.\n",
+					index);
+			toi_send_if(new_message, index);
+		}
+drop_unlock:
+		spin_unlock(&node_array[index].receive_lock);
+	}
+
+drop:
+	/* Throw the packet out. */
+	kfree_skb(skb);
+
+	return 0;
+}
+
+/*
+ * Send cluster message to single interface.
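+ *
+ * The datagram is BOOTP-like (struct toi_pkt above): an IP header
+ * addressed to INADDR_BROADCAST, a UDP header from port 3501 to port
+ * 3502, then the sender's hardware address, the message word and the
+ * loopback-testing source id. A typical caller (as used elsewhere in
+ * this file):
+ *
+ *	toi_send_if(MSG_HIBERNATE, 0);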
+ */ +static void toi_send_if(int message, unsigned long my_id) +{ + struct sk_buff *skb; + struct toi_pkt *b; + int hh_len = LL_RESERVED_SPACE(net_dev); + struct iphdr *h; + + /* Allocate packet */ + skb = alloc_skb(sizeof(struct toi_pkt) + hh_len + 15, GFP_KERNEL); + if (!skb) + return; + skb_reserve(skb, hh_len); + b = (struct toi_pkt *) skb_put(skb, sizeof(struct toi_pkt)); + memset(b, 0, sizeof(struct toi_pkt)); + + /* Construct IP header */ + skb_reset_network_header(skb); + h = ip_hdr(skb); + h->version = 4; + h->ihl = 5; + h->tot_len = htons(sizeof(struct toi_pkt)); + h->frag_off = htons(IP_DF); + h->ttl = 64; + h->protocol = IPPROTO_UDP; + h->daddr = htonl(INADDR_BROADCAST); + h->check = ip_fast_csum((unsigned char *) h, h->ihl); + + /* Construct UDP header */ + b->udph.source = htons(toi_cluster_port_send); + b->udph.dest = htons(toi_cluster_port_recv); + b->udph.len = htons(sizeof(struct toi_pkt) - sizeof(struct iphdr)); + /* UDP checksum not calculated -- explicitly allowed in BOOTP RFC */ + + /* Construct message */ + b->message = message; + b->sid = my_id; + b->htype = net_dev->type; /* can cause undefined behavior */ + b->hlen = net_dev->addr_len; + memcpy(b->hw_addr, net_dev->dev_addr, net_dev->addr_len); + b->secs = htons(3); /* 3 seconds */ + + /* Chain packet down the line... */ + skb->dev = net_dev; + skb->protocol = htons(ETH_P_IP); + if ((net_dev->hard_header && + net_dev->hard_header(skb, net_dev, ntohs(skb->protocol), + net_dev->broadcast, net_dev->dev_addr, skb->len) < 0) || + dev_queue_xmit(skb) < 0) + printk(KERN_INFO "E"); +} + +/* ========================================= */ + +/* kTOICluster */ + +static atomic_t num_cluster_threads; +static DECLARE_WAIT_QUEUE_HEAD(clusterd_events); + +static int kTOICluster(void *data) +{ + unsigned long my_id; + + my_id = atomic_add_return(1, &num_cluster_threads) - 1; + node_array[my_id].current_message = (unsigned long) data; + + PRINTK("kTOICluster daemon %lu starting.\n", my_id); + + current->flags |= PF_NOFREEZE; + + while (node_array[my_id].current_message) { + toi_send_if(node_array[my_id].current_message, my_id); + sleep_on_timeout(&clusterd_events, + cluster_message_timeout); + PRINTK("Link state %lu is %d.\n", my_id, + node_array[my_id].current_message); + } + + toi_send_if(MSG_BYE, my_id); + atomic_dec(&num_cluster_threads); + wake_up(&clusterd_events); + + PRINTK("kTOICluster daemon %lu exiting.\n", my_id); + __set_current_state(TASK_RUNNING); + return 0; +} + +static void kill_clusterd(void) +{ + int i; + + for (i = 0; i < num_local_nodes; i++) { + if (node_array[i].current_message) { + PRINTK("Seeking to kill clusterd %d.\n", i); + node_array[i].current_message = 0; + } + } + wait_event(clusterd_events, + !atomic_read(&num_cluster_threads)); + PRINTK("All cluster daemons have exited.\n"); +} + +static int peers_not_in_message(int index, int message, int precise) +{ + struct cluster_member *this; + unsigned long flags; + int result = 0; + + spin_lock_irqsave(&node_array[index].member_list_lock, flags); + list_for_each_entry(this, &node_array[index].member_list, list) { + if (this->ignore) + continue; + + PRINTK("Peer %d.%d.%d.%d sending %s. " + "Seeking %s.\n", + NIPQUAD(this->addr), + str_message(this->message), str_message(message)); + if ((precise ? 
this->message :
+				this->message & MSG_STATE_MASK) !=
+				message)
+			result++;
+	}
+	spin_unlock_irqrestore(&node_array[index].member_list_lock, flags);
+	PRINTK("%d peers not in the sought message.\n", result);
+	return result;
+}
+
+static void reset_ignored(int index)
+{
+	struct cluster_member *this;
+	unsigned long flags;
+
+	spin_lock_irqsave(&node_array[index].member_list_lock, flags);
+	list_for_each_entry(this, &node_array[index].member_list, list)
+		this->ignore = 0;
+	node_array[index].ignored_peer_count = 0;
+	spin_unlock_irqrestore(&node_array[index].member_list_lock, flags);
+}
+
+static int peers_in_message(int index, int message, int precise)
+{
+	return node_array[index].peer_count -
+		node_array[index].ignored_peer_count -
+		peers_not_in_message(index, message, precise);
+}
+
+static int time_to_continue(int index, unsigned long start, int message)
+{
+	int first = peers_not_in_message(index, message, 0);
+	int second = peers_in_message(index, message, 1);
+
+	PRINTK("First part returns %d, second returns %d.\n", first, second);
+
+	if (!first && !second) {
+		PRINTK("All peers answered message %d.\n",
+			message);
+		return 1;
+	}
+
+	if (time_after(jiffies, start + continue_delay)) {
+		PRINTK("Timeout reached.\n");
+		return 1;
+	}
+
+	PRINTK("Not time to continue yet (%lu < %lu).\n", jiffies,
+		start + continue_delay);
+	return 0;
+}
+
+void toi_initiate_cluster_hibernate(void)
+{
+	int result;
+	unsigned long start;
+
+	result = do_toi_step(STEP_HIBERNATE_PREPARE_IMAGE);
+	if (result)
+		return;
+
+	toi_send_if(MSG_HIBERNATE, 0);
+
+	start = jiffies;
+	wait_event(node_array[0].member_events,
+			time_to_continue(0, start, MSG_HIBERNATE));
+
+	if (test_action_state(TOI_FREEZER_TEST)) {
+		toi_send_if(MSG_ABORT, 0);
+
+		start = jiffies;
+		wait_event(node_array[0].member_events,
+				time_to_continue(0, start, MSG_RUNNING));
+
+		do_toi_step(STEP_QUIET_CLEANUP);
+		return;
+	}
+
+	toi_send_if(MSG_IO, 0);
+
+	result = do_toi_step(STEP_HIBERNATE_SAVE_IMAGE);
+	if (result)
+		return;
+
+	/* This code runs at resume time too! */
+	if (toi_in_hibernate)
+		result = do_toi_step(STEP_HIBERNATE_POWERDOWN);
+}
+EXPORT_SYMBOL_GPL(toi_initiate_cluster_hibernate);
+
+/* toi_cluster_print_debug_stats
+ *
+ * Description:	Print information to be recorded for debugging purposes into a
+ *		buffer.
+ * Arguments:	buffer: Pointer to a buffer into which the debug info will be
+ *		printed.
+ *		size: Size of the buffer.
+ * Returns:	Number of characters written to the buffer.
+ */
+static int toi_cluster_print_debug_stats(char *buffer, int size)
+{
+	int len;
+
+	if (strlen(toi_cluster_iface))
+		len = snprintf_used(buffer, size,
+				"- Cluster interface is '%s'.\n",
+				toi_cluster_iface);
+	else
+		len = snprintf_used(buffer, size,
+				"- Cluster support is disabled.\n");
+	return len;
+}
+
+/* toi_cluster_memory_needed
+ *
+ * Description:	Tell the caller how much memory we need to operate during
+ *		hibernate/resume.
+ * Returns:	Int. Maximum number of bytes of memory required for
+ *		operation.
+ */
+static int toi_cluster_memory_needed(void)
+{
+	return 0;
+}
+
+static int toi_cluster_storage_needed(void)
+{
+	return 1 + strlen(toi_cluster_iface);
+}
+
+/* toi_cluster_save_config_info
+ *
+ * Description:	Save information needed when reloading the image at
+ *		resume time.
+ * Arguments:	Buffer: Pointer to a buffer of size PAGE_SIZE.
+ * Returns:	Number of bytes used for saving our data.
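+ *
+ * Only the interface name and its trailing NUL are stored; with
+ * toi_cluster_iface set to "eth0" (a hypothetical value), five bytes
+ * of the buffer are used.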
+ */ +static int toi_cluster_save_config_info(char *buffer) +{ + strcpy(buffer, toi_cluster_iface); + return strlen(toi_cluster_iface + 1); +} + +/* toi_cluster_load_config_info + * + * Description: Reload information needed for declustering the image at + * resume time. + * Arguments: Buffer: Pointer to the start of the data. + * Size: Number of bytes that were saved. + */ +static void toi_cluster_load_config_info(char *buffer, int size) +{ + strncpy(toi_cluster_iface, buffer, size); + return; +} + +static void cluster_startup(void) +{ + int have_image = do_check_can_resume(), i; + unsigned long start = jiffies, initial_message; + struct task_struct *p; + + initial_message = MSG_IMAGE; + + have_image = 1; + + for (i = 0; i < num_local_nodes; i++) { + PRINTK("Starting ktoiclusterd %d.\n", i); + p = kthread_create(kTOICluster, (void *) initial_message, + "ktoiclusterd/%d", i); + if (IS_ERR(p)) { + printk("Failed to start ktoiclusterd.\n"); + return; + } + + wake_up_process(p); + } + + /* Wait for delay or someone else sending first message */ + wait_event(node_array[0].member_events, time_to_continue(0, start, + MSG_IMAGE)); + + others_have_image = peers_in_message(0, MSG_IMAGE | MSG_ACK, 1); + + printk(KERN_INFO "Continuing. I %shave an image. Peers with image:" + " %d.\n", have_image ? "" : "don't ", others_have_image); + + if (have_image) { + int result; + + /* Start to resume */ + printk(KERN_INFO " === Starting to resume === \n"); + node_array[0].current_message = MSG_IO; + toi_send_if(MSG_IO, 0); + + /* result = do_toi_step(STEP_RESUME_LOAD_PS1); */ + result = 0; + + if (!result) { + /* + * Atomic restore - we'll come back in the hibernation + * path. + */ + + /* result = do_toi_step(STEP_RESUME_DO_RESTORE); */ + result = 0; + + /* do_toi_step(STEP_QUIET_CLEANUP); */ + } + + node_array[0].current_message |= MSG_NACK; + + /* For debugging - disable for real life? */ + wait_event(node_array[0].member_events, + time_to_continue(0, start, MSG_IO)); + } + + if (others_have_image) { + /* Wait for them to resume */ + printk(KERN_INFO "Waiting for other nodes to resume.\n"); + start = jiffies; + wait_event(node_array[0].member_events, + time_to_continue(0, start, MSG_RUNNING)); + if (peers_not_in_message(0, MSG_RUNNING, 0)) + printk(KERN_INFO "Timed out while waiting for other " + "nodes to resume.\n"); + } + + /* Find out whether an image exists here. Send ACK_IMAGE or NACK_IMAGE + * as appropriate. + * + * If we don't have an image: + * - Wait until someone else says they have one, or conditions are met + * for continuing to boot (n machines or t seconds). + * - If anyone has an image, wait for them to resume before continuing + * to boot. + * + * If we have an image: + * - Wait until conditions are met before continuing to resume (n + * machines or t seconds). Send RESUME_PREP and freeze processes. + * NACK_PREP if freezing fails (shouldn't) and follow logic for + * us having no image above. On success, wait for [N]ACK_PREP from + * other machines. Read image (including atomic restore) until done. + * Wait for ACK_READ from others (should never fail). Thaw processes + * and do post-resume. (The section after the atomic restore is done + * via the code for hibernating). + */ + + node_array[0].current_message = MSG_RUNNING; +} + +/* toi_cluster_open_iface + * + * Description: Prepare to use an interface. 
+ */ + +static int toi_cluster_open_iface(void) +{ + struct net_device *dev; + + rtnl_lock(); + + for_each_netdev(dev) { + if (/* dev == &loopback_dev || */ + strcmp(dev->name, toi_cluster_iface)) + continue; + + net_dev = dev; + break; + } + + rtnl_unlock(); + + if (!net_dev) { + printk(KERN_ERR MYNAME ": Device %s not found.\n", + toi_cluster_iface); + return -ENODEV; + } + + dev_add_pack(&toi_cluster_packet_type); + added_pack = 1; + + loopback_mode = (net_dev == &loopback_dev); + num_local_nodes = loopback_mode ? 8 : 1; + + PRINTK("Loopback mode is %s. Number of local nodes is %d.\n", + loopback_mode ? "on" : "off", num_local_nodes); + + cluster_startup(); + return 0; +} + +/* toi_cluster_close_iface + * + * Description: Stop using an interface. + */ + +static int toi_cluster_close_iface(void) +{ + kill_clusterd(); + if (added_pack) { + dev_remove_pack(&toi_cluster_packet_type); + added_pack = 0; + } + return 0; +} + +static void write_side_effect(void) +{ + if (toi_cluster_ops.enabled) { + toi_cluster_open_iface(); + set_toi_state(TOI_CLUSTER_MODE); + } else { + toi_cluster_close_iface(); + clear_toi_state(TOI_CLUSTER_MODE); + } +} + +static void node_write_side_effect(void) +{ +} + +/* + * data for our sysfs entries. + */ +static struct toi_sysfs_data sysfs_params[] = { + { + TOI_ATTR("interface", SYSFS_RW), + SYSFS_STRING(toi_cluster_iface, IFNAMSIZ, 0) + }, + + { + TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&toi_cluster_ops.enabled, 0, 1, 0), + .write_side_effect = write_side_effect, + }, + + { + TOI_ATTR("cluster_name", SYSFS_RW), + SYSFS_STRING(toi_cluster_key, 32, 0) + }, + + { + TOI_ATTR("pre-hibernate-script", SYSFS_RW), + SYSFS_STRING(pre_hibernate_script, 256, 0) + }, + + { + TOI_ATTR("post-hibernate-script", SYSFS_RW), + SYSFS_STRING(post_hibernate_script, 256, 0) + }, + + { + TOI_ATTR("continue_delay", SYSFS_RW), + SYSFS_UL(&continue_delay, HZ / 2, 60 * HZ, 0) + } +}; + +/* + * Ops structure. 
+ */
+
+static struct toi_module_ops toi_cluster_ops = {
+	.type			= FILTER_MODULE,
+	.name			= "Cluster",
+	.directory		= "cluster",
+	.module			= THIS_MODULE,
+	.memory_needed		= toi_cluster_memory_needed,
+	.print_debug_info	= toi_cluster_print_debug_stats,
+	.save_config_info	= toi_cluster_save_config_info,
+	.load_config_info	= toi_cluster_load_config_info,
+	.storage_needed		= toi_cluster_storage_needed,
+
+	.sysfs_data		= sysfs_params,
+	.num_sysfs_entries	= sizeof(sysfs_params) /
+		sizeof(struct toi_sysfs_data),
+};
+
+/* ---- Registration ---- */
+
+#ifdef MODULE
+#define INIT static __init
+#define EXIT static __exit
+#else
+#define INIT
+#define EXIT
+#endif
+
+INIT int toi_cluster_init(void)
+{
+	int temp = toi_register_module(&toi_cluster_ops), i;
+	struct kobject *kobj = toi_cluster_ops.dir_kobj;
+
+	for (i = 0; i < MAX_LOCAL_NODES; i++) {
+		node_array[i].current_message = 0;
+		INIT_LIST_HEAD(&node_array[i].member_list);
+		init_waitqueue_head(&node_array[i].member_events);
+		spin_lock_init(&node_array[i].member_list_lock);
+		spin_lock_init(&node_array[i].receive_lock);
+
+		/* Set up sysfs entry */
+		node_array[i].sysfs_data.attr.name =
+			toi_kzalloc(8, 8, GFP_KERNEL); /* fits "node_%d" */
+		sprintf((char *) node_array[i].sysfs_data.attr.name, "node_%d",
+				i);
+		node_array[i].sysfs_data.attr.mode = SYSFS_RW;
+		node_array[i].sysfs_data.type = TOI_SYSFS_DATA_INTEGER;
+		node_array[i].sysfs_data.flags = 0;
+		node_array[i].sysfs_data.data.integer.variable =
+			&node_array[i].current_message;
+		node_array[i].sysfs_data.data.integer.minimum = 0;
+		node_array[i].sysfs_data.data.integer.maximum = INT_MAX;
+		node_array[i].sysfs_data.write_side_effect =
+			node_write_side_effect;
+		toi_register_sysfs_file(kobj, &node_array[i].sysfs_data);
+	}
+
+	toi_cluster_ops.enabled = (strlen(toi_cluster_iface) > 0);
+
+	if (toi_cluster_ops.enabled)
+		toi_cluster_open_iface();
+
+	return temp;
+}
+
+EXIT void toi_cluster_exit(void)
+{
+	int i;
+	toi_cluster_close_iface();
+
+	for (i = 0; i < MAX_LOCAL_NODES; i++)
+		toi_unregister_sysfs_file(toi_cluster_ops.dir_kobj,
+				&node_array[i].sysfs_data);
+	toi_unregister_module(&toi_cluster_ops);
+}
+
+static int __init toi_cluster_iface_setup(char *iface)
+{
+	toi_cluster_ops.enabled = (*iface &&
+			strcmp(iface, "off"));
+
+	if (toi_cluster_ops.enabled)
+		strlcpy(toi_cluster_iface, iface, IFNAMSIZ);
+
+	return 1;
+}
+
+__setup("toi_cluster=", toi_cluster_iface_setup);
+
+#ifdef MODULE
+MODULE_LICENSE("GPL");
+module_init(toi_cluster_init);
+module_exit(toi_cluster_exit);
+MODULE_AUTHOR("Nigel Cunningham");
+MODULE_DESCRIPTION("Cluster Support for TuxOnIce");
+#endif
diff --git a/kernel/power/tuxonice_cluster.h b/kernel/power/tuxonice_cluster.h
new file mode 100644
index 0000000..cd9ee3a
--- /dev/null
+++ b/kernel/power/tuxonice_cluster.h
@@ -0,0 +1,19 @@
+/*
+ * kernel/power/tuxonice_cluster.h
+ *
+ * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net)
+ * Copyright (C) 2006 Red Hat, inc.
+ *
+ * This file is released under the GPLv2.
+ */ + +#ifdef CONFIG_TOI_CLUSTER +extern int toi_cluster_init(void); +extern void toi_cluster_exit(void); +extern void toi_initiate_cluster_hibernate(void); +#else +static inline int toi_cluster_init(void) { return 0; } +static inline void toi_cluster_exit(void) { } +static inline void toi_initiate_cluster_hibernate(void) { } +#endif + diff --git a/kernel/power/tuxonice_compress.c b/kernel/power/tuxonice_compress.c new file mode 100644 index 0000000..2ac21ee --- /dev/null +++ b/kernel/power/tuxonice_compress.c @@ -0,0 +1,434 @@ +/* + * kernel/power/compression.c + * + * Copyright (C) 2003-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * This file contains data compression routines for TuxOnIce, + * using cryptoapi. + */ + +#include +#include +#include +#include +#include + +#include "tuxonice_builtin.h" +#include "tuxonice.h" +#include "tuxonice_modules.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_io.h" +#include "tuxonice_ui.h" +#include "tuxonice_alloc.h" + +static int toi_expected_compression; + +static struct toi_module_ops toi_compression_ops; +static struct toi_module_ops *next_driver; + +static char toi_compressor_name[32] = "lzf"; + +static DEFINE_MUTEX(stats_lock); + +struct cpu_context { + u8 *page_buffer; + struct crypto_comp *transform; + unsigned int len; + char *buffer_start; +}; + +static DEFINE_PER_CPU(struct cpu_context, contexts); + +static int toi_compress_prepare_result; + +/* + * toi_compress_cleanup + * + * Frees memory allocated for our labours. + */ +static void toi_compress_cleanup(int toi_or_resume) +{ + int cpu; + + if (!toi_or_resume) + return; + + for_each_online_cpu(cpu) { + struct cpu_context *this = &per_cpu(contexts, cpu); + if (this->transform) { + crypto_free_comp(this->transform); + this->transform = NULL; + } + + if (this->page_buffer) + toi_free_page(16, (unsigned long) this->page_buffer); + + this->page_buffer = NULL; + } +} + +/* + * toi_crypto_prepare + * + * Prepare to do some work by allocating buffers and transforms. 
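+ *
+ * One compression transform and one page-sized bounce buffer are
+ * allocated per online cpu, so writes can be compressed concurrently.
+ * Sketch of the per-cpu setup performed below, using the default "lzf"
+ * compressor name:
+ *
+ *	struct cpu_context *this = &per_cpu(contexts, cpu);
+ *
+ *	this->transform = crypto_alloc_comp(toi_compressor_name, 0, 0);
+ *	this->page_buffer =
+ *		(char *) toi_get_zeroed_page(16, TOI_ATOMIC_GFP);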
+ */ +static int toi_compress_crypto_prepare(void) +{ + int cpu; + + if (!*toi_compressor_name) { + printk(KERN_INFO "TuxOnIce: Compression enabled but no " + "compressor name set.\n"); + return 1; + } + + for_each_online_cpu(cpu) { + struct cpu_context *this = &per_cpu(contexts, cpu); + this->transform = crypto_alloc_comp(toi_compressor_name, 0, 0); + if (IS_ERR(this->transform)) { + printk(KERN_INFO "TuxOnIce: Failed to initialise the " + "%s compression transform.\n", + toi_compressor_name); + this->transform = NULL; + return 1; + } + + this->page_buffer = + (char *) toi_get_zeroed_page(16, TOI_ATOMIC_GFP); + + if (!this->page_buffer) { + printk(KERN_ERR + "Failed to allocate a page buffer for TuxOnIce " + "encryption driver.\n"); + return -ENOMEM; + } + } + + return 0; +} + +/* + * toi_compress_init + */ + +static int toi_compress_init(int toi_or_resume) +{ + if (!toi_or_resume) + return 0; + + toi_compress_bytes_in = toi_compress_bytes_out = 0; + + next_driver = toi_get_next_filter(&toi_compression_ops); + + if (!next_driver) + return -ECHILD; + + toi_compress_prepare_result = toi_compress_crypto_prepare(); + + return 0; +} + +/* + * toi_compress_rw_init() + */ + +int toi_compress_rw_init(int rw, int stream_number) +{ + if (toi_compress_prepare_result) { + printk("Failed to initialise compression algorithm.\n"); + if (rw == READ) + return -ENODEV; + else + toi_compression_ops.enabled = 0; + } + + return 0; +} + +/* + * toi_compress_write_page() + * + * Compress a page of data, buffering output and passing on filled + * pages to the next module in the pipeline. + * + * Buffer_page: Pointer to a buffer of size PAGE_SIZE, containing + * data to be compressed. + * + * Returns: 0 on success. Otherwise the error is that returned by later + * modules, -ECHILD if we have a broken pipeline or -EIO if + * zlib errs. + */ +static int toi_compress_write_page(unsigned long index, + struct page *buffer_page, unsigned int buf_size) +{ + int ret, cpu = smp_processor_id(); + struct cpu_context *ctx = &per_cpu(contexts, cpu); + + if (!ctx->transform) + return next_driver->write_page(index, buffer_page, buf_size); + + ctx->buffer_start = kmap(buffer_page); + + ctx->len = buf_size; + + ret = crypto_comp_compress(ctx->transform, + ctx->buffer_start, buf_size, + ctx->page_buffer, &ctx->len); + + kunmap(buffer_page); + + if (ret) { + printk(KERN_INFO "Compression failed.\n"); + goto failure; + } + + mutex_lock(&stats_lock); + toi_compress_bytes_in += buf_size; + toi_compress_bytes_out += ctx->len; + mutex_unlock(&stats_lock); + + if (ctx->len < buf_size) /* some compression */ + ret = next_driver->write_page(index, + virt_to_page(ctx->page_buffer), + ctx->len); + else + ret = next_driver->write_page(index, buffer_page, buf_size); + +failure: + return ret; +} + +/* + * toi_compress_read_page() + * @buffer_page: struct page *. Pointer to a buffer of size PAGE_SIZE. + * + * Retrieve data from later modules and decompress it until the input buffer + * is filled. + * Zero if successful. Error condition from me or from downstream on failure. + */ +static int toi_compress_read_page(unsigned long *index, + struct page *buffer_page, unsigned int *buf_size) +{ + int ret, cpu = smp_processor_id(); + unsigned int len; + unsigned int outlen = PAGE_SIZE; + char *buffer_start; + struct cpu_context *ctx = &per_cpu(contexts, cpu); + + if (!ctx->transform) + return next_driver->read_page(index, buffer_page, buf_size); + + /* + * All our reads must be synchronous - we can't decompress + * data that hasn't been read yet. 
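+	 *
+	 * A page whose compressed form would have been PAGE_SIZE or
+	 * larger was stored raw by toi_compress_write_page, so a
+	 * full-length read is returned to the caller untouched.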
+ */ + + *buf_size = PAGE_SIZE; + + ret = next_driver->read_page(index, buffer_page, &len); + + /* Error or uncompressed data */ + if (ret || len == PAGE_SIZE) + return ret; + + buffer_start = kmap(buffer_page); + memcpy(ctx->page_buffer, buffer_start, len); + ret = crypto_comp_decompress( + ctx->transform, + ctx->page_buffer, + len, buffer_start, &outlen); + if (ret) + abort_hibernate(TOI_FAILED_IO, + "Compress_read returned %d.\n", ret); + else if (outlen != PAGE_SIZE) { + abort_hibernate(TOI_FAILED_IO, + "Decompression yielded %d bytes instead of %ld.\n", + outlen, PAGE_SIZE); + ret = -EIO; + *buf_size = outlen; + } + kunmap(buffer_page); + return ret; +} + +/* + * toi_compress_print_debug_stats + * @buffer: Pointer to a buffer into which the debug info will be printed. + * @size: Size of the buffer. + * + * Print information to be recorded for debugging purposes into a buffer. + * Returns: Number of characters written to the buffer. + */ + +static int toi_compress_print_debug_stats(char *buffer, int size) +{ + unsigned long pages_in = toi_compress_bytes_in >> PAGE_SHIFT, + pages_out = toi_compress_bytes_out >> PAGE_SHIFT; + int len; + + /* Output the compression ratio achieved. */ + if (*toi_compressor_name) + len = snprintf_used(buffer, size, "- Compressor is '%s'.\n", + toi_compressor_name); + else + len = snprintf_used(buffer, size, "- Compressor is not set.\n"); + + if (pages_in) + len += snprintf_used(buffer+len, size - len, + " Compressed %lu bytes into %lu (%d percent compression).\n", + toi_compress_bytes_in, + toi_compress_bytes_out, + (pages_in - pages_out) * 100 / pages_in); + return len; +} + +/* + * toi_compress_compression_memory_needed + * + * Tell the caller how much memory we need to operate during hibernate/resume. + * Returns: Unsigned long. Maximum number of bytes of memory required for + * operation. + */ +static int toi_compress_memory_needed(void) +{ + return 2 * PAGE_SIZE; +} + +static int toi_compress_storage_needed(void) +{ + return 4 * sizeof(unsigned long) + strlen(toi_compressor_name) + 1; +} + +/* + * toi_compress_save_config_info + * @buffer: Pointer to a buffer of size PAGE_SIZE. + * + * Save informaton needed when reloading the image at resume time. + * Returns: Number of bytes used for saving our data. + */ +static int toi_compress_save_config_info(char *buffer) +{ + int namelen = strlen(toi_compressor_name) + 1; + int total_len; + + *((unsigned long *) buffer) = toi_compress_bytes_in; + *((unsigned long *) (buffer + 1 * sizeof(unsigned long))) = + toi_compress_bytes_out; + *((unsigned long *) (buffer + 2 * sizeof(unsigned long))) = + toi_expected_compression; + *((unsigned long *) (buffer + 3 * sizeof(unsigned long))) = namelen; + strncpy(buffer + 4 * sizeof(unsigned long), toi_compressor_name, + namelen); + total_len = 4 * sizeof(unsigned long) + namelen; + return total_len; +} + +/* toi_compress_load_config_info + * @buffer: Pointer to the start of the data. + * @size: Number of bytes that were saved. + * + * Description: Reload information needed for decompressing the image at + * resume time. 
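+ *
+ * The layout matches toi_compress_save_config_info above:
+ *
+ *	[bytes_in][bytes_out][expected_compression][namelen][name ...]
+ *
+ * with the first four fields stored as unsigned longs.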
+ */ +static void toi_compress_load_config_info(char *buffer, int size) +{ + int namelen; + + toi_compress_bytes_in = *((unsigned long *) buffer); + toi_compress_bytes_out = *((unsigned long *) (buffer + 1 * + sizeof(unsigned long))); + toi_expected_compression = *((unsigned long *) (buffer + 2 * + sizeof(unsigned long))); + namelen = *((unsigned long *) (buffer + 3 * sizeof(unsigned long))); + strncpy(toi_compressor_name, buffer + 4 * sizeof(unsigned long), + namelen); + return; +} + +/* + * toi_expected_compression_ratio + * + * Description: Returns the expected ratio between data passed into this module + * and the amount of data output when writing. + * Returns: 100 if the module is disabled. Otherwise the value set by the + * user via our sysfs entry. + */ + +static int toi_compress_expected_ratio(void) +{ + if (!toi_compression_ops.enabled) + return 100; + else + return 100 - toi_expected_compression; +} + +/* + * data for our sysfs entries. + */ +static struct toi_sysfs_data sysfs_params[] = { + { + TOI_ATTR("expected_compression", SYSFS_RW), + SYSFS_INT(&toi_expected_compression, 0, 99, 0) + }, + + { + TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&toi_compression_ops.enabled, 0, 1, 0) + }, + + { + TOI_ATTR("algorithm", SYSFS_RW), + SYSFS_STRING(toi_compressor_name, 31, 0) + } +}; + +/* + * Ops structure. + */ +static struct toi_module_ops toi_compression_ops = { + .type = FILTER_MODULE, + .name = "compression", + .directory = "compression", + .module = THIS_MODULE, + .initialise = toi_compress_init, + .cleanup = toi_compress_cleanup, + .memory_needed = toi_compress_memory_needed, + .print_debug_info = toi_compress_print_debug_stats, + .save_config_info = toi_compress_save_config_info, + .load_config_info = toi_compress_load_config_info, + .storage_needed = toi_compress_storage_needed, + .expected_compression = toi_compress_expected_ratio, + + .rw_init = toi_compress_rw_init, + + .write_page = toi_compress_write_page, + .read_page = toi_compress_read_page, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/* ---- Registration ---- */ + +static __init int toi_compress_load(void) +{ + return toi_register_module(&toi_compression_ops); +} + +#ifdef MODULE +static __exit void toi_compress_unload(void) +{ + toi_unregister_module(&toi_compression_ops); +} + +module_init(toi_compress_load); +module_exit(toi_compress_unload); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("Compression Support for TuxOnIce"); +#else +late_initcall(toi_compress_load); +#endif diff --git a/kernel/power/tuxonice_extent.c b/kernel/power/tuxonice_extent.c new file mode 100644 index 0000000..45af368 --- /dev/null +++ b/kernel/power/tuxonice_extent.c @@ -0,0 +1,312 @@ +/* + * kernel/power/tuxonice_extent.c + * + * Copyright (C) 2003-2007 Nigel Cunningham (nigel at tuxonice net) + * + * Distributed under GPLv2. + * + * These functions encapsulate the manipulation of storage metadata. For + * pageflags, we use dynamically allocated bitmaps. + */ + +#include +#include +#include "tuxonice_modules.h" +#include "tuxonice_extent.h" +#include "tuxonice_alloc.h" +#include "tuxonice_ui.h" +#include "tuxonice.h" + +/* toi_get_extent + * + * Returns a free extent. May fail, returning NULL instead. 
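+ *
+ * An extent is an inclusive [minimum, maximum] range kept in a sorted,
+ * singly linked chain. For instance, storage blocks 10-19 and 30-34
+ * (hypothetical values) would be recorded as two nodes rather than
+ * fifteen individual entries:
+ *
+ *	{ .minimum = 10, .maximum = 19, .next = &second }
+ *	{ .minimum = 30, .maximum = 34, .next = NULL }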
+ */ +static struct extent *toi_get_extent(void) +{ + struct extent *result; + + result = toi_kzalloc(2, sizeof(struct extent), TOI_ATOMIC_GFP); + if (!result) + return NULL; + + result->minimum = result->maximum = 0; + result->next = NULL; + + return result; +} + +/* toi_put_extent_chain. + * + * Frees a whole chain of extents. + */ +void toi_put_extent_chain(struct extent_chain *chain) +{ + struct extent *this; + + this = chain->first; + + while (this) { + struct extent *next = this->next; + toi_kfree(2, this); + chain->num_extents--; + this = next; + } + + chain->first = chain->last_touched = NULL; + chain->size = 0; +} + +/* + * toi_add_to_extent_chain + * + * Add an extent to an existing chain. + */ +int toi_add_to_extent_chain(struct extent_chain *chain, + unsigned long minimum, unsigned long maximum) +{ + struct extent *new_extent = NULL, *start_at; + + /* Find the right place in the chain */ + start_at = (chain->last_touched && + (chain->last_touched->minimum < minimum)) ? + chain->last_touched : NULL; + + if (!start_at && chain->first && chain->first->minimum < minimum) + start_at = chain->first; + + while (start_at && start_at->next && start_at->next->minimum < minimum) + start_at = start_at->next; + + if (start_at && start_at->maximum == (minimum - 1)) { + start_at->maximum = maximum; + + /* Merge with the following one? */ + if (start_at->next && + start_at->maximum + 1 == start_at->next->minimum) { + struct extent *to_free = start_at->next; + start_at->maximum = start_at->next->maximum; + start_at->next = start_at->next->next; + chain->num_extents--; + toi_kfree(2, to_free); + } + + chain->last_touched = start_at; + chain->size += (maximum - minimum + 1); + + return 0; + } + + new_extent = toi_get_extent(); + if (!new_extent) { + printk(KERN_INFO "Error unable to append a new extent to the " + "chain.\n"); + return 2; + } + + chain->num_extents++; + chain->size += (maximum - minimum + 1); + new_extent->minimum = minimum; + new_extent->maximum = maximum; + new_extent->next = NULL; + + chain->last_touched = new_extent; + + if (start_at) { + struct extent *next = start_at->next; + start_at->next = new_extent; + new_extent->next = next; + } else { + if (chain->first) + new_extent->next = chain->first; + chain->first = new_extent; + } + + return 0; +} + +/* toi_serialise_extent_chain + * + * Write a chain in the image. + */ +int toi_serialise_extent_chain(struct toi_module_ops *owner, + struct extent_chain *chain) +{ + struct extent *this; + int ret, i = 0; + + ret = toiActiveAllocator->rw_header_chunk(WRITE, owner, (char *) chain, + 2 * sizeof(int)); + if (ret) + return ret; + + this = chain->first; + while (this) { + ret = toiActiveAllocator->rw_header_chunk(WRITE, owner, + (char *) this, 2 * sizeof(unsigned long)); + if (ret) + return ret; + this = this->next; + i++; + } + + if (i != chain->num_extents) { + printk(KERN_EMERG "Saved %d extents but chain metadata says " + "there should be %d.\n", i, chain->num_extents); + return 1; + } + + return ret; +} + +/* toi_load_extent_chain + * + * Read back a chain saved in the image. 
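+ *
+ * For illustration, a chain built with
+ *
+ *	struct extent_chain chain = { 0 };
+ *
+ *	toi_add_to_extent_chain(&chain, 5, 9);
+ *	toi_add_to_extent_chain(&chain, 10, 14);
+ *
+ * ends up holding one merged extent, so toi_serialise_extent_chain above
+ * writes the two ints { size = 10, num_extents = 1 } followed by the
+ * pair { 5, 14 }, and this routine rebuilds the same chain from that
+ * data.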
+ */
+int toi_load_extent_chain(struct extent_chain *chain)
+{
+	struct extent *this, *last = NULL;
+	int i, ret;
+
+	ret = toiActiveAllocator->rw_header_chunk(READ, NULL, (char *) chain,
+			2 * sizeof(int));
+	if (ret) {
+		printk(KERN_INFO "Failed to read size of extent chain.\n");
+		return 1;
+	}
+
+	for (i = 0; i < chain->num_extents; i++) {
+		this = toi_kzalloc(3, sizeof(struct extent), TOI_ATOMIC_GFP);
+		if (!this) {
+			printk(KERN_INFO "Failed to allocate a new extent.\n");
+			return -ENOMEM;
+		}
+		this->next = NULL;
+		ret = toiActiveAllocator->rw_header_chunk(READ, NULL,
+				(char *) this, 2 * sizeof(unsigned long));
+		if (ret) {
+			printk(KERN_INFO "Failed to read an extent.\n");
+			return 1;
+		}
+		if (last)
+			last->next = this;
+		else
+			chain->first = this;
+		last = this;
+	}
+	return 0;
+}
+
+/* toi_extent_state_next
+ *
+ * Given a state, progress to the next valid entry. We may begin in an
+ * invalid state, as we do when invoked after extent_state_goto_start below.
+ *
+ * When using compression and expected_compression > 0, we let the image size
+ * be larger than storage, so we can validly run out of data to return.
+ */
+unsigned long toi_extent_state_next(struct extent_iterate_state *state)
+{
+	if (state->current_chain == state->num_chains)
+		return 0;
+
+	if (state->current_extent) {
+		if (state->current_offset == state->current_extent->maximum) {
+			if (state->current_extent->next) {
+				state->current_extent =
+					state->current_extent->next;
+				state->current_offset =
+					state->current_extent->minimum;
+			} else {
+				state->current_extent = NULL;
+				state->current_offset = 0;
+			}
+		} else
+			state->current_offset++;
+	}
+
+	while (!state->current_extent) {
+		int chain_num = ++(state->current_chain);
+
+		if (chain_num == state->num_chains)
+			return 0;
+
+		state->current_extent = (state->chains + chain_num)->first;
+
+		if (!state->current_extent)
+			continue;
+
+		state->current_offset = state->current_extent->minimum;
+	}
+
+	return state->current_offset;
+}
+
+/* toi_extent_state_goto_start
+ *
+ * Find the first valid value in a group of chains.
+ */
+void toi_extent_state_goto_start(struct extent_iterate_state *state)
+{
+	state->current_chain = -1;
+	state->current_extent = NULL;
+	state->current_offset = 0;
+}
+
+/* toi_extent_state_save
+ *
+ * Given a state and a struct extent_iterate_saved_state, save the current
+ * position in a format that can be used with relocated chains (at
+ * resume time).
+ */
+void toi_extent_state_save(struct extent_iterate_state *state,
+		struct extent_iterate_saved_state *saved_state)
+{
+	struct extent *extent;
+
+	saved_state->chain_num = state->current_chain;
+	saved_state->extent_num = 0;
+	saved_state->offset = state->current_offset;
+
+	if (saved_state->chain_num == -1)
+		return;
+
+	extent = (state->chains + state->current_chain)->first;
+
+	while (extent != state->current_extent) {
+		saved_state->extent_num++;
+		extent = extent->next;
+	}
+}
+
+/* toi_extent_state_restore
+ *
+ * Restore the position saved by toi_extent_state_save.
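+ *
+ * For reference, the iteration these save/restore helpers interrupt and
+ * resume looks like the sketch below, where use_position() stands in for
+ * whatever the caller does with each storage position:
+ *
+ *	unsigned long offset;
+ *
+ *	toi_extent_state_goto_start(&state);
+ *	for (offset = toi_extent_state_next(&state);
+ *	     !toi_extent_state_eof(&state);
+ *	     offset = toi_extent_state_next(&state))
+ *		use_position(offset);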
+ */ +void toi_extent_state_restore(struct extent_iterate_state *state, + struct extent_iterate_saved_state *saved_state) +{ + int posn = saved_state->extent_num; + + if (saved_state->chain_num == -1) { + toi_extent_state_goto_start(state); + return; + } + + state->current_chain = saved_state->chain_num; + state->current_extent = (state->chains + state->current_chain)->first; + state->current_offset = saved_state->offset; + + while (posn--) + state->current_extent = state->current_extent->next; +} + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(toi_add_to_extent_chain); +EXPORT_SYMBOL_GPL(toi_put_extent_chain); +EXPORT_SYMBOL_GPL(toi_load_extent_chain); +EXPORT_SYMBOL_GPL(toi_serialise_extent_chain); +EXPORT_SYMBOL_GPL(toi_extent_state_save); +EXPORT_SYMBOL_GPL(toi_extent_state_restore); +EXPORT_SYMBOL_GPL(toi_extent_state_goto_start); +EXPORT_SYMBOL_GPL(toi_extent_state_next); +#endif diff --git a/kernel/power/tuxonice_extent.h b/kernel/power/tuxonice_extent.h new file mode 100644 index 0000000..d7dd07e --- /dev/null +++ b/kernel/power/tuxonice_extent.h @@ -0,0 +1,78 @@ +/* + * kernel/power/tuxonice_extent.h + * + * Copyright (C) 2003-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * It contains declarations related to extents. Extents are + * TuxOnIce's method of storing some of the metadata for the image. + * See tuxonice_extent.c for more info. + * + */ + +#include "tuxonice_modules.h" + +#ifndef EXTENT_H +#define EXTENT_H + +struct extent { + unsigned long minimum, maximum; + struct extent *next; +}; + +struct extent_chain { + int size; /* size of the chain ie sum (max-min+1) */ + int num_extents; + struct extent *first, *last_touched; +}; + +struct extent_iterate_state { + struct extent_chain *chains; + int num_chains; + int current_chain; + struct extent *current_extent; + unsigned long current_offset; +}; + +struct extent_iterate_saved_state { + int chain_num; + int extent_num; + unsigned long offset; +}; + +#define toi_extent_state_eof(state) \ + ((state)->num_chains == (state)->current_chain) + +/* Simplify iterating through all the values in an extent chain */ +#define toi_extent_for_each(extent_chain, extentpointer, value) \ +if ((extent_chain)->first) \ + for ((extentpointer) = (extent_chain)->first, (value) = \ + (extentpointer)->minimum; \ + ((extentpointer) && ((extentpointer)->next || (value) <= \ + (extentpointer)->maximum)); \ + (((value) == (extentpointer)->maximum) ? \ + ((extentpointer) = (extentpointer)->next, (value) = \ + ((extentpointer) ? 
(extentpointer)->minimum : 0)) : \ + (value)++)) + +void toi_put_extent_chain(struct extent_chain *chain); +int toi_add_to_extent_chain(struct extent_chain *chain, + unsigned long minimum, unsigned long maximum); +int toi_serialise_extent_chain(struct toi_module_ops *owner, + struct extent_chain *chain); +int toi_load_extent_chain(struct extent_chain *chain); + +/* swap_entry_to_extent_val & extent_val_to_swap_entry: + * We are putting offset in the low bits so consecutive swap entries + * make consecutive extent values */ +#define swap_entry_to_extent_val(swp_entry) (swp_entry.val) +#define extent_val_to_swap_entry(val) (swp_entry_t) { (val) } + +void toi_extent_state_save(struct extent_iterate_state *state, + struct extent_iterate_saved_state *saved_state); +void toi_extent_state_restore(struct extent_iterate_state *state, + struct extent_iterate_saved_state *saved_state); +void toi_extent_state_goto_start(struct extent_iterate_state *state); +unsigned long toi_extent_state_next(struct extent_iterate_state *state); +#endif diff --git a/kernel/power/tuxonice_file.c b/kernel/power/tuxonice_file.c new file mode 100644 index 0000000..d508b89 --- /dev/null +++ b/kernel/power/tuxonice_file.c @@ -0,0 +1,1104 @@ +/* + * kernel/power/tuxonice_file.c + * + * Copyright (C) 2005-2007 Nigel Cunningham (nigel at tuxonice net) + * + * Distributed under GPLv2. + * + * This file encapsulates functions for usage of a simple file as a + * backing store. It is based upon the swapallocator, and shares the + * same basic working. Here, though, we have nothing to do with + * swapspace, and only one device to worry about. + * + * The user can just + * + * echo TuxOnIce > /path/to/my_file + * + * dd if=/dev/zero bs=1M count= >> /path/to/my_file + * + * and + * + * echo /path/to/my_file > /sys/power/tuxonice/file/target + * + * then put what they find in /sys/power/tuxonice/resume + * as their resume= parameter in lilo.conf (and rerun lilo if using it). + * + * Having done this, they're ready to hibernate and resume. + * + * TODO: + * - File resizing. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tuxonice.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice_ui.h" +#include "tuxonice_extent.h" +#include "tuxonice_io.h" +#include "tuxonice_storage.h" +#include "tuxonice_block_io.h" +#include "tuxonice_alloc.h" + +static struct toi_module_ops toi_fileops; + +/* Details of our target. 
*/ + +char toi_file_target[256]; +static struct inode *target_inode; +static struct file *target_file; +static struct block_device *toi_file_target_bdev; +static dev_t resume_file_dev_t; +static int used_devt; +static int setting_toi_file_target; +static sector_t target_firstblock, target_header_start; +static int target_storage_available; +static int target_claim; + +static char HaveImage[] = "HaveImage\n"; +static char NoImage[] = "TuxOnIce\n"; +#define sig_size (sizeof(HaveImage) + 1) + +struct toi_file_header { + char sig[sig_size]; + int resumed_before; + unsigned long first_header_block; +}; + +/* Header Page Information */ +static int header_pages_allocated; + +/* Main Storage Pages */ +static int main_pages_allocated, main_pages_requested; + +#define target_is_normal_file() (S_ISREG(target_inode->i_mode)) + +static struct toi_bdev_info devinfo; + +/* Extent chain for blocks */ +static struct extent_chain block_chain; + +/* Signature operations */ +enum { + GET_IMAGE_EXISTS, + INVALIDATE, + MARK_RESUME_ATTEMPTED, + UNMARK_RESUME_ATTEMPTED, +}; + +static void set_devinfo(struct block_device *bdev, int target_blkbits) +{ + devinfo.bdev = bdev; + if (!target_blkbits) { + devinfo.bmap_shift = devinfo.blocks_per_page = 0; + } else { + devinfo.bmap_shift = target_blkbits - 9; + devinfo.blocks_per_page = (1 << (PAGE_SHIFT - target_blkbits)); + } +} + +static int adjust_for_extra_pages(int unadjusted) +{ + return (unadjusted << PAGE_SHIFT) / (PAGE_SIZE + sizeof(unsigned long) + + sizeof(int)); +} + +static int toi_file_storage_available(void) +{ + int result = 0; + struct block_device *bdev = toi_file_target_bdev; + + if (!target_inode) + return 0; + + switch (target_inode->i_mode & S_IFMT) { + case S_IFSOCK: + case S_IFCHR: + case S_IFIFO: /* Socket, Char, Fifo */ + return -1; + case S_IFREG: /* Regular file: current size - holes + free + space on part */ + result = target_storage_available; + break; + case S_IFBLK: /* Block device */ + if (!bdev->bd_disk) { + printk(KERN_INFO "bdev->bd_disk null.\n"); + return 0; + } + + result = (bdev->bd_part ? 
+ bdev->bd_part->nr_sects : + bdev->bd_disk->capacity) >> (PAGE_SHIFT - 9); + } + + return adjust_for_extra_pages(result); +} + +static int has_contiguous_blocks(int page_num) +{ + int j; + sector_t last = 0; + + for (j = 0; j < devinfo.blocks_per_page; j++) { + sector_t this = bmap(target_inode, + page_num * devinfo.blocks_per_page + j); + + if (!this || (last && (last + 1) != this)) + break; + + last = this; + } + + return (j == devinfo.blocks_per_page); +} + +static int size_ignoring_ignored_pages(void) +{ + int mappable = 0, i; + + if (!target_is_normal_file()) + return toi_file_storage_available(); + + for (i = 0; i < (target_inode->i_size >> PAGE_SHIFT) ; i++) + if (has_contiguous_blocks(i)) + mappable++; + + return mappable; +} + +static void __populate_block_list(int min, int max) +{ + if (test_action_state(TOI_TEST_BIO)) + printk(KERN_INFO "Adding extent %d-%d.\n", + min << devinfo.bmap_shift, + ((max + 1) << devinfo.bmap_shift) - 1); + + toi_add_to_extent_chain(&block_chain, min, max); +} + +static void populate_block_list(void) +{ + int i; + int extent_min = -1, extent_max = -1, got_header = 0; + + if (block_chain.first) + toi_put_extent_chain(&block_chain); + + if (!target_is_normal_file()) { + if (target_storage_available > 0) + __populate_block_list(devinfo.blocks_per_page, + (target_storage_available + 1) * + devinfo.blocks_per_page - 1); + return; + } + + for (i = 0; i < (target_inode->i_size >> PAGE_SHIFT); i++) { + sector_t new_sector; + + if (!has_contiguous_blocks(i)) + continue; + + new_sector = bmap(target_inode, + (i * devinfo.blocks_per_page)); + + /* + * Ignore the first block in the file. + * It gets the header. + */ + if (new_sector == target_firstblock >> devinfo.bmap_shift) { + got_header = 1; + continue; + } + + /* + * I'd love to be able to fill in holes and resize + * files, but not yet... + */ + + if (new_sector == extent_max + 1) + extent_max += devinfo.blocks_per_page; + else { + if (extent_min > -1) + __populate_block_list(extent_min, + extent_max); + + extent_min = new_sector; + extent_max = extent_min + + devinfo.blocks_per_page - 1; + } + } + + if (extent_min > -1) + __populate_block_list(extent_min, extent_max); +} + +static void toi_file_cleanup(int finishing_cycle) +{ + if (toi_file_target_bdev) { + if (target_claim) { + bd_release(toi_file_target_bdev); + target_claim = 0; + } + + if (used_devt) { + blkdev_put(toi_file_target_bdev); + used_devt = 0; + } + toi_file_target_bdev = NULL; + target_inode = NULL; + set_devinfo(NULL, 0); + target_storage_available = 0; + } + + if (target_file > 0) { + filp_close(target_file, NULL); + target_file = NULL; + } +} + +/* + * reopen_resume_devt + * + * Having opened resume= once, we remember the major and + * minor nodes and use them to reopen the bdev for checking + * whether an image exists (possibly when starting a resume). 
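+ *
+ * (As a worked example of set_devinfo's arithmetic: with PAGE_SIZE 4096
+ * and a filesystem using 1024 byte blocks, i_blkbits is 10, so
+ * bmap_shift = 10 - 9 = 1 and blocks_per_page = 1 << (12 - 10) = 4.
+ * A page of the target file is then only usable for the image when bmap
+ * returns four consecutive blocks for it -- see has_contiguous_blocks
+ * above.)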
+ */ +static void reopen_resume_devt(void) +{ + toi_file_target_bdev = toi_open_by_devnum(resume_file_dev_t, FMODE_READ); + if (IS_ERR(toi_file_target_bdev)) { + printk(KERN_INFO "Got a dev_num (%lx) but failed to open it.\n", + (unsigned long) resume_file_dev_t); + return; + } + target_inode = toi_file_target_bdev->bd_inode; + set_devinfo(toi_file_target_bdev, target_inode->i_blkbits); +} + +static void toi_file_get_target_info(char *target, int get_size, + int resume_param) +{ + if (target_file) + toi_file_cleanup(0); + + if (!target || !strlen(target)) + return; + + target_file = filp_open(target, O_RDWR|O_LARGEFILE, 0); + + if (IS_ERR(target_file) || !target_file) { + + if (!resume_param) { + printk(KERN_INFO "Open file %s returned %p.\n", + target, target_file); + target_file = NULL; + return; + } + + target_file = NULL; + resume_file_dev_t = name_to_dev_t(target); + if (!resume_file_dev_t) { + struct kstat stat; + int error = vfs_stat(target, &stat); + printk(KERN_INFO "Open file %s returned %p and " + "name_to_devt failed.\n", target, + target_file); + if (error) + printk(KERN_INFO "Stating the file also failed." + " Nothing more we can do.\n"); + else + resume_file_dev_t = stat.rdev; + return; + } + + toi_file_target_bdev = toi_open_by_devnum(resume_file_dev_t, + FMODE_READ); + if (IS_ERR(toi_file_target_bdev)) { + printk(KERN_INFO "Got a dev_num (%lx) but failed to " + "open it.\n", + (unsigned long) resume_file_dev_t); + return; + } + used_devt = 1; + target_inode = toi_file_target_bdev->bd_inode; + } else + target_inode = target_file->f_mapping->host; + + if (S_ISLNK(target_inode->i_mode) || S_ISDIR(target_inode->i_mode) || + S_ISSOCK(target_inode->i_mode) || S_ISFIFO(target_inode->i_mode)) { + printk(KERN_INFO "File support works with regular files," + " character files and block devices.\n"); + goto cleanup; + } + + if (!used_devt) { + if (S_ISBLK(target_inode->i_mode)) { + toi_file_target_bdev = I_BDEV(target_inode); + if (!bd_claim(toi_file_target_bdev, &toi_fileops)) + target_claim = 1; + } else + toi_file_target_bdev = target_inode->i_sb->s_bdev; + resume_file_dev_t = toi_file_target_bdev->bd_dev; + } + + set_devinfo(toi_file_target_bdev, target_inode->i_blkbits); + + if (get_size) + target_storage_available = size_ignoring_ignored_pages(); + + if (!resume_param) + target_firstblock = bmap(target_inode, 0) << devinfo.bmap_shift; + + return; +cleanup: + target_inode = NULL; + if (target_file) { + filp_close(target_file, NULL); + target_file = NULL; + } + set_devinfo(NULL, 0); + target_storage_available = 0; +} + +static int parse_signature(struct toi_file_header *header) +{ + int have_image = !memcmp(HaveImage, header->sig, sizeof(HaveImage) - 1); + int no_image_header = !memcmp(NoImage, header->sig, + sizeof(NoImage) - 1); + + if (no_image_header) + return 0; + + if (!have_image) + return -1; + + if (header->resumed_before) + set_toi_state(TOI_RESUMED_BEFORE); + else + clear_toi_state(TOI_RESUMED_BEFORE); + + target_header_start = header->first_header_block; + return 1; +} + +/* prepare_signature */ + +static int prepare_signature(struct toi_file_header *current_header, + unsigned long first_header_block) +{ + strncpy(current_header->sig, HaveImage, sizeof(HaveImage)); + current_header->resumed_before = 0; + current_header->first_header_block = first_header_block; + return 0; +} + +static int toi_file_storage_allocated(void) +{ + if (!target_inode) + return 0; + + if (target_is_normal_file()) + return (int) target_storage_available; + else + return header_pages_allocated + 
		main_pages_requested;
+}
+
+static int toi_file_release_storage(void)
+{
+	if (test_action_state(TOI_KEEP_IMAGE) &&
+	    test_toi_state(TOI_NOW_RESUMING))
+		return 0;
+
+	toi_put_extent_chain(&block_chain);
+
+	header_pages_allocated = 0;
+	main_pages_allocated = 0;
+	main_pages_requested = 0;
+	return 0;
+}
+
+static int __toi_file_allocate_storage(int main_storage_requested,
+		int header_storage);
+
+static int toi_file_allocate_header_space(int space_requested)
+{
+	int i;
+
+	if (!block_chain.first && __toi_file_allocate_storage(
+				main_pages_requested, space_requested)) {
+		printk(KERN_INFO "Failed to allocate space for the header.\n");
+		return -ENOSPC;
+	}
+
+	toi_extent_state_goto_start(&toi_writer_posn);
+	toi_bio_ops.forward_one_page(1);	/* To first page */
+
+	for (i = 0; i < space_requested; i++) {
+		if (toi_bio_ops.forward_one_page(1)) {
+			printk(KERN_INFO "Out of space while seeking to "
+					"allocate header pages.\n");
+			header_pages_allocated = i;
+			return -ENOSPC;
+		}
+	}
+
+	header_pages_allocated = space_requested;
+
+	/* The end of header pages will be the start of pageset 2 */
+	toi_extent_state_save(&toi_writer_posn,
+			&toi_writer_posn_save[2]);
+	return 0;
+}
+
+static int toi_file_allocate_storage(int space_requested)
+{
+	if (__toi_file_allocate_storage(space_requested,
+				header_pages_allocated))
+		return -ENOSPC;
+
+	main_pages_requested = space_requested;
+	return 0;
+}
+
+static int __toi_file_allocate_storage(int main_space_requested,
+		int header_space_requested)
+{
+	int result = 0;
+
+	int extra_pages = DIV_ROUND_UP(main_space_requested *
+			(sizeof(unsigned long) + sizeof(int)), PAGE_SIZE);
+	int pages_to_get = main_space_requested + extra_pages +
+		header_space_requested;
+	int blocks_to_get = pages_to_get - block_chain.size;
+
+	/* Only release_storage reduces the size */
+	if (blocks_to_get < 1)
+		return 0;
+
+	populate_block_list();
+
+	toi_message(TOI_WRITER, TOI_MEDIUM, 0,
+		"Finished with block_chain.size == %d.\n",
+		block_chain.size);
+
+	if (block_chain.size < pages_to_get) {
+		printk(KERN_INFO "Block chain size (%d) < header pages (%d) + "
+			"extra pages (%d) + main pages (%d) (=%d pages).\n",
+			block_chain.size, header_pages_allocated, extra_pages,
+			main_space_requested, pages_to_get);
+		result = -ENOSPC;
+	}
+
+	main_pages_requested = main_space_requested;
+	main_pages_allocated = main_space_requested + extra_pages;
+
+	toi_file_allocate_header_space(header_pages_allocated);
+	return result;
+}
+
+static int toi_file_write_header_init(void)
+{
+	toi_extent_state_goto_start(&toi_writer_posn);
+
+	toi_writer_buffer_posn = 0;
+
+	/* Info needed to bootstrap goes at the start of the header.
+	 * First we save the basic info needed for reading, including the
+	 * number of header pages. Then we save the structs containing data
+	 * needed for reading the header pages back.
+	 * Note that even if header pages take more than one page, when we
+	 * read back the info, we will have restored the location of the
+	 * next header page by the time we go to use it.
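	 *
	 * As a worked example of the metadata overhead computed in
	 * __toi_file_allocate_storage above (assuming 8 byte longs and
	 * 4 byte ints): saving 100000 pages needs
	 * DIV_ROUND_UP(100000 * 12, 4096) = 293 extra pages for the index
	 * data that accompanies the image proper.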
+ */ + + toi_bio_ops.rw_header_chunk(WRITE, &toi_fileops, + (char *) &toi_writer_posn_save, + sizeof(toi_writer_posn_save)); + + toi_bio_ops.rw_header_chunk(WRITE, &toi_fileops, + (char *) &devinfo, sizeof(devinfo)); + + toi_serialise_extent_chain(&toi_fileops, &block_chain); + + return 0; +} + +static int toi_file_write_header_cleanup(void) +{ + struct toi_file_header *header; + + /* Write any unsaved data */ + if (toi_writer_buffer_posn) + toi_bio_ops.write_header_chunk_finish(); + + toi_bio_ops.finish_all_io(); + + toi_extent_state_goto_start(&toi_writer_posn); + toi_bio_ops.forward_one_page(1); + + /* Adjust image header */ + toi_bio_ops.bdev_page_io(READ, toi_file_target_bdev, + target_firstblock, + virt_to_page(toi_writer_buffer)); + + header = (struct toi_file_header *) toi_writer_buffer; + + prepare_signature(header, + toi_writer_posn.current_offset << + devinfo.bmap_shift); + + toi_bio_ops.bdev_page_io(WRITE, toi_file_target_bdev, + target_firstblock, + virt_to_page(toi_writer_buffer)); + + toi_bio_ops.finish_all_io(); + + return 0; +} + +/* HEADER READING */ + +static int file_init(void) +{ + toi_writer_buffer_posn = 0; + + /* Read toi_file configuration */ + toi_bio_ops.bdev_page_io(READ, toi_file_target_bdev, + target_header_start, + virt_to_page((unsigned long) toi_writer_buffer)); + + return 0; +} + +/* + * read_header_init() + * + * Description: + * 1. Attempt to read the device specified with resume=. + * 2. Check the contents of the header for our signature. + * 3. Warn, ignore, reset and/or continue as appropriate. + * 4. If continuing, read the toi_file configuration section + * of the header and set up block device info so we can read + * the rest of the header & image. + * + * Returns: + * May not return if user choose to reboot at a warning. + * -EINVAL if cannot resume at this time. Booting should continue + * normally. 
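+ *
+ * For reference, the first block of the target holds a
+ * struct toi_file_header: the signature ("TuxOnIce\n" when no image is
+ * present, "HaveImage\n" when one is), the resumed_before flag, and
+ * first_header_block, which locates the rest of the header read here.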
+ */ + +static int toi_file_read_header_init(void) +{ + int result; + struct block_device *tmp; + + result = file_init(); + + if (result) { + printk("FileAllocator read header init: Failed to initialise " + "reading the first page of data.\n"); + return result; + } + + memcpy(&toi_writer_posn_save, + toi_writer_buffer + toi_writer_buffer_posn, + sizeof(toi_writer_posn_save)); + + toi_writer_buffer_posn += sizeof(toi_writer_posn_save); + + tmp = devinfo.bdev; + + memcpy(&devinfo, + toi_writer_buffer + toi_writer_buffer_posn, + sizeof(devinfo)); + + devinfo.bdev = tmp; + toi_writer_buffer_posn += sizeof(devinfo); + + toi_bio_ops.read_header_init(); + toi_extent_state_goto_start(&toi_writer_posn); + toi_bio_ops.set_extra_page_forward(); + + return toi_load_extent_chain(&block_chain); +} + +static int toi_file_read_header_cleanup(void) +{ + toi_bio_ops.rw_cleanup(READ); + return 0; +} + +static int toi_file_signature_op(int op) +{ + char *cur; + int result = 0, changed = 0; + struct toi_file_header *header; + + if (toi_file_target_bdev <= 0) + return -1; + + cur = (char *) toi_get_zeroed_page(17, TOI_ATOMIC_GFP); + if (!cur) { + printk("Unable to allocate a page for reading the image " + "signature.\n"); + return -ENOMEM; + } + + toi_bio_ops.bdev_page_io(READ, toi_file_target_bdev, + target_firstblock, + virt_to_page(cur)); + + header = (struct toi_file_header *) cur; + result = parse_signature(header); + + switch (op) { + case INVALIDATE: + if (result == -1) + goto out; + + strcpy(header->sig, NoImage); + header->resumed_before = 0; + result = changed = 1; + break; + case MARK_RESUME_ATTEMPTED: + if (result == 1) { + header->resumed_before = 1; + changed = 1; + } + break; + case UNMARK_RESUME_ATTEMPTED: + if (result == 1) { + header->resumed_before = 0; + changed = 1; + } + break; + } + + if (changed) + toi_bio_ops.bdev_page_io(WRITE, toi_file_target_bdev, + target_firstblock, + virt_to_page(cur)); + +out: + toi_bio_ops.finish_all_io(); + toi_free_page(17, (unsigned long) cur); + return result; +} + +/* Print debug info + * + * Description: + */ + +static int toi_file_print_debug_stats(char *buffer, int size) +{ + int len = 0; + + if (toiActiveAllocator != &toi_fileops) { + len = snprintf_used(buffer, size, + "- FileAllocator inactive.\n"); + return len; + } + + len = snprintf_used(buffer, size, "- FileAllocator active.\n"); + + len += snprintf_used(buffer+len, size-len, " Storage available for " + "image: %ld pages.\n", + toi_file_storage_allocated()); + + return len; +} + +/* + * Storage needed + * + * Returns amount of space in the image header required + * for the toi_file's data. + * + * We ensure the space is allocated, but actually save the + * data from write_header_init and therefore don't also define a + * save_config_info routine. + */ +static int toi_file_storage_needed(void) +{ + return sig_size + strlen(toi_file_target) + 1 + + 3 * sizeof(struct extent_iterate_saved_state) + + sizeof(devinfo) + + sizeof(struct extent_chain) - 2 * sizeof(void *) + + (2 * sizeof(unsigned long) * block_chain.num_extents); +} + +/* + * toi_file_remove_image + * + */ +static int toi_file_remove_image(void) +{ + toi_file_release_storage(); + return toi_file_signature_op(INVALIDATE); +} + +/* + * Image_exists + * + */ + +static int toi_file_image_exists(void) +{ + if (!toi_file_target_bdev) + reopen_resume_devt(); + + return toi_file_signature_op(GET_IMAGE_EXISTS); +} + +/* + * Mark resume attempted. + * + * Record that we tried to resume from this image. 
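+ *
+ * (The signature lifecycle, for reference: the target starts out signed
+ * "TuxOnIce\n", becomes "HaveImage\n" when an image is written, has
+ * resumed_before set once a resume is attempted, and is rewritten to
+ * "TuxOnIce\n" again when the image is invalidated.)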
+ */ + +static void toi_file_mark_resume_attempted(int mark) +{ + toi_file_signature_op(mark ? MARK_RESUME_ATTEMPTED: + UNMARK_RESUME_ATTEMPTED); +} + +static void toi_file_set_resume_param(void) +{ + char *buffer = (char *) toi_get_zeroed_page(18, TOI_ATOMIC_GFP); + char *buffer2 = (char *) toi_get_zeroed_page(19, TOI_ATOMIC_GFP); + unsigned long sector = bmap(target_inode, 0); + int offset = 0; + + if (!buffer || !buffer2) { + if (buffer) + toi_free_page(18, (unsigned long) buffer); + if (buffer2) + toi_free_page(19, (unsigned long) buffer2); + printk("TuxOnIce: Failed to allocate memory while setting " + "resume= parameter.\n"); + return; + } + + if (toi_file_target_bdev) { + set_devinfo(toi_file_target_bdev, target_inode->i_blkbits); + + bdevname(toi_file_target_bdev, buffer2); + offset += snprintf(buffer + offset, PAGE_SIZE - offset, + "/dev/%s", buffer2); + + if (sector) + offset += snprintf(buffer + offset, PAGE_SIZE - offset, + ":0x%lx", sector << devinfo.bmap_shift); + } else + offset += snprintf(buffer + offset, PAGE_SIZE - offset, + "%s is not a valid target.", toi_file_target); + + sprintf(resume_file, "file:%s", buffer); + + toi_free_page(18, (unsigned long) buffer); + toi_free_page(19, (unsigned long) buffer2); + + toi_attempt_to_parse_resume_device(1); +} + +static int __test_toi_file_target(char *target, int resume_time, int quiet) +{ + toi_file_get_target_info(target, 0, resume_time); + if (toi_file_signature_op(GET_IMAGE_EXISTS) > -1) { + if (!quiet) + printk(KERN_INFO "TuxOnIce: FileAllocator: File " + "signature found.\n"); + if (!resume_time) + toi_file_set_resume_param(); + + toi_bio_ops.set_devinfo(&devinfo); + toi_writer_posn.chains = &block_chain; + toi_writer_posn.num_chains = 1; + + if (!resume_time) + set_toi_state(TOI_CAN_HIBERNATE); + return 0; + } + + clear_toi_state(TOI_CAN_HIBERNATE); + + if (quiet) + return 1; + + if (*target) + printk(KERN_INFO "TuxOnIce: FileAllocator: Sorry. No signature " + "found at %s.\n", target); + else + if (!resume_time) + printk(KERN_INFO "TuxOnIce: FileAllocator: Sorry. " + "Target is not set for hibernating.\n"); + + return 1; +} + +static void test_toi_file_target(void) +{ + setting_toi_file_target = 1; + + printk(KERN_INFO "TuxOnIce: Hibernating %sabled.\n", + __test_toi_file_target(toi_file_target, 0, 1) ? + "dis" : "en"); + + setting_toi_file_target = 0; +} + +/* + * Parse Image Location + * + * Attempt to parse a resume= parameter. + * File Allocator accepts: + * resume=file:DEVNAME[:FIRSTBLOCK] + * + * Where: + * DEVNAME is convertable to a dev_t by name_to_dev_t + * FIRSTBLOCK is the location of the first block in the file. + * BLOCKSIZE is the logical blocksize >= SECTOR_SIZE & <= PAGE_SIZE, + * mod SECTOR_SIZE == 0 of the device. + * Data is validated by attempting to read a header from the + * location given. Failure will result in toi_file refusing to + * save an image, and a reboot with correct parameters will be + * necessary. + */ + +static int toi_file_parse_sig_location(char *commandline, + int only_writer, int quiet) +{ + char *thischar, *devstart = NULL, *colon = NULL, *at_symbol = NULL; + int result = -EINVAL, target_blocksize = 0; + + if (strncmp(commandline, "file:", 5)) { + if (!only_writer) + return 1; + } else + commandline += 5; + + /* + * Don't check signature again if we're beginning a cycle. If we already + * did the initialisation successfully, assume we'll be okay when it + * comes to resuming. 
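+ *
+ * An example of an accepted command line (hypothetical values):
+ *
+ *	resume=file:/dev/hda2:0x4d0@4096
+ *
+ * naming the device holding the file, the block on it where the file
+ * starts, and a 4096 byte logical block size.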
+ */ + if (toi_file_target_bdev) + return 0; + + devstart = thischar = commandline; + while ((*thischar != ':') && (*thischar != '@') && + ((thischar - commandline) < 250) && (*thischar)) + thischar++; + + if (*thischar == ':') { + colon = thischar; + *colon = 0; + thischar++; + } + + while ((*thischar != '@') && ((thischar - commandline) < 250) + && (*thischar)) + thischar++; + + if (*thischar == '@') { + at_symbol = thischar; + *at_symbol = 0; + } + + /* + * For the toi_file, you can be able to resume, but not hibernate, + * because the resume= is set correctly, but the toi_file_target + * isn't. + * + * We may have come here as a result of setting resume or + * toi_file_target. We only test the toi_file target in the + * former case (it's already done in the later), and we do it before + * setting the block number ourselves. It will overwrite the values + * given on the command line if we don't. + */ + + if (!setting_toi_file_target) + __test_toi_file_target(toi_file_target, 1, 0); + + if (colon) + target_firstblock = (int) simple_strtoul(colon + 1, NULL, 0); + else + target_firstblock = 0; + + if (at_symbol) { + target_blocksize = (int) simple_strtoul(at_symbol + 1, NULL, 0); + if (target_blocksize & (SECTOR_SIZE - 1)) { + printk(KERN_INFO "FileAllocator: Blocksizes are " + "multiples of %d.\n", SECTOR_SIZE); + result = -EINVAL; + goto out; + } + } + + if (!quiet) + printk(KERN_INFO "TuxOnIce FileAllocator: Testing whether you" + " can resume:\n"); + + toi_file_get_target_info(commandline, 0, 1); + + if (!toi_file_target_bdev || IS_ERR(toi_file_target_bdev)) { + toi_file_target_bdev = NULL; + result = -1; + goto out; + } + + if (target_blocksize) + set_devinfo(toi_file_target_bdev, ffs(target_blocksize)); + + result = __test_toi_file_target(commandline, 1, 0); + +out: + if (result) + clear_toi_state(TOI_CAN_HIBERNATE); + + if (!quiet) + printk(KERN_INFO "Resuming %sabled.\n", result ? "dis" : "en"); + + if (colon) + *colon = ':'; + if (at_symbol) + *at_symbol = '@'; + + return result; +} + +/* toi_file_save_config_info + * + * Description: Save the target's name, not for resume time, but for + * all_settings. + * Arguments: Buffer: Pointer to a buffer of size PAGE_SIZE. + * Returns: Number of bytes used for saving our data. + */ + +static int toi_file_save_config_info(char *buffer) +{ + strcpy(buffer, toi_file_target); + return strlen(toi_file_target) + 1; +} + +/* toi_file_load_config_info + * + * Description: Reload target's name. + * Arguments: Buffer: Pointer to the start of the data. + * Size: Number of bytes that were saved. 
+ */ + +static void toi_file_load_config_info(char *buffer, int size) +{ + strcpy(toi_file_target, buffer); +} + +static int toi_file_initialise(int starting_cycle) +{ + if (starting_cycle) { + if (toiActiveAllocator != &toi_fileops) + return 0; + + if (starting_cycle & SYSFS_HIBERNATE && !*toi_file_target) { + printk(KERN_INFO "FileAllocator is the active writer, " + "but no filename has been set.\n"); + return 1; + } + } + + if (toi_file_target) + toi_file_get_target_info(toi_file_target, starting_cycle, 0); + + if (starting_cycle && (toi_file_image_exists() == -1)) { + printk("%s is does not have a valid signature for " + "hibernating.\n", toi_file_target); + return 1; + } + + return 0; +} + +static struct toi_sysfs_data sysfs_params[] = { + + { + TOI_ATTR("target", SYSFS_RW), + SYSFS_STRING(toi_file_target, 256, SYSFS_NEEDS_SM_FOR_WRITE), + .write_side_effect = test_toi_file_target, + }, + + { + TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&toi_fileops.enabled, 0, 1, 0), + .write_side_effect = attempt_to_parse_resume_device2, + } +}; + +static struct toi_module_ops toi_fileops = { + .type = WRITER_MODULE, + .name = "file storage", + .directory = "file", + .module = THIS_MODULE, + .print_debug_info = toi_file_print_debug_stats, + .save_config_info = toi_file_save_config_info, + .load_config_info = toi_file_load_config_info, + .storage_needed = toi_file_storage_needed, + .initialise = toi_file_initialise, + .cleanup = toi_file_cleanup, + + .storage_available = toi_file_storage_available, + .storage_allocated = toi_file_storage_allocated, + .release_storage = toi_file_release_storage, + .allocate_header_space = toi_file_allocate_header_space, + .allocate_storage = toi_file_allocate_storage, + .image_exists = toi_file_image_exists, + .mark_resume_attempted = toi_file_mark_resume_attempted, + .write_header_init = toi_file_write_header_init, + .write_header_cleanup = toi_file_write_header_cleanup, + .read_header_init = toi_file_read_header_init, + .read_header_cleanup = toi_file_read_header_cleanup, + .remove_image = toi_file_remove_image, + .parse_sig_location = toi_file_parse_sig_location, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/* ---- Registration ---- */ +static __init int toi_file_load(void) +{ + toi_fileops.rw_init = toi_bio_ops.rw_init; + toi_fileops.rw_cleanup = toi_bio_ops.rw_cleanup; + toi_fileops.read_page = toi_bio_ops.read_page; + toi_fileops.write_page = toi_bio_ops.write_page; + toi_fileops.rw_header_chunk = toi_bio_ops.rw_header_chunk; + + return toi_register_module(&toi_fileops); +} + +#ifdef MODULE +static __exit void toi_file_unload(void) +{ + toi_unregister_module(&toi_fileops); +} + +module_init(toi_file_load); +module_exit(toi_file_unload); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("TuxOnIce FileAllocator"); +#else +late_initcall(toi_file_load); +#endif diff --git a/kernel/power/tuxonice_highlevel.c b/kernel/power/tuxonice_highlevel.c new file mode 100644 index 0000000..2059dba --- /dev/null +++ b/kernel/power/tuxonice_highlevel.c @@ -0,0 +1,1314 @@ +/* + * kernel/power/tuxonice_highlevel.c + */ +/** \mainpage TuxOnIce. + * + * TuxOnIce provides support for saving and restoring an image of + * system memory to an arbitrary storage device, either on the local computer, + * or across some network. The support is entirely OS based, so TuxOnIce + * works without requiring BIOS, APM or ACPI support. 
The vast majority of the
+ * code is also architecture independent, so it should be very easy to port
+ * the code to new architectures. TuxOnIce includes support for SMP, 4G HighMem
+ * and preemption. Initramfses and initrds are also supported.
+ *
+ * TuxOnIce uses a modular design, in which the method of storing the image is
+ * completely abstracted from the core code, as are transformations on the data
+ * such as compression and/or encryption (multiple 'modules' can be used to
+ * provide arbitrary combinations of functionality). The user interface is also
+ * modular, so that arbitrarily simple or complex interfaces can be used to
+ * provide anything from debugging information through to eye candy.
+ *
+ * \section Copyright
+ *
+ * TuxOnIce is released under the GPLv2.
+ *
+ * Copyright (C) 1998-2001 Gabor Kuti
+ * Copyright (C) 1998,2001,2002 Pavel Machek
+ * Copyright (C) 2002-2003 Florent Chabaud
+ * Copyright (C) 2002-2007 Nigel Cunningham (nigel at tuxonice net)
+ *
+ * \section Credits
+ *
+ * Nigel would like to thank the following people for their work:
+ *
+ * Bernard Blackham
+ * Web page & Wiki administration, some coding. A person without whom
+ * TuxOnIce would not be where it is.
+ *
+ * Michael Frank
+ * Extensive testing and help with improving stability. I was constantly
+ * amazed by the quality and quantity of Michael's help.
+ *
+ * Pavel Machek
+ * Modifications, defectiveness pointing, being with Gabor at the very + * beginning, suspend to swap space, stop all tasks. Port to 2.4.18-ac and + * 2.5.17. Even though Pavel and I disagree on the direction suspend to + * disk should take, I appreciate the valuable work he did in helping Gabor + * get the concept working. + * + * ..and of course the myriads of TuxOnIce users who have helped diagnose + * and fix bugs, made suggestions on how to improve the code, proofread + * documentation, and donated time and money. + * + * Thanks also to corporate sponsors: + * + * Redhat.Sometime employer from May 2006 (my fault, not Redhat's!). + * + * Cyclades.com. Nigel's employers from Dec 2004 until May 2006, who + * allowed him to work on TuxOnIce and PM related issues on company time. + * + * LinuxFund.org. Sponsored Nigel's work on TuxOnIce for four months Oct + * 2003 to Jan 2004. + * + * LAC Linux. Donated P4 hardware that enabled development and ongoing + * maintenance of SMP and Highmem support. + * + * OSDL. Provided access to various hardware configurations, make + * occasional small donations to the project. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tuxonice_modules.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice_io.h" +#include "tuxonice_ui.h" +#include "tuxonice_power_off.h" +#include "tuxonice_storage.h" +#include "tuxonice_checksum.h" +#include "tuxonice_cluster.h" +#include "tuxonice_builtin.h" +#include "tuxonice_atomic_copy.h" +#include "tuxonice_alloc.h" + +/*! Pageset metadata. */ +struct pagedir pagedir2 = {2}; + +static int get_pmsem = 0, got_pmsem; +static mm_segment_t oldfs; +static atomic_t actions_running; +static int block_dump_save; +static char pre_hibernate_command[256]; +static char post_hibernate_command[256]; + +int toi_fail_num; + +int do_toi_step(int step); + +unsigned long boot_kernel_data_buffer; + +/** + * toi_finish_anything - Cleanup after doing anything. + * + * @toi_or_resume: Whether finishing a cycle or attempt at resuming. + * + * This is our basic clean-up routine, matching start_anything below. We + * call cleanup routines, drop module references and restore process fs and + * cpus allowed masks, together with the global block_dump variable's value. + */ +void toi_finish_anything(int hibernate_or_resume) +{ + if (!atomic_dec_and_test(&actions_running)) + return; + + toi_cleanup_modules(hibernate_or_resume); + toi_put_modules(); + set_fs(oldfs); + if (hibernate_or_resume) { + block_dump = block_dump_save; + set_cpus_allowed(current, CPU_MASK_ALL); + toi_alloc_print_debug_stats(); + + if (hibernate_or_resume == SYSFS_HIBERNATE && + strlen(post_hibernate_command)) + toi_launch_userspace_program(post_hibernate_command, + 0, UMH_WAIT_PROC); + } +} + +/** + * toi_start_anything - Basic initialisation for TuxOnIce. + * + * @toi_or_resume: Whether starting a cycle or attempt at resuming. + * + * Our basic initialisation routine. Take references on modules, use the + * kernel segment, recheck resume= if no active allocator is set, initialise + * modules, save and reset block_dump and ensure we're running on CPU0. 
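+ *
+ * A sketch of the expected pairing with toi_finish_anything above:
+ *
+ *	if (toi_start_anything(SYSFS_HIBERNATING))
+ *		return -EBUSY;
+ *	...do the cycle or resume attempt...
+ *	toi_finish_anything(SYSFS_HIBERNATING);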
+ */ +int toi_start_anything(int hibernate_or_resume) +{ + if (atomic_add_return(1, &actions_running) != 1) { + if (hibernate_or_resume) { + printk(KERN_INFO "Can't start a cycle when actions are " + "already running.\n"); + atomic_dec(&actions_running); + return -EBUSY; + } else + return 0; + } + + oldfs = get_fs(); + set_fs(KERNEL_DS); + + if (hibernate_or_resume == SYSFS_HIBERNATE && + strlen(pre_hibernate_command)) { + int result = toi_launch_userspace_program(pre_hibernate_command, + 0, UMH_WAIT_PROC); + if (result) { + printk("Pre-hibernate command '%s' returned %d. " + "Aborting.\n", pre_hibernate_command, + result); + goto out_err; + } + } + + if (hibernate_or_resume == SYSFS_HIBERNATE) + toi_print_modules(); + + if (toi_get_modules()) { + printk("TuxOnIce: Get modules failed!\n"); + goto out_err; + } + + if (hibernate_or_resume) { + block_dump_save = block_dump; + block_dump = 0; + set_cpus_allowed(current, CPU_MASK_CPU0); + } + + if (toi_initialise_modules_early(hibernate_or_resume)) + goto out_err; + + if (!toiActiveAllocator) + toi_attempt_to_parse_resume_device(!hibernate_or_resume); + + if (toi_initialise_modules_late(hibernate_or_resume)) + goto out_err; + + return 0; + +out_err: + if (hibernate_or_resume) + block_dump_save = block_dump; + toi_finish_anything(hibernate_or_resume); + return -EBUSY; +} + +/* + * Nosave page tracking. + * + * Here rather than in prepare_image because we want to do it once only at the + * start of a cycle. + */ + +/** + * mark_nosave_pages - Set up our Nosave bitmap. + * + * Build a bitmap of Nosave pages from the list. The bitmap allows faster + * use when preparing the image. + */ +static void mark_nosave_pages(void) +{ + struct nosave_region *region; + + list_for_each_entry(region, &nosave_regions, list) { + unsigned long pfn; + + for (pfn = region->start_pfn; pfn < region->end_pfn; pfn++) + SetPageNosave(pfn_to_page(pfn)); + } +} + +/** + * allocate_bitmaps: Allocate bitmaps used to record page states. + * + * Allocate the bitmaps we use to record the various TuxOnIce related + * page states. + */ +static int allocate_bitmaps(void) +{ + if (allocate_dyn_pageflags(&pageset1_map, 0) || + allocate_dyn_pageflags(&pageset1_copy_map, 0) || + allocate_dyn_pageflags(&pageset2_map, 0) || + allocate_dyn_pageflags(&io_map, 0) || + allocate_dyn_pageflags(&nosave_map, 0) || + allocate_dyn_pageflags(&free_map, 0) || + allocate_dyn_pageflags(&page_resave_map, 0)) + return 1; + + return 0; +} + +/** + * free_bitmaps: Free the bitmaps used to record page states. + * + * Free the bitmaps allocated above. It is not an error to call + * free_dyn_pageflags on a bitmap that isn't currentyl allocated. + */ +static void free_bitmaps(void) +{ + free_dyn_pageflags(&pageset1_map); + free_dyn_pageflags(&pageset1_copy_map); + free_dyn_pageflags(&pageset2_map); + free_dyn_pageflags(&io_map); + free_dyn_pageflags(&nosave_map); + free_dyn_pageflags(&free_map); + free_dyn_pageflags(&page_resave_map); +} + +/** + * io_MB_per_second: Return the number of MB/s read or written. + * + * @write: Whether to return the speed at which we wrote. + * + * Calculate the number of megabytes per second that were read or written. + */ +static int io_MB_per_second(int write) +{ + return (toi_bkd.toi_io_time[write][1]) ? + MB((unsigned long) toi_bkd.toi_io_time[write][0]) * HZ / + toi_bkd.toi_io_time[write][1] : 0; +} + +/** + * get_debug_info: Fill a buffer with debugging information. + * + * @buffer: The buffer to be filled. + * @count: The size of the buffer, in bytes. 
+ * + * Fill a (usually PAGE_SIZEd) buffer with the debugging info that we will + * either printk or return via sysfs. + */ +#define SNPRINTF(a...) len += snprintf_used(((char *)buffer) + len, \ + count - len - 1, ## a) +static int get_toi_debug_info(const char *buffer, int count) +{ + int len = 0; + + SNPRINTF("TuxOnIce debugging info:\n"); + SNPRINTF("- TuxOnIce core : " TOI_CORE_VERSION "\n"); + SNPRINTF("- Kernel Version : " UTS_RELEASE "\n"); + SNPRINTF("- Compiler vers. : %d.%d\n", __GNUC__, __GNUC_MINOR__); + SNPRINTF("- Attempt number : %d\n", nr_hibernates); + SNPRINTF("- Parameters : %ld %ld %ld %d %d %ld\n", + toi_result, + toi_bkd.toi_action, + toi_bkd.toi_debug_state, + toi_bkd.toi_default_console_level, + image_size_limit, + toi_poweroff_method); + SNPRINTF("- Overall expected compression percentage: %d.\n", + 100 - toi_expected_compression_ratio()); + len += toi_print_module_debug_info(((char *) buffer) + len, + count - len - 1); + if (toi_bkd.toi_io_time[0][1]) { + if ((io_MB_per_second(0) < 5) || (io_MB_per_second(1) < 5)) { + SNPRINTF("- I/O speed: Write %d KB/s", + (KB((unsigned long) toi_bkd.toi_io_time[0][0]) * HZ / + toi_bkd.toi_io_time[0][1])); + if (toi_bkd.toi_io_time[1][1]) + SNPRINTF(", Read %d KB/s", + (KB((unsigned long) + toi_bkd.toi_io_time[1][0]) * HZ / + toi_bkd.toi_io_time[1][1])); + } else { + SNPRINTF("- I/O speed: Write %d MB/s", + (MB((unsigned long) toi_bkd.toi_io_time[0][0]) * HZ / + toi_bkd.toi_io_time[0][1])); + if (toi_bkd.toi_io_time[1][1]) + SNPRINTF(", Read %d MB/s", + (MB((unsigned long) + toi_bkd.toi_io_time[1][0]) * HZ / + toi_bkd.toi_io_time[1][1])); + } + SNPRINTF(".\n"); + } else + SNPRINTF("- No I/O speed stats available.\n"); + SNPRINTF("- Extra pages : %d used/%d.\n", + extra_pd1_pages_used, extra_pd1_pages_allowance); + + return len; +} + +/** + * do_cleanup: Cleanup after attempting to hibernate or resume. + * + * @get_debug_info: Whether to allocate and return debugging info. + * + * Cleanup after attempting to hibernate or resume, possibly getting + * debugging info as we do so. + */ +static void do_cleanup(int get_debug_info) +{ + int i = 0; + char *buffer = NULL; + + if (get_debug_info) + toi_prepare_status(DONT_CLEAR_BAR, "Cleaning up..."); + relink_lru_lists(); + + free_checksum_pages(); + + if (get_debug_info) + buffer = (char *) toi_get_zeroed_page(20, TOI_ATOMIC_GFP); + + if (buffer) + i = get_toi_debug_info(buffer, PAGE_SIZE); + + toi_free_extra_pagedir_memory(); + + pagedir1.size = pagedir2.size = 0; + set_highmem_size(pagedir1, 0); + set_highmem_size(pagedir2, 0); + + if (boot_kernel_data_buffer) { + toi_free_page(37, boot_kernel_data_buffer); + boot_kernel_data_buffer = 0; + } + + if (test_toi_state(TOI_NOTIFIERS_PREPARE)) { + pm_notifier_call_chain(PM_POST_HIBERNATION); + clear_toi_state(TOI_NOTIFIERS_PREPARE); + } + + thaw_processes(); + +#ifdef CONFIG_TOI_KEEP_IMAGE + if (test_action_state(TOI_KEEP_IMAGE) && + !test_result_state(TOI_ABORTED)) { + toi_message(TOI_ANY_SECTION, TOI_LOW, 1, + "TuxOnIce: Not invalidating the image due " + "to Keep Image being enabled.\n"); + set_result_state(TOI_KEPT_IMAGE); + } else +#endif + if (toiActiveAllocator) + toiActiveAllocator->remove_image(); + + free_bitmaps(); + + if (buffer && i) { + /* Printk can only handle 1023 bytes, including + * its level mangling. 
*/ + for (i = 0; i < 3; i++) + printk("%s", buffer + (1023 * i)); + toi_free_page(20, (unsigned long) buffer); + } + + if (!test_action_state(TOI_LATE_CPU_HOTPLUG)) + enable_nonboot_cpus(); + toi_cleanup_console(); + + free_attention_list(); + + toi_deactivate_storage(0); + + clear_toi_state(TOI_IGNORE_LOGLEVEL); + clear_toi_state(TOI_TRYING_TO_RESUME); + clear_toi_state(TOI_NOW_RESUMING); + + if (got_pmsem) { + mutex_unlock(&pm_mutex); + got_pmsem = 0; + } +} + +/** + * check_still_keeping_image: We kept an image; check whether to reuse it. + * + * We enter this routine when we have kept an image. If the user has said they + * want to still keep it, all we need to do is powerdown. If powering down + * means hibernating to ram and the power doesn't run out, we'll return 1. + * If we do power off properly or the battery runs out, we'll resume via the + * normal paths. + * + * If the user has said they want to remove the previously kept image, we + * remove it, and return 0. We'll then store a new image. + */ +static int check_still_keeping_image(void) +{ + if (test_action_state(TOI_KEEP_IMAGE)) { + printk("Image already stored: powering down immediately."); + do_toi_step(STEP_HIBERNATE_POWERDOWN); + return 1; /* Just in case we're using S3 */ + } + + printk("Invalidating previous image.\n"); + toiActiveAllocator->remove_image(); + + return 0; +} + +/** + * toi_init: Prepare to hibernate to disk. + * + * Initialise variables & data structures, in preparation for + * hibernating to disk. + */ +static int toi_init(void) +{ + int result; + + toi_result = 0; + + printk(KERN_INFO "Initiating a hibernation cycle.\n"); + + nr_hibernates++; + + toi_bkd.toi_io_time[0][0] = toi_bkd.toi_io_time[0][1] = + toi_bkd.toi_io_time[1][0] = toi_bkd.toi_io_time[1][1] = 0; + + if (!test_toi_state(TOI_CAN_HIBERNATE) || + allocate_bitmaps()) + return 1; + + mark_nosave_pages(); + + toi_prepare_console(); + + result = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); + if (result) { + set_result_state(TOI_NOTIFIERS_PREPARE_FAILED); + return 1; + } + set_toi_state(TOI_NOTIFIERS_PREPARE); + + boot_kernel_data_buffer = toi_get_zeroed_page(37, TOI_ATOMIC_GFP); + if (!boot_kernel_data_buffer) { + printk("TuxOnIce: Failed to allocate boot_kernel_data_buffer.\n"); + set_result_state(TOI_OUT_OF_MEMORY); + return 1; + } + + if (test_action_state(TOI_LATE_CPU_HOTPLUG) || + !disable_nonboot_cpus()) + return 1; + + set_abort_result(TOI_CPU_HOTPLUG_FAILED); + return 0; +} + +/** + * can_hibernate: Perform basic 'Can we hibernate?' tests. + * + * Perform basic tests that must pass if we're going to be able to hibernate: + * Can we get the pm_mutex? Is resume= valid (we need to know where to write + * the image header). + */ +static int can_hibernate(void) +{ + if (get_pmsem) { + if (!mutex_trylock(&pm_mutex)) { + printk(KERN_INFO "TuxOnIce: Failed to obtain " + "pm_mutex.\n"); + dump_stack(); + set_abort_result(TOI_PM_SEM); + return 0; + } + got_pmsem = 1; + } + + if (!test_toi_state(TOI_CAN_HIBERNATE)) + toi_attempt_to_parse_resume_device(0); + + if (!test_toi_state(TOI_CAN_HIBERNATE)) { + printk(KERN_INFO "TuxOnIce: Hibernation is disabled.\n" + "This may be because you haven't put something along " + "the lines of\n\nresume=swap:/dev/hda1\n\n" + "in lilo.conf or equivalent. 
(Where /dev/hda1 is your " + "swap partition).\n"); + set_abort_result(TOI_CANT_SUSPEND); + if (!got_pmsem) { + mutex_unlock(&pm_mutex); + got_pmsem = 0; + } + return 0; + } + + return 1; +} + +/** + * do_post_image_write: Having written an image, figure out what to do next. + * + * After writing an image, we might load an alternate image or power down. + * Powering down might involve hibernating to ram, in which case we also + * need to handle reloading pageset2. + */ +static int do_post_image_write(void) +{ + /* If switching images fails, do normal powerdown */ + if (alt_resume_param[0]) + do_toi_step(STEP_RESUME_ALT_IMAGE); + + toi_cond_pause(1, "About to power down or reboot."); + toi_power_down(); + + /* If we return, it's because we hibernated to ram */ + if (read_pageset2(1)) + panic("Attempt to reload pagedir 2 failed. Try rebooting."); + + barrier(); + mb(); + do_cleanup(1); + return 0; +} + +/** + * __save_image: Do the hard work of saving the image. + * + * High level routine for getting the image saved. The key assumptions made + * are that processes have been frozen and sufficient memory is available. + * + * We also exit through here at resume time, coming back from toi_hibernate + * after the atomic restore. This is the reason for the toi_in_hibernate + * test. + */ +static int __save_image(void) +{ + int temp_result, did_copy = 0; + + toi_prepare_status(DONT_CLEAR_BAR, "Starting to save the image.."); + + toi_message(TOI_ANY_SECTION, TOI_LOW, 1, + " - Final values: %d and %d.\n", + pagedir1.size, pagedir2.size); + + toi_cond_pause(1, "About to write pagedir2."); + + temp_result = write_pageset(&pagedir2); + + if (temp_result == -1 || test_result_state(TOI_ABORTED)) + return 1; + + toi_cond_pause(1, "About to copy pageset 1."); + + if (test_result_state(TOI_ABORTED)) + return 1; + + toi_deactivate_storage(1); + + toi_prepare_status(DONT_CLEAR_BAR, "Doing atomic copy."); + + toi_in_hibernate = 1; + + if (toi_go_atomic(PMSG_FREEZE, 1)) + goto Failed; + + temp_result = toi_hibernate(); + if (!temp_result) + did_copy = 1; + + /* We return here at resume time too! */ + toi_end_atomic(ATOMIC_ALL_STEPS, toi_in_hibernate); + +Failed: + if (toi_activate_storage(1)) + panic("Failed to reactivate our storage."); + + /* Resume time? */ + if (!toi_in_hibernate) { + copyback_post(); + return 0; + } + + /* Nope. Hibernating. So, see if we can save the image... */ + + if (temp_result || test_result_state(TOI_ABORTED)) { + if (did_copy) + goto abort_reloading_pagedir_two; + else + return 1; + } + + toi_update_status(pagedir2.size, + pagedir1.size + pagedir2.size, + NULL); + + if (test_result_state(TOI_ABORTED)) + goto abort_reloading_pagedir_two; + + toi_cond_pause(1, "About to write pageset1."); + + toi_message(TOI_ANY_SECTION, TOI_LOW, 1, + "-- Writing pageset1\n"); + + temp_result = write_pageset(&pagedir1); + + /* We didn't overwrite any memory, so no reread needs to be done. */ + if (test_action_state(TOI_TEST_FILTER_SPEED)) + return 1; + + if (temp_result == 1 || test_result_state(TOI_ABORTED)) + goto abort_reloading_pagedir_two; + + toi_cond_pause(1, "About to write header."); + + if (test_result_state(TOI_ABORTED)) + goto abort_reloading_pagedir_two; + + temp_result = write_image_header(); + + if (test_action_state(TOI_TEST_BIO)) + return 1; + + if (!temp_result && !test_result_state(TOI_ABORTED)) + return 0; + +abort_reloading_pagedir_two: + temp_result = read_pageset2(1); + + /* If that failed, we're sunk. Panic! 
*/ + if (temp_result) + panic("Attempt to reload pagedir 2 while aborting " + "a hibernate failed."); + + return 1; +} + +/** + * do_save_image: Save the image and handle the result. + * + * Save the prepared image. If we fail or we're in the path returning + * from the atomic restore, cleanup. + */ + +static int do_save_image(void) +{ + int result = __save_image(); + if (!toi_in_hibernate || result) + do_cleanup(1); + return result; +} + + +/** + * do_prepare_image: Try to prepare an image. + * + * Seek to initialise and prepare an image to be saved. On failure, + * cleanup. + */ + +static int do_prepare_image(void) +{ + if (toi_activate_storage(0)) + return 1; + + /* + * If kept image and still keeping image and hibernating to RAM, we will + * return 1 after hibernating and resuming (provided the power doesn't + * run out. In that case, we skip directly to cleaning up and exiting. + */ + + if (!can_hibernate() || + (test_result_state(TOI_KEPT_IMAGE) && + check_still_keeping_image())) + goto cleanup; + + if (toi_init() && !toi_prepare_image() && + !test_result_state(TOI_ABORTED)) + return 0; + +cleanup: + do_cleanup(0); + return 1; +} + +/** + * do_check_can_resume: Find out whether an image has been stored. + * + * Read whether an image exists. We use the same routine as the + * image_exists sysfs entry, and just look to see whether the + * first character in the resulting buffer is a '1'. + */ +int do_check_can_resume(void) +{ + char *buf = (char *) toi_get_zeroed_page(21, TOI_ATOMIC_GFP); + int result = 0; + + if (!buf) + return 0; + + /* Only interested in first byte, so throw away return code. */ + image_exists_read(buf, PAGE_SIZE); + + if (buf[0] == '1') + result = 1; + + toi_free_page(21, (unsigned long) buf); + return result; +} + +/** + * do_load_atomic_copy: Load the first part of an image, if it exists. + * + * Check whether we have an image. If one exists, do sanity checking + * (possibly invalidating the image or even rebooting if the user + * requests that) before loading it into memory in preparation for the + * atomic restore. + * + * If and only if we have an image loaded and ready to restore, we return 1. + */ +static int do_load_atomic_copy(void) +{ + int read_image_result = 0; + + if (sizeof(swp_entry_t) != sizeof(long)) { + printk(KERN_WARNING "TuxOnIce: The size of swp_entry_t != size" + " of long. Please report this!\n"); + return 1; + } + + if (!resume_file[0]) + printk(KERN_WARNING "TuxOnIce: " + "You need to use a resume= command line parameter to " + "tell TuxOnIce where to look for an image.\n"); + + toi_activate_storage(0); + + if (!(test_toi_state(TOI_RESUME_DEVICE_OK)) && + !toi_attempt_to_parse_resume_device(0)) { + /* + * Without a usable storage device we can do nothing - + * even if noresume is given + */ + + if (!toiNumAllocators) + printk(KERN_ALERT "TuxOnIce: " + "No storage allocators have been registered.\n"); + else + printk(KERN_ALERT "TuxOnIce: " + "Missing or invalid storage location " + "(resume= parameter). Please correct and " + "rerun lilo (or equivalent) before " + "hibernating.\n"); + toi_deactivate_storage(0); + return 1; + } + + read_image_result = read_pageset1(); /* non fatal error ignored */ + + if (test_toi_state(TOI_NORESUME_SPECIFIED)) + clear_toi_state(TOI_NORESUME_SPECIFIED); + + toi_deactivate_storage(0); + + if (read_image_result) + return 1; + + return 0; +} + +/** + * prepare_restore_load_alt_image: Save & restore alt image variables. + * + * Save and restore the pageset1 maps, when loading an alternate image. 
+ */ +static void prepare_restore_load_alt_image(int prepare) +{ + static struct dyn_pageflags pageset1_map_save, pageset1_copy_map_save; + + if (prepare) { + memcpy(&pageset1_map_save, &pageset1_map, + sizeof(struct dyn_pageflags)); + pageset1_map.bitmap = NULL; + pageset1_map.sparse = 0; + pageset1_map.initialised = 0; + memcpy(&pageset1_copy_map_save, &pageset1_copy_map, + sizeof(struct dyn_pageflags)); + pageset1_copy_map.bitmap = NULL; + pageset1_copy_map.sparse = 0; + pageset1_copy_map.initialised = 0; + set_toi_state(TOI_LOADING_ALT_IMAGE); + toi_reset_alt_image_pageset2_pfn(); + } else { + if (pageset1_map.bitmap) + free_dyn_pageflags(&pageset1_map); + memcpy(&pageset1_map, &pageset1_map_save, + sizeof(struct dyn_pageflags)); + if (pageset1_copy_map.bitmap) + free_dyn_pageflags(&pageset1_copy_map); + memcpy(&pageset1_copy_map, &pageset1_copy_map_save, + sizeof(struct dyn_pageflags)); + clear_toi_state(TOI_NOW_RESUMING); + clear_toi_state(TOI_LOADING_ALT_IMAGE); + } +} + +/** + * pre_resume_freeze: Freeze the system, before doing an atomic restore. + * + * Hot unplug cpus (if we didn't do it early) and freeze processes, in + * preparation for doing an atomic restore. + */ +int pre_resume_freeze(void) +{ + if (!test_action_state(TOI_LATE_CPU_HOTPLUG)) { + toi_prepare_status(DONT_CLEAR_BAR, "Disable nonboot cpus."); + if (disable_nonboot_cpus()) { + set_abort_result(TOI_CPU_HOTPLUG_FAILED); + return 1; + } + } + + toi_prepare_status(DONT_CLEAR_BAR, "Freeze processes."); + + if (freeze_processes()) { + printk("Some processes failed to stop.\n"); + return 1; + } + + return 0; +} + +/** + * do_toi_step: Perform a step in hibernating or resuming. + * + * Perform a step in hibernating or resuming an image. This abstraction + * is in preparation for implementing cluster support, and perhaps replacing + * uswsusp too (haven't looked whether that's possible yet). + */ +int do_toi_step(int step) +{ + switch (step) { + case STEP_HIBERNATE_PREPARE_IMAGE: + return do_prepare_image(); + case STEP_HIBERNATE_SAVE_IMAGE: + return do_save_image(); + case STEP_HIBERNATE_POWERDOWN: + return do_post_image_write(); + case STEP_RESUME_CAN_RESUME: + return do_check_can_resume(); + case STEP_RESUME_LOAD_PS1: + return do_load_atomic_copy(); + case STEP_RESUME_DO_RESTORE: + /* + * If we succeed, this doesn't return. + * Instead, we return from do_save_image() in the + * hibernated kernel. + */ + return toi_atomic_restore(); + case STEP_RESUME_ALT_IMAGE: + printk(KERN_INFO "Trying to resume alternate image.\n"); + toi_in_hibernate = 0; + save_restore_alt_param(SAVE, NOQUIET); + prepare_restore_load_alt_image(1); + if (!do_check_can_resume()) { + printk(KERN_INFO "Nothing to resume from.\n"); + goto out; + } + if (!do_load_atomic_copy()) + toi_atomic_restore(); + + printk(KERN_INFO "Failed to load image.\n"); +out: + prepare_restore_load_alt_image(0); + save_restore_alt_param(RESTORE, NOQUIET); + break; + case STEP_CLEANUP: + do_cleanup(1); + break; + case STEP_QUIET_CLEANUP: + do_cleanup(0); + break; + } + + return 0; +} +EXPORT_SYMBOL_GPL(do_toi_step); + +/* -- Functions for kickstarting a hibernate or resume --- */ + +/** + * __toi_try_resume: Try to do the steps in resuming. + * + * Check if we have an image and if so try to resume. Clear the status + * flags too. 
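+ *
+ * Reached either from init/do_mounts.c at boot time or, later, via
+ *
+ *	echo > /sys/power/tuxonice/do_resume
+ *
+ * (see the wrappers below).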
+ */
+void __toi_try_resume(void)
+{
+	set_toi_state(TOI_TRYING_TO_RESUME);
+	resume_attempted = 1;
+
+	current->flags |= PF_MEMALLOC;
+
+	if (do_toi_step(STEP_RESUME_CAN_RESUME) &&
+			!do_toi_step(STEP_RESUME_LOAD_PS1))
+		do_toi_step(STEP_RESUME_DO_RESTORE);
+
+	do_cleanup(0);
+
+	current->flags &= ~PF_MEMALLOC;
+
+	clear_toi_state(TOI_IGNORE_LOGLEVEL);
+	clear_toi_state(TOI_TRYING_TO_RESUME);
+	clear_toi_state(TOI_NOW_RESUMING);
+}
+
+/**
+ * _toi_try_resume: Wrapper calling __toi_try_resume from do_mounts.
+ *
+ * Wrapper for when __toi_try_resume is called from init/do_mounts.c,
+ * rather than from echo > /sys/power/tuxonice/do_resume.
+ */
+void _toi_try_resume(void)
+{
+	resume_attempted = 1;
+
+	if (toi_start_anything(SYSFS_RESUMING))
+		return;
+
+	/* Unlock will be done in do_cleanup */
+	mutex_lock(&pm_mutex);
+	got_pmsem = 1;
+
+	__toi_try_resume();
+
+	/*
+	 * For initramfs, we have to clear the boot time
+	 * flag after trying to resume
+	 */
+	clear_toi_state(TOI_BOOT_TIME);
+	toi_finish_anything(SYSFS_RESUMING);
+}
+
+/**
+ * _toi_try_hibernate: Try to start a hibernation cycle.
+ *
+ * have_pmsem: Whether the pm_sem is already taken.
+ *
+ * Start a hibernation cycle, coming in from either
+ *	echo > /sys/power/tuxonice/do_suspend
+ *
+ * or
+ *
+ *	echo disk > /sys/power/state
+ *
+ * In the latter case, we come in without pm_sem taken; in the
+ * former, it has been taken.
+ */
+int _toi_try_hibernate(int have_pmsem)
+{
+	int result = 0, sys_power_disk = 0;
+
+	if (!atomic_read(&actions_running)) {
+		/* Came in via /sys/power/disk */
+		if (toi_start_anything(SYSFS_HIBERNATING))
+			return -EBUSY;
+		sys_power_disk = 1;
+	}
+
+	get_pmsem = !have_pmsem;
+
+	if (strlen(alt_resume_param)) {
+		attempt_to_parse_alt_resume_param();
+
+		if (!strlen(alt_resume_param)) {
+			printk(KERN_INFO "Alternate resume parameter now "
+					"invalid. Aborting.\n");
+			goto out;
+		}
+	}
+
+	current->flags |= PF_MEMALLOC;
+
+	if (test_toi_state(TOI_CLUSTER_MODE)) {
+		toi_initiate_cluster_hibernate();
+		goto out;
+	}
+
+	result = do_toi_step(STEP_HIBERNATE_PREPARE_IMAGE);
+	if (result)
+		goto out;
+
+	if (test_action_state(TOI_FREEZER_TEST)) {
+		do_cleanup(0);
+		goto out;
+	}
+
+	result = do_toi_step(STEP_HIBERNATE_SAVE_IMAGE);
+	if (result)
+		goto out;
+
+	/* This code runs at resume time too! */
+	if (toi_in_hibernate)
+		result = do_toi_step(STEP_HIBERNATE_POWERDOWN);
+out:
+	current->flags &= ~PF_MEMALLOC;
+
+	if (sys_power_disk)
+		toi_finish_anything(SYSFS_HIBERNATING);
+
+	return result;
+}
+
+/*
+ * channel_no: If !0, -c is added to args (userui).
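+ *
+ * Illustrative call (the helper path is hypothetical; userui is the
+ * usual client):
+ *
+ *	toi_launch_userspace_program("/usr/local/sbin/tuxoniceui_text",
+ *			1, UMH_WAIT_PROC);
+ *
+ * would run the helper with "-c1" appended as its final argument.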
+ */ +int toi_launch_userspace_program(char *command, int channel_no, + enum umh_wait wait) +{ + int retval; + static char *envp[] = { + "HOME=/", + "TERM=linux", + "PATH=/sbin:/usr/sbin:/bin:/usr/bin", + NULL }; + static char *argv[] = + { NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL }; + char *channel = NULL; + int arg = 0, size; + char test_read[255]; + char *orig_posn = command; + + if (!strlen(orig_posn)) + return 1; + + if (channel_no) { + channel = toi_kzalloc(4, 6, GFP_KERNEL); + if (!channel) { + printk(KERN_INFO "Failed to allocate memory in " + "preparing to launch userspace program.\n"); + return 1; + } + } + + /* Up to 7 args supported */ + while (arg < 7) { + sscanf(orig_posn, "%s", test_read); + size = strlen(test_read); + if (!(size)) + break; + argv[arg] = toi_kzalloc(5, size + 1, TOI_ATOMIC_GFP); + strcpy(argv[arg], test_read); + orig_posn += size + 1; + *test_read = 0; + arg++; + } + + if (channel_no) { + sprintf(channel, "-c%d", channel_no); + argv[arg] = channel; + } else + arg--; + + retval = call_usermodehelper(argv[0], argv, envp, wait); + + /* + * If the program reports an error, retval = 256. Don't complain + * about that here. + */ + if (retval && retval != 256) + printk("Failed to launch userspace program '%s': Error %d\n", + command, retval); + + { + int i; + for (i = 0; i < arg; i++) + if (argv[i] && argv[i] != channel) + toi_kfree(5, argv[i]); + } + + toi_kfree(4, channel); + + return retval; +} + +/* + * This array contains entries that are automatically registered at + * boot. Modules and the console code register their own entries separately. + */ +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("extra_pages_allowance", SYSFS_RW), + SYSFS_INT(&extra_pd1_pages_allowance, MIN_EXTRA_PAGES_ALLOWANCE, + INT_MAX, 0) + }, + + { TOI_ATTR("image_exists", SYSFS_RW), + SYSFS_CUSTOM(image_exists_read, image_exists_write, + SYSFS_NEEDS_SM_FOR_BOTH) + }, + + { TOI_ATTR("resume", SYSFS_RW), + SYSFS_STRING(resume_file, 255, SYSFS_NEEDS_SM_FOR_WRITE), + .write_side_effect = attempt_to_parse_resume_device2, + }, + + { TOI_ATTR("alt_resume_param", SYSFS_RW), + SYSFS_STRING(alt_resume_param, 255, SYSFS_NEEDS_SM_FOR_WRITE), + .write_side_effect = attempt_to_parse_alt_resume_param, + }, + { TOI_ATTR("debug_info", SYSFS_READONLY), + SYSFS_CUSTOM(get_toi_debug_info, NULL, 0) + }, + + { TOI_ATTR("ignore_rootfs", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_IGNORE_ROOTFS, 0) + }, + + { TOI_ATTR("image_size_limit", SYSFS_RW), + SYSFS_INT(&image_size_limit, -2, INT_MAX, 0) + }, + + { TOI_ATTR("last_result", SYSFS_RW), + SYSFS_UL(&toi_result, 0, 0, 0) + }, + + { TOI_ATTR("no_multithreaded_io", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_NO_MULTITHREADED_IO, 0) + }, + + { TOI_ATTR("full_pageset2", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_PAGESET2_FULL, 0) + }, + + { TOI_ATTR("reboot", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_REBOOT, 0) + }, + + { TOI_ATTR("replace_swsusp", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_REPLACE_SWSUSP, 0) + }, + + { TOI_ATTR("resume_commandline", SYSFS_RW), + SYSFS_STRING(toi_bkd.toi_nosave_commandline, COMMAND_LINE_SIZE, 0) + }, + + { TOI_ATTR("version", SYSFS_READONLY), + SYSFS_STRING(TOI_CORE_VERSION, 0, 0) + }, + + { TOI_ATTR("no_load_direct", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_NO_DIRECT_LOAD, 0) + }, + + { TOI_ATTR("freezer_test", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_FREEZER_TEST, 0) + }, + + { TOI_ATTR("test_bio", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_TEST_BIO, 0) + }, + + { 
TOI_ATTR("test_filter_speed", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_TEST_FILTER_SPEED, 0) + }, + + { TOI_ATTR("slow", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_SLOW, 0) + }, + + { TOI_ATTR("no_pageset2", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_NO_PAGESET2, 0) + }, + + { TOI_ATTR("late_cpu_hotplug", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_LATE_CPU_HOTPLUG, 0) + }, + + { TOI_ATTR("pre_hibernate_command", SYSFS_RW), + SYSFS_STRING(pre_hibernate_command, 0, 255) + }, + + { TOI_ATTR("post_hibernate_command", SYSFS_RW), + SYSFS_STRING(post_hibernate_command, 0, 255) + }, + +#ifdef CONFIG_TOI_KEEP_IMAGE + { TOI_ATTR("keep_image", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_KEEP_IMAGE, 0) + }, +#endif +}; + +struct toi_core_fns my_fns = { + .get_nonconflicting_page = __toi_get_nonconflicting_page, + .post_context_save = __toi_post_context_save, + .try_hibernate = _toi_try_hibernate, + .try_resume = _toi_try_resume, +}; + +/** + * core_load: Initialisation of TuxOnIce core. + * + * Initialise the core, beginning with sysfs. Checksum and so on are part of + * the core, but have their own initialisation routines because they either + * aren't compiled in all the time or have their own subdirectories. + */ +static __init int core_load(void) +{ + int i, + numfiles = sizeof(sysfs_params) / sizeof(struct toi_sysfs_data); + + strncpy(pre_hibernate_command, CONFIG_TOI_DEFAULT_PRE_HIBERNATE, 255); + strncpy(post_hibernate_command, CONFIG_TOI_DEFAULT_POST_HIBERNATE, 255); + + if (toi_sysfs_init()) + return 1; + + for (i = 0; i < numfiles; i++) + toi_register_sysfs_file(&toi_subsys.kobj, + &sysfs_params[i]); + + toi_core_fns = &my_fns; + + if (toi_alloc_init()) + return 1; + if (toi_checksum_init()) + return 1; + if (toi_cluster_init()) + return 1; + if (toi_usm_init()) + return 1; + if (toi_ui_init()) + return 1; + if (toi_poweroff_init()) + return 1; + + return 0; +} + +#ifdef MODULE +/** + * core_unload: Prepare to unload the core code. + */ +static __exit void core_unload(void) +{ + int i, + numfiles = sizeof(sysfs_params) / sizeof(struct toi_sysfs_data); + + toi_alloc_exit(); + toi_poweroff_exit(); + toi_ui_exit(); + toi_checksum_exit(); + toi_cluster_exit(); + toi_usm_exit(); + + for (i = 0; i < numfiles; i++) + toi_unregister_sysfs_file(&toi_subsys.kobj, + &sysfs_params[i]); + + toi_core_fns = NULL; + + toi_sysfs_exit(); +} +MODULE_LICENSE("GPL"); +module_init(core_load); +module_exit(core_unload); +#else +late_initcall(core_load); +#endif + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(pagedir2); +EXPORT_SYMBOL_GPL(toi_fail_num); +EXPORT_SYMBOL_GPL(do_check_can_resume); +#endif diff --git a/kernel/power/tuxonice_io.c b/kernel/power/tuxonice_io.c new file mode 100644 index 0000000..0521ed7 --- /dev/null +++ b/kernel/power/tuxonice_io.c @@ -0,0 +1,1415 @@ +/* + * kernel/power/tuxonice_io.c + * + * Copyright (C) 1998-2001 Gabor Kuti + * Copyright (C) 1998,2001,2002 Pavel Machek + * Copyright (C) 2002-2003 Florent Chabaud + * Copyright (C) 2002-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * It contains high level IO routines for hibernating. 
+ *
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/suspend.h>
+#include <linux/highmem.h>
+#include <linux/kthread.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/mount.h>
+#include <linux/fs.h>
+
+#include "tuxonice.h"
+#include "tuxonice_modules.h"
+#include "tuxonice_pageflags.h"
+#include "tuxonice_io.h"
+#include "tuxonice_ui.h"
+#include "tuxonice_storage.h"
+#include "tuxonice_prepare_image.h"
+#include "tuxonice_extent.h"
+#include "tuxonice_sysfs.h"
+#include "tuxonice_builtin.h"
+#include "tuxonice_checksum.h"
+#include "tuxonice_alloc.h"
+
+char alt_resume_param[256];
+
+/* Variables shared between threads and updated under the mutex */
+static int io_write, io_finish_at, io_base, io_barmax, io_pageset, io_result;
+static int io_index, io_nextupdate, io_pc, io_pc_step;
+static unsigned long pfn, other_pfn;
+static DEFINE_MUTEX(io_mutex);
+static DEFINE_PER_CPU(struct page *, last_sought);
+static DEFINE_PER_CPU(struct page *, last_high_page);
+static DEFINE_PER_CPU(char *, checksum_locn);
+static DEFINE_PER_CPU(struct pbe *, last_low_page);
+static atomic_t worker_thread_count;
+static atomic_t io_count;
+
+/* toi_attempt_to_parse_resume_device
+ *
+ * Can we hibernate, using the current resume= parameter?
+ */
+int toi_attempt_to_parse_resume_device(int quiet)
+{
+	struct list_head *Allocator;
+	struct toi_module_ops *thisAllocator;
+	int result, returning = 0;
+
+	if (toi_activate_storage(0))
+		return 0;
+
+	toiActiveAllocator = NULL;
+	clear_toi_state(TOI_RESUME_DEVICE_OK);
+	clear_toi_state(TOI_CAN_RESUME);
+	clear_result_state(TOI_ABORTED);
+
+	if (!toiNumAllocators) {
+		if (!quiet)
+			printk(KERN_INFO "TuxOnIce: No storage allocators have "
+				"been registered. Hibernating will be "
+				"disabled.\n");
+		goto cleanup;
+	}
+
+	if (!resume_file[0]) {
+		if (!quiet)
+			printk(KERN_INFO "TuxOnIce: Resume= parameter is empty."
+				" Hibernating will be disabled.\n");
+		goto cleanup;
+	}
+
+	list_for_each(Allocator, &toiAllocators) {
+		thisAllocator = list_entry(Allocator, struct toi_module_ops,
+								type_list);
+
+		/*
+		 * Not sure why you'd want to disable an allocator, but
+		 * we should honour the flag if we're providing it
+		 */
+		if (!thisAllocator->enabled)
+			continue;
+
+		result = thisAllocator->parse_sig_location(
+				resume_file, (toiNumAllocators == 1),
+				quiet);
+
+		switch (result) {
+		case -EINVAL:
+			/* For this allocator, but not a valid
+			 * configuration. Error already printed. */
+			goto cleanup;
+
+		case 0:
+			/* For this allocator and valid. */
+			toiActiveAllocator = thisAllocator;
+
+			set_toi_state(TOI_RESUME_DEVICE_OK);
+			set_toi_state(TOI_CAN_RESUME);
+			returning = 1;
+			goto cleanup;
+		}
+	}
+	if (!quiet)
+		printk(KERN_INFO "TuxOnIce: No matching enabled allocator found. 
" + "Resuming disabled.\n"); +cleanup: + toi_deactivate_storage(0); + return returning; +} + +void attempt_to_parse_resume_device2(void) +{ + toi_prepare_usm(); + toi_attempt_to_parse_resume_device(0); + toi_cleanup_usm(); +} + +void save_restore_alt_param(int replace, int quiet) +{ + static char resume_param_save[255]; + static unsigned long toi_state_save; + + if (replace) { + toi_state_save = toi_state; + strcpy(resume_param_save, resume_file); + strcpy(resume_file, alt_resume_param); + } else { + strcpy(resume_file, resume_param_save); + toi_state = toi_state_save; + } + toi_attempt_to_parse_resume_device(quiet); +} + +void attempt_to_parse_alt_resume_param(void) +{ + int ok = 0; + + /* Temporarily set resume_param to the poweroff value */ + if (!strlen(alt_resume_param)) + return; + + printk("=== Trying Poweroff Resume2 ===\n"); + save_restore_alt_param(SAVE, NOQUIET); + if (test_toi_state(TOI_CAN_RESUME)) + ok = 1; + + printk(KERN_INFO "=== Done ===\n"); + save_restore_alt_param(RESTORE, QUIET); + + /* If not ok, clear the string */ + if (ok) + return; + + printk(KERN_INFO "Can't resume from that location; clearing " + "alt_resume_param.\n"); + alt_resume_param[0] = '\0'; +} + +/* noresume_reset_modules + * + * Description: When we read the start of an image, modules (and especially the + * active allocator) might need to reset data structures if we + * decide to remove the image rather than resuming from it. + */ + +static void noresume_reset_modules(void) +{ + struct toi_module_ops *this_filter; + + list_for_each_entry(this_filter, &toi_filters, type_list) + if (this_filter->noresume_reset) + this_filter->noresume_reset(); + + if (toiActiveAllocator && toiActiveAllocator->noresume_reset) + toiActiveAllocator->noresume_reset(); +} + +/* fill_toi_header() + * + * Description: Fill the hibernate header structure. + * Arguments: struct toi_header: Header data structure to be filled. + */ + +static int fill_toi_header(struct toi_header *sh) +{ + int i, error; + + error = init_swsusp_header((struct swsusp_info *) sh); + if (error) + return error; + + sh->pagedir = pagedir1; + sh->pageset_2_size = pagedir2.size; + sh->param0 = toi_result; + sh->param1 = toi_bkd.toi_action; + sh->param2 = toi_bkd.toi_debug_state; + sh->param3 = toi_bkd.toi_default_console_level; + sh->root_fs = current->fs->rootmnt->mnt_sb->s_dev; + for (i = 0; i < 4; i++) + sh->io_time[i/2][i%2] = toi_bkd.toi_io_time[i/2][i%2]; + sh->bkd = boot_kernel_data_buffer; + return 0; +} + +/* + * rw_init_modules + * + * Iterate over modules, preparing the ones that will be used to read or write + * data. 
+ */ +static int rw_init_modules(int rw, int which) +{ + struct toi_module_ops *this_module; + /* Initialise page transformers */ + list_for_each_entry(this_module, &toi_filters, type_list) { + if (!this_module->enabled) + continue; + if (this_module->rw_init && this_module->rw_init(rw, which)) { + abort_hibernate(TOI_FAILED_MODULE_INIT, + "Failed to initialise the %s filter.", + this_module->name); + return 1; + } + } + + /* Initialise allocator */ + if (toiActiveAllocator->rw_init(rw, which)) { + abort_hibernate(TOI_FAILED_MODULE_INIT, + "Failed to initialise the allocator."); + if (!rw) + toiActiveAllocator->remove_image(); + return 1; + } + + /* Initialise other modules */ + list_for_each_entry(this_module, &toi_modules, module_list) { + if (!this_module->enabled || + this_module->type == FILTER_MODULE || + this_module->type == WRITER_MODULE) + continue; + if (this_module->rw_init && this_module->rw_init(rw, which)) { + set_abort_result(TOI_FAILED_MODULE_INIT); + printk(KERN_INFO "Setting aborted flag due to module " + "init failure.\n"); + return 1; + } + } + + return 0; +} + +/* + * rw_cleanup_modules + * + * Cleanup components after reading or writing a set of pages. + * Only the allocator may fail. + */ +static int rw_cleanup_modules(int rw) +{ + struct toi_module_ops *this_module; + int result = 0; + + /* Cleanup other modules */ + list_for_each_entry(this_module, &toi_modules, module_list) { + if (!this_module->enabled || + this_module->type == FILTER_MODULE || + this_module->type == WRITER_MODULE) + continue; + if (this_module->rw_cleanup) + result |= this_module->rw_cleanup(rw); + } + + /* Flush data and cleanup */ + list_for_each_entry(this_module, &toi_filters, type_list) { + if (!this_module->enabled) + continue; + if (this_module->rw_cleanup) + result |= this_module->rw_cleanup(rw); + } + + result |= toiActiveAllocator->rw_cleanup(rw); + + return result; +} + +static struct page *copy_page_from_orig_page(struct page *orig_page) +{ + int is_high = PageHighMem(orig_page), index, min, max; + struct page *high_page = NULL, + **my_last_high_page = &__get_cpu_var(last_high_page), + **my_last_sought = &__get_cpu_var(last_sought); + struct pbe *this, **my_last_low_page = &__get_cpu_var(last_low_page); + void *compare; + + if (is_high) { + if (*my_last_sought && *my_last_high_page && + *my_last_sought < orig_page) + high_page = *my_last_high_page; + else + high_page = (struct page *) restore_highmem_pblist; + this = (struct pbe *) kmap(high_page); + compare = orig_page; + } else { + if (*my_last_sought && *my_last_low_page && + *my_last_sought < orig_page) + this = *my_last_low_page; + else + this = restore_pblist; + compare = page_address(orig_page); + } + + *my_last_sought = orig_page; + + /* Locate page containing pbe */ + while (this[PBES_PER_PAGE - 1].next && + this[PBES_PER_PAGE - 1].orig_address < compare) { + if (is_high) { + struct page *next_high_page = (struct page *) + this[PBES_PER_PAGE - 1].next; + kunmap(high_page); + this = kmap(next_high_page); + high_page = next_high_page; + } else + this = this[PBES_PER_PAGE - 1].next; + } + + /* Do a binary search within the page */ + min = 0; + max = PBES_PER_PAGE; + index = PBES_PER_PAGE / 2; + while (max - min) { + if (!this[index].orig_address || + this[index].orig_address > compare) + max = index; + else if (this[index].orig_address == compare) { + if (is_high) { + struct page *page = this[index].address; + *my_last_high_page = high_page; + kunmap(high_page); + return page; + } + *my_last_low_page = this; + return 
virt_to_page(this[index].address);
+		} else
+			min = index;
+		index = ((max + min) / 2);
+	}
+
+	if (is_high)
+		kunmap(high_page);
+
+	abort_hibernate(TOI_FAILED_IO, "Failed to get destination page for"
+		" orig page %p. this[index].orig_address=%p.\n", orig_page,
+		this[index].orig_address);
+	return NULL;
+}
+
+/*
+ * worker_rw_loop
+ *
+ * The body of the main I/O loop for reading or writing pages, run by
+ * each worker thread. do_rw_loop() below sets up the shared state and
+ * starts the workers.
+ */
+static int worker_rw_loop(void *data)
+{
+	unsigned long orig_pfn, write_pfn;
+	int result, my_io_index = 0;
+	struct toi_module_ops *first_filter = toi_get_next_filter(NULL);
+	struct page *buffer = toi_alloc_page(28, TOI_ATOMIC_GFP);
+	int thread_num = atomic_add_return(1, &worker_thread_count) - 1;
+
+	mutex_lock(&io_mutex);
+
+	do {
+		int buf_size;
+
+		/*
+		 * What page to use? If reading, don't know yet which page's
+		 * data will be read, so always use the buffer. If writing,
+		 * use the copy (Pageset1) or original page (Pageset2), but
+		 * always write the pfn of the original page.
+		 */
+		if (io_write) {
+			struct page *page;
+			char **my_checksum_locn = &__get_cpu_var(checksum_locn);
+
+			pfn = get_next_bit_on(&io_map, pfn);
+
+			/* Another thread could have beaten us to it. */
+			if (pfn == max_pfn + 1) {
+				if (atomic_read(&io_count)) {
+					printk("Ran out of pfns but io_count "
+						"is still %d.\n",
+						atomic_read(&io_count));
+					BUG();
+				}
+				break;
+			}
+
+			atomic_dec(&io_count);
+
+			orig_pfn = pfn;
+			write_pfn = pfn;
+
+			/*
+			 * Other_pfn is updated by all threads, so we're not
+			 * writing the same page multiple times.
+			 */
+			clear_dynpageflag(&io_map, pfn_to_page(pfn));
+			if (io_pageset == 1) {
+				other_pfn = get_next_bit_on(&pageset1_map,
+						other_pfn);
+				write_pfn = other_pfn;
+			}
+			page = pfn_to_page(pfn);
+
+			my_io_index = io_finish_at - atomic_read(&io_count);
+
+			if (io_pageset == 2)
+				*my_checksum_locn =
+					tuxonice_get_next_checksum();
+
+			mutex_unlock(&io_mutex);
+
+			if (io_pageset == 2 &&
+			    tuxonice_calc_checksum(page, *my_checksum_locn))
+				return 1;
+
+			result = first_filter->write_page(write_pfn, page,
+					PAGE_SIZE);
+		} else {
+			atomic_dec(&io_count);
+			mutex_unlock(&io_mutex);
+
+			/*
+			 * Are we aborting? If so, don't submit any more I/O as
+			 * resetting the resume_attempted flag (from ui.c) will
+			 * clear the bdev flags, making this thread oops.
+			 */
+			if (unlikely(test_toi_state(TOI_STOP_RESUME))) {
+				atomic_dec(&worker_thread_count);
+				if (!atomic_read(&worker_thread_count))
+					set_toi_state(TOI_IO_STOPPED);
+				while (1)
+					schedule();
+			}
+
+			result = first_filter->read_page(&write_pfn, buffer,
+					&buf_size);
+			if (buf_size != PAGE_SIZE) {
+				abort_hibernate(TOI_FAILED_IO,
+					"I/O pipeline returned %d bytes instead"
+					" of %d.\n", buf_size, (int) PAGE_SIZE);
+				mutex_lock(&io_mutex);
+				break;
+			}
+		}
+
+		if (result) {
+			io_result = result;
+			if (io_write) {
+				printk(KERN_INFO "Write chunk returned %d.\n",
+						result);
+				abort_hibernate(TOI_FAILED_IO,
+					"Failed to write a chunk of the "
+					"image.");
+				mutex_lock(&io_mutex);
+				break;
+			}
+			panic("Read chunk returned (%d)", result);
+		}
+
+		/*
+		 * Discard reads of resaved pages while reading ps2
+		 * and unwanted pages while rereading ps2 when aborting.
+ */ + if (!io_write && !PageResave(pfn_to_page(write_pfn))) { + struct page *final_page = pfn_to_page(write_pfn), + *copy_page = final_page; + char *virt, *buffer_virt; + + if (io_pageset == 1 && !load_direct(final_page)) { + copy_page = + copy_page_from_orig_page(final_page); + BUG_ON(!copy_page); + } + + if (test_dynpageflag(&io_map, final_page)) { + virt = kmap(copy_page); + buffer_virt = kmap(buffer); + memcpy(virt, buffer_virt, PAGE_SIZE); + kunmap(copy_page); + kunmap(buffer); + clear_dynpageflag(&io_map, final_page); + mutex_lock(&io_mutex); + my_io_index = io_finish_at - + atomic_read(&io_count); + mutex_unlock(&io_mutex); + } else { + mutex_lock(&io_mutex); + atomic_inc(&io_count); + mutex_unlock(&io_mutex); + } + } + + if (!thread_num && (my_io_index + io_base) >= io_nextupdate) + io_nextupdate = toi_update_status(my_io_index + + io_base, io_barmax, " %d/%d MB ", + MB(io_base+my_io_index+1), MB(io_barmax)); + + if (!thread_num && my_io_index >= io_pc) { + printk("%s%d%%...", io_pc_step == 1 ? KERN_INFO : "", + 20 * io_pc_step); + io_pc_step++; + io_pc = io_finish_at * io_pc_step / 5; + } + + toi_cond_pause(0, NULL); + + /* + * Subtle: If there's less I/O still to be done than threads + * running, quit. This stops us doing I/O beyond the end of + * the image when reading. + * + * Possible race condition. Two threads could do the test at + * the same time; one should exit and one should continue. + * Therefore we take the mutex before comparing and exiting. + */ + + mutex_lock(&io_mutex); + + } while (atomic_read(&io_count) >= atomic_read(&worker_thread_count) && + !(io_write && test_result_state(TOI_ABORTED))); + + atomic_dec(&worker_thread_count); + mutex_unlock(&io_mutex); + + toi__free_page(28, buffer); + + return 0; +} + +void start_other_threads(void) +{ + int cpu; + struct task_struct *p; + + for_each_online_cpu(cpu) { + if (cpu == smp_processor_id()) + continue; + + p = kthread_create(worker_rw_loop, NULL, "ks2io/%d", cpu); + if (IS_ERR(p)) { + printk("ks2io for %i failed\n", cpu); + continue; + } + kthread_bind(p, cpu); + p->flags |= PF_MEMALLOC; + wake_up_process(p); + } +} + +/* + * do_rw_loop + * + * The main I/O loop for reading or writing pages. 
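+ *
+ * Calling convention, as used by write_pageset() below when writing
+ * pageset2 (base 0, bar maximum spanning both pagesets), bracketed by
+ * rw_init_modules()/rw_cleanup_modules():
+ *
+ *	do_rw_loop(1, pagedir2.size, &pageset2_map, 0,
+ *			pagedir1.size + pagedir2.size, 2);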
+ */
+static int do_rw_loop(int write, int finish_at, struct dyn_pageflags *pageflags,
+		int base, int barmax, int pageset)
+{
+	int index = 0, cpu;
+
+	if (!finish_at)
+		return 0;
+
+	io_write = write;
+	io_finish_at = finish_at;
+	io_base = base;
+	io_barmax = barmax;
+	io_pageset = pageset;
+	io_index = 0;
+	io_pc = io_finish_at / 5;
+	io_pc_step = 1;
+	io_result = 0;
+	io_nextupdate = 0;
+
+	for_each_online_cpu(cpu) {
+		per_cpu(last_sought, cpu) = NULL;
+		per_cpu(last_low_page, cpu) = NULL;
+		per_cpu(last_high_page, cpu) = NULL;
+	}
+
+	/* Ensure all bits clear */
+	clear_dyn_pageflags(&io_map);
+
+	/* Set the bits for the pages to write */
+	pfn = get_next_bit_on(pageflags, max_pfn + 1);
+
+	while (pfn < max_pfn + 1 && index < finish_at) {
+		set_dynpageflag(&io_map, pfn_to_page(pfn));
+		pfn = get_next_bit_on(pageflags, pfn);
+		index++;
+	}
+
+	BUG_ON(index < finish_at);
+
+	atomic_set(&io_count, finish_at);
+
+	pfn = max_pfn + 1;
+	other_pfn = pfn;
+
+	clear_toi_state(TOI_IO_STOPPED);
+
+	if (!test_action_state(TOI_NO_MULTITHREADED_IO))
+		start_other_threads();
+	worker_rw_loop(NULL);
+
+	while (atomic_read(&worker_thread_count))
+		schedule();
+
+	set_toi_state(TOI_IO_STOPPED);
+	if (unlikely(test_toi_state(TOI_STOP_RESUME))) {
+		while (1)
+			schedule();
+	}
+
+	if (!io_result) {
+		printk("done.\n");
+
+		toi_update_status(io_base + io_finish_at, io_barmax,
+				" %d/%d MB ",
+				MB(io_base + io_finish_at), MB(io_barmax));
+	}
+
+	if (io_write && test_result_state(TOI_ABORTED))
+		io_result = 1;
+	else {	/* All I/O done? */
+		if (get_next_bit_on(&io_map, max_pfn + 1) != max_pfn + 1) {
+			printk(KERN_INFO "Finished I/O loop but still work to "
+					"do?\nFinish at = %d. io_count = %d.\n",
+					finish_at, atomic_read(&io_count));
+			BUG();
+		}
+	}
+
+	return io_result;
+}
+
+/* write_pageset()
+ *
+ * Description:	Write a pageset to disk.
+ * Arguments:	pagedir: Which pagedir to write.
+ * Returns:	Zero on success or -1 on failure.
+ */
+
+int write_pageset(struct pagedir *pagedir)
+{
+	int finish_at, base = 0, start_time, end_time;
+	int barmax = pagedir1.size + pagedir2.size;
+	long error = 0;
+	struct dyn_pageflags *pageflags;
+
+	/*
+	 * Even if there is nothing to read or write, the allocator
+	 * may need the init/cleanup for its housekeeping. (eg:
+	 * Pageset1 may start where pageset2 ends when writing).
+	 */
+	finish_at = pagedir->size;
+
+	if (pagedir->id == 1) {
+		toi_prepare_status(DONT_CLEAR_BAR,
+				"Writing kernel & process data...");
+		base = pagedir2.size;
+		if (test_action_state(TOI_TEST_FILTER_SPEED) ||
+		    test_action_state(TOI_TEST_BIO))
+			pageflags = &pageset1_map;
+		else
+			pageflags = &pageset1_copy_map;
+	} else {
+		toi_prepare_status(CLEAR_BAR, "Writing caches...");
+		pageflags = &pageset2_map;
+	}
+
+	start_time = jiffies;
+
+	if (rw_init_modules(1, pagedir->id)) {
+		abort_hibernate(TOI_FAILED_MODULE_INIT,
+				"Failed to initialise modules for writing.");
+		error = 1;
+	}
+
+	if (!error)
+		error = do_rw_loop(1, finish_at, pageflags, base, barmax,
+				pagedir->id);
+
+	if (rw_cleanup_modules(WRITE) && !error) {
+		abort_hibernate(TOI_FAILED_MODULE_CLEANUP,
+				"Failed to cleanup after writing.");
+		error = 1;
+	}
+
+	end_time = jiffies;
+
+	if ((end_time - start_time) && (!test_result_state(TOI_ABORTED))) {
+		toi_bkd.toi_io_time[0][0] += finish_at,
+		toi_bkd.toi_io_time[0][1] += (end_time - start_time);
+	}
+
+	return error;
+}
+
+/* read_pageset()
+ *
+ * Description:	Read a pageset from disk.
+ * Arguments:	pagedir: Which pagedir to read.
+ * overwrittenpagesonly: Whether to read the whole pageset or + * only part. + * Returns: Zero on success or -1 on failure. + */ + +static int read_pageset(struct pagedir *pagedir, int overwrittenpagesonly) +{ + int result = 0, base = 0, start_time, end_time; + int finish_at = pagedir->size; + int barmax = pagedir1.size + pagedir2.size; + struct dyn_pageflags *pageflags; + + if (pagedir->id == 1) { + toi_prepare_status(CLEAR_BAR, + "Reading kernel & process data..."); + pageflags = &pageset1_map; + } else { + toi_prepare_status(DONT_CLEAR_BAR, "Reading caches..."); + if (overwrittenpagesonly) + barmax = finish_at = min(pagedir1.size, + pagedir2.size); + else + base = pagedir1.size; + pageflags = &pageset2_map; + } + + start_time = jiffies; + + if (rw_init_modules(0, pagedir->id)) { + toiActiveAllocator->remove_image(); + result = 1; + } else + result = do_rw_loop(0, finish_at, pageflags, base, barmax, + pagedir->id); + + if (rw_cleanup_modules(READ) && !result) { + abort_hibernate(TOI_FAILED_MODULE_CLEANUP, + "Failed to cleanup after reading."); + result = 1; + } + + /* Statistics */ + end_time = jiffies; + + if ((end_time - start_time) && (!test_result_state(TOI_ABORTED))) { + toi_bkd.toi_io_time[1][0] += finish_at, + toi_bkd.toi_io_time[1][1] += (end_time - start_time); + } + + return result; +} + +/* write_module_configs() + * + * Description: Store the configuration for each module in the image header. + * Returns: Int: Zero on success, Error value otherwise. + */ +static int write_module_configs(void) +{ + struct toi_module_ops *this_module; + char *buffer = (char *) toi_get_zeroed_page(22, TOI_ATOMIC_GFP); + int len, index = 1; + struct toi_module_header toi_module_header; + + if (!buffer) { + printk(KERN_INFO "Failed to allocate a buffer for saving " + "module configuration info.\n"); + return -ENOMEM; + } + + /* + * We have to know which data goes with which module, so we at + * least write a length of zero for a module. Note that we are + * also assuming every module's config data takes <= PAGE_SIZE. + */ + + /* For each module (in registration order) */ + list_for_each_entry(this_module, &toi_modules, module_list) { + if (!this_module->enabled || !this_module->storage_needed || + (this_module->type == WRITER_MODULE && + toiActiveAllocator != this_module)) + continue; + + /* Get the data from the module */ + len = 0; + if (this_module->save_config_info) + len = this_module->save_config_info(buffer); + + /* Save the details of the module */ + toi_module_header.enabled = this_module->enabled; + toi_module_header.type = this_module->type; + toi_module_header.index = index++; + strncpy(toi_module_header.name, this_module->name, + sizeof(toi_module_header.name)); + toiActiveAllocator->rw_header_chunk(WRITE, + this_module, + (char *) &toi_module_header, + sizeof(toi_module_header)); + + /* Save the size of the data and any data returned */ + toiActiveAllocator->rw_header_chunk(WRITE, + this_module, + (char *) &len, sizeof(int)); + if (len) + toiActiveAllocator->rw_header_chunk( + WRITE, this_module, buffer, len); + } + + /* Write a blank header to terminate the list */ + toi_module_header.name[0] = '\0'; + toiActiveAllocator->rw_header_chunk(WRITE, NULL, + (char *) &toi_module_header, sizeof(toi_module_header)); + + toi_free_page(22, (unsigned long) buffer); + return 0; +} + +/* read_module_configs() + * + * Description: Reload module configurations from the image header. + * Returns: Int. Zero on success, error value otherwise. 
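+ *
+ *		The stream parsed here is the one produced by
+ *		write_module_configs() above: for each enabled module,
+ *		in registration order,
+ *
+ *			struct toi_module_header	(identity)
+ *			int len				(may be zero)
+ *			char data[len]			(config, if any)
+ *
+ *		terminated by a header whose name[0] == '\0'.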
+ */
+
+static int read_module_configs(void)
+{
+	struct toi_module_ops *this_module;
+	char *buffer = (char *) toi_get_zeroed_page(23, TOI_ATOMIC_GFP);
+	int len, result = 0;
+	struct toi_module_header toi_module_header;
+
+	if (!buffer) {
+		printk("Failed to allocate a buffer for reloading module "
+				"configuration info.\n");
+		return -ENOMEM;
+	}
+
+	/* All modules are initially disabled. That way, if we have a module
+	 * loaded now that wasn't loaded when we hibernated, it won't be used
+	 * in trying to read the data.
+	 */
+	list_for_each_entry(this_module, &toi_modules, module_list)
+		this_module->enabled = 0;
+
+	/* Get the first module header */
+	result = toiActiveAllocator->rw_header_chunk(READ, NULL,
+			(char *) &toi_module_header,
+			sizeof(toi_module_header));
+	if (result) {
+		printk("Failed to read the next module header.\n");
+		toi_free_page(23, (unsigned long) buffer);
+		return -EINVAL;
+	}
+
+	/* For each module (in registration order) */
+	while (toi_module_header.name[0]) {
+
+		/* Find the module */
+		this_module =
+			toi_find_module_given_name(toi_module_header.name);
+
+		if (!this_module) {
+			/*
+			 * Is it used? Only need to worry about filters. The
+			 * active allocator must be loaded!
+			 */
+			if (toi_module_header.enabled) {
+				toi_early_boot_message(1, TOI_CONTINUE_REQ,
+					"It looks like we need module %s for "
+					"reading the image but it hasn't been "
+					"registered.\n",
+					toi_module_header.name);
+				if (!(test_toi_state(TOI_CONTINUE_REQ))) {
+					toiActiveAllocator->remove_image();
+					toi_free_page(23,
+							(unsigned long) buffer);
+					return -EINVAL;
+				}
+			} else
+				printk(KERN_INFO "Module %s configuration data "
+					"found, but the module hasn't "
+					"registered. Looks like it was "
+					"disabled, so we're ignoring its "
+					"data.\n",
+					toi_module_header.name);
+		}
+
+		/* Get the length of the data (if any) */
+		result = toiActiveAllocator->rw_header_chunk(READ, NULL,
+				(char *) &len, sizeof(int));
+		if (result) {
+			printk("Failed to read the length of the module %s's"
+					" configuration data.\n",
+					toi_module_header.name);
+			toi_free_page(23, (unsigned long) buffer);
+			return -EINVAL;
+		}
+
+		/* Read any data and pass to the module (if we found one) */
+		if (len) {
+			toiActiveAllocator->rw_header_chunk(READ, NULL,
+					buffer, len);
+			if (this_module) {
+				if (!this_module->load_config_info) {
+					printk("Huh? Module %s appears to have "
+						"a save_config_info, but not a "
+						"load_config_info function!\n",
+						this_module->name);
+				} else
+					this_module->load_config_info(buffer,
+							len);
+			}
+		}
+
+		if (this_module) {
+			/* Now move this module to the tail of its lists. This
+			 * will put it in order. Any new modules will end up at
+			 * the top of the lists. They should have been set to
+			 * disabled when loaded (people will normally not edit
+			 * an initrd to load a new module and then hibernate
+			 * without using it!).
+			 */
+
+			toi_move_module_tail(this_module);
+
+			/*
+			 * We apply the disabled state; modules don't need to
+			 * save whether they were disabled and if they do, we
+			 * override them anyway.
+ */
+			this_module->enabled = toi_module_header.enabled;
+		}
+
+		/* Get the next module header */
+		result = toiActiveAllocator->rw_header_chunk(READ, NULL,
+				(char *) &toi_module_header,
+				sizeof(toi_module_header));
+
+		if (result) {
+			printk("Failed to read the next module header.\n");
+			toi_free_page(23, (unsigned long) buffer);
+			return -EINVAL;
+		}
+
+	}
+
+	toi_free_page(23, (unsigned long) buffer);
+	return 0;
+}
+
+/* write_image_header()
+ *
+ * Description:	Write the image header after writing the image proper.
+ * Returns:	Int. Zero on success or -1 on failure.
+ */
+
+int write_image_header(void)
+{
+	int ret;
+	int total = pagedir1.size + pagedir2.size + 2;
+	char *header_buffer = NULL;
+
+	/* Now prepare to write the header */
+	ret = toiActiveAllocator->write_header_init();
+	if (ret) {
+		abort_hibernate(TOI_FAILED_MODULE_INIT,
+				"Active allocator's write_header_init"
+				" function failed.");
+		goto write_image_header_abort;
+	}
+
+	/* Get a buffer */
+	header_buffer = (char *) toi_get_zeroed_page(24, TOI_ATOMIC_GFP);
+	if (!header_buffer) {
+		abort_hibernate(TOI_OUT_OF_MEMORY,
+			"Out of memory when trying to get page for header!");
+		goto write_image_header_abort;
+	}
+
+	/* Write hibernate header */
+	if (fill_toi_header((struct toi_header *) header_buffer)) {
+		abort_hibernate(TOI_OUT_OF_MEMORY,
+			"Failure to fill header information!");
+		goto write_image_header_abort;
+	}
+	toiActiveAllocator->rw_header_chunk(WRITE, NULL,
+			header_buffer, sizeof(struct toi_header));
+
+	toi_free_page(24, (unsigned long) header_buffer);
+
+	/* Write module configurations */
+	ret = write_module_configs();
+	if (ret) {
+		abort_hibernate(TOI_FAILED_IO,
+				"Failed to write module configs.");
+		goto write_image_header_abort;
+	}
+
+	save_dyn_pageflags(&pageset1_map);
+
+	/* Flush data and let allocator cleanup */
+	if (toiActiveAllocator->write_header_cleanup()) {
+		abort_hibernate(TOI_FAILED_IO,
+				"Failed to cleanup writing header.");
+		goto write_image_header_abort_no_cleanup;
+	}
+
+	if (test_result_state(TOI_ABORTED))
+		goto write_image_header_abort_no_cleanup;
+
+	toi_update_status(total, total, NULL);
+
+	return 0;
+
+write_image_header_abort:
+	toiActiveAllocator->write_header_cleanup();
+write_image_header_abort_no_cleanup:
+	return -1;
+}
+
+/* sanity_check()
+ *
+ * Description:	Perform a few checks, seeking to ensure that the kernel being
+ *		booted matches the one hibernated. They need to match so we can
+ *		be _sure_ things will work. It is not absolutely impossible for
+ *		resuming from a different kernel to work, just not assured.
+ * Arguments:	Struct toi_header. The header which was saved at hibernate
+ *		time.
+ */
+static char *sanity_check(struct toi_header *sh)
+{
+	char *reason = check_swsusp_image_kernel((struct swsusp_info *) sh);
+
+	if (reason)
+		return reason;
+
+	if (!test_action_state(TOI_IGNORE_ROOTFS)) {
+		const struct super_block *sb;
+		list_for_each_entry(sb, &super_blocks, s_list) {
+			if ((!(sb->s_flags & MS_RDONLY)) &&
+			    (sb->s_type->fs_flags & FS_REQUIRES_DEV))
+				return "Device backed fs has been mounted "
+					"rw prior to resume or initrd/ramfs "
+					"is mounted rw.";
+		}
+	}
+
+	return NULL;
+}
+
+/* __read_pageset1
+ *
+ * Description:	Test for the existence of an image and attempt to load it.
+ * Returns:	Int. Zero if image found and pageset1 successfully loaded.
+ *		Error if no image found or loaded.
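+ *
+ *		In outline, the sequence below is: check image_exists(),
+ *		apply the noresume and resumed-before checks, read and
+ *		sanity check the header, reload module configurations,
+ *		load the pageset1 map and load addresses, then read
+ *		pageset1 itself.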
+ */
+static int __read_pageset1(void)
+{
+	int i, result = 0;
+	char *header_buffer = (char *) toi_get_zeroed_page(25, TOI_ATOMIC_GFP),
+	     *sanity_error = NULL;
+	struct toi_header *toi_header;
+
+	if (!header_buffer) {
+		printk(KERN_INFO "Unable to allocate a page for reading the "
+				"signature.\n");
+		return -ENOMEM;
+	}
+
+	/* Check for an image */
+	result = toiActiveAllocator->image_exists();
+	if (!result) {
+		result = -ENODATA;
+		noresume_reset_modules();
+		printk(KERN_INFO "TuxOnIce: No image found.\n");
+		goto out;
+	}
+
+	/* Check for noresume command line option */
+	if (test_toi_state(TOI_NORESUME_SPECIFIED)) {
+		printk(KERN_INFO "TuxOnIce: Noresume on command line. Removed "
+				"image.\n");
+		goto out_remove_image;
+	}
+
+	/* Check whether we've resumed before */
+	if (test_toi_state(TOI_RESUMED_BEFORE)) {
+		toi_early_boot_message(1, 0, NULL);
+		if (!(test_toi_state(TOI_CONTINUE_REQ))) {
+			printk(KERN_INFO "TuxOnIce: Tried to resume before: "
+					"Invalidated image.\n");
+			goto out_remove_image;
+		}
+	}
+
+	clear_toi_state(TOI_CONTINUE_REQ);
+
+	/*
+	 * Prepare the active allocator for reading the image header. The
+	 * active allocator might read its own configuration.
+	 *
+	 * NB: This call may never return: if there is a signature for a
+	 * different image, we warn the user and they may choose to reboot.
+	 * (The device ids might look erroneous (2.4 vs 2.6), or the location
+	 * of the image might be unavailable if it was stored on a network
+	 * connection.)
+	 */
+
+	result = toiActiveAllocator->read_header_init();
+	if (result) {
+		printk("TuxOnIce: Failed to initialise reading of the image "
+				"header.\n");
+		goto out_remove_image;
+	}
+
+	/* Read hibernate header */
+	result = toiActiveAllocator->rw_header_chunk(READ, NULL,
+			header_buffer, sizeof(struct toi_header));
+	if (result < 0) {
+		printk("TuxOnIce: Failed to read the image signature.\n");
+		goto out_remove_image;
+	}
+
+	toi_header = (struct toi_header *) header_buffer;
+
+	/*
+	 * NB: This call may also result in a reboot rather than returning.
+	 */
+
+	sanity_error = sanity_check(toi_header);
+	if (sanity_error) {
+		toi_early_boot_message(1, TOI_CONTINUE_REQ,
+				sanity_error);
+		printk(KERN_INFO "TuxOnIce: Sanity check failed.\n");
+		goto out_remove_image;
+	}
+
+	/*
+	 * We have an image and it looks like it will load okay.
+	 *
+	 * Get metadata from header. Don't override commandline parameters.
+	 *
+	 * We don't need to save the image size limit because it's not used
+	 * during resume and will be restored with the image anyway.
+ */ + + memcpy((char *) &pagedir1, + (char *) &toi_header->pagedir, sizeof(pagedir1)); + toi_result = toi_header->param0; + toi_bkd.toi_action = toi_header->param1; + toi_bkd.toi_debug_state = toi_header->param2; + toi_bkd.toi_default_console_level = toi_header->param3; + clear_toi_state(TOI_IGNORE_LOGLEVEL); + pagedir2.size = toi_header->pageset_2_size; + for (i = 0; i < 4; i++) + toi_bkd.toi_io_time[i/2][i%2] = + toi_header->io_time[i/2][i%2]; + boot_kernel_data_buffer = toi_header->bkd; + + /* Read module configurations */ + result = read_module_configs(); + if (result) { + pagedir1.size = pagedir2.size = 0; + printk(KERN_INFO "TuxOnIce: Failed to read TuxOnIce module " + "configurations.\n"); + clear_action_state(TOI_KEEP_IMAGE); + goto out_remove_image; + } + + toi_prepare_console(); + + set_toi_state(TOI_NOW_RESUMING); + + if (pre_resume_freeze()) + goto out_reset_console; + + toi_cond_pause(1, "About to read original pageset1 locations."); + + /* + * Read original pageset1 locations. These are the addresses we can't + * use for the data to be restored. + */ + + if (allocate_dyn_pageflags(&pageset1_map, 0) || + allocate_dyn_pageflags(&pageset1_copy_map, 0) || + allocate_dyn_pageflags(&io_map, 0)) + goto out_reset_console; + + if (load_dyn_pageflags(&pageset1_map)) + goto out_reset_console; + + /* Clean up after reading the header */ + result = toiActiveAllocator->read_header_cleanup(); + if (result) { + printk("TuxOnIce: Failed to cleanup after reading the image " + "header.\n"); + goto out_reset_console; + } + + toi_cond_pause(1, "About to read pagedir."); + + /* + * Get the addresses of pages into which we will load the kernel to + * be copied back + */ + if (toi_get_pageset1_load_addresses()) { + printk(KERN_INFO "TuxOnIce: Failed to get load addresses for " + "pageset1.\n"); + goto out_reset_console; + } + + /* Read the original kernel back */ + toi_cond_pause(1, "About to read pageset 1."); + + if (read_pageset(&pagedir1, 0)) { + toi_prepare_status(CLEAR_BAR, "Failed to read pageset 1."); + result = -EIO; + printk(KERN_INFO "TuxOnIce: Failed to get load pageset1.\n"); + goto out_reset_console; + } + + toi_cond_pause(1, "About to restore original kernel."); + result = 0; + + if (!test_action_state(TOI_KEEP_IMAGE) && + toiActiveAllocator->mark_resume_attempted) + toiActiveAllocator->mark_resume_attempted(1); + +out: + toi_free_page(25, (unsigned long) header_buffer); + return result; + +out_reset_console: + toi_cleanup_console(); + +out_remove_image: + free_dyn_pageflags(&pageset1_map); + free_dyn_pageflags(&pageset1_copy_map); + free_dyn_pageflags(&io_map); + result = -EINVAL; + if (!test_action_state(TOI_KEEP_IMAGE)) + toiActiveAllocator->remove_image(); + toiActiveAllocator->read_header_cleanup(); + noresume_reset_modules(); + goto out; +} + +/* read_pageset1() + * + * Description: Attempt to read the header and pageset1 of a hibernate image. + * Handle the outcome, complaining where appropriate. 
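+ *
+ * Called from do_load_atomic_copy() above; a zero return means the
+ * caller can go on to the atomic restore (STEP_RESUME_DO_RESTORE).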
+ */ + +int read_pageset1(void) +{ + int error; + + error = __read_pageset1(); + + if (error && error != -ENODATA && error != -EINVAL && + !test_result_state(TOI_ABORTED)) + abort_hibernate(TOI_IMAGE_ERROR, + "TuxOnIce: Error %d resuming\n", error); + + return error; +} + +/* + * get_have_image_data() + */ +static char *get_have_image_data(void) +{ + char *output_buffer = (char *) toi_get_zeroed_page(26, TOI_ATOMIC_GFP); + struct toi_header *toi_header; + + if (!output_buffer) { + printk(KERN_INFO "Output buffer null.\n"); + return NULL; + } + + /* Check for an image */ + if (!toiActiveAllocator->image_exists() || + toiActiveAllocator->read_header_init() || + toiActiveAllocator->rw_header_chunk(READ, NULL, + output_buffer, sizeof(struct toi_header))) { + sprintf(output_buffer, "0\n"); + /* + * From an initrd/ramfs, catting have_image and + * getting a result of 0 is sufficient. + */ + clear_toi_state(TOI_BOOT_TIME); + goto out; + } + + toi_header = (struct toi_header *) output_buffer; + + sprintf(output_buffer, "1\n%s\n%s\n", + toi_header->uts.machine, + toi_header->uts.version); + + /* Check whether we've resumed before */ + if (test_toi_state(TOI_RESUMED_BEFORE)) + strcat(output_buffer, "Resumed before.\n"); + +out: + noresume_reset_modules(); + return output_buffer; +} + +/* read_pageset2() + * + * Description: Read in part or all of pageset2 of an image, depending upon + * whether we are hibernating and have only overwritten a portion + * with pageset1 pages, or are resuming and need to read them + * all. + * Arguments: Int. Boolean. Read only pages which would have been + * overwritten by pageset1? + * Returns: Int. Zero if no error, otherwise the error value. + */ +int read_pageset2(int overwrittenpagesonly) +{ + int result = 0; + + if (!pagedir2.size) + return 0; + + result = read_pageset(&pagedir2, overwrittenpagesonly); + + toi_update_status(100, 100, NULL); + toi_cond_pause(1, "Pagedir 2 read."); + + return result; +} + +/* image_exists_read + * + * Return 0 or 1, depending on whether an image is found. + * Incoming buffer is PAGE_SIZE and result is guaranteed + * to be far less than that, so we don't worry about + * overflow. + */ +int image_exists_read(const char *page, int count) +{ + int len = 0; + char *result; + + if (toi_activate_storage(0)) + return count; + + if (!test_toi_state(TOI_RESUME_DEVICE_OK)) + toi_attempt_to_parse_resume_device(0); + + if (!toiActiveAllocator) { + len = sprintf((char *) page, "-1\n"); + } else { + result = get_have_image_data(); + if (result) { + len = sprintf((char *) page, "%s", result); + toi_free_page(26, (unsigned long) result); + } + } + + toi_deactivate_storage(0); + + return len; +} + +/* image_exists_write + * + * Invalidate an image if one exists. + */ +int image_exists_write(const char *buffer, int count) +{ + if (toi_activate_storage(0)) + return count; + + if (toiActiveAllocator && toiActiveAllocator->image_exists()) + toiActiveAllocator->remove_image(); + + toi_deactivate_storage(0); + + clear_result_state(TOI_KEPT_IMAGE); + + return count; +} + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(toi_attempt_to_parse_resume_device); +EXPORT_SYMBOL_GPL(attempt_to_parse_resume_device2); +#endif + diff --git a/kernel/power/tuxonice_io.h b/kernel/power/tuxonice_io.h new file mode 100644 index 0000000..70d6d90 --- /dev/null +++ b/kernel/power/tuxonice_io.h @@ -0,0 +1,67 @@ +/* + * kernel/power/tuxonice_io.h + * + * Copyright (C) 2005-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. 
+ *
+ * It contains high level IO routines for hibernating.
+ *
+ */
+
+#include <linux/utsname.h>
+#include "tuxonice_pagedir.h"
+#include "power.h"
+
+/* Non-module data saved in our image header */
+struct toi_header {
+	/*
+	 * Mirror struct swsusp_info, but without
+	 * the page aligned attribute
+	 */
+	struct new_utsname uts;
+	u32 version_code;
+	unsigned long num_physpages;
+	int cpus;
+	unsigned long image_pages;
+	unsigned long pages;
+	unsigned long size;
+
+	/* Our own data */
+	unsigned long orig_mem_free;
+	int page_size;
+	int pageset_2_size;
+	int param0;
+	int param1;
+	int param2;
+	int param3;
+	int progress0;
+	int progress1;
+	int progress2;
+	int progress3;
+	int io_time[2][2];
+	struct pagedir pagedir;
+	dev_t root_fs;
+	unsigned long bkd; /* Boot kernel data locn */
+};
+
+extern int write_pageset(struct pagedir *pagedir);
+extern int write_image_header(void);
+extern int read_pageset1(void);
+extern int read_pageset2(int overwrittenpagesonly);
+
+extern int toi_attempt_to_parse_resume_device(int quiet);
+extern void attempt_to_parse_resume_device2(void);
+extern void attempt_to_parse_alt_resume_param(void);
+int image_exists_read(const char *page, int count);
+int image_exists_write(const char *buffer, int count);
+extern void save_restore_alt_param(int replace, int quiet);
+
+/* Args to save_restore_alt_param */
+#define RESTORE 0
+#define SAVE 1
+
+#define NOQUIET 0
+#define QUIET 1
+
+extern dev_t name_to_dev_t(char *line);
diff --git a/kernel/power/tuxonice_modules.c b/kernel/power/tuxonice_modules.c
new file mode 100644
index 0000000..9c9aebe
--- /dev/null
+++ b/kernel/power/tuxonice_modules.c
@@ -0,0 +1,461 @@
+/*
+ * kernel/power/tuxonice_modules.c
+ *
+ * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net)
+ *
+ * This file is released under the GPLv2.
+ *
+ */
+
+#include <linux/suspend.h>
+#include <linux/module.h>
+#include "tuxonice.h"
+#include "tuxonice_modules.h"
+#include "tuxonice_sysfs.h"
+#include "tuxonice_ui.h"
+
+LIST_HEAD(toi_filters);
+LIST_HEAD(toiAllocators);
+LIST_HEAD(toi_modules);
+
+struct toi_module_ops *toiActiveAllocator;
+int toi_num_filters;
+int toiNumAllocators, toi_num_modules;
+
+/*
+ * toi_header_storage_for_modules
+ *
+ * Returns the amount of space needed to store configuration
+ * data needed by the modules prior to copying back the original
+ * kernel. We can exclude data for pageset2 because it will be
+ * available anyway once the kernel is copied back.
+ */
+int toi_header_storage_for_modules(void)
+{
+	struct toi_module_ops *this_module;
+	int bytes = 0;
+
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		if (!this_module->enabled ||
+		    (this_module->type == WRITER_MODULE &&
+		     toiActiveAllocator != this_module))
+			continue;
+		if (this_module->storage_needed) {
+			int this = this_module->storage_needed() +
+				sizeof(struct toi_module_header) +
+				sizeof(int);
+			this_module->header_requested = this;
+			bytes += this;
+		}
+	}
+
+	/* One more for the empty terminator */
+	return bytes + sizeof(struct toi_module_header);
+}
+
+/*
+ * toi_memory_for_modules
+ *
+ * Returns the amount of memory requested by modules for
+ * doing their work during the cycle.
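+ *
+ * Note the unit: bytes are summed internally, but the value returned
+ * is pages, rounded up:
+ *
+ *	result = ((bytes + PAGE_SIZE - 1) >> PAGE_SHIFT);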
+ */
+
+int toi_memory_for_modules(int print_parts)
+{
+	int bytes = 0, result;
+	struct toi_module_ops *this_module;
+
+	if (print_parts)
+		printk(KERN_INFO "Memory for modules:\n===================\n");
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		int this;
+		if (!this_module->enabled)
+			continue;
+		if (this_module->memory_needed) {
+			this = this_module->memory_needed();
+			if (print_parts)
+				printk(KERN_INFO "%10d bytes (%5d pages) for "
+						"module '%s'.\n",
+						this, this >> PAGE_SHIFT,
+						this_module->name);
+			bytes += this;
+		}
+	}
+
+	result = ((bytes + PAGE_SIZE - 1) >> PAGE_SHIFT);
+	if (print_parts)
+		printk(KERN_INFO " => %d bytes, %d pages.\n", bytes, result);
+
+	return result;
+}
+
+/*
+ * toi_expected_compression_ratio
+ *
+ * Returns the compression ratio expected when saving the image.
+ */
+
+int toi_expected_compression_ratio(void)
+{
+	int ratio = 100;
+	struct toi_module_ops *this_module;
+
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		if (!this_module->enabled)
+			continue;
+		if (this_module->expected_compression)
+			ratio = ratio * this_module->expected_compression()
+				/ 100;
+	}
+
+	return ratio;
+}
+
+/* toi_find_module_given_dir
+ * Functionality :	Return a module (if found), given a pointer
+ *			to its directory name
+ */
+
+static struct toi_module_ops *toi_find_module_given_dir(char *name)
+{
+	struct toi_module_ops *this_module, *found_module = NULL;
+
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		if (!strcmp(name, this_module->directory)) {
+			found_module = this_module;
+			break;
+		}
+	}
+
+	return found_module;
+}
+
+/* toi_find_module_given_name
+ * Functionality :	Return a module (if found), given a pointer
+ *			to its name
+ */
+
+struct toi_module_ops *toi_find_module_given_name(char *name)
+{
+	struct toi_module_ops *this_module, *found_module = NULL;
+
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		if (!strcmp(name, this_module->name)) {
+			found_module = this_module;
+			break;
+		}
+	}
+
+	return found_module;
+}
+
+/*
+ * toi_print_module_debug_info
+ * Functionality : Get debugging info from modules into a buffer.
+ */
+int toi_print_module_debug_info(char *buffer, int buffer_size)
+{
+	struct toi_module_ops *this_module;
+	int len = 0;
+
+	list_for_each_entry(this_module, &toi_modules, module_list) {
+		if (!this_module->enabled)
+			continue;
+		if (this_module->print_debug_info) {
+			int result;
+			result = this_module->print_debug_info(buffer + len,
+					buffer_size - len);
+			len += result;
+		}
+	}
+
+	/* Ensure null terminated */
+	buffer[buffer_size - 1] = '\0';
+
+	return len;
+}
+
+/*
+ * toi_register_module
+ *
+ * Register a module.
+ */
+int toi_register_module(struct toi_module_ops *module)
+{
+	int i;
+	struct kobject *kobj;
+
+	module->enabled = 1;
+
+	if (toi_find_module_given_name(module->name)) {
+		printk(KERN_INFO "TuxOnIce: Trying to load module %s,"
+				" which is already registered.\n",
+				module->name);
+		return -EBUSY;
+	}
+
+	switch (module->type) {
+	case FILTER_MODULE:
+		list_add_tail(&module->type_list, &toi_filters);
+		toi_num_filters++;
+		break;
+	case WRITER_MODULE:
+		list_add_tail(&module->type_list, &toiAllocators);
+		toiNumAllocators++;
+		break;
+	case MISC_MODULE:
+	case MISC_HIDDEN_MODULE:
+		break;
+	default:
+		printk("Hmmm. Module '%s' has an invalid type."
+ " It has been ignored.\n", module->name); + return -EINVAL; + } + list_add_tail(&module->module_list, &toi_modules); + toi_num_modules++; + + if (!module->directory && !module->shared_directory) + return 0; + + /* + * Modules may share a directory, but those with shared_dir + * set must be loaded (via symbol dependencies) after parents + * and unloaded beforehand. + */ + if (module->shared_directory) { + struct toi_module_ops *shared = + toi_find_module_given_dir(module->shared_directory); + if (!shared) { + printk("TuxOnIce: Module %s wants to share %s's " + "directory but %s isn't loaded.\n", + module->name, module->shared_directory, + module->shared_directory); + toi_unregister_module(module); + return -ENODEV; + } + kobj = shared->dir_kobj; + } else { + if (!strncmp(module->directory, "[ROOT]", 6)) + kobj = &toi_subsys.kobj; + else + kobj = make_toi_sysdir(module->directory); + } + module->dir_kobj = kobj; + for (i = 0; i < module->num_sysfs_entries; i++) { + int result = toi_register_sysfs_file(kobj, + &module->sysfs_data[i]); + if (result) + return result; + } + return 0; +} + +/* + * toi_unregister_module + * + * Remove a module. + */ +void toi_unregister_module(struct toi_module_ops *module) +{ + int i; + + if (module->dir_kobj) + for (i = 0; i < module->num_sysfs_entries; i++) + toi_unregister_sysfs_file(module->dir_kobj, + &module->sysfs_data[i]); + + if (!module->shared_directory && module->directory && + strncmp(module->directory, "[ROOT]", 6)) + remove_toi_sysdir(module->dir_kobj); + + switch (module->type) { + case FILTER_MODULE: + list_del(&module->type_list); + toi_num_filters--; + break; + case WRITER_MODULE: + list_del(&module->type_list); + toiNumAllocators--; + if (toiActiveAllocator == module) { + toiActiveAllocator = NULL; + clear_toi_state(TOI_CAN_RESUME); + clear_toi_state(TOI_CAN_HIBERNATE); + } + break; + case MISC_MODULE: + case MISC_HIDDEN_MODULE: + break; + default: + printk("Hmmm. Module '%s' has an invalid type." + " It has been ignored.\n", module->name); + return; + } + list_del(&module->module_list); + toi_num_modules--; +} + +/* + * toi_move_module_tail + * + * Rearrange modules when reloading the config. + */ +void toi_move_module_tail(struct toi_module_ops *module) +{ + switch (module->type) { + case FILTER_MODULE: + if (toi_num_filters > 1) + list_move_tail(&module->type_list, &toi_filters); + break; + case WRITER_MODULE: + if (toiNumAllocators > 1) + list_move_tail(&module->type_list, &toiAllocators); + break; + case MISC_MODULE: + case MISC_HIDDEN_MODULE: + break; + default: + printk("Hmmm. Module '%s' has an invalid type." + " It has been ignored.\n", module->name); + return; + } + if ((toi_num_filters + toiNumAllocators) > 1) + list_move_tail(&module->module_list, &toi_modules); +} + +/* + * toi_initialise_modules + * + * Get ready to do some work! + */ +int toi_initialise_modules(int starting_cycle, int early) +{ + struct toi_module_ops *this_module; + int result; + + list_for_each_entry(this_module, &toi_modules, module_list) { + this_module->header_requested = 0; + this_module->header_used = 0; + if (!this_module->enabled) + continue; + if (this_module->early != early) + continue; + if (this_module->initialise) { + toi_message(TOI_MEMORY, TOI_MEDIUM, 1, + "Initialising module %s.\n", + this_module->name); + result = this_module->initialise(starting_cycle); + if (result) + return result; + } + } + + return 0; +} + +/* + * toi_cleanup_modules + * + * Tell modules the work is done. 
+ */ +void toi_cleanup_modules(int finishing_cycle) +{ + struct toi_module_ops *this_module; + + list_for_each_entry(this_module, &toi_modules, module_list) { + if (!this_module->enabled) + continue; + if (this_module->cleanup) { + toi_message(TOI_MEMORY, TOI_MEDIUM, 1, + "Cleaning up module %s.\n", + this_module->name); + this_module->cleanup(finishing_cycle); + } + } +} + +/* + * toi_get_next_filter + * + * Get the next filter in the pipeline. + */ +struct toi_module_ops *toi_get_next_filter(struct toi_module_ops *filter_sought) +{ + struct toi_module_ops *last_filter = NULL, *this_filter = NULL; + + list_for_each_entry(this_filter, &toi_filters, type_list) { + if (!this_filter->enabled) + continue; + if ((last_filter == filter_sought) || (!filter_sought)) + return this_filter; + last_filter = this_filter; + } + + return toiActiveAllocator; +} + +/** + * toi_show_modules: Printk what support is loaded. + */ +void toi_print_modules(void) +{ + struct toi_module_ops *this_module; + int prev = 0; + + printk("TuxOnIce " TOI_CORE_VERSION ", with support for"); + + list_for_each_entry(this_module, &toi_modules, module_list) { + if (this_module->type == MISC_HIDDEN_MODULE) + continue; + printk("%s %s%s%s", prev ? "," : "", + this_module->enabled ? "" : "[", + this_module->name, + this_module->enabled ? "" : "]"); + prev = 1; + } + + printk(".\n"); +} + +/* toi_get_modules + * + * Take a reference to modules so they can't go away under us. + */ + +int toi_get_modules(void) +{ + struct toi_module_ops *this_module; + + list_for_each_entry(this_module, &toi_modules, module_list) { + struct toi_module_ops *this_module2; + + if (try_module_get(this_module->module)) + continue; + + /* Failed! Reverse gets and return error */ + list_for_each_entry(this_module2, &toi_modules, + module_list) { + if (this_module == this_module2) + return -EINVAL; + module_put(this_module2->module); + } + } + return 0; +} + +/* toi_put_modules + * + * Release our references to modules we used. + */ + +void toi_put_modules(void) +{ + struct toi_module_ops *this_module; + + list_for_each_entry(this_module, &toi_modules, module_list) + module_put(this_module->module); +} + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(toi_register_module); +EXPORT_SYMBOL_GPL(toi_unregister_module); +EXPORT_SYMBOL_GPL(toi_get_next_filter); +EXPORT_SYMBOL_GPL(toiActiveAllocator); +#endif diff --git a/kernel/power/tuxonice_modules.h b/kernel/power/tuxonice_modules.h new file mode 100644 index 0000000..20d9751 --- /dev/null +++ b/kernel/power/tuxonice_modules.h @@ -0,0 +1,171 @@ +/* + * kernel/power/tuxonice_modules.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * It contains declarations for modules. Modules are additions to + * TuxOnIce that provide facilities such as image compression or + * encryption, backends for storage of the image and user interfaces. + * + */ + +#ifndef TOI_MODULES_H +#define TOI_MODULES_H + +/* This is the maximum size we store in the image header for a module name */ +#define TOI_MAX_MODULE_NAME_LENGTH 30 + +/* Per-module metadata */ +struct toi_module_header { + char name[TOI_MAX_MODULE_NAME_LENGTH]; + int enabled; + int type; + int index; + int data_length; + unsigned long signature; +}; + +enum { + FILTER_MODULE, + WRITER_MODULE, + MISC_MODULE, /* Block writer, eg. 
*/ + MISC_HIDDEN_MODULE, +}; + +enum { + TOI_ASYNC, + TOI_SYNC +}; + +struct toi_module_ops { + /* Functions common to all modules */ + int type; + char *name; + char *directory; + char *shared_directory; + struct kobject *dir_kobj; + struct module *module; + int enabled, early; + struct list_head module_list; + + /* List of filters or allocators */ + struct list_head list, type_list; + + /* + * Requirements for memory and storage in + * the image header.. + */ + int (*memory_needed) (void); + int (*storage_needed) (void); + + int header_requested, header_used; + + int (*expected_compression) (void); + + /* + * Debug info + */ + int (*print_debug_info) (char *buffer, int size); + int (*save_config_info) (char *buffer); + void (*load_config_info) (char *buffer, int len); + + /* + * Initialise & cleanup - general routines called + * at the start and end of a cycle. + */ + int (*initialise) (int starting_cycle); + void (*cleanup) (int finishing_cycle); + + /* + * Calls for allocating storage (allocators only). + * + * Header space is allocated separately. Note that allocation + * of space for the header might result in allocated space + * being stolen from the main pool if there is no unallocated + * space. We have to be able to allocate enough space for + * the header. We can eat memory to ensure there is enough + * for the main pool. + */ + + int (*storage_available) (void); + int (*allocate_header_space) (int space_requested); + int (*allocate_storage) (int space_requested); + int (*storage_allocated) (void); + int (*release_storage) (void); + + /* + * Routines used in image I/O. + */ + int (*rw_init) (int rw, int stream_number); + int (*rw_cleanup) (int rw); + int (*write_page) (unsigned long index, struct page *buffer_page, + unsigned int buf_size); + int (*read_page) (unsigned long *index, struct page *buffer_page, + unsigned int *buf_size); + + /* Reset module if image exists but reading aborted */ + void (*noresume_reset) (void); + + /* Read and write the metadata */ + int (*write_header_init) (void); + int (*write_header_cleanup) (void); + + int (*read_header_init) (void); + int (*read_header_cleanup) (void); + + int (*rw_header_chunk) (int rw, struct toi_module_ops *owner, + char *buffer_start, int buffer_size); + + /* Attempt to parse an image location */ + int (*parse_sig_location) (char *buffer, int only_writer, int quiet); + + /* Determine whether image exists that we can restore */ + int (*image_exists) (void); + + /* Mark the image as having tried to resume */ + void (*mark_resume_attempted) (int); + + /* Destroy image if one exists */ + int (*remove_image) (void); + + /* Sysfs Data */ + struct toi_sysfs_data *sysfs_data; + int num_sysfs_entries; +}; + +extern int toi_num_modules, toiNumAllocators; + +extern struct toi_module_ops *toiActiveAllocator; +extern struct list_head toi_filters, toiAllocators, toi_modules; + +extern void toi_prepare_console_modules(void); +extern void toi_cleanup_console_modules(void); + +extern struct toi_module_ops *toi_find_module_given_name(char *name); +extern struct toi_module_ops *toi_get_next_filter(struct toi_module_ops *); + +extern int toi_register_module(struct toi_module_ops *module); +extern void toi_move_module_tail(struct toi_module_ops *module); + +extern int toi_header_storage_for_modules(void); +extern int toi_memory_for_modules(int print_parts); +extern int toi_expected_compression_ratio(void); + +extern int toi_print_module_debug_info(char *buffer, int buffer_size); +extern int toi_register_module(struct toi_module_ops *module); 
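[Editor's note: to make the registration contract above concrete, here is a sketch of what a minimal client of this interface could look like. The module name, directory and callback are hypothetical and the module_init() wiring is omitted; this illustrates the shape of the API rather than reproducing any module from the patch:

	/* Hypothetical miscellaneous module, not part of the patch. */
	static int example_memory_needed(void)
	{
		return 2 * PAGE_SIZE;	/* two scratch pages while hibernating */
	}

	static struct toi_module_ops example_ops = {
		.type		= MISC_MODULE,
		.name		= "example",
		.directory	= "example",
		.module		= THIS_MODULE,
		.memory_needed	= example_memory_needed,
	};

	static int __init example_load(void)
	{
		/* Adds us to toi_modules, marks us enabled and creates
		 * our sysfs directory. */
		return toi_register_module(&example_ops);
	}

toi_register_module() takes care of the type-specific list (filters vs. allocators) and of sysfs entry creation, so a module only declares data and callbacks.]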
+extern void toi_unregister_module(struct toi_module_ops *module); + +extern int toi_initialise_modules(int starting_cycle, int early); +#define toi_initialise_modules_early(starting) \ + toi_initialise_modules(starting, 1) +#define toi_initialise_modules_late(starting) \ + toi_initialise_modules(starting, 0) +extern void toi_cleanup_modules(int finishing_cycle); + +extern void toi_print_modules(void); + +int toi_get_modules(void); +void toi_put_modules(void); +#endif diff --git a/kernel/power/tuxonice_netlink.c b/kernel/power/tuxonice_netlink.c new file mode 100644 index 0000000..80f6c38 --- /dev/null +++ b/kernel/power/tuxonice_netlink.c @@ -0,0 +1,323 @@ +/* + * kernel/power/tuxonice_netlink.c + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Functions for communicating with a userspace helper via netlink. + */ + + +#include +#include "tuxonice_netlink.h" +#include "tuxonice.h" +#include "tuxonice_modules.h" +#include "tuxonice_alloc.h" + +struct user_helper_data *uhd_list; + +/* + * Refill our pool of SKBs for use in emergencies (eg, when eating memory and + * none can be allocated). + */ +static void toi_fill_skb_pool(struct user_helper_data *uhd) +{ + while (uhd->pool_level < uhd->pool_limit) { + struct sk_buff *new_skb = + alloc_skb(NLMSG_SPACE(uhd->skb_size), TOI_ATOMIC_GFP); + + if (!new_skb) + break; + + new_skb->next = uhd->emerg_skbs; + uhd->emerg_skbs = new_skb; + uhd->pool_level++; + } +} + +/* + * Try to allocate a single skb. If we can't get one, try to use one from + * our pool. + */ +static struct sk_buff *toi_get_skb(struct user_helper_data *uhd) +{ + struct sk_buff *skb = + alloc_skb(NLMSG_SPACE(uhd->skb_size), TOI_ATOMIC_GFP); + + if (skb) + return skb; + + skb = uhd->emerg_skbs; + if (skb) { + uhd->pool_level--; + uhd->emerg_skbs = skb->next; + skb->next = NULL; + } + + return skb; +} + +static void put_skb(struct user_helper_data *uhd, struct sk_buff *skb) +{ + if (uhd->pool_level < uhd->pool_limit) { + skb->next = uhd->emerg_skbs; + uhd->emerg_skbs = skb; + } else + kfree_skb(skb); +} + +void toi_send_netlink_message(struct user_helper_data *uhd, + int type, void *params, size_t len) +{ + struct sk_buff *skb; + struct nlmsghdr *nlh; + void *dest; + struct task_struct *t; + + if (uhd->pid == -1) + return; + + skb = toi_get_skb(uhd); + if (!skb) { + printk(KERN_INFO "toi_netlink: Can't allocate skb!\n"); + return; + } + + /* NLMSG_PUT contains a hidden goto nlmsg_failure */ + nlh = NLMSG_PUT(skb, 0, uhd->sock_seq, type, len); + uhd->sock_seq++; + + dest = NLMSG_DATA(nlh); + if (params && len > 0) + memcpy(dest, params, len); + + netlink_unicast(uhd->nl, skb, uhd->pid, 0); + + read_lock(&tasklist_lock); + t = find_task_by_pid(uhd->pid); + if (!t) { + read_unlock(&tasklist_lock); + if (uhd->pid > -1) + printk(KERN_INFO "Hmm. Can't find the userspace task" + " %d.\n", uhd->pid); + return; + } + wake_up_process(t); + read_unlock(&tasklist_lock); + + yield(); + + return; + +nlmsg_failure: + if (skb) + put_skb(uhd, skb); +} +EXPORT_SYMBOL_GPL(toi_send_netlink_message); + +static void send_whether_debugging(struct user_helper_data *uhd) +{ + static int is_debugging = 1; + + toi_send_netlink_message(uhd, NETLINK_MSG_IS_DEBUGGING, + &is_debugging, sizeof(int)); +} + +/* + * Set the PF_NOFREEZE flag on the given process to ensure it can run whilst we + * are hibernating. 
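[Editor's note: the emergency pool above follows a fill-early, drain-only-on-failure discipline, so that messages can still be sent while TuxOnIce is deliberately eating memory. A userspace model of the same pattern, with plain malloc() standing in for alloc_skb() and arbitrary sizes and limits:

	#include <stdlib.h>

	struct buf { struct buf *next; char data[4096]; };

	static struct buf *pool;
	static int pool_level, pool_limit = 4;

	/* Refill while allocations still succeed, as toi_fill_skb_pool()
	 * does. */
	static void fill_pool(void)
	{
		while (pool_level < pool_limit) {
			struct buf *b = malloc(sizeof(*b));
			if (!b)
				break;
			b->next = pool;
			pool = b;
			pool_level++;
		}
	}

	/* Prefer a fresh allocation; dip into the reserve only when the
	 * allocator fails, as toi_get_skb() does. */
	static struct buf *get_buf(void)
	{
		struct buf *b = malloc(sizeof(*b));
		if (b)
			return b;
		b = pool;
		if (b) {
			pool = b->next;
			b->next = NULL;
			pool_level--;
		}
		return b;
	}
]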
+ */
+static int nl_set_nofreeze(struct user_helper_data *uhd, int pid)
+{
+	struct task_struct *t;
+
+	read_lock(&tasklist_lock);
+	t = find_task_by_pid(pid);
+	if (!t) {
+		read_unlock(&tasklist_lock);
+		printk(KERN_INFO "Strange. Can't find the userspace task %d.\n",
+				pid);
+		return -EINVAL;
+	}
+
+	t->flags |= PF_NOFREEZE;
+
+	read_unlock(&tasklist_lock);
+	uhd->pid = pid;
+
+	toi_send_netlink_message(uhd, NETLINK_MSG_NOFREEZE_ACK, NULL, 0);
+
+	return 0;
+}
+
+/*
+ * Called when the userspace process has informed us that it's ready to roll.
+ */
+static int nl_ready(struct user_helper_data *uhd, int version)
+{
+	if (version != uhd->interface_version) {
+		printk(KERN_INFO "%s userspace process using an invalid"
+				" interface version. Trying to continue"
+				" without it.\n", uhd->name);
+		if (uhd->not_ready)
+			uhd->not_ready();
+		return -EINVAL;
+	}
+
+	complete(&uhd->wait_for_process);
+
+	return 0;
+}
+
+void toi_netlink_close_complete(struct user_helper_data *uhd)
+{
+	if (uhd->nl) {
+		sock_release(uhd->nl->sk_socket);
+		uhd->nl = NULL;
+	}
+
+	while (uhd->emerg_skbs) {
+		struct sk_buff *next = uhd->emerg_skbs->next;
+		kfree_skb(uhd->emerg_skbs);
+		uhd->emerg_skbs = next;
+	}
+
+	uhd->pid = -1;
+}
+
+static int toi_nl_gen_rcv_msg(struct user_helper_data *uhd,
+		struct sk_buff *skb, struct nlmsghdr *nlh)
+{
+	int type;
+	int *data;
+	int err;
+
+	/*
+	 * Let the more specific handler go first. It returns
+	 * 1 for valid messages that it doesn't know.
+	 */
+	err = uhd->rcv_msg(skb, nlh);
+	if (err != 1)
+		return err;
+
+	type = nlh->nlmsg_type;
+
+	/* Only allow one task to receive NOFREEZE privileges. */
+	if (type == NETLINK_MSG_NOFREEZE_ME && uhd->pid != -1) {
+		printk(KERN_INFO "Received extra nofreeze me requests.\n");
+		return -EBUSY;
+	}
+
+	data = (int *)NLMSG_DATA(nlh);
+
+	switch (type) {
+	case NETLINK_MSG_NOFREEZE_ME:
+		return nl_set_nofreeze(uhd, nlh->nlmsg_pid);
+	case NETLINK_MSG_GET_DEBUGGING:
+		send_whether_debugging(uhd);
+		return 0;
+	case NETLINK_MSG_READY:
+		if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(int))) {
+			printk(KERN_INFO "Invalid ready message.\n");
+			return -EINVAL;
+		}
+		return nl_ready(uhd, *data);
+	case NETLINK_MSG_CLEANUP:
+		toi_netlink_close_complete(uhd);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static void toi_user_rcv_skb(struct sk_buff *skb)
+{
+	int err;
+	struct nlmsghdr *nlh;
+	struct user_helper_data *uhd = uhd_list;
+
+	while (uhd && uhd->netlink_id != skb->sk->sk_protocol)
+		uhd = uhd->next;
+
+	if (!uhd)
+		return;
+
+	while (skb->len >= NLMSG_SPACE(0)) {
+		u32 rlen;
+
+		nlh = (struct nlmsghdr *) skb->data;
+		if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len)
+			return;
+
+		rlen = NLMSG_ALIGN(nlh->nlmsg_len);
+		if (rlen > skb->len)
+			rlen = skb->len;
+
+		err = toi_nl_gen_rcv_msg(uhd, skb, nlh);
+		if (err)
+			netlink_ack(skb, nlh, err);
+		else if (nlh->nlmsg_flags & NLM_F_ACK)
+			netlink_ack(skb, nlh, 0);
+		skb_pull(skb, rlen);
+	}
+}
+
+static int netlink_prepare(struct user_helper_data *uhd)
+{
+	uhd->next = uhd_list;
+	uhd_list = uhd;
+
+	uhd->sock_seq = 0x42c0ffee;
+	uhd->nl = netlink_kernel_create(&init_net, uhd->netlink_id, 0,
+			toi_user_rcv_skb, NULL, THIS_MODULE);
+	if (!uhd->nl) {
+		printk(KERN_INFO "Failed to allocate netlink socket for %s.\n",
+				uhd->name);
+		return -ENOMEM;
+	}
+
+	toi_fill_skb_pool(uhd);
+
+	return 0;
+}
+
+void toi_netlink_close(struct user_helper_data *uhd)
+{
+	struct task_struct *t;
+
+	read_lock(&tasklist_lock);
+	t = find_task_by_pid(uhd->pid);
+	if (t)
+		t->flags &= ~PF_NOFREEZE;
+	read_unlock(&tasklist_lock);
+
toi_send_netlink_message(uhd, NETLINK_MSG_CLEANUP, NULL, 0); +} +EXPORT_SYMBOL_GPL(toi_netlink_close); + +int toi_netlink_setup(struct user_helper_data *uhd) +{ + if (netlink_prepare(uhd) < 0) { + printk(KERN_INFO "Netlink prepare failed.\n"); + return 1; + } + + if (toi_launch_userspace_program(uhd->program, uhd->netlink_id, + UMH_NO_WAIT) < 0) { + printk(KERN_INFO "Launch userspace program failed.\n"); + toi_netlink_close_complete(uhd); + return 1; + } + + /* Wait 2 seconds for the userspace process to make contact */ + wait_for_completion_timeout(&uhd->wait_for_process, 2*HZ); + + if (uhd->pid == -1) { + printk(KERN_INFO "%s: Failed to contact userspace process.\n", + uhd->name); + toi_netlink_close_complete(uhd); + return 1; + } + + return 0; +} +EXPORT_SYMBOL_GPL(toi_netlink_setup); diff --git a/kernel/power/tuxonice_netlink.h b/kernel/power/tuxonice_netlink.h new file mode 100644 index 0000000..721c222 --- /dev/null +++ b/kernel/power/tuxonice_netlink.h @@ -0,0 +1,58 @@ +/* + * kernel/power/tuxonice_netlink.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Declarations for functions for communicating with a userspace helper + * via netlink. + */ + +#include +#include + +#define NETLINK_MSG_BASE 0x10 + +#define NETLINK_MSG_READY 0x10 +#define NETLINK_MSG_NOFREEZE_ME 0x16 +#define NETLINK_MSG_GET_DEBUGGING 0x19 +#define NETLINK_MSG_CLEANUP 0x24 +#define NETLINK_MSG_NOFREEZE_ACK 0x27 +#define NETLINK_MSG_IS_DEBUGGING 0x28 + +struct user_helper_data { + int (*rcv_msg) (struct sk_buff *skb, struct nlmsghdr *nlh); + void (*not_ready) (void); + struct sock *nl; + u32 sock_seq; + pid_t pid; + char *comm; + char program[256]; + int pool_level; + int pool_limit; + struct sk_buff *emerg_skbs; + int skb_size; + int netlink_id; + char *name; + struct user_helper_data *next; + struct completion wait_for_process; + int interface_version; + int must_init; +}; + +#ifdef CONFIG_NET +int toi_netlink_setup(struct user_helper_data *uhd); +void toi_netlink_close(struct user_helper_data *uhd); +void toi_send_netlink_message(struct user_helper_data *uhd, + int type, void *params, size_t len); +#else +static inline int toi_netlink_setup(struct user_helper_data *uhd) +{ + return 0; +} + +static inline void toi_netlink_close(struct user_helper_data *uhd) { }; +static inline void toi_send_netlink_message(struct user_helper_data *uhd, + int type, void *params, size_t len) { }; +#endif diff --git a/kernel/power/tuxonice_pagedir.c b/kernel/power/tuxonice_pagedir.c new file mode 100644 index 0000000..93b65cd --- /dev/null +++ b/kernel/power/tuxonice_pagedir.c @@ -0,0 +1,347 @@ +/* + * kernel/power/tuxonice_pagedir.c + * + * Copyright (C) 1998-2001 Gabor Kuti + * Copyright (C) 1998,2001,2002 Pavel Machek + * Copyright (C) 2002-2003 Florent Chabaud + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Routines for handling pagesets. + * Note that pbes aren't actually stored as such. They're stored as + * bitmaps and extents. 
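[Editor's note: the other side of this protocol is a userspace helper. A hedged sketch of how such a helper might announce itself with NETLINK_MSG_READY, using only the standard netlink socket API; `nl_id` is whatever protocol number the kernel side registered as uhd->netlink_id, and binding, acks and the debugging messages are omitted:

	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/netlink.h>

	#define NETLINK_MSG_READY 0x10	/* must match tuxonice_netlink.h */

	int send_ready(int nl_id, int version)
	{
		struct { struct nlmsghdr h; int version; } req;
		struct sockaddr_nl addr = { .nl_family = AF_NETLINK };
		int sock = socket(AF_NETLINK, SOCK_RAW, nl_id);

		if (sock < 0)
			return -1;

		memset(&req, 0, sizeof(req));
		req.h.nlmsg_len = sizeof(req);
		req.h.nlmsg_type = NETLINK_MSG_READY;
		req.h.nlmsg_pid = getpid();
		req.version = version;

		/* The kernel dispatches on nlmsg_type in
		 * toi_nl_gen_rcv_msg() and reads the int payload as the
		 * interface version in nl_ready(). */
		if (sendto(sock, &req, sizeof(req), 0,
			   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			close(sock);
			return -1;
		}
		return sock;
	}
]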
+ */ + +#include +#include +#include +#include +#include +#include + +#include "tuxonice_pageflags.h" +#include "tuxonice_ui.h" +#include "tuxonice_pagedir.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice.h" +#include "power.h" +#include "tuxonice_builtin.h" +#include "tuxonice_alloc.h" + +static int ptoi_pfn; +static struct pbe *this_low_pbe; +static struct pbe **last_low_pbe_ptr; + +void toi_reset_alt_image_pageset2_pfn(void) +{ + ptoi_pfn = max_pfn + 1; +} + +static struct page *first_conflicting_page; + +/* + * free_conflicting_pages + */ + +void free_conflicting_pages(void) +{ + while (first_conflicting_page) { + struct page *next = + *((struct page **) kmap(first_conflicting_page)); + kunmap(first_conflicting_page); + toi__free_page(29, first_conflicting_page); + first_conflicting_page = next; + } +} + +/* __toi_get_nonconflicting_page + * + * Description: Gets order zero pages that won't be overwritten + * while copying the original pages. + */ + +struct page *___toi_get_nonconflicting_page(int can_be_highmem) +{ + struct page *page; + int flags = TOI_ATOMIC_GFP; + if (can_be_highmem) + flags |= __GFP_HIGHMEM; + + + if (test_toi_state(TOI_LOADING_ALT_IMAGE) && pageset2_map.bitmap && + (ptoi_pfn < (max_pfn + 2))) { + /* + * ptoi_pfn = max_pfn + 1 when yet to find first ps2 pfn that + * can be used. + * = 0..max_pfn when going through list. + * = max_pfn + 2 when gone through whole list. + */ + do { + ptoi_pfn = get_next_bit_on(&pageset2_map, ptoi_pfn); + if (ptoi_pfn <= max_pfn) { + page = pfn_to_page(ptoi_pfn); + if (!PagePageset1(page) && + (can_be_highmem || !PageHighMem(page))) + return page; + } else + ptoi_pfn++; + } while (ptoi_pfn < max_pfn); + } + + do { + page = toi_alloc_page(29, flags); + if (!page) { + printk(KERN_INFO "Failed to get nonconflicting " + "page.\n"); + return 0; + } + if (PagePageset1(page)) { + struct page **next = (struct page **) kmap(page); + *next = first_conflicting_page; + first_conflicting_page = page; + kunmap(page); + } + } while (PagePageset1(page)); + + return page; +} + +unsigned long __toi_get_nonconflicting_page(void) +{ + struct page *page = ___toi_get_nonconflicting_page(0); + return page ? (unsigned long) page_address(page) : 0; +} + +struct pbe *get_next_pbe(struct page **page_ptr, struct pbe *this_pbe, + int highmem) +{ + if (((((unsigned long) this_pbe) & (PAGE_SIZE - 1)) + + 2 * sizeof(struct pbe)) > PAGE_SIZE) { + struct page *new_page = + ___toi_get_nonconflicting_page(highmem); + if (!new_page) + return ERR_PTR(-ENOMEM); + this_pbe = (struct pbe *) kmap(new_page); + memset(this_pbe, 0, PAGE_SIZE); + *page_ptr = new_page; + } else + this_pbe++; + + return this_pbe; +} + +/* get_pageset1_load_addresses + * + * Description: We check here that pagedir & pages it points to won't collide + * with pages where we're going to restore from the loaded pages + * later. + * Returns: Zero on success, one if couldn't find enough pages (shouldn't + * happen). 
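[Editor's note: get_next_pbe() above packs pbe structures into whole pages and starts a fresh page whenever the next slot would straddle a page boundary. A userspace model of that boundary check; posix_memalign() stands in for the nonconflicting-page allocator, and kmap() is unnecessary outside the kernel:

	#include <stdlib.h>
	#include <string.h>

	#define PAGE_SIZE 4096UL

	struct pbe { void *address; void *orig_address; struct pbe *next; };

	/* Same test as get_next_pbe(): if the slot after this_pbe would
	 * cross a page boundary, start a fresh zeroed page of pbes. */
	static struct pbe *next_pbe(void **page, struct pbe *this_pbe)
	{
		if ((((unsigned long)this_pbe & (PAGE_SIZE - 1)) +
		     2 * sizeof(struct pbe)) > PAGE_SIZE) {
			void *new_page;

			if (posix_memalign(&new_page, PAGE_SIZE, PAGE_SIZE))
				return NULL;
			memset(new_page, 0, PAGE_SIZE);
			*page = new_page;
			return new_page;
		}
		return this_pbe + 1;
	}
]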
+ */ + +int toi_get_pageset1_load_addresses(void) +{ + int pfn, highallocd = 0, lowallocd = 0; + int low_needed = pagedir1.size - get_highmem_size(pagedir1); + int high_needed = get_highmem_size(pagedir1); + int low_pages_for_highmem = 0; + unsigned long flags = GFP_ATOMIC | __GFP_NOWARN | __GFP_HIGHMEM; + struct page *page, *high_pbe_page = NULL, *last_high_pbe_page = NULL, + *low_pbe_page; + struct pbe **last_high_pbe_ptr = &restore_highmem_pblist, + *this_high_pbe = NULL; + int orig_low_pfn = max_pfn + 1, orig_high_pfn = max_pfn + 1; + int high_pbes_done = 0, low_pbes_done = 0; + int low_direct = 0, high_direct = 0; + int high_to_free, low_to_free; + + last_low_pbe_ptr = &restore_pblist; + + /* First, allocate pages for the start of our pbe lists. */ + if (high_needed) { + high_pbe_page = ___toi_get_nonconflicting_page(1); + if (!high_pbe_page) + return 1; + this_high_pbe = (struct pbe *) kmap(high_pbe_page); + memset(this_high_pbe, 0, PAGE_SIZE); + } + + low_pbe_page = ___toi_get_nonconflicting_page(0); + if (!low_pbe_page) + return 1; + this_low_pbe = (struct pbe *) page_address(low_pbe_page); + + /* + * Next, allocate all possible memory to find where we can + * load data directly into destination pages. I'd like to do + * this in bigger chunks, but then we can't free pages + * individually later. + */ + + do { + page = toi_alloc_page(30, flags); + if (page) + SetPagePageset1Copy(page); + } while (page); + + /* + * Find out how many high- and lowmem pages we allocated above, + * and how many pages we can reload directly to their original + * location. + */ + BITMAP_FOR_EACH_SET(&pageset1_copy_map, pfn) { + int is_high; + page = pfn_to_page(pfn); + is_high = PageHighMem(page); + + if (PagePageset1(page)) { + if (test_action_state(TOI_NO_DIRECT_LOAD)) { + ClearPagePageset1Copy(page); + toi__free_page(30, page); + continue; + } else { + if (is_high) + high_direct++; + else + low_direct++; + } + } else { + if (is_high) + highallocd++; + else + lowallocd++; + } + } + + high_needed -= high_direct; + low_needed -= low_direct; + + /* + * Do we need to use some lowmem pages for the copies of highmem + * pages? + */ + if (high_needed > highallocd) { + low_pages_for_highmem = high_needed - highallocd; + high_needed -= low_pages_for_highmem; + low_needed += low_pages_for_highmem; + } + + high_to_free = highallocd - high_needed; + low_to_free = lowallocd - low_needed; + + /* + * Now generate our pbes (which will be used for the atomic restore, + * and free unneeded pages. + */ + BITMAP_FOR_EACH_SET(&pageset1_copy_map, pfn) { + int is_high; + page = pfn_to_page(pfn); + is_high = PageHighMem(page); + + if (PagePageset1(page)) + continue; + + /* Free the page? */ + if ((is_high && high_to_free) || + (!is_high && low_to_free)) { + ClearPagePageset1Copy(page); + toi__free_page(30, page); + if (is_high) + high_to_free--; + else + low_to_free--; + continue; + } + + /* Nope. We're going to use this page. Add a pbe. 
*/ + if (is_high || low_pages_for_highmem) { + struct page *orig_page; + high_pbes_done++; + if (!is_high) + low_pages_for_highmem--; + do { + orig_high_pfn = get_next_bit_on(&pageset1_map, + orig_high_pfn); + BUG_ON(orig_high_pfn > max_pfn); + orig_page = pfn_to_page(orig_high_pfn); + } while (!PageHighMem(orig_page) || + load_direct(orig_page)); + + this_high_pbe->orig_address = orig_page; + this_high_pbe->address = page; + this_high_pbe->next = NULL; + if (last_high_pbe_page != high_pbe_page) { + *last_high_pbe_ptr = + (struct pbe *) high_pbe_page; + if (!last_high_pbe_page) + last_high_pbe_page = high_pbe_page; + } else + *last_high_pbe_ptr = this_high_pbe; + last_high_pbe_ptr = &this_high_pbe->next; + if (last_high_pbe_page != high_pbe_page) { + kunmap(last_high_pbe_page); + last_high_pbe_page = high_pbe_page; + } + this_high_pbe = get_next_pbe(&high_pbe_page, + this_high_pbe, 1); + if (IS_ERR(this_high_pbe)) { + printk(KERN_INFO + "This high pbe is an error.\n"); + return -ENOMEM; + } + } else { + struct page *orig_page; + low_pbes_done++; + do { + orig_low_pfn = get_next_bit_on(&pageset1_map, + orig_low_pfn); + BUG_ON(orig_low_pfn > max_pfn); + orig_page = pfn_to_page(orig_low_pfn); + } while (PageHighMem(orig_page) || + load_direct(orig_page)); + + this_low_pbe->orig_address = page_address(orig_page); + this_low_pbe->address = page_address(page); + this_low_pbe->next = NULL; + *last_low_pbe_ptr = this_low_pbe; + last_low_pbe_ptr = &this_low_pbe->next; + this_low_pbe = get_next_pbe(&low_pbe_page, + this_low_pbe, 0); + if (IS_ERR(this_low_pbe)) { + printk(KERN_INFO "this_low_pbe is an error.\n"); + return -ENOMEM; + } + } + } + + if (high_pbe_page) + kunmap(high_pbe_page); + + if (last_high_pbe_page != high_pbe_page) { + if (last_high_pbe_page) + kunmap(last_high_pbe_page); + toi__free_page(29, high_pbe_page); + } + + free_conflicting_pages(); + + return 0; +} + +int add_boot_kernel_data_pbe(void) +{ + this_low_pbe->address = (char *) __toi_get_nonconflicting_page(); + if (!this_low_pbe->address) { + printk(KERN_INFO "Failed to get bkd atomic restore buffer."); + return -ENOMEM; + } + + toi_bkd.size = sizeof(toi_bkd); + memcpy(this_low_pbe->address, &toi_bkd, sizeof(toi_bkd)); + + *last_low_pbe_ptr = this_low_pbe; + this_low_pbe->orig_address = (char *) boot_kernel_data_buffer; + this_low_pbe->next = NULL; + return 0; +} diff --git a/kernel/power/tuxonice_pagedir.h b/kernel/power/tuxonice_pagedir.h new file mode 100644 index 0000000..2f705c2 --- /dev/null +++ b/kernel/power/tuxonice_pagedir.h @@ -0,0 +1,50 @@ +/* + * kernel/power/tuxonice_pagedir.h + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Declarations for routines for handling pagesets. + */ + +#ifndef KERNEL_POWER_PAGEDIR_H +#define KERNEL_POWER_PAGEDIR_H + +/* Pagedir + * + * Contains the metadata for a set of pages saved in the image. 
+ */ + +struct pagedir { + int id; + int size; +#ifdef CONFIG_HIGHMEM + int size_high; +#endif +}; + +#ifdef CONFIG_HIGHMEM +#define get_highmem_size(pagedir) (pagedir.size_high) +#define set_highmem_size(pagedir, sz) do { pagedir.size_high = sz; } while (0) +#define inc_highmem_size(pagedir) do { pagedir.size_high++; } while (0) +#define get_lowmem_size(pagedir) (pagedir.size - pagedir.size_high) +#else +#define get_highmem_size(pagedir) (0) +#define set_highmem_size(pagedir, sz) do { } while (0) +#define inc_highmem_size(pagedir) do { } while (0) +#define get_lowmem_size(pagedir) (pagedir.size) +#endif + +extern struct pagedir pagedir1, pagedir2; + +extern void toi_copy_pageset1(void); + +extern int toi_get_pageset1_load_addresses(void); + +extern unsigned long __toi_get_nonconflicting_page(void); +struct page *___toi_get_nonconflicting_page(int can_be_highmem); + +extern void toi_reset_alt_image_pageset2_pfn(void); +extern int add_boot_kernel_data_pbe(void); +#endif diff --git a/kernel/power/tuxonice_pageflags.c b/kernel/power/tuxonice_pageflags.c new file mode 100644 index 0000000..574858c --- /dev/null +++ b/kernel/power/tuxonice_pageflags.c @@ -0,0 +1,162 @@ +/* + * kernel/power/tuxonice_pageflags.c + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Routines for serialising and relocating pageflags in which we + * store our image metadata. + */ + +#include +#include +#include +#include +#include +#include +#include "tuxonice_pageflags.h" +#include "tuxonice_modules.h" +#include "tuxonice_pagedir.h" +#include "tuxonice.h" + +DECLARE_DYN_PAGEFLAGS(pageset2_map); +DECLARE_DYN_PAGEFLAGS(page_resave_map); +DECLARE_DYN_PAGEFLAGS(io_map); +DECLARE_DYN_PAGEFLAGS(nosave_map); +DECLARE_DYN_PAGEFLAGS(free_map); + +static int pages_for_zone(struct zone *zone) +{ + return DIV_ROUND_UP(zone->spanned_pages, (PAGE_SIZE << 3)); +} + +int toi_pageflags_space_needed(void) +{ + int total = 0; + struct zone *zone; + + for_each_zone(zone) + if (populated_zone(zone)) + total += sizeof(int) * 3 + pages_for_zone(zone) * + PAGE_SIZE; + + total += sizeof(int); + + return total; +} + +/* save_dyn_pageflags + * + * Description: Save a set of pageflags. + * Arguments: struct dyn_pageflags *: Pointer to the bitmap being saved. + */ + +void save_dyn_pageflags(struct dyn_pageflags *pagemap) +{ + int i, zone_idx, size, node = 0; + struct zone *zone; + struct pglist_data *pgdat; + + if (!pagemap) + return; + + for_each_online_pgdat(pgdat) { + for (zone_idx = 0; zone_idx < MAX_NR_ZONES; zone_idx++) { + zone = &pgdat->node_zones[zone_idx]; + + if (!populated_zone(zone)) + continue; + + toiActiveAllocator->rw_header_chunk(WRITE, NULL, + (char *) &node, sizeof(int)); + toiActiveAllocator->rw_header_chunk(WRITE, NULL, + (char *) &zone_idx, sizeof(int)); + size = pages_for_zone(zone); + toiActiveAllocator->rw_header_chunk(WRITE, NULL, + (char *) &size, sizeof(int)); + + for (i = 0; i < size; i++) { + if (!pagemap->bitmap[node][zone_idx][i+2]) { + printk(KERN_INFO "Sparse pagemap?\n"); + dump_pagemap(pagemap); + BUG(); + } + toiActiveAllocator->rw_header_chunk(WRITE, + NULL, (char *) pagemap->bitmap[node] + [zone_idx][i+2], + PAGE_SIZE); + } + } + node++; + } + node = -1; + toiActiveAllocator->rw_header_chunk(WRITE, NULL, + (char *) &node, sizeof(int)); +} + +/* load_dyn_pageflags + * + * Description: Load a set of pageflags. + * Arguments: struct dyn_pageflags *: Pointer to the bitmap being loaded. 
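[Editor's note: the stream produced by save_dyn_pageflags() above is a simple tagged format: for every populated zone, a (node, zone, size) triple followed by `size` raw bitmap pages, with a node value of -1 as the end marker. A userspace writer for the same record format, with stdio standing in for rw_header_chunk():

	#include <stdio.h>

	#define PAGE_SIZE 4096

	static void write_zone(FILE *out, int node, int zone,
			       char (*bitmap_pages)[PAGE_SIZE], int size)
	{
		int i;

		fwrite(&node, sizeof(int), 1, out);
		fwrite(&zone, sizeof(int), 1, out);
		fwrite(&size, sizeof(int), 1, out);
		for (i = 0; i < size; i++)
			fwrite(bitmap_pages[i], PAGE_SIZE, 1, out);
	}

	static void write_end_marker(FILE *out)
	{
		int marker = -1;	/* what load_dyn_pageflags() checks for */
		fwrite(&marker, sizeof(int), 1, out);
	}

The loader verifies the node and zone tags as it reads, which is what catches an image written with a different memory layout.]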
+ * (It must be allocated before calling this routine.)
+ */
+
+int load_dyn_pageflags(struct dyn_pageflags *pagemap)
+{
+	int i, zone_idx, zone_check = 0, size, node = 0;
+	struct zone *zone;
+	struct pglist_data *pgdat;
+
+	if (!pagemap)
+		return 1;
+
+	for_each_online_pgdat(pgdat) {
+		for (zone_idx = 0; zone_idx < MAX_NR_ZONES; zone_idx++) {
+			zone = &pgdat->node_zones[zone_idx];
+
+			if (!populated_zone(zone))
+				continue;
+
+			/* Same node? */
+			toiActiveAllocator->rw_header_chunk(READ, NULL,
+					(char *) &zone_check, sizeof(int));
+			if (zone_check != node) {
+				printk(KERN_INFO "Node read (%d) != node "
+						"(%d).\n",
+						zone_check, node);
+				return 1;
+			}
+
+			/* Same zone? */
+			toiActiveAllocator->rw_header_chunk(READ, NULL,
+					(char *) &zone_check, sizeof(int));
+			if (zone_check != zone_idx) {
+				printk(KERN_INFO "Zone read (%d) != zone "
+						"(%d).\n",
+						zone_check, zone_idx);
+				return 1;
+			}
+
+			toiActiveAllocator->rw_header_chunk(READ, NULL,
+					(char *) &size, sizeof(int));
+
+			for (i = 0; i < size; i++)
+				toiActiveAllocator->rw_header_chunk(READ, NULL,
+					(char *) pagemap->bitmap[node][zone_idx]
+						[i+2],
+					PAGE_SIZE);
+		}
+		node++;
+	}
+	toiActiveAllocator->rw_header_chunk(READ, NULL, (char *) &zone_check,
+			sizeof(int));
+	if (zone_check != -1) {
+		printk(KERN_INFO "Didn't read end of dyn pageflag data "
+				"marker (%x).\n", zone_check);
+		return 1;
+	}
+
+	return 0;
+}
diff --git a/kernel/power/tuxonice_pageflags.h b/kernel/power/tuxonice_pageflags.h
new file mode 100644
index 0000000..f976b5c
--- /dev/null
+++ b/kernel/power/tuxonice_pageflags.h
@@ -0,0 +1,63 @@
+/*
+ * kernel/power/tuxonice_pageflags.h
+ *
+ * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net)
+ *
+ * This file is released under the GPLv2.
+ *
+ * TuxOnIce needs a few pageflags while working that aren't otherwise
+ * used. To save the struct page pageflags, we dynamically allocate
+ * a bitmap and use that. These are the only non order-0 allocations
+ * we do.
+ *
+ * NOTE!!!
+ * We assume that PAGE_SIZE - sizeof(void *) is a multiple of
+ * sizeof(unsigned long). Is this ever false?
+ */
+
+#include
+#include
+
+extern struct dyn_pageflags pageset1_map;
+extern struct dyn_pageflags pageset1_copy_map;
+extern struct dyn_pageflags pageset2_map;
+extern struct dyn_pageflags page_resave_map;
+extern struct dyn_pageflags io_map;
+extern struct dyn_pageflags nosave_map;
+extern struct dyn_pageflags free_map;
+
+#define PagePageset1(page) (test_dynpageflag(&pageset1_map, page))
+#define SetPagePageset1(page) (set_dynpageflag(&pageset1_map, page))
+#define ClearPagePageset1(page) (clear_dynpageflag(&pageset1_map, page))
+
+#define PagePageset1Copy(page) (test_dynpageflag(&pageset1_copy_map, page))
+#define SetPagePageset1Copy(page) (set_dynpageflag(&pageset1_copy_map, page))
+#define ClearPagePageset1Copy(page) \
+	(clear_dynpageflag(&pageset1_copy_map, page))
+
+#define PagePageset2(page) (test_dynpageflag(&pageset2_map, page))
+#define SetPagePageset2(page) (set_dynpageflag(&pageset2_map, page))
+#define ClearPagePageset2(page) (clear_dynpageflag(&pageset2_map, page))
+
+#define PageWasRW(page) (test_dynpageflag(&pageset2_map, page))
+#define SetPageWasRW(page) (set_dynpageflag(&pageset2_map, page))
+#define ClearPageWasRW(page) (clear_dynpageflag(&pageset2_map, page))
+
+#define PageResave(page) (page_resave_map.bitmap ?
\ + test_dynpageflag(&page_resave_map, page) : 0) +#define SetPageResave(page) (set_dynpageflag(&page_resave_map, page)) +#define ClearPageResave(page) (clear_dynpageflag(&page_resave_map, page)) + +#define PageNosave(page) (nosave_map.bitmap ? \ + test_dynpageflag(&nosave_map, page) : 0) +#define SetPageNosave(page) (set_dynpageflag(&nosave_map, page)) +#define ClearPageNosave(page) (clear_dynpageflag(&nosave_map, page)) + +#define PageNosaveFree(page) (free_map.bitmap ? \ + test_dynpageflag(&free_map, page) : 0) +#define SetPageNosaveFree(page) (set_dynpageflag(&free_map, page)) +#define ClearPageNosaveFree(page) (clear_dynpageflag(&free_map, page)) + +extern void save_dyn_pageflags(struct dyn_pageflags *pagemap); +extern int load_dyn_pageflags(struct dyn_pageflags *pagemap); +extern int toi_pageflags_space_needed(void); diff --git a/kernel/power/tuxonice_power_off.c b/kernel/power/tuxonice_power_off.c new file mode 100644 index 0000000..c10ab52 --- /dev/null +++ b/kernel/power/tuxonice_power_off.c @@ -0,0 +1,266 @@ +/* + * kernel/power/tuxonice_power_off.c + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Support for powering down. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include "tuxonice.h" +#include "tuxonice_ui.h" +#include "tuxonice_power_off.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" + +unsigned long toi_poweroff_method; /* 0 - Kernel power off */ +EXPORT_SYMBOL_GPL(toi_poweroff_method); + +int wake_delay; +static char lid_state_file[256], wake_alarm_dir[256]; +static struct file *lid_file, *alarm_file, *epoch_file; +int post_wake_state = -1; + +/* + * __toi_power_down + * Functionality : Powers down or reboots the computer once the image + * has been written to disk. + * Key Assumptions : Able to reboot/power down via code called or that + * the warning emitted if the calls fail will be visible + * to the user (ie printk resumes devices). 
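[Editor's note: the macros above guard every test with `map.bitmap ? ... : 0`, so a bitmap that has not been allocated yet simply reads as all-clear. The same idea in plain C; a flat bitmap stands in for the node/zone-indexed dyn_pageflags structure:

	struct dyn_flags { unsigned long *bitmap; };

	/* An unallocated map needs no separate "is this initialised?"
	 * check: every flag in it tests as clear. */
	static int flag_test(const struct dyn_flags *map, unsigned long nr)
	{
		unsigned long bits = 8 * sizeof(unsigned long);

		return map->bitmap ?
			!!(map->bitmap[nr / bits] & (1UL << (nr % bits))) : 0;
	}
]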
+ */ + +static void __toi_power_down(int method) +{ + int error; + + if (test_action_state(TOI_REBOOT)) { + toi_prepare_status(DONT_CLEAR_BAR, "Ready to reboot."); + kernel_restart(NULL); + } + + toi_prepare_status(DONT_CLEAR_BAR, "Powering down."); + + switch (method) { + case 0: + break; + case 3: + error = pm_notifier_call_chain(PM_SUSPEND_PREPARE); + if (!error) + error = suspend_devices_and_enter(PM_SUSPEND_MEM); + pm_notifier_call_chain(PM_POST_SUSPEND); + if (!error) + return; + break; + case 4: + if (!hibernation_platform_enter()) + return; + break; + case 5: + /* Historic entry only now */ + break; + } + + if (method && method != 5) + toi_prepare_status(DONT_CLEAR_BAR, + "Falling back to alternate power off method."); + + if (test_result_state(TOI_ABORTED)) + return; + + kernel_power_off(); + kernel_halt(); + toi_prepare_status(DONT_CLEAR_BAR, "Powerdown failed."); + while (1) + cpu_relax(); +} + +#define CLOSE_FILE(file) \ + if (file) { \ + filp_close(file, NULL); file = NULL; \ + } + +static void powerdown_files_close(int toi_or_resume) +{ + if (!toi_or_resume) + return; + + CLOSE_FILE(lid_file); + CLOSE_FILE(alarm_file); + CLOSE_FILE(epoch_file); +} + +static void open_file(char *format, char *arg, struct file **var, int mode, + char *desc) +{ + char buf[256]; + + if (strlen(arg)) { + sprintf(buf, format, arg); + *var = filp_open(buf, mode, 0); + if (IS_ERR(*var) || !*var) { + printk(KERN_INFO "Failed to open %s file '%s' (%p).\n", + desc, buf, *var); + *var = 0; + } + } +} + +static int powerdown_files_open(int toi_or_resume) +{ + if (!toi_or_resume) + return 0; + + open_file("/proc/acpi/button/%s/state", lid_state_file, &lid_file, + O_RDONLY, "lid"); + + if (strlen(wake_alarm_dir)) { + open_file("/sys/class/rtc/%s/wakealarm", wake_alarm_dir, + &alarm_file, O_WRONLY, "alarm"); + + open_file("/sys/class/rtc/%s/since_epoch", wake_alarm_dir, + &epoch_file, O_RDONLY, "epoch"); + } + + return 0; +} + +static int lid_closed(void) +{ + char array[25]; + ssize_t size; + loff_t pos = 0; + + if (!lid_file) + return 0; + + size = vfs_read(lid_file, (char __user *) array, 25, &pos); + if ((int) size < 1) { + printk(KERN_INFO "Failed to read lid state file (%d).\n", + (int) size); + return 0; + } + + if (!strcmp(array, "state: closed\n")) + return 1; + + return 0; +} + +static void write_alarm_file(int value) +{ + ssize_t size; + char buf[40]; + loff_t pos = 0; + + if (!alarm_file) + return; + + sprintf(buf, "%d\n", value); + + size = vfs_write(alarm_file, (char __user *)buf, strlen(buf), &pos); + + if (size < 0) + printk(KERN_INFO "Error %d writing alarm value %s.\n", + (int) size, buf); +} + +/** + * toi_check_resleep: See whether to powerdown again after waking. + * + * After waking, check whether we should powerdown again in a (usually + * different) way. We only do this if the lid switch is still closed. + */ +void toi_check_resleep(void) +{ + /* We only return if we suspended to ram and woke. */ + if (lid_closed() && post_wake_state >= 0) + __toi_power_down(post_wake_state); +} + +void toi_power_down(void) +{ + if (alarm_file && wake_delay) { + char array[25]; + loff_t pos = 0; + size_t size = vfs_read(epoch_file, (char __user *) array, 25, + &pos); + + if (((int) size) < 1) + printk(KERN_INFO "Failed to read epoch file (%d).\n", + (int) size); + else { + unsigned long since_epoch = + simple_strtol(array, NULL, 0); + + /* Clear any wakeup time. */ + write_alarm_file(0); + + /* Set new wakeup time. 
*/ + write_alarm_file(since_epoch + wake_delay); + } + } + + __toi_power_down(toi_poweroff_method); + + toi_check_resleep(); +} +EXPORT_SYMBOL_GPL(toi_power_down); + +static struct toi_sysfs_data sysfs_params[] = { +#if defined(CONFIG_ACPI) + { + TOI_ATTR("lid_file", SYSFS_RW), + SYSFS_STRING(lid_state_file, 256, 0), + }, + + { + TOI_ATTR("wake_delay", SYSFS_RW), + SYSFS_INT(&wake_delay, 0, INT_MAX, 0) + }, + + { + TOI_ATTR("wake_alarm_dir", SYSFS_RW), + SYSFS_STRING(wake_alarm_dir, 256, 0) + }, + + { TOI_ATTR("post_wake_state", SYSFS_RW), + SYSFS_INT(&post_wake_state, -1, 5, 0) + }, + + { TOI_ATTR("powerdown_method", SYSFS_RW), + SYSFS_UL(&toi_poweroff_method, 0, 5, 0) + }, +#endif +}; + +static struct toi_module_ops powerdown_ops = { + .type = MISC_HIDDEN_MODULE, + .name = "poweroff", + .initialise = powerdown_files_open, + .cleanup = powerdown_files_close, + .directory = "[ROOT]", + .module = THIS_MODULE, + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +int toi_poweroff_init(void) +{ + return toi_register_module(&powerdown_ops); +} + +void toi_poweroff_exit(void) +{ + toi_unregister_module(&powerdown_ops); +} diff --git a/kernel/power/tuxonice_power_off.h b/kernel/power/tuxonice_power_off.h new file mode 100644 index 0000000..1db2d16 --- /dev/null +++ b/kernel/power/tuxonice_power_off.h @@ -0,0 +1,33 @@ +/* + * kernel/power/tuxonice_power_off.h + * + * Copyright (C) 2006-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Support for the powering down. + */ + +int toi_pm_state_finish(void); +void toi_power_down(void); +extern unsigned long toi_poweroff_method; +extern int toi_platform_prepare(void); +extern void toi_platform_finish(void); +int toi_poweroff_init(void); +void toi_poweroff_exit(void); +void toi_check_resleep(void); + +extern int platform_start(int platform_mode); +extern int platform_pre_snapshot(int platform_mode); +extern int platform_leave(int platform_mode); +extern int platform_finish(int platform_mode); +extern int platform_pre_restore(int platform_mode); +extern int platform_restore_cleanup(int platform_mode); + +#define platform_test() (toi_poweroff_method == 4) +#define toi_platform_start() platform_start(platform_test()) +#define toi_platform_pre_snapshot() platform_pre_snapshot(platform_test()) +#define toi_platform_leave() platform_leave(platform_test()) +#define toi_platform_finish() platform_finish(platform_test()) +#define toi_platform_pre_restore() platform_pre_restore(platform_test()) +#define toi_platform_restore_cleanup() platform_restore_cleanup(platform_test()) diff --git a/kernel/power/tuxonice_prepare_image.c b/kernel/power/tuxonice_prepare_image.c new file mode 100644 index 0000000..85a3037 --- /dev/null +++ b/kernel/power/tuxonice_prepare_image.c @@ -0,0 +1,1058 @@ +/* + * kernel/power/tuxonice_prepare_image.c + * + * Copyright (C) 2003-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * We need to eat memory until we can: + * 1. Perform the save without changing anything (RAM_NEEDED < #pages) + * 2. Fit it all in available space (toiActiveAllocator->available_space() >= + * main_storage_needed()) + * 3. Reload the pagedir and pageset1 to places that don't collide with their + * final destinations, not knowing to what extent the resumed kernel will + * overlap with the one loaded at boot time. 
I think the resumed kernel + * should overlap completely, but I don't want to rely on this as it is + * an unproven assumption. We therefore assume there will be no overlap at + * all (worse case). + * 4. Meet the user's requested limit (if any) on the size of the image. + * The limit is in MB, so pages/256 (assuming 4K pages). + * + */ + +#include +#include +#include +#include +#include +#include + +#include "tuxonice_pageflags.h" +#include "tuxonice_modules.h" +#include "tuxonice_io.h" +#include "tuxonice_ui.h" +#include "tuxonice_extent.h" +#include "tuxonice_prepare_image.h" +#include "tuxonice_block_io.h" +#include "tuxonice.h" +#include "tuxonice_checksum.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_alloc.h" + +static int num_nosave, header_space_allocated, main_storage_allocated, + storage_available; +int extra_pd1_pages_allowance = MIN_EXTRA_PAGES_ALLOWANCE; +int image_size_limit; + +struct attention_list { + struct task_struct *task; + struct attention_list *next; +}; + +static struct attention_list *attention_list; + +#define PAGESET1 0 +#define PAGESET2 1 + +void free_attention_list(void) +{ + struct attention_list *last = NULL; + + while (attention_list) { + last = attention_list; + attention_list = attention_list->next; + toi_kfree(6, last); + } +} + +static int build_attention_list(void) +{ + int i, task_count = 0; + struct task_struct *p; + struct attention_list *next; + + /* + * Count all userspace process (with task->mm) marked PF_NOFREEZE. + */ + read_lock(&tasklist_lock); + for_each_process(p) + if ((p->flags & PF_NOFREEZE) || p == current) + task_count++; + read_unlock(&tasklist_lock); + + /* + * Allocate attention list structs. + */ + for (i = 0; i < task_count; i++) { + struct attention_list *this = + toi_kzalloc(6, sizeof(struct attention_list), + TOI_WAIT_GFP); + if (!this) { + printk(KERN_INFO "Failed to allocate slab for " + "attention list.\n"); + free_attention_list(); + return 1; + } + this->next = NULL; + if (attention_list) + this->next = attention_list; + attention_list = this; + } + + next = attention_list; + read_lock(&tasklist_lock); + for_each_process(p) + if ((p->flags & PF_NOFREEZE) || p == current) { + next->task = p; + next = next->next; + } + read_unlock(&tasklist_lock); + return 0; +} + +static void pageset2_full(void) +{ + struct zone *zone; + unsigned long flags; + + for_each_zone(zone) { + spin_lock_irqsave(&zone->lru_lock, flags); + if (zone_page_state(zone, NR_INACTIVE)) { + struct page *page; + list_for_each_entry(page, &zone->inactive_list, lru) + SetPagePageset2(page); + } + if (zone_page_state(zone, NR_ACTIVE)) { + struct page *page; + list_for_each_entry(page, &zone->active_list, lru) + SetPagePageset2(page); + } + spin_unlock_irqrestore(&zone->lru_lock, flags); + } +} + +/* + * toi_mark_task_as_pageset + * Functionality : Marks all the saveable pages belonging to a given process + * as belonging to a particular pageset. 
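[Editor's note: build_attention_list() above uses a count / unlock / allocate / relock / fill shape because allocation can sleep and so cannot happen while tasklist_lock is held. A userspace model of the same pattern; the pthread rwlock and flat array stand in for the kernel task list, and exempt() is a stand-in predicate:

	#include <pthread.h>
	#include <stdlib.h>

	struct node { int task; struct node *next; };

	static pthread_rwlock_t tasklist = PTHREAD_RWLOCK_INITIALIZER;
	static int tasks[64], ntasks;

	static int exempt(int t) { return t % 7 == 0; }	/* stand-in */

	static struct node *build_list(void)
	{
		struct node *head = NULL, *n;
		int i, count = 0;

		/* Pass 1: count matching entries under the read lock. */
		pthread_rwlock_rdlock(&tasklist);
		for (i = 0; i < ntasks; i++)
			if (exempt(tasks[i]))
				count++;
		pthread_rwlock_unlock(&tasklist);

		/* Allocate outside the lock: the allocator may block. */
		for (i = 0; i < count; i++) {
			n = calloc(1, sizeof(*n));
			if (!n)
				return head;	/* caller frees partial list */
			n->next = head;
			head = n;
		}

		/* Pass 2: fill the preallocated nodes under the lock. */
		pthread_rwlock_rdlock(&tasklist);
		n = head;
		for (i = 0; i < ntasks && n; i++)
			if (exempt(tasks[i])) {
				n->task = tasks[i];
				n = n->next;
			}
		pthread_rwlock_unlock(&tasklist);

		return head;
	}

As in the kernel version, the caller relies on the matching set not changing between the two passes.]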
+ */ + +static void toi_mark_task_as_pageset(struct task_struct *t, int pageset2) +{ + struct vm_area_struct *vma; + struct mm_struct *mm; + + mm = t->active_mm; + + if (!mm || !mm->mmap) + return; + + if (!irqs_disabled()) + down_read(&mm->mmap_sem); + + for (vma = mm->mmap; vma; vma = vma->vm_next) { + unsigned long posn; + + if (vma->vm_flags & (VM_PFNMAP | VM_IO | VM_RESERVED) || + !vma->vm_start) + continue; + + for (posn = vma->vm_start; posn < vma->vm_end; + posn += PAGE_SIZE) { + struct page *page = follow_page(vma, posn, 0); + if (!page) + continue; + + if (pageset2) + SetPagePageset2(page); + else { + ClearPagePageset2(page); + SetPagePageset1(page); + } + } + } + + if (!irqs_disabled()) + up_read(&mm->mmap_sem); +} + +/* mark_pages_for_pageset2 + * + * Description: Mark unshared pages in processes not needed for hibernate as + * being able to be written out in a separate pagedir. + * HighMem pages are simply marked as pageset2. They won't be + * needed during hibernate. + */ + +static void toi_mark_pages_for_pageset2(void) +{ + struct task_struct *p; + struct attention_list *this = attention_list; + + if (test_action_state(TOI_NO_PAGESET2)) + return; + + clear_dyn_pageflags(&pageset2_map); + + if (test_action_state(TOI_PAGESET2_FULL)) + pageset2_full(); + else { + read_lock(&tasklist_lock); + for_each_process(p) { + if (!p->mm || (p->flags & PF_BORROWED_MM)) + continue; + + toi_mark_task_as_pageset(p, PAGESET2); + } + read_unlock(&tasklist_lock); + } + + /* + * Because the tasks in attention_list are ones related to hibernating, + * we know that they won't go away under us. + */ + + while (this) { + if (!test_result_state(TOI_ABORTED)) + toi_mark_task_as_pageset(this->task, PAGESET1); + this = this->next; + } +} + +/* + * The atomic copy of pageset1 is stored in pageset2 pages. + * But if pageset1 is larger (normally only just after boot), + * we need to allocate extra pages to store the atomic copy. + * The following data struct and functions are used to handle + * the allocation and freeing of that memory. + */ + +static int extra_pages_allocated; + +struct extras { + struct page *page; + int order; + struct extras *next; +}; + +static struct extras *extras_list; + +/* toi_free_extra_pagedir_memory + * + * Description: Free previously allocated extra pagedir memory. + */ +void toi_free_extra_pagedir_memory(void) +{ + /* Free allocated pages */ + while (extras_list) { + struct extras *this = extras_list; + int i; + + extras_list = this->next; + + for (i = 0; i < (1 << this->order); i++) + ClearPageNosave(this->page + i); + + toi_free_pages(9, this->page, this->order); + toi_kfree(7, this); + } + + extra_pages_allocated = 0; +} + +/* toi_allocate_extra_pagedir_memory + * + * Description: Allocate memory for making the atomic copy of pagedir1 in the + * case where it is bigger than pagedir2. + * Arguments: int num_to_alloc: Number of extra pages needed. + * Result: int. Number of extra pages we now have allocated. 
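[Editor's note: the extras list above tracks each higher-order allocation so that it can later be returned a block at a time. A userspace model of the teardown in toi_free_extra_pagedir_memory(), with malloc/free standing in for the page allocator:

	#include <stdlib.h>

	struct extras { void *block; int order; struct extras *next; };

	/* Walk the bookkeeping list, releasing each 2^order block and
	 * then the list node that tracked it. */
	static void free_extras(struct extras **list)
	{
		while (*list) {
			struct extras *this = *list;

			*list = this->next;
			free(this->block);	/* the 2^order pages */
			free(this);		/* the bookkeeping node */
		}
	}

Tracking whole blocks rather than individual pages is what lets the allocator below hand back large chunks while still being able to undo everything on abort.]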
+ */ +static int toi_allocate_extra_pagedir_memory(int extra_pages_needed) +{ + int j, order, num_to_alloc = extra_pages_needed - extra_pages_allocated; + unsigned long flags = TOI_ATOMIC_GFP; + + if (num_to_alloc < 1) + return 0; + + order = fls(num_to_alloc); + if (order >= MAX_ORDER) + order = MAX_ORDER - 1; + + while (num_to_alloc) { + struct page *newpage; + unsigned long virt; + struct extras *extras_entry; + + while ((1 << order) > num_to_alloc) + order--; + + extras_entry = (struct extras *) toi_kzalloc(7, + sizeof(struct extras), TOI_ATOMIC_GFP); + + if (!extras_entry) + return extra_pages_allocated; + + virt = toi_get_free_pages(9, flags, order); + while (!virt && order) { + order--; + virt = toi_get_free_pages(9, flags, order); + } + + if (!virt) { + toi_kfree(7, extras_entry); + return extra_pages_allocated; + } + + newpage = virt_to_page(virt); + + extras_entry->page = newpage; + extras_entry->order = order; + extras_entry->next = NULL; + + if (extras_list) + extras_entry->next = extras_list; + + extras_list = extras_entry; + + for (j = 0; j < (1 << order); j++) { + SetPageNosave(newpage + j); + SetPagePageset1Copy(newpage + j); + } + + extra_pages_allocated += (1 << order); + num_to_alloc -= (1 << order); + } + + return extra_pages_allocated; +} + +/* + * real_nr_free_pages: Count pcp pages for a zone type or all zones + * (-1 for all, otherwise zone_idx() result desired). + */ +int real_nr_free_pages(unsigned long zone_idx_mask) +{ + struct zone *zone; + int result = 0, i = 0, cpu; + + /* PCP lists */ + for_each_zone(zone) { + if (!populated_zone(zone)) + continue; + + if (!(zone_idx_mask & (1 << zone_idx(zone)))) + continue; + + for_each_online_cpu(cpu) { + struct per_cpu_pageset *pset = zone_pcp(zone, cpu); + + for (i = 0; i < ARRAY_SIZE(pset->pcp); i++) { + struct per_cpu_pages *pcp; + + pcp = &pset->pcp[i]; + result += pcp->count; + } + } + + result += zone_page_state(zone, NR_FREE_PAGES); + } + return result; +} + +/* + * Discover how much extra memory will be required by the drivers + * when they're asked to hibernate. We can then ensure that amount + * of memory is available when we really want it. + */ +static void get_extra_pd1_allowance(void) +{ + int orig_num_free = real_nr_free_pages(all_zones_mask), final; + + toi_prepare_status(CLEAR_BAR, "Finding allowance for drivers."); + + suspend_console(); + device_suspend(PMSG_FREEZE); + local_irq_disable(); /* irqs might have been re-enabled on us */ + device_power_down(PMSG_FREEZE); + + final = real_nr_free_pages(all_zones_mask); + + device_power_up(); + local_irq_enable(); + device_resume(); + resume_console(); + + extra_pd1_pages_allowance = max( + orig_num_free - final + MIN_EXTRA_PAGES_ALLOWANCE, + MIN_EXTRA_PAGES_ALLOWANCE); +} + +/* + * Amount of storage needed, possibly taking into account the + * expected compression ratio and possibly also ignoring our + * allowance for extra pages. + */ +static int main_storage_needed(int use_ecr, + int ignore_extra_pd1_allow) +{ + return ((pagedir1.size + pagedir2.size + + (ignore_extra_pd1_allow ? 0 : extra_pd1_pages_allowance)) * + (use_ecr ? toi_expected_compression_ratio() : 100) / 100); +} + +/* + * Storage needed for the image header, in bytes until the return. + */ +static int header_storage_needed(void) +{ + int bytes = (int) sizeof(struct toi_header) + + toi_header_storage_for_modules() + + toi_pageflags_space_needed(); + + return DIV_ROUND_UP(bytes, PAGE_SIZE); +} + +/* + * When freeing memory, pages from either pageset might be freed. 
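[Editor's note: a worked example of the main_storage_needed() arithmetic above, with made-up figures; 100000 pageset1 pages, 60000 pageset2 pages, a 500-page growth allowance and a 60% expected compression ratio are illustrative only:

	#include <stdio.h>

	int main(void)
	{
		int ps1 = 100000, ps2 = 60000, allowance = 500, ratio = 60;

		/* Same shape as main_storage_needed(1, 0): the whole image,
		 * growth allowance included, scaled by the expected ratio. */
		int pages = (ps1 + ps2 + allowance) * ratio / 100;

		printf("storage to allocate: %d pages\n", pages); /* 96300 */
		return 0;
	}

Passing 1 as the second argument drops the allowance from the sum, which is the stricter figure the abort checks compare against.]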
+ *
+ * When seeking to free memory to be able to hibernate, for every ps1 page
+ * freed, we need two fewer pages for the atomic copy because there is one
+ * less page to copy and one more page into which data can be copied.
+ *
+ * Freeing ps2 pages saves us nothing directly. No more memory is available
+ * for the atomic copy. Indirectly, a ps1 page might be freed (slab?), but
+ * that's too much work to figure out.
+ *
+ * => ps1_to_free functions
+ *
+ * Of course, if we just want to reduce the image size because of storage
+ * limitations or an image size limit, either pageset will do.
+ *
+ * => any_to_free function
+ */
+
+static int highpages_ps1_to_free(void)
+{
+	return max_t(int, 0, DIV_ROUND_UP(get_highmem_size(pagedir1) -
+		get_highmem_size(pagedir2), 2) - real_nr_free_high_pages());
+}
+
+static int lowpages_ps1_to_free(void)
+{
+	return max_t(int, 0, DIV_ROUND_UP(get_lowmem_size(pagedir1) +
+		extra_pd1_pages_allowance + MIN_FREE_RAM +
+		toi_memory_for_modules(0) - get_lowmem_size(pagedir2) -
+		real_nr_free_low_pages() - extra_pages_allocated, 2));
+}
+
+static int current_image_size(void)
+{
+	return pagedir1.size + pagedir2.size + header_space_allocated;
+}
+
+static int storage_still_required(void)
+{
+	return max_t(int, 0, main_storage_needed(1, 1) - storage_available);
+}
+
+static int ram_still_required(void)
+{
+	return max_t(int, 0, MIN_FREE_RAM + toi_memory_for_modules(0) -
+		real_nr_free_low_pages() + 2 * extra_pd1_pages_allowance);
+}
+
+static int any_to_free(int use_image_size_limit)
+{
+	int user_limit = (use_image_size_limit && image_size_limit > 0) ?
+		max_t(int, 0, current_image_size() - (image_size_limit << 8))
+		: 0;
+
+	int storage_limit = storage_still_required(),
+	    ram_limit = ram_still_required();
+
+	return max(max(user_limit, storage_limit), ram_limit);
+}
+
+/* amount_needed
+ *
+ * Calculates the amount by which the image size needs to be reduced to meet
+ * our constraints.
+ */
+static int amount_needed(int use_image_size_limit)
+{
+	return max(highpages_ps1_to_free() + lowpages_ps1_to_free(),
+		any_to_free(use_image_size_limit));
+}
+
+static int image_not_ready(int use_image_size_limit)
+{
+	toi_message(TOI_EAT_MEMORY, TOI_LOW, 1,
+		"Amount still needed (%d) > 0:%d. Header: %d < %d: %d,"
+		" Storage allocd: %d < %d: %d.\n",
+		amount_needed(use_image_size_limit),
+		(amount_needed(use_image_size_limit) > 0),
+		header_space_allocated, header_storage_needed(),
+		header_space_allocated < header_storage_needed(),
+		main_storage_allocated,
+		main_storage_needed(1, 1),
+		main_storage_allocated < main_storage_needed(1, 1));
+
+	toi_cond_pause(0, NULL);
+
+	return ((amount_needed(use_image_size_limit) > 0) ||
+		header_space_allocated < header_storage_needed() ||
+		main_storage_allocated < main_storage_needed(1, 1));
+}
+
+static void display_failure_reason(int tries_exceeded)
+{
+	int storage_required = storage_still_required(),
+	    ram_required = ram_still_required(),
+	    high_ps1 = highpages_ps1_to_free(),
+	    low_ps1 = lowpages_ps1_to_free();
+
+	printk(KERN_INFO "Failed to prepare the image because...\n");
+
+	if (!storage_available) {
+		printk(KERN_INFO "- You need some storage available to be "
+				"able to hibernate.\n");
+		return;
+	}
+
+	if (tries_exceeded)
+		printk(KERN_INFO "- The maximum number of iterations was "
+				"reached without successfully preparing the "
+				"image.\n");
+
+	if (header_space_allocated < header_storage_needed()) {
+		printk(KERN_INFO "- Insufficient header storage allocated. "
+				"Need %d, have %d.\n", header_storage_needed(),
+				header_space_allocated);
+		set_abort_result(TOI_INSUFFICIENT_STORAGE);
+	}
+
+	if (storage_required) {
+		printk(KERN_INFO "- We need at least %d pages of storage "
+				"(ignoring the header), but only have %d.\n",
+				main_storage_needed(1, 1),
+				main_storage_allocated);
+		set_abort_result(TOI_INSUFFICIENT_STORAGE);
+	}
+
+	if (ram_required) {
+		printk(KERN_INFO "- We need %d more free pages of low "
+				"memory.\n", ram_required);
+		printk(KERN_INFO "  Minimum free     : %8d\n", MIN_FREE_RAM);
+		printk(KERN_INFO "+ Reqd. by modules : %8d\n",
+				toi_memory_for_modules(0));
+		printk(KERN_INFO "- Currently free   : %8d\n",
+				real_nr_free_low_pages());
+		printk(KERN_INFO "+ 2 * extra allow  : %8d\n",
+				2 * extra_pd1_pages_allowance);
+		printk(KERN_INFO "                   : ========\n");
+		printk(KERN_INFO "  Still needed     : %8d\n", ram_required);
+
+		/* Print breakdown of memory needed for modules. */
+		toi_memory_for_modules(1);
+		set_abort_result(TOI_UNABLE_TO_FREE_ENOUGH_MEMORY);
+	}
+
+	if (high_ps1) {
+		printk(KERN_INFO "- We need to free %d highmem pageset 1 "
+				"pages.\n", high_ps1);
+		set_abort_result(TOI_UNABLE_TO_FREE_ENOUGH_MEMORY);
+	}
+
+	if (low_ps1) {
+		printk(KERN_INFO "- We need to free %d lowmem pageset 1 "
+				"pages.\n", low_ps1);
+		set_abort_result(TOI_UNABLE_TO_FREE_ENOUGH_MEMORY);
+	}
+}
+
+static void display_stats(int always, int sub_extra_pd1_allow)
+{
+	char buffer[255];
+	snprintf(buffer, 254,
+		"Free:%d(%d). Sets:%d(%d),%d(%d). Header:%d/%d. Nosave:%d-%d"
+		"=%d. Storage:%u/%u(%u=>%u). Needed:%d,%d,%d(%d,%d,%d,%d)\n",
+
+		/* Free */
+		real_nr_free_pages(all_zones_mask),
+		real_nr_free_low_pages(),
+
+		/* Sets */
+		pagedir1.size, pagedir1.size - get_highmem_size(pagedir1),
+		pagedir2.size, pagedir2.size - get_highmem_size(pagedir2),
+
+		/* Header */
+		header_space_allocated, header_storage_needed(),
+
+		/* Nosave */
+		num_nosave, extra_pages_allocated,
+		num_nosave - extra_pages_allocated,
+
+		/* Storage */
+		main_storage_allocated,
+		storage_available,
+		main_storage_needed(1, sub_extra_pd1_allow),
+		main_storage_needed(1, 1),
+
+		/* Needed */
+		lowpages_ps1_to_free(), highpages_ps1_to_free(),
+		any_to_free(1),
+		MIN_FREE_RAM, toi_memory_for_modules(0),
+		extra_pd1_pages_allowance, image_size_limit << 8);
+
+	if (always)
+		printk("%s", buffer);
+	else
+		toi_message(TOI_EAT_MEMORY, TOI_MEDIUM, 1, "%s", buffer);
+}
+
+/* generate_free_page_map
+ *
+ * Description: This routine generates a bitmap of free pages from the
+ * lists used by the memory manager. We then use the bitmap
+ * to quickly calculate which pages to save and in which
+ * pagesets.
+ */ +static void generate_free_page_map(void) +{ + int order, pfn, cpu, t; + unsigned long flags, i; + struct zone *zone; + struct list_head *curr; + + for_each_zone(zone) { + if (!populated_zone(zone)) + continue; + + spin_lock_irqsave(&zone->lock, flags); + + for (i = 0; i < zone->spanned_pages; i++) + ClearPageNosaveFree(pfn_to_page( + zone->zone_start_pfn + i)); + + for_each_migratetype_order(order, t) { + list_for_each(curr, + &zone->free_area[order].free_list[t]) { + unsigned long i; + + pfn = page_to_pfn(list_entry(curr, struct page, + lru)); + for (i = 0; i < (1UL << order); i++) + SetPageNosaveFree(pfn_to_page(pfn + i)); + } + } + + for_each_online_cpu(cpu) { + struct per_cpu_pageset *pset = zone_pcp(zone, cpu); + + for (i = 0; i < ARRAY_SIZE(pset->pcp); i++) { + struct per_cpu_pages *pcp; + struct page *page; + + pcp = &pset->pcp[i]; + list_for_each_entry(page, &pcp->list, lru) + SetPageNosaveFree(page); + } + } + + spin_unlock_irqrestore(&zone->lock, flags); + } +} + +/* size_of_free_region + * + * Description: Return the number of pages that are free, beginning with and + * including this one. + */ +static int size_of_free_region(struct page *page) +{ + struct zone *zone = page_zone(page); + struct page *posn = page, *last_in_zone = + pfn_to_page(zone->zone_start_pfn) + zone->spanned_pages - 1; + + while (posn <= last_in_zone && PageNosaveFree(posn)) + posn++; + return (posn - page); +} + +/* flag_image_pages + * + * This routine generates our lists of pages to be stored in each + * pageset. Since we store the data using extents, and adding new + * extents might allocate a new extent page, this routine may well + * be called more than once. + */ +static void flag_image_pages(int atomic_copy) +{ + int num_free = 0; + unsigned long loop; + struct zone *zone; + + pagedir1.size = 0; + pagedir2.size = 0; + + set_highmem_size(pagedir1, 0); + set_highmem_size(pagedir2, 0); + + num_nosave = 0; + + clear_dyn_pageflags(&pageset1_map); + + generate_free_page_map(); + + /* + * Pages not to be saved are marked Nosave irrespective of being + * reserved. 
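+	 *
+	 * Runs of free pages are skipped in one step using
+	 * size_of_free_region(). Unsaveable and Nosave pages are only
+	 * counted. Pages flagged Pageset2 are added to pagedir2 (and
+	 * back to pagedir1 as well if marked Resave); everything else
+	 * goes into pagedir1.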
+ */ + for_each_zone(zone) { + int highmem = is_highmem(zone); + + if (!populated_zone(zone)) + continue; + + for (loop = 0; loop < zone->spanned_pages; loop++) { + unsigned long pfn = zone->zone_start_pfn + loop; + struct page *page; + int chunk_size; + + if (!pfn_valid(pfn)) + continue; + + page = pfn_to_page(pfn); + + chunk_size = size_of_free_region(page); + if (chunk_size) { + num_free += chunk_size; + loop += chunk_size - 1; + continue; + } + + if (highmem) + page = saveable_highmem_page(pfn); + else + page = saveable_page(pfn); + + if (!page || PageNosave(page)) { + num_nosave++; + continue; + } + + if (PagePageset2(page)) { + pagedir2.size++; + if (PageHighMem(page)) + inc_highmem_size(pagedir2); + else + SetPagePageset1Copy(page); + if (PageResave(page)) { + SetPagePageset1(page); + ClearPagePageset1Copy(page); + pagedir1.size++; + if (PageHighMem(page)) + inc_highmem_size(pagedir1); + } + } else { + pagedir1.size++; + SetPagePageset1(page); + if (PageHighMem(page)) + inc_highmem_size(pagedir1); + } + } + } + + if (atomic_copy) + return; + + toi_message(TOI_EAT_MEMORY, TOI_MEDIUM, 0, + "Count data pages: Set1 (%d) + Set2 (%d) + Nosave (%d) + " + "NumFree (%d) = %d.\n", + pagedir1.size, pagedir2.size, num_nosave, num_free, + pagedir1.size + pagedir2.size + num_nosave + num_free); +} + +void toi_recalculate_image_contents(int atomic_copy) +{ + clear_dyn_pageflags(&pageset1_map); + if (!atomic_copy) { + int pfn; + BITMAP_FOR_EACH_SET(&pageset2_map, pfn) + ClearPagePageset1Copy(pfn_to_page(pfn)); + /* Need to call this before getting pageset1_size! */ + toi_mark_pages_for_pageset2(); + } + flag_image_pages(atomic_copy); + + if (!atomic_copy) { + storage_available = toiActiveAllocator->storage_available(); + display_stats(0, 0); + } +} + +/* update_image + * + * Allocate [more] memory and storage for the image. + */ +static void update_image(void) +{ + int result, param_used, wanted, got; + + toi_recalculate_image_contents(0); + + /* Include allowance for growth in pagedir1 while writing pagedir 2 */ + wanted = pagedir1.size + extra_pd1_pages_allowance - + get_lowmem_size(pagedir2); + if (wanted > extra_pages_allocated) { + got = toi_allocate_extra_pagedir_memory(wanted); + if (wanted < got) { + toi_message(TOI_EAT_MEMORY, TOI_LOW, 1, + "Want %d extra pages for pageset1, got %d.\n", + wanted, got); + return; + } + } + + thaw_kernel_threads(); + + /* + * Allocate remaining storage space, if possible, up to the + * maximum we know we'll need. It's okay to allocate the + * maximum if the writer is the swapwriter, but + * we don't want to grab all available space on an NFS share. + * We therefore ignore the expected compression ratio here, + * thereby trying to allocate the maximum image size we could + * need (assuming compression doesn't expand the image), but + * don't complain if we can't get the full amount we're after. + */ + + toiActiveAllocator->allocate_storage( + min(storage_available, main_storage_needed(0, 0))); + + main_storage_allocated = toiActiveAllocator->storage_allocated(); + + param_used = header_storage_needed(); + + result = toiActiveAllocator->allocate_header_space(param_used); + + if (result) + toi_message(TOI_EAT_MEMORY, TOI_LOW, 1, + "Still need to get more storage space for header.\n"); + else + header_space_allocated = param_used; + + if (freeze_processes()) + set_abort_result(TOI_FREEZING_FAILED); + + toi_recalculate_image_contents(0); +} + +/* attempt_to_freeze + * + * Try to freeze processes. 
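+ *
+ * We thaw everything first and then refreeze, so each attempt
+ * starts from a clean state; an abort is flagged if the freeze
+ * fails.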
+ */ + +static int attempt_to_freeze(void) +{ + int result; + + /* Stop processes before checking again */ + thaw_processes(); + toi_prepare_status(CLEAR_BAR, "Freezing processes & syncing " + "filesystems."); + result = freeze_processes(); + + if (result) + set_abort_result(TOI_FREEZING_FAILED); + + return result; +} + +/* eat_memory + * + * Try to free some memory, either to meet hard or soft constraints on the image + * characteristics. + * + * Hard constraints: + * - Pageset1 must be < half of memory; + * - We must have enough memory free at resume time to have pageset1 + * be able to be loaded in pages that don't conflict with where it has to + * be restored. + * Soft constraints + * - User specificied image size limit. + */ +static void eat_memory(void) +{ + int amount_wanted = 0; + int did_eat_memory = 0; + + /* + * Note that if we have enough storage space and enough free memory, we + * may exit without eating anything. We give up when the last 10 + * iterations ate no extra pages because we're not going to get much + * more anyway, but the few pages we get will take a lot of time. + * + * We freeze processes before beginning, and then unfreeze them if we + * need to eat memory until we think we have enough. If our attempts + * to freeze fail, we give up and abort. + */ + + toi_recalculate_image_contents(0); + amount_wanted = amount_needed(1); + + switch (image_size_limit) { + case -1: /* Don't eat any memory */ + if (amount_wanted > 0) { + set_abort_result(TOI_WOULD_EAT_MEMORY); + return; + } + break; + case -2: /* Free caches only */ + drop_pagecache(); + toi_recalculate_image_contents(0); + amount_wanted = amount_needed(1); + did_eat_memory = 1; + break; + default: + break; + } + + if (amount_wanted > 0 && !test_result_state(TOI_ABORTED) && + image_size_limit != -1) { + struct zone *zone; + int zone_idx; + + toi_prepare_status(CLEAR_BAR, + "Seeking to free %dMB of memory.", + MB(amount_wanted)); + + thaw_kernel_threads(); + + for (zone_idx = 0; zone_idx < MAX_NR_ZONES; zone_idx++) { + unsigned long zone_type_free = max_t(int, + (zone_idx == ZONE_HIGHMEM) ? + highpages_ps1_to_free() : + lowpages_ps1_to_free(), amount_wanted); + + if (zone_type_free < 0) + break; + + for_each_zone(zone) { + if (zone_idx(zone) != zone_idx) + continue; + + shrink_one_zone(zone, zone_type_free, 3); + + did_eat_memory = 1; + + toi_recalculate_image_contents(0); + + amount_wanted = amount_needed(1); + zone_type_free = max_t(int, + (zone_idx == ZONE_HIGHMEM) ? + highpages_ps1_to_free() : + lowpages_ps1_to_free(), amount_wanted); + + if (zone_type_free < 0) + break; + } + } + + toi_cond_pause(0, NULL); + + if (freeze_processes()) + set_abort_result(TOI_FREEZING_FAILED); + } + + if (did_eat_memory) { + unsigned long orig_state = get_toi_state(); + /* Freeze_processes will call sys_sync too */ + restore_toi_state(orig_state); + toi_recalculate_image_contents(0); + } + + /* Blank out image size display */ + toi_update_status(100, 100, NULL); +} + +/* toi_prepare_image + * + * Entry point to the whole image preparation section. + * + * We do four things: + * - Freeze processes; + * - Ensure image size constraints are met; + * - Complete all the preparation for saving the image, + * including allocation of storage. The only memory + * that should be needed when we're finished is that + * for actually storing the image (and we know how + * much is needed for that because the modules tell + * us). + * - Make sure that all dirty buffers are written out. 
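+ *
+ * Preparation is attempted at most MAX_TRIES times. If the image
+ * still isn't ready after that (or we abort along the way), we
+ * display the reason and abort the hibernation cycle.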
+ */ +#define MAX_TRIES 2 +int toi_prepare_image(void) +{ + int result = 1, tries = 1; + + header_space_allocated = 0; + main_storage_allocated = 0; + + if (attempt_to_freeze()) + return 1; + + if (!extra_pd1_pages_allowance) + get_extra_pd1_allowance(); + + storage_available = toiActiveAllocator->storage_available(); + + if (!storage_available) { + display_failure_reason(0); + set_abort_result(TOI_NOSTORAGE_AVAILABLE); + return 1; + } + + if (build_attention_list()) { + abort_hibernate(TOI_UNABLE_TO_PREPARE_IMAGE, + "Unable to successfully prepare the image.\n"); + return 1; + } + + do { + toi_prepare_status(CLEAR_BAR, + "Preparing Image. Try %d.", tries); + + eat_memory(); + + if (test_result_state(TOI_ABORTED)) + break; + + update_image(); + + tries++; + + } while (image_not_ready(1) && tries <= MAX_TRIES && + !test_result_state(TOI_ABORTED)); + + result = image_not_ready(0); + + if (!test_result_state(TOI_ABORTED)) { + if (result) { + display_stats(1, 0); + display_failure_reason(tries > MAX_TRIES); + abort_hibernate(TOI_UNABLE_TO_PREPARE_IMAGE, + "Unable to successfully prepare the image.\n"); + } else { + unlink_lru_lists(); + toi_cond_pause(1, "Image preparation complete."); + } + } + + return result ? result : allocate_checksum_pages(); +} + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(real_nr_free_pages); +#endif diff --git a/kernel/power/tuxonice_prepare_image.h b/kernel/power/tuxonice_prepare_image.h new file mode 100644 index 0000000..0081329 --- /dev/null +++ b/kernel/power/tuxonice_prepare_image.h @@ -0,0 +1,35 @@ +/* + * kernel/power/tuxonice_prepare_image.h + * + * Copyright (C) 2003-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + */ + +#include + +extern int toi_prepare_image(void); +extern void toi_recalculate_image_contents(int storage_available); +extern int real_nr_free_pages(unsigned long zone_idx_mask); +extern int image_size_limit; +extern void toi_free_extra_pagedir_memory(void); +extern int extra_pd1_pages_allowance; +extern void free_attention_list(void); + +#define MIN_FREE_RAM 100 +#define MIN_EXTRA_PAGES_ALLOWANCE 500 + +#define all_zones_mask ((unsigned long) ((1 << MAX_NR_ZONES) - 1)) +#ifdef CONFIG_HIGHMEM +#define real_nr_free_high_pages() (real_nr_free_pages(1 << ZONE_HIGHMEM)) +#define real_nr_free_low_pages() (real_nr_free_pages(all_zones_mask - \ + (1 << ZONE_HIGHMEM))) +#else +#define real_nr_free_high_pages() (0) +#define real_nr_free_low_pages() (real_nr_free_pages(all_zones_mask)) + +/* For eat_memory function */ +#define ZONE_HIGHMEM (MAX_NR_ZONES + 1) +#endif + diff --git a/kernel/power/tuxonice_storage.c b/kernel/power/tuxonice_storage.c new file mode 100644 index 0000000..777ff3c --- /dev/null +++ b/kernel/power/tuxonice_storage.c @@ -0,0 +1,292 @@ +/* + * kernel/power/tuxonice_storage.c + * + * Copyright (C) 2005-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Routines for talking to a userspace program that manages storage. 
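+ * This is the "userspace storage manager" (usm).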
+ * + * The kernel side: + * - starts the userspace program; + * - sends messages telling it when to open and close the connection; + * - tells it when to quit; + * + * The user space side: + * - passes messages regarding status; + * + */ + +#include +#include + +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice_netlink.h" +#include "tuxonice_storage.h" +#include "tuxonice_ui.h" + +static struct user_helper_data usm_helper_data; +static struct toi_module_ops usm_ops; +static int message_received, usm_prepare_count; +static int storage_manager_last_action, storage_manager_action; + +static int usm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) +{ + int type; + int *data; + + type = nlh->nlmsg_type; + + /* A control message: ignore them */ + if (type < NETLINK_MSG_BASE) + return 0; + + /* Unknown message: reply with EINVAL */ + if (type >= USM_MSG_MAX) + return -EINVAL; + + /* All operations require privileges, even GET */ + if (security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; + + /* Only allow one task to receive NOFREEZE privileges */ + if (type == NETLINK_MSG_NOFREEZE_ME && usm_helper_data.pid != -1) + return -EBUSY; + + data = (int *) NLMSG_DATA(nlh); + + switch (type) { + case USM_MSG_SUCCESS: + case USM_MSG_FAILED: + message_received = type; + complete(&usm_helper_data.wait_for_process); + break; + default: + printk(KERN_INFO "Storage manager doesn't recognise " + "message %d.\n", type); + } + + return 1; +} + +#ifdef CONFIG_NET +static int activations; + +int toi_activate_storage(int force) +{ + int tries = 1; + + if (usm_helper_data.pid == -1 || !usm_ops.enabled) + return 0; + + message_received = 0; + activations++; + + if (activations > 1 && !force) + return 0; + + while ((!message_received || message_received == USM_MSG_FAILED) && + tries < 2) { + toi_prepare_status(DONT_CLEAR_BAR, "Activate storage attempt " + "%d.\n", tries); + + init_completion(&usm_helper_data.wait_for_process); + + toi_send_netlink_message(&usm_helper_data, + USM_MSG_CONNECT, + NULL, 0); + + /* Wait 2 seconds for the userspace process to make contact */ + wait_for_completion_timeout(&usm_helper_data.wait_for_process, + 2*HZ); + + tries++; + } + + return 0; +} + +int toi_deactivate_storage(int force) +{ + if (usm_helper_data.pid == -1 || !usm_ops.enabled) + return 0; + + message_received = 0; + activations--; + + if (activations && !force) + return 0; + + init_completion(&usm_helper_data.wait_for_process); + + toi_send_netlink_message(&usm_helper_data, + USM_MSG_DISCONNECT, + NULL, 0); + + wait_for_completion_timeout(&usm_helper_data.wait_for_process, 2*HZ); + + if (!message_received || message_received == USM_MSG_FAILED) { + printk(KERN_INFO "Returning failure disconnecting storage.\n"); + return 1; + } + + return 0; +} +#endif + +static void storage_manager_simulate(void) +{ + printk(KERN_INFO "--- Storage manager simulate ---\n"); + toi_prepare_usm(); + schedule(); + printk(KERN_INFO "--- Activate storage 1 ---\n"); + toi_activate_storage(1); + schedule(); + printk(KERN_INFO "--- Deactivate storage 1 ---\n"); + toi_deactivate_storage(1); + schedule(); + printk(KERN_INFO "--- Cleanup usm ---\n"); + toi_cleanup_usm(); + schedule(); + printk(KERN_INFO "--- Storage manager simulate ends ---\n"); +} + +static int usm_storage_needed(void) +{ + return strlen(usm_helper_data.program); +} + +static int usm_save_config_info(char *buf) +{ + int len = strlen(usm_helper_data.program); + memcpy(buf, usm_helper_data.program, len); + return len; +} + +static void 
usm_load_config_info(char *buf, int size) +{ + /* Don't load the saved path if one has already been set */ + if (usm_helper_data.program[0]) + return; + + memcpy(usm_helper_data.program, buf, size); +} + +static int usm_memory_needed(void) +{ + /* ball park figure of 32 pages */ + return (32 * PAGE_SIZE); +} + +/* toi_prepare_usm + */ +int toi_prepare_usm(void) +{ + usm_prepare_count++; + + if (usm_prepare_count > 1 || !usm_ops.enabled) + return 0; + + usm_helper_data.pid = -1; + + if (!*usm_helper_data.program) + return 0; + + toi_netlink_setup(&usm_helper_data); + + if (usm_helper_data.pid == -1) + printk(KERN_INFO "TuxOnIce Storage Manager wanted, but couldn't" + " start it.\n"); + + toi_activate_storage(0); + + return (usm_helper_data.pid != -1); +} + +void toi_cleanup_usm(void) +{ + usm_prepare_count--; + + if (usm_helper_data.pid > -1 && !usm_prepare_count) { + toi_deactivate_storage(0); + toi_netlink_close(&usm_helper_data); + } +} + +static void storage_manager_activate(void) +{ + if (storage_manager_action == storage_manager_last_action) + return; + + if (storage_manager_action) + toi_prepare_usm(); + else + toi_cleanup_usm(); + + storage_manager_last_action = storage_manager_action; +} + +/* + * User interface specific /sys/power/tuxonice entries. + */ + +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("simulate_atomic_copy", SYSFS_RW), + .type = TOI_SYSFS_DATA_NONE, + .write_side_effect = storage_manager_simulate, + }, + + { TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&usm_ops.enabled, 0, 1, 0) + }, + + { TOI_ATTR("program", SYSFS_RW), + SYSFS_STRING(usm_helper_data.program, 254, 0) + }, + + { TOI_ATTR("activate_storage", SYSFS_RW), + SYSFS_INT(&storage_manager_action, 0, 1, 0), + .write_side_effect = storage_manager_activate, + } +}; + +static struct toi_module_ops usm_ops = { + .type = MISC_MODULE, + .name = "usm", + .directory = "storage_manager", + .module = THIS_MODULE, + .storage_needed = usm_storage_needed, + .save_config_info = usm_save_config_info, + .load_config_info = usm_load_config_info, + .memory_needed = usm_memory_needed, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/* toi_usm_sysfs_init + * Description: Boot time initialisation for user interface. + */ +int toi_usm_init(void) +{ + usm_helper_data.nl = NULL; + usm_helper_data.program[0] = '\0'; + usm_helper_data.pid = -1; + usm_helper_data.skb_size = 0; + usm_helper_data.pool_limit = 6; + usm_helper_data.netlink_id = NETLINK_TOI_USM; + usm_helper_data.name = "userspace storage manager"; + usm_helper_data.rcv_msg = usm_user_rcv_msg; + usm_helper_data.interface_version = 1; + usm_helper_data.must_init = 0; + init_completion(&usm_helper_data.wait_for_process); + + return toi_register_module(&usm_ops); +} + +void toi_usm_exit(void) +{ + toi_unregister_module(&usm_ops); +} diff --git a/kernel/power/tuxonice_storage.h b/kernel/power/tuxonice_storage.h new file mode 100644 index 0000000..2f895bf --- /dev/null +++ b/kernel/power/tuxonice_storage.h @@ -0,0 +1,53 @@ +/* + * kernel/power/tuxonice_storage.h + * + * Copyright (C) 2005-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. 
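+ *
+ * Declarations for the userspace storage manager, including the
+ * netlink message types shared between kernel and userspace.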
+ */ + +#ifdef CONFIG_NET +int toi_prepare_usm(void); +void toi_cleanup_usm(void); + +int toi_activate_storage(int force); +int toi_deactivate_storage(int force); +extern int toi_usm_init(void); +extern void toi_usm_exit(void); +#else +static inline int toi_usm_init(void) { return 0; } +static inline void toi_usm_exit(void) { } + +static inline int toi_activate_storage(int force) +{ + return 0; +} + +static inline int toi_deactivate_storage(int force) +{ + return 0; +} + +static inline int toi_prepare_usm(void) { return 0; } +static inline void toi_cleanup_usm(void) { } +#endif + +enum { + USM_MSG_BASE = 0x10, + + /* Kernel -> Userspace */ + USM_MSG_CONNECT = 0x30, + USM_MSG_DISCONNECT = 0x31, + USM_MSG_SUCCESS = 0x40, + USM_MSG_FAILED = 0x41, + + USM_MSG_MAX, +}; + +#ifdef CONFIG_NET +extern __init int toi_usm_init(void); +extern __exit void toi_usm_cleanup(void); +#else +#define toi_usm_init() do { } while (0) +#define toi_usm_cleanup() do { } while (0) +#endif diff --git a/kernel/power/tuxonice_swap.c b/kernel/power/tuxonice_swap.c new file mode 100644 index 0000000..aa5ef55 --- /dev/null +++ b/kernel/power/tuxonice_swap.c @@ -0,0 +1,1284 @@ +/* + * kernel/power/tuxonice_swap.c + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * Distributed under GPLv2. + * + * This file encapsulates functions for usage of swap space as a + * backing store. + */ + +#include +#include +#include +#include +#include +#include + +#include "tuxonice.h" +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice_io.h" +#include "tuxonice_ui.h" +#include "tuxonice_extent.h" +#include "tuxonice_block_io.h" +#include "tuxonice_alloc.h" + +static struct toi_module_ops toi_swapops; + +#define SIGNATURE_VER 6 + +/* --- Struct of pages stored on disk */ + +union diskpage { + union swap_header swh; /* swh.magic is the only member used */ +}; + +union p_diskpage { + union diskpage *pointer; + char *ptr; + unsigned long address; +}; + +/* Devices used for swap */ +static struct toi_bdev_info devinfo[MAX_SWAPFILES]; + +/* Extent chains for swap & blocks */ +struct extent_chain swapextents; +struct extent_chain block_chain[MAX_SWAPFILES]; + +static dev_t header_dev_t; +static struct block_device *header_block_device; +static unsigned long headerblock; + +/* For swapfile automatically swapon/off'd. */ +static char swapfilename[32] = ""; +static int toi_swapon_status; + +/* Header Page Information */ +static int header_pages_allocated; + +/* Swap Pages */ +static int main_pages_allocated, main_pages_requested; + +/* User Specified Parameters. */ + +static unsigned long resume_firstblock; +static dev_t resume_swap_dev_t; +static struct block_device *resume_block_device; + +struct sysinfo swapinfo; + +/* Block devices open. */ +struct bdev_opened { + dev_t device; + struct block_device *bdev; +}; + +/* + * Entry MAX_SWAPFILES is the resume block device, which may + * be a swap device not enabled when we hibernate. + * Entry MAX_SWAPFILES + 1 is the header block device, which + * is needed before we find out which slot it occupies. + * + * We use a separate struct to devInfo so that we can track + * the bdevs we open, because if we need to abort resuming + * prior to the atomic restore, they need to be closed, but + * closing them after sucessfully resuming would be wrong. + */ +static struct bdev_opened *bdevs_opened[MAX_SWAPFILES + 2]; + +/** + * close_bdev: Close a swap bdev. + * + * int: The swap entry number to close. 
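+ *
+ * Also frees the tracking structure and clears the slot, so closing
+ * the same slot twice is harmless.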
+ */ +static void close_bdev(int i) +{ + struct bdev_opened *this = bdevs_opened[i]; + + if (!this) + return; + + blkdev_put(this->bdev); + toi_kfree(8, this); + bdevs_opened[i] = NULL; +} + +/** + * close_bdevs: Close all bdevs we opened. + * + * Close all bdevs that we opened and reset the related vars. + */ +static void close_bdevs(void) +{ + int i; + + for (i = 0; i < MAX_SWAPFILES + 2; i++) + close_bdev(i); + + resume_block_device = header_block_device = NULL; +} + +/** + * open_bdev: Open a bdev at resume time. + * + * index: The swap index. May be MAX_SWAPFILES for the resume_dev_t + * (the user can have resume= pointing at a swap partition/file that isn't + * swapon'd when they hibernate. MAX_SWAPFILES+1 for the first page of the + * header. It will be from a swap partition that was enabled when we hibernated, + * but we don't know it's real index until we read that first page. + * dev_t: The device major/minor. + * display_errs: Whether to try to do this quietly. + * + * We stored a dev_t in the image header. Open the matching device without + * requiring /dev/ in most cases and record the details needed + * to close it later and avoid duplicating work. + */ +static struct block_device *open_bdev(int index, dev_t device, int display_errs) +{ + struct bdev_opened *this; + struct block_device *bdev; + + if (bdevs_opened[index]) { + if (bdevs_opened[index]->device == device) + return bdevs_opened[index]->bdev; + + close_bdev(index); + } + + bdev = toi_open_by_devnum(device, FMODE_READ); + + if (IS_ERR(bdev) || !bdev) { + if (display_errs) + toi_early_boot_message(1, TOI_CONTINUE_REQ, + "Failed to get access to block device " + "\"%x\" (error %d).\n Maybe you need " + "to run mknod and/or lvmsetup in an " + "initrd/ramfs?", device, bdev); + return ERR_PTR(-EINVAL); + } + + this = toi_kzalloc(8, sizeof(struct bdev_opened), GFP_KERNEL); + if (!this) { + printk(KERN_WARNING "TuxOnIce: Failed to allocate memory for " + "opening a bdev."); + blkdev_put(bdev); + return ERR_PTR(-ENOMEM); + } + + bdevs_opened[index] = this; + this->device = device; + this->bdev = bdev; + + return bdev; +} + +/** + * enable_swapfile: Swapon the user specified swapfile prior to hibernating. + * + * Activate the given swapfile if it wasn't already enabled. Remember whether + * we really did swapon it for swapoffing later. + */ +static void enable_swapfile(void) +{ + int activateswapresult = -EINVAL; + + if (swapfilename[0]) { + /* Attempt to swap on with maximum priority */ + activateswapresult = sys_swapon(swapfilename, 0xFFFF); + if (activateswapresult && activateswapresult != -EBUSY) + printk("TuxOnIce: The swapfile/partition specified by " + "/sys/power/tuxonice/swap/swapfile " + "(%s) could not be turned on (error %d). " + "Attempting to continue.\n", + swapfilename, activateswapresult); + if (!activateswapresult) + toi_swapon_status = 1; + } +} + +/** + * disable_swapfile: Swapoff any file swaponed at the start of the cycle. + * + * If we did successfully swapon a file at the start of the cycle, swapoff + * it now (finishing up). + */ +static void disable_swapfile(void) +{ + if (!toi_swapon_status) + return; + + sys_swapoff(swapfilename); + toi_swapon_status = 0; +} + +/** + * try_to_parse_resume_device: Try to parse resume= + * + * Any "swap:" has been stripped away and we just have the path to deal with. + * We attempt to do name_to_dev_t, open and stat the file. Having opened the + * file, get the struct block_device * to match. 
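+ *
+ * If name_to_dev_t can't translate the path (for instance, because
+ * the device node doesn't exist yet), we fall back to stat'ing the
+ * file and using the device number recorded there.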
+ */ +static int try_to_parse_resume_device(char *commandline, int quiet) +{ + struct kstat stat; + int error = 0; + + resume_swap_dev_t = name_to_dev_t(commandline); + + if (!resume_swap_dev_t) { + struct file *file = filp_open(commandline, + O_RDONLY|O_LARGEFILE, 0); + + if (!IS_ERR(file) && file) { + vfs_getattr(file->f_vfsmnt, file->f_dentry, &stat); + filp_close(file, NULL); + } else + error = vfs_stat(commandline, &stat); + if (!error) + resume_swap_dev_t = stat.rdev; + } + + if (!resume_swap_dev_t) { + if (quiet) + return 1; + + if (test_toi_state(TOI_TRYING_TO_RESUME)) + toi_early_boot_message(1, TOI_CONTINUE_REQ, + "Failed to translate \"%s\" into a device id.\n", + commandline); + else + printk("TuxOnIce: Can't translate \"%s\" into a device " + "id yet.\n", commandline); + return 1; + } + + resume_block_device = open_bdev(MAX_SWAPFILES, resume_swap_dev_t, 0); + if (IS_ERR(resume_block_device)) { + if (!quiet) + toi_early_boot_message(1, TOI_CONTINUE_REQ, + "Failed to get access to \"%s\", where" + " the swap header should be found.", + commandline); + return 1; + } + + return 0; +} + +/* + * If we have read part of the image, we might have filled memory with + * data that should be zeroed out. + */ +static void toi_swap_noresume_reset(void) +{ + memset((char *) &devinfo, 0, sizeof(devinfo)); +} + +static int parse_signature(char *header, int restore) +{ + int type = -1; + + if (!memcmp("SWAP-SPACE", header, 10)) + return 0; + else if (!memcmp("SWAPSPACE2", header, 10)) + return 1; + + else if (!memcmp("S1SUSP", header, 6)) + type = 2; + else if (!memcmp("S2SUSP", header, 6)) + type = 3; + else if (!memcmp("S1SUSPEND", header, 9)) + type = 4; + + else if (!memcmp("z", header, 1)) + type = 12; + else if (!memcmp("Z", header, 1)) + type = 13; + + /* + * Put bdev of hibernate header in last byte of swap header + * (unsigned short) + */ + if (type > 11) { + dev_t *header_ptr = (dev_t *) &header[1]; + unsigned char *headerblocksize_ptr = + (unsigned char *) &header[5]; + u32 *headerblock_ptr = (u32 *) &header[6]; + header_dev_t = *header_ptr; + /* + * We are now using the highest bit of the char to indicate + * whether we have attempted to resume from this image before. + */ + clear_toi_state(TOI_RESUMED_BEFORE); + if (((int) *headerblocksize_ptr) & 0x80) + set_toi_state(TOI_RESUMED_BEFORE); + headerblock = (unsigned long) *headerblock_ptr; + } + + if ((restore) && (type > 5)) { + /* We only reset our own signatures */ + if (type & 1) + memcpy(header, "SWAPSPACE2", 10); + else + memcpy(header, "SWAP-SPACE", 10); + } + + return type; +} + +/* + * prepare_signature + */ +static int prepare_signature(dev_t bdev, unsigned long block, + char *current_header) +{ + int current_type = parse_signature(current_header, 0); + dev_t *header_ptr = (dev_t *) (¤t_header[1]); + unsigned long *headerblock_ptr = + (unsigned long *) (¤t_header[6]); + + if ((current_type > 1) && (current_type < 6)) + return 1; + + /* At the moment, I don't have a way to handle the block being + * > 32 bits. Not enough room in the signature and no way to + * safely put the data elsewhere. */ + + if (BITS_PER_LONG == 64 && ffs(block) > 31) { + toi_prepare_status(DONT_CLEAR_BAR, + "Header sector requires 33+ bits. 
" + "Would not be able to resume."); + return 1; + } + + if (current_type & 1) + current_header[0] = 'Z'; + else + current_header[0] = 'z'; + *header_ptr = bdev; + /* prev is the first/last swap page of the resume area */ + *headerblock_ptr = (unsigned long) block; + return 0; +} + +static int __toi_swap_allocate_storage(int main_storage_requested, + int header_storage); + +static int toi_swap_allocate_header_space(int space_requested) +{ + int i; + + if (!swapextents.size && __toi_swap_allocate_storage( + main_pages_requested, space_requested)) { + printk("Failed to allocate space for the header.\n"); + return -ENOSPC; + } + + toi_extent_state_goto_start(&toi_writer_posn); + toi_bio_ops.forward_one_page(1); /* To first page */ + + for (i = 0; i < space_requested; i++) { + if (toi_bio_ops.forward_one_page(1)) { + printk(KERN_INFO "Out of space while seeking to " + "allocate header pages,\n"); + header_pages_allocated = i; + return -ENOSPC; + } + + } + + header_pages_allocated = space_requested; + + /* The end of header pages will be the start of pageset 2; + * we are now sitting on the first pageset2 page. */ + toi_extent_state_save(&toi_writer_posn, + &toi_writer_posn_save[2]); + return 0; +} + +static void free_block_chains(void) +{ + int i; + + for (i = 0; i < MAX_SWAPFILES; i++) + if (block_chain[i].first) + toi_put_extent_chain(&block_chain[i]); +} + +static int get_main_pool_phys_params(void) +{ + struct extent *extentpointer = NULL; + unsigned long address; + int extent_min = -1, extent_max = -1, last_chain = -1; + + free_block_chains(); + + toi_extent_for_each(&swapextents, extentpointer, address) { + swp_entry_t swap_address = extent_val_to_swap_entry(address); + pgoff_t offset = swp_offset(swap_address); + unsigned swapfilenum = swp_type(swap_address); + struct swap_info_struct *sis = + get_swap_info_struct(swapfilenum); + sector_t new_sector = map_swap_page(sis, offset); + + if ((new_sector == extent_max + 1) && + (last_chain == swapfilenum)) { + extent_max++; + continue; + } + + if (extent_min > -1) { + if (test_action_state(TOI_TEST_BIO)) + printk(KERN_INFO + "Adding extent chain %d %d-%d.\n", + swapfilenum, + extent_min << + devinfo[last_chain].bmap_shift, + extent_max << + devinfo[last_chain].bmap_shift); + + if (toi_add_to_extent_chain( + &block_chain[last_chain], + extent_min, extent_max)) { + free_block_chains(); + return -ENOMEM; + } + } + extent_min = extent_max = new_sector; + last_chain = swapfilenum; + } + + if (extent_min > -1) { + if (test_action_state(TOI_TEST_BIO)) + printk(KERN_INFO "Adding extent chain %d %d-%d.\n", + last_chain, + extent_min << + devinfo[last_chain].bmap_shift, + extent_max << + devinfo[last_chain].bmap_shift); + if (toi_add_to_extent_chain( + &block_chain[last_chain], + extent_min, extent_max)) { + free_block_chains(); + return -ENOMEM; + } + } + + return toi_swap_allocate_header_space(header_pages_allocated); +} + +static int toi_swap_storage_allocated(void) +{ + return main_pages_requested + header_pages_allocated; +} + +static int toi_swap_storage_available(void) +{ + int diff; + + si_swapinfo(&swapinfo); + diff = (((int) swapinfo.freeswap + main_pages_allocated) * + (sizeof(unsigned long) + sizeof(int)) / + (PAGE_SIZE + sizeof(unsigned long) + sizeof(int))) + 1; + return (int) swapinfo.freeswap + main_pages_allocated - diff; +} + +static int toi_swap_initialise(int starting_cycle) +{ + if (!starting_cycle) + return 0; + + enable_swapfile(); + + if (resume_swap_dev_t && !resume_block_device && + IS_ERR(resume_block_device = + 
open_bdev(MAX_SWAPFILES, resume_swap_dev_t, 1))) + return 1; + + return 0; +} + +static void toi_swap_cleanup(int ending_cycle) +{ + if (ending_cycle) + disable_swapfile(); + + close_bdevs(); +} + +static int toi_swap_release_storage(void) +{ + if (test_action_state(TOI_KEEP_IMAGE) && + test_toi_state(TOI_NOW_RESUMING)) + return 0; + + header_pages_allocated = 0; + main_pages_allocated = 0; + + if (swapextents.first) { + /* Free swap entries */ + struct extent *extentpointer; + unsigned long extentvalue; + toi_extent_for_each(&swapextents, extentpointer, + extentvalue) + swap_free(extent_val_to_swap_entry(extentvalue)); + + toi_put_extent_chain(&swapextents); + + free_block_chains(); + } + + return 0; +} + +static int toi_swap_allocate_storage(int space_requested) +{ + if (!__toi_swap_allocate_storage(space_requested, + header_pages_allocated)) { + main_pages_requested = space_requested; + return 0; + } + + return -ENOSPC; +} + +static void free_swap_range(unsigned long min, unsigned long max) +{ + int j; + + for (j = min; j <= max; j++) + swap_free(extent_val_to_swap_entry(j)); +} + +/* + * Round robin allocation (where swap storage has the same priority). + * could make this very inefficient, so we track extents allocated on + * a per-swapfiles basis. + */ +static int __toi_swap_allocate_storage(int main_space_requested, + int header_space_requested) +{ + int i, result = 0, to_add[MAX_SWAPFILES], pages_to_get, extra_pages, + gotten = 0; + unsigned long extent_min[MAX_SWAPFILES], extent_max[MAX_SWAPFILES]; + + extra_pages = DIV_ROUND_UP(main_space_requested * (sizeof(unsigned long) + + sizeof(int)), PAGE_SIZE); + pages_to_get = main_space_requested + extra_pages + + header_space_requested - swapextents.size; + + if (pages_to_get < 1) + return 0; + + for (i = 0; i < MAX_SWAPFILES; i++) { + struct swap_info_struct *si = get_swap_info_struct(i); + to_add[i] = 0; + if (!si->bdev) + continue; + devinfo[i].bdev = si->bdev; + devinfo[i].dev_t = si->bdev->bd_dev; + devinfo[i].bmap_shift = 3; + devinfo[i].blocks_per_page = 1; + } + + for (i = 0; i < pages_to_get; i++) { + swp_entry_t entry; + unsigned long new_value; + unsigned swapfilenum; + + entry = get_swap_page(); + if (!entry.val) + break; + + swapfilenum = swp_type(entry); + new_value = swap_entry_to_extent_val(entry); + + if (!to_add[swapfilenum]) { + to_add[swapfilenum] = 1; + extent_min[swapfilenum] = new_value; + extent_max[swapfilenum] = new_value; + gotten++; + continue; + } + + if (new_value == extent_max[swapfilenum] + 1) { + extent_max[swapfilenum]++; + gotten++; + continue; + } + + if (toi_add_to_extent_chain(&swapextents, + extent_min[swapfilenum], + extent_max[swapfilenum])) { + printk(KERN_INFO "Failed to allocate extent for " + "%lu-%lu.\n", extent_min[swapfilenum], + extent_max[swapfilenum]); + free_swap_range(extent_min[swapfilenum], + extent_max[swapfilenum]); + swap_free(entry); + gotten -= (extent_max[swapfilenum] - + extent_min[swapfilenum] + 1); + /* Don't try to add again below */ + to_add[swapfilenum] = 0; + break; + } else { + extent_min[swapfilenum] = new_value; + extent_max[swapfilenum] = new_value; + gotten++; + } + } + + for (i = 0; i < MAX_SWAPFILES; i++) { + if (!to_add[i] || !toi_add_to_extent_chain(&swapextents, + extent_min[i], extent_max[i])) + continue; + + free_swap_range(extent_min[i], extent_max[i]); + gotten -= (extent_max[i] - extent_min[i] + 1); + break; + } + + if (gotten < pages_to_get) + result = -ENOSPC; + + main_pages_allocated += gotten; + + return result ? 
result : get_main_pool_phys_params(); +} + +static int toi_swap_write_header_init(void) +{ + int i, result; + struct swap_info_struct *si; + + toi_extent_state_goto_start(&toi_writer_posn); + + toi_writer_buffer_posn = 0; + + /* Info needed to bootstrap goes at the start of the header. + * First we save the positions and devinfo, including the number + * of header pages. Then we save the structs containing data needed + * for reading the header pages back. + * Note that even if header pages take more than one page, when we + * read back the info, we will have restored the location of the + * next header page by the time we go to use it. + */ + + /* Forward one page will be done prior to the read */ + for (i = 0; i < MAX_SWAPFILES; i++) { + si = get_swap_info_struct(i); + if (si->swap_file) + devinfo[i].dev_t = si->bdev->bd_dev; + else + devinfo[i].dev_t = (dev_t) 0; + } + + result = toi_bio_ops.rw_header_chunk(WRITE, &toi_swapops, + (char *) &toi_writer_posn_save, + sizeof(toi_writer_posn_save)); + + if (result) + return result; + + result = toi_bio_ops.rw_header_chunk(WRITE, &toi_swapops, + (char *) &devinfo, sizeof(devinfo)); + + if (result) + return result; + + for (i = 0; i < MAX_SWAPFILES; i++) + toi_serialise_extent_chain(&toi_swapops, &block_chain[i]); + + return 0; +} + +static int toi_swap_write_header_cleanup(void) +{ + int result; + struct swap_info_struct *si; + + /* Write any unsaved data */ + if (toi_writer_buffer_posn) + toi_bio_ops.write_header_chunk_finish(); + + toi_bio_ops.finish_all_io(); + + toi_extent_state_goto_start(&toi_writer_posn); + toi_bio_ops.forward_one_page(1); + + /* Adjust swap header */ + toi_bio_ops.bdev_page_io(READ, resume_block_device, + resume_firstblock, + virt_to_page(toi_writer_buffer)); + + si = get_swap_info_struct(toi_writer_posn.current_chain); + result = prepare_signature(si->bdev->bd_dev, + toi_writer_posn.current_offset, + ((union swap_header *) toi_writer_buffer)->magic.magic); + + if (!result) + toi_bio_ops.bdev_page_io(WRITE, resume_block_device, + resume_firstblock, + virt_to_page(toi_writer_buffer)); + + toi_bio_ops.finish_all_io(); + + return result; +} + +/* ------------------------- HEADER READING ------------------------- */ + +/* + * read_header_init() + * + * Description: + * 1. Attempt to read the device specified with resume=. + * 2. Check the contents of the swap header for our signature. + * 3. Warn, ignore, reset and/or continue as appropriate. + * 4. If continuing, read the toi_swap configuration section + * of the header and set up block device info so we can read + * the rest of the header & image. + * + * Returns: + * May not return if user choose to reboot at a warning. + * -EINVAL if cannot resume at this time. Booting should continue + * normally. + */ + +static int toi_swap_read_header_init(void) +{ + int i, result = 0; + + if (!header_dev_t) { + printk(KERN_INFO "read_header_init called when we haven't " + "verified there is an image!\n"); + return -EINVAL; + } + + /* + * If the header is not on the resume_swap_dev_t, get the resume device + * first. + */ + if (header_dev_t != resume_swap_dev_t) { + header_block_device = open_bdev(MAX_SWAPFILES + 1, + header_dev_t, 1); + + if (IS_ERR(header_block_device)) + return PTR_ERR(header_block_device); + } else + header_block_device = resume_block_device; + + /* + * Read toi_swap configuration. + * Headerblock size taken into account already. 
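+	 *
+	 * The layout mirrors what write_header_init saved: the saved
+	 * position states come first, then the devinfo array, then the
+	 * per-device extent chains.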
+ */ + toi_bio_ops.bdev_page_io(READ, header_block_device, + headerblock << 3, + virt_to_page((unsigned long) toi_writer_buffer)); + + memcpy(&toi_writer_posn_save, toi_writer_buffer, 3 * + sizeof(struct extent_iterate_saved_state)); + + toi_writer_buffer_posn = 3 * sizeof(struct extent_iterate_saved_state); + + memcpy(&devinfo, toi_writer_buffer + toi_writer_buffer_posn, + sizeof(devinfo)); + + toi_writer_buffer_posn += sizeof(devinfo); + + /* Restore device info */ + for (i = 0; i < MAX_SWAPFILES; i++) { + dev_t thisdevice = devinfo[i].dev_t; + struct block_device *result; + + devinfo[i].bdev = NULL; + + if (!thisdevice) + continue; + + if (thisdevice == resume_swap_dev_t) { + devinfo[i].bdev = resume_block_device; + continue; + } + + if (thisdevice == header_dev_t) { + devinfo[i].bdev = header_block_device; + continue; + } + + result = open_bdev(i, thisdevice, 1); + if (IS_ERR(result)) + return PTR_ERR(result); + devinfo[i].bdev = bdevs_opened[i]->bdev; + } + + toi_bio_ops.read_header_init(); + toi_extent_state_goto_start(&toi_writer_posn); + toi_bio_ops.set_extra_page_forward(); + + for (i = 0; i < MAX_SWAPFILES && !result; i++) + result = toi_load_extent_chain(&block_chain[i]); + + return result; +} + +static int toi_swap_read_header_cleanup(void) +{ + toi_bio_ops.rw_cleanup(READ); + return 0; +} + +/* toi_swap_remove_image + * + */ +static int toi_swap_remove_image(void) +{ + union p_diskpage cur; + int result = 0; + char newsig[11]; + + cur.address = toi_get_zeroed_page(31, TOI_ATOMIC_GFP); + if (!cur.address) { + printk(KERN_INFO "Unable to allocate a page for restoring " + "the swap signature.\n"); + return -ENOMEM; + } + + /* + * If nr_hibernates == 0, we must be booting, so no swap pages + * will be recorded as used yet. + */ + + if (nr_hibernates > 0) + toi_swap_release_storage(); + + /* + * We don't do a sanity check here: we want to restore the swap + * whatever version of kernel made the hibernate image. + * + * We need to write swap, but swap may not be enabled so + * we write the device directly + */ + + toi_bio_ops.bdev_page_io(READ, resume_block_device, + resume_firstblock, + virt_to_page(cur.pointer)); + + result = parse_signature(cur.pointer->swh.magic.magic, 1); + + if (result < 5) + goto out; + + strncpy(newsig, cur.pointer->swh.magic.magic, 10); + newsig[10] = 0; + + toi_bio_ops.bdev_page_io(WRITE, resume_block_device, + resume_firstblock, + virt_to_page(cur.pointer)); +out: + toi_bio_ops.finish_all_io(); + toi_free_page(31, cur.address); + return 0; +} + +/* + * workspace_size + * + * Description: + * Returns the number of bytes of RAM needed for this + * code to do its work. (Used when calculating whether + * we have enough memory to be able to hibernate & resume). 
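+ *	For the swap allocator this overhead is negligible.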
+ * + */ +static int toi_swap_memory_needed(void) +{ + return 1; +} + +/* + * Print debug info + * + * Description: + */ +static int toi_swap_print_debug_stats(char *buffer, int size) +{ + int len = 0; + struct sysinfo sysinfo; + + if (toiActiveAllocator != &toi_swapops) { + len = snprintf_used(buffer, size, + "- SwapAllocator inactive.\n"); + return len; + } + + len = snprintf_used(buffer, size, "- SwapAllocator active.\n"); + if (swapfilename[0]) + len += snprintf_used(buffer+len, size-len, + " Attempting to automatically swapon: %s.\n", + swapfilename); + + si_swapinfo(&sysinfo); + + len += snprintf_used(buffer+len, size-len, + " Swap available for image: %ld pages.\n", + (int) sysinfo.freeswap + toi_swap_storage_allocated()); + + return len; +} + +/* + * Storage needed + * + * Returns amount of space in the swap header required + * for the toi_swap's data. This ignores the links between + * pages, which we factor in when allocating the space. + * + * We ensure the space is allocated, but actually save the + * data from write_header_init and therefore don't also define a + * save_config_info routine. + */ +static int toi_swap_storage_needed(void) +{ + int i, result; + result = sizeof(toi_writer_posn_save) + sizeof(devinfo); + + for (i = 0; i < MAX_SWAPFILES; i++) { + result += 3 * sizeof(int); + result += (2 * sizeof(unsigned long) * + block_chain[i].num_extents); + } + + return result; +} + +/* + * Image_exists + */ +static int toi_swap_image_exists(void) +{ + int signature_found; + union p_diskpage diskpage; + + if (!resume_swap_dev_t) { + printk(KERN_INFO "Not even trying to read header " + "because resume_swap_dev_t is not set.\n"); + return 0; + } + + if (!resume_block_device && + IS_ERR(resume_block_device = + open_bdev(MAX_SWAPFILES, resume_swap_dev_t, 1))) { + printk(KERN_INFO "Failed to open resume dev_t (%x).\n", + resume_swap_dev_t); + return 0; + } + + diskpage.address = toi_get_zeroed_page(32, TOI_ATOMIC_GFP); + + toi_bio_ops.bdev_page_io(READ, resume_block_device, + resume_firstblock, + virt_to_page(diskpage.ptr)); + toi_bio_ops.finish_all_io(); + + signature_found = parse_signature(diskpage.pointer->swh.magic.magic, 0); + toi_free_page(32, diskpage.address); + + if (signature_found < 2) { + printk(KERN_INFO "TuxOnIce: Normal swapspace found.\n"); + return 0; /* Normal swap space */ + } else if (signature_found == -1) { + printk(KERN_ERR "TuxOnIce: Unable to find a signature. Could " + "you have moved a swap file?\n"); + return 0; + } else if (signature_found < 6) { + printk(KERN_INFO "TuxOnIce: Detected another implementation's " + "signature.\n"); + return 0; + } else if ((signature_found >> 1) != SIGNATURE_VER) { + if (!test_toi_state(TOI_NORESUME_SPECIFIED)) { + toi_early_boot_message(1, TOI_CONTINUE_REQ, + "Found a different style hibernate image signature."); + set_toi_state(TOI_NORESUME_SPECIFIED); + printk(KERN_INFO "TuxOnIce: Dectected another " + "implementation's signature.\n"); + } + } + + return 1; +} + +/* + * Mark resume attempted. + * + * Record that we tried to resume from this image. 
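+ *
+ * The flag lives in the high bit of byte 5 of our modified swap
+ * signature: we read the first block of the resume device, set or
+ * clear the bit, and write the block back.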
+ */ +static void toi_swap_mark_resume_attempted(int mark) +{ + union p_diskpage diskpage; + int signature_found; + + if (!resume_swap_dev_t) { + printk(KERN_INFO "Not even trying to record attempt at resuming" + " because resume_swap_dev_t is not set.\n"); + return; + } + + diskpage.address = toi_get_zeroed_page(35, TOI_ATOMIC_GFP); + + toi_bio_ops.bdev_page_io(READ, resume_block_device, + resume_firstblock, + virt_to_page(diskpage.ptr)); + signature_found = parse_signature(diskpage.pointer->swh.magic.magic, 0); + + switch (signature_found) { + case 12: + case 13: + diskpage.pointer->swh.magic.magic[5] &= ~0x80; + if (mark) + diskpage.pointer->swh.magic.magic[5] |= 0x80; + break; + } + + toi_bio_ops.bdev_page_io(WRITE, resume_block_device, + resume_firstblock, + virt_to_page(diskpage.ptr)); + toi_bio_ops.finish_all_io(); + toi_free_page(35, diskpage.address); + return; +} + +/* + * Parse Image Location + * + * Attempt to parse a resume= parameter. + * Swap Writer accepts: + * resume=swap:DEVNAME[:FIRSTBLOCK][@BLOCKSIZE] + * + * Where: + * DEVNAME is convertable to a dev_t by name_to_dev_t + * FIRSTBLOCK is the location of the first block in the swap file + * (specifying for a swap partition is nonsensical but not prohibited). + * Data is validated by attempting to read a swap header from the + * location given. Failure will result in toi_swap refusing to + * save an image, and a reboot with correct parameters will be + * necessary. + */ +static int toi_swap_parse_sig_location(char *commandline, + int only_allocator, int quiet) +{ + char *thischar, *devstart, *colon = NULL; + union p_diskpage diskpage; + int signature_found, result = -EINVAL, temp_result; + + if (strncmp(commandline, "swap:", 5)) { + /* + * Failing swap:, we'll take a simple + * resume=/dev/hda2, but fall through to + * other allocators if /dev/ isn't matched. 
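+		 * (Returning 1 means "not ours"; the caller can then
+		 * try other allocators.)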
+ */ + if (strncmp(commandline, "/dev/", 5)) + return 1; + } else + commandline += 5; + + devstart = thischar = commandline; + while ((*thischar != ':') && (*thischar != '@') && + ((thischar - commandline) < 250) && (*thischar)) + thischar++; + + if (*thischar == ':') { + colon = thischar; + *colon = 0; + thischar++; + } + + while ((thischar - commandline) < 250 && *thischar) + thischar++; + + if (colon) + resume_firstblock = (int) simple_strtoul(colon + 1, NULL, 0); + else + resume_firstblock = 0; + + clear_toi_state(TOI_CAN_HIBERNATE); + clear_toi_state(TOI_CAN_RESUME); + + temp_result = try_to_parse_resume_device(devstart, quiet); + + if (colon) + *colon = ':'; + + if (temp_result) + return -EINVAL; + + diskpage.address = toi_get_zeroed_page(33, TOI_ATOMIC_GFP); + if (!diskpage.address) { + printk(KERN_ERR "TuxOnIce: SwapAllocator: Failed to allocate " + "a diskpage for I/O.\n"); + return -ENOMEM; + } + + toi_bio_ops.bdev_page_io(READ, resume_block_device, + resume_firstblock, virt_to_page(diskpage.ptr)); + + toi_bio_ops.finish_all_io(); + + signature_found = parse_signature(diskpage.pointer->swh.magic.magic, 0); + + if (signature_found != -1) { + result = 0; + + toi_bio_ops.set_devinfo(devinfo); + toi_writer_posn.chains = &block_chain[0]; + toi_writer_posn.num_chains = MAX_SWAPFILES; + set_toi_state(TOI_CAN_HIBERNATE); + set_toi_state(TOI_CAN_RESUME); + } else + if (!quiet) + printk(KERN_ERR "TuxOnIce: SwapAllocator: No swap " + "signature found at %s.\n", devstart); + toi_free_page(33, (unsigned long) diskpage.address); + return result; + +} + +static int header_locations_read_sysfs(const char *page, int count) +{ + int i, printedpartitionsmessage = 0, len = 0, haveswap = 0; + struct inode *swapf = 0; + int zone; + char *path_page = (char *) toi_get_free_page(10, GFP_KERNEL); + char *path, *output = (char *) page; + int path_len; + + if (!page) + return 0; + + for (i = 0; i < MAX_SWAPFILES; i++) { + struct swap_info_struct *si = get_swap_info_struct(i); + + if (!si->swap_file) + continue; + + if (S_ISBLK(si->swap_file->f_mapping->host->i_mode)) { + haveswap = 1; + if (!printedpartitionsmessage) { + len += sprintf(output + len, + "For swap partitions, simply use the " + "format: resume=swap:/dev/hda1.\n"); + printedpartitionsmessage = 1; + } + } else { + path_len = 0; + + path = d_path(si->swap_file->f_dentry, + si->swap_file->f_vfsmnt, + path_page, + PAGE_SIZE); + path_len = snprintf(path_page, 31, "%s", path); + + haveswap = 1; + swapf = si->swap_file->f_mapping->host; + zone = bmap(swapf, 0); + if (!zone) { + len += sprintf(output + len, + "Swapfile %s has been corrupted. 
Reuse" + " mkswap on it and try again.\n", + path_page); + } else { + char name_buffer[255]; + len += sprintf(output + len, + "For swapfile `%s`," + " use resume=swap:/dev/%s:0x%x.\n", + path_page, + bdevname(si->bdev, name_buffer), + zone << (swapf->i_blkbits - 9)); + } + } + } + + if (!haveswap) + len = sprintf(output, "You need to turn on swap partitions " + "before examining this file.\n"); + + toi_free_page(10, (unsigned long) path_page); + return len; +} + +static struct toi_sysfs_data sysfs_params[] = { + { + TOI_ATTR("swapfilename", SYSFS_RW), + SYSFS_STRING(swapfilename, 255, 0) + }, + + { + TOI_ATTR("headerlocations", SYSFS_READONLY), + SYSFS_CUSTOM(header_locations_read_sysfs, NULL, 0) + }, + + { TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&toi_swapops.enabled, 0, 1, 0), + .write_side_effect = attempt_to_parse_resume_device2, + } +}; + +static struct toi_module_ops toi_swapops = { + .type = WRITER_MODULE, + .name = "swap storage", + .directory = "swap", + .module = THIS_MODULE, + .memory_needed = toi_swap_memory_needed, + .print_debug_info = toi_swap_print_debug_stats, + .storage_needed = toi_swap_storage_needed, + .initialise = toi_swap_initialise, + .cleanup = toi_swap_cleanup, + + .noresume_reset = toi_swap_noresume_reset, + .storage_available = toi_swap_storage_available, + .storage_allocated = toi_swap_storage_allocated, + .release_storage = toi_swap_release_storage, + .allocate_header_space = toi_swap_allocate_header_space, + .allocate_storage = toi_swap_allocate_storage, + .image_exists = toi_swap_image_exists, + .mark_resume_attempted = toi_swap_mark_resume_attempted, + .write_header_init = toi_swap_write_header_init, + .write_header_cleanup = toi_swap_write_header_cleanup, + .read_header_init = toi_swap_read_header_init, + .read_header_cleanup = toi_swap_read_header_cleanup, + .remove_image = toi_swap_remove_image, + .parse_sig_location = toi_swap_parse_sig_location, + + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +/* ---- Registration ---- */ +static __init int toi_swap_load(void) +{ + toi_swapops.rw_init = toi_bio_ops.rw_init; + toi_swapops.rw_cleanup = toi_bio_ops.rw_cleanup; + toi_swapops.read_page = toi_bio_ops.read_page; + toi_swapops.write_page = toi_bio_ops.write_page; + toi_swapops.rw_header_chunk = toi_bio_ops.rw_header_chunk; + + return toi_register_module(&toi_swapops); +} + +#ifdef MODULE +static __exit void toi_swap_unload(void) +{ + toi_unregister_module(&toi_swapops); +} + +module_init(toi_swap_load); +module_exit(toi_swap_unload); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("TuxOnIce SwapAllocator"); +#else +late_initcall(toi_swap_load); +#endif diff --git a/kernel/power/tuxonice_sysfs.c b/kernel/power/tuxonice_sysfs.c new file mode 100644 index 0000000..605ebdc --- /dev/null +++ b/kernel/power/tuxonice_sysfs.c @@ -0,0 +1,361 @@ +/* + * kernel/power/tuxonice_sysfs.c + * + * Copyright (C) 2002-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * This file contains support for sysfs entries for tuning TuxOnIce. + * + * We have a generic handler that deals with the most common cases, and + * hooks for special handlers to use. 
+ */ + +#include +#include +#include + +#include "tuxonice_sysfs.h" +#include "tuxonice.h" +#include "tuxonice_storage.h" +#include "tuxonice_alloc.h" + +static int toi_sysfs_initialised; + +static void toi_initialise_sysfs(void); + +static struct toi_sysfs_data sysfs_params[]; + +#define to_sysfs_data(_attr) container_of(_attr, struct toi_sysfs_data, attr) + +static void toi_main_wrapper(void) +{ + _toi_try_hibernate(0); +} + +static ssize_t toi_attr_show(struct kobject *kobj, struct attribute *attr, + char *page) +{ + struct toi_sysfs_data *sysfs_data = to_sysfs_data(attr); + int len = 0; + + if (toi_start_anything(0)) + return -EBUSY; + + if (sysfs_data->flags & SYSFS_NEEDS_SM_FOR_READ) + toi_prepare_usm(); + + switch (sysfs_data->type) { + case TOI_SYSFS_DATA_CUSTOM: + len = (sysfs_data->data.special.read_sysfs) ? + (sysfs_data->data.special.read_sysfs)(page, PAGE_SIZE) + : 0; + break; + case TOI_SYSFS_DATA_BIT: + len = sprintf(page, "%d\n", + -test_bit(sysfs_data->data.bit.bit, + sysfs_data->data.bit.bit_vector)); + break; + case TOI_SYSFS_DATA_INTEGER: + len = sprintf(page, "%d\n", + *(sysfs_data->data.integer.variable)); + break; + case TOI_SYSFS_DATA_LONG: + len = sprintf(page, "%ld\n", + *(sysfs_data->data.a_long.variable)); + break; + case TOI_SYSFS_DATA_UL: + len = sprintf(page, "%lu\n", + *(sysfs_data->data.ul.variable)); + break; + case TOI_SYSFS_DATA_STRING: + len = sprintf(page, "%s\n", + sysfs_data->data.string.variable); + break; + } + /* Side effect routine? */ + if (sysfs_data->read_side_effect) + sysfs_data->read_side_effect(); + + if (sysfs_data->flags & SYSFS_NEEDS_SM_FOR_READ) + toi_cleanup_usm(); + + toi_finish_anything(0); + + return len; +} + +#define BOUND(_variable, _type) \ + do { \ + if (*_variable < sysfs_data->data._type.minimum) \ + *_variable = sysfs_data->data._type.minimum; \ + else if (*_variable > sysfs_data->data._type.maximum) \ + *_variable = sysfs_data->data._type.maximum; \ + } while (0) + +static ssize_t toi_attr_store(struct kobject *kobj, struct attribute *attr, + const char *my_buf, size_t count) +{ + int assigned_temp_buffer = 0, result = count; + struct toi_sysfs_data *sysfs_data = to_sysfs_data(attr); + + if (toi_start_anything((sysfs_data->flags & SYSFS_HIBERNATE_OR_RESUME))) + return -EBUSY; + + ((char *) my_buf)[count] = 0; + + if (sysfs_data->flags & SYSFS_NEEDS_SM_FOR_WRITE) + toi_prepare_usm(); + + switch (sysfs_data->type) { + case TOI_SYSFS_DATA_CUSTOM: + if (sysfs_data->data.special.write_sysfs) + result = (sysfs_data->data.special.write_sysfs) + (my_buf, count); + break; + case TOI_SYSFS_DATA_BIT: + { + int value = simple_strtoul(my_buf, NULL, 0); + if (value) + set_bit(sysfs_data->data.bit.bit, + (sysfs_data->data.bit.bit_vector)); + else + clear_bit(sysfs_data->data.bit.bit, + (sysfs_data->data.bit.bit_vector)); + } + break; + case TOI_SYSFS_DATA_INTEGER: + { + int *variable = + sysfs_data->data.integer.variable; + *variable = simple_strtol(my_buf, NULL, 0); + BOUND(variable, integer); + break; + } + case TOI_SYSFS_DATA_LONG: + { + long *variable = + sysfs_data->data.a_long.variable; + *variable = simple_strtol(my_buf, NULL, 0); + BOUND(variable, a_long); + break; + } + case TOI_SYSFS_DATA_UL: + { + unsigned long *variable = + sysfs_data->data.ul.variable; + *variable = simple_strtoul(my_buf, NULL, 0); + BOUND(variable, ul); + break; + } + break; + case TOI_SYSFS_DATA_STRING: + { + int copy_len = count; + char *variable = + sysfs_data->data.string.variable; + + if (sysfs_data->data.string.max_length && + (copy_len > 
sysfs_data->data.string.max_length)) + copy_len = sysfs_data->data.string.max_length; + + if (!variable) { + variable = (char *) toi_get_zeroed_page(31, + TOI_ATOMIC_GFP); + sysfs_data->data.string.variable = variable; + assigned_temp_buffer = 1; + } + strncpy(variable, my_buf, copy_len); + if ((copy_len) && + (my_buf[copy_len - 1] == '\n')) + variable[count - 1] = 0; + variable[count] = 0; + } + break; + } + + /* Side effect routine? */ + if (sysfs_data->write_side_effect) + sysfs_data->write_side_effect(); + + /* Free temporary buffers */ + if (assigned_temp_buffer) { + toi_free_page(31, + (unsigned long) sysfs_data->data.string.variable); + sysfs_data->data.string.variable = NULL; + } + + if (sysfs_data->flags & SYSFS_NEEDS_SM_FOR_WRITE) + toi_cleanup_usm(); + + toi_finish_anything(sysfs_data->flags & SYSFS_HIBERNATE_OR_RESUME); + + return result; +} + +static struct sysfs_ops toi_sysfs_ops = { + .show = &toi_attr_show, + .store = &toi_attr_store, +}; + +static struct kobj_type toi_ktype = { + .sysfs_ops = &toi_sysfs_ops, +}; + +decl_subsys_name(toi, tuxonice, &toi_ktype, NULL); + +/* Non-module sysfs entries. + * + * This array contains entries that are automatically registered at + * boot. Modules and the console code register their own entries separately. + * + * NB: If you move do_hibernate, change toi_write_sysfs's test so that + * toi_start_anything still gets a 1 when the user echos > do_hibernate! + */ + +static struct toi_sysfs_data sysfs_params[] = { + { TOI_ATTR("do_hibernate", SYSFS_WRITEONLY), + SYSFS_CUSTOM(NULL, NULL, SYSFS_HIBERNATING), + .write_side_effect = toi_main_wrapper + }, + + { TOI_ATTR("do_resume", SYSFS_WRITEONLY), + SYSFS_CUSTOM(NULL, NULL, SYSFS_RESUMING), + .write_side_effect = __toi_try_resume + }, + +}; + +void remove_toi_sysdir(struct kobject *kobj) +{ + if (!kobj) + return; + + kobject_unregister(kobj); + + toi_kfree(34, kobj); +} + +struct kobject *make_toi_sysdir(char *name) +{ + struct kobject *kobj = toi_kzalloc(34, sizeof(struct kobject), + GFP_KERNEL); + int err; + + if (!kobj) { + printk(KERN_INFO "TuxOnIce: Can't allocate kobject for sysfs " + "dir!\n"); + return NULL; + } + + err = kobject_set_name(kobj, "%s", name); + + if (err) { + toi_kfree(34, kobj); + return NULL; + } + + kobj->kset = &toi_subsys; + + err = kobject_register(kobj); + + if (err) + toi_kfree(34, kobj); + + return err ? NULL : kobj; +} + +/* toi_register_sysfs_file + * + * Helper for registering a new /sysfs/tuxonice entry. + */ + +int toi_register_sysfs_file( + struct kobject *kobj, + struct toi_sysfs_data *toi_sysfs_data) +{ + int result; + + if (!toi_sysfs_initialised) + toi_initialise_sysfs(); + + result = sysfs_create_file(kobj, &toi_sysfs_data->attr); + if (result) + printk(KERN_INFO "TuxOnIce: sysfs_create_file for %s " + "returned %d.\n", + toi_sysfs_data->attr.name, result); + + return result; +} +EXPORT_SYMBOL_GPL(toi_register_sysfs_file); + +/* toi_unregister_sysfs_file + * + * Helper for removing unwanted /sys/power/tuxonice entries. 
+ * + */ +void toi_unregister_sysfs_file(struct kobject *kobj, + struct toi_sysfs_data *toi_sysfs_data) +{ + sysfs_remove_file(kobj, &toi_sysfs_data->attr); +} +EXPORT_SYMBOL_GPL(toi_unregister_sysfs_file); + +void toi_cleanup_sysfs(void) +{ + int i, + numfiles = sizeof(sysfs_params) / sizeof(struct toi_sysfs_data); + + if (!toi_sysfs_initialised) + return; + + for (i = 0; i < numfiles; i++) + toi_unregister_sysfs_file(&toi_subsys.kobj, + &sysfs_params[i]); + + kobj_set_kset_s(&toi_subsys, power_subsys); + subsystem_unregister(&toi_subsys); + + toi_sysfs_initialised = 0; +} + +/* toi_initialise_sysfs + * + * Initialise the /sysfs/tuxonice directory. + */ + +static void toi_initialise_sysfs(void) +{ + int i, error; + int numfiles = sizeof(sysfs_params) / sizeof(struct toi_sysfs_data); + + if (toi_sysfs_initialised) + return; + + /* Make our TuxOnIce directory a child of /sys/power */ + kobj_set_kset_s(&toi_subsys, power_subsys); + error = subsystem_register(&toi_subsys); + + if (error) + return; + + /* Make it use the .store and .show routines above */ + kobj_set_kset_s(&toi_subsys, toi_subsys); + + toi_sysfs_initialised = 1; + + for (i = 0; i < numfiles; i++) + toi_register_sysfs_file(&toi_subsys.kobj, + &sysfs_params[i]); +} + +int toi_sysfs_init(void) +{ + toi_initialise_sysfs(); + return 0; +} + +void toi_sysfs_exit(void) +{ + toi_cleanup_sysfs(); +} diff --git a/kernel/power/tuxonice_sysfs.h b/kernel/power/tuxonice_sysfs.h new file mode 100644 index 0000000..c1361cf --- /dev/null +++ b/kernel/power/tuxonice_sysfs.h @@ -0,0 +1,127 @@ +/* + * kernel/power/tuxonice_sysfs.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + */ + +#include +#include "power.h" + +struct toi_sysfs_data { + struct attribute attr; + int type; + int flags; + union { + struct { + unsigned long *bit_vector; + int bit; + } bit; + struct { + int *variable; + int minimum; + int maximum; + } integer; + struct { + long *variable; + long minimum; + long maximum; + } a_long; + struct { + unsigned long *variable; + unsigned long minimum; + unsigned long maximum; + } ul; + struct { + char *variable; + int max_length; + } string; + struct { + int (*read_sysfs) (const char *buffer, int count); + int (*write_sysfs) (const char *buffer, int count); + void *data; + } special; + } data; + + /* Side effects routines. 
Used, eg, for reparsing the + * resume= entry when it changes */ + void (*read_side_effect) (void); + void (*write_side_effect) (void); + struct list_head sysfs_data_list; +}; + +enum { + TOI_SYSFS_DATA_NONE = 1, + TOI_SYSFS_DATA_CUSTOM, + TOI_SYSFS_DATA_BIT, + TOI_SYSFS_DATA_INTEGER, + TOI_SYSFS_DATA_UL, + TOI_SYSFS_DATA_LONG, + TOI_SYSFS_DATA_STRING +}; + +#define TOI_ATTR(_name, _mode) \ + .attr = {.name = _name , .mode = _mode } + +#define SYSFS_BIT(_ul, _bit, _flags) \ + .type = TOI_SYSFS_DATA_BIT, \ + .flags = _flags, \ + .data = { .bit = { .bit_vector = _ul, .bit = _bit } } + +#define SYSFS_INT(_int, _min, _max, _flags) \ + .type = TOI_SYSFS_DATA_INTEGER, \ + .flags = _flags, \ + .data = { .integer = { .variable = _int, .minimum = _min, \ + .maximum = _max } } + +#define SYSFS_UL(_ul, _min, _max, _flags) \ + .type = TOI_SYSFS_DATA_UL, \ + .flags = _flags, \ + .data = { .ul = { .variable = _ul, .minimum = _min, \ + .maximum = _max } } + +#define SYSFS_LONG(_long, _min, _max, _flags) \ + .type = TOI_SYSFS_DATA_LONG, \ + .flags = _flags, \ + .data = { .a_long = { .variable = _long, .minimum = _min, \ + .maximum = _max } } + +#define SYSFS_STRING(_string, _max_len, _flags) \ + .type = TOI_SYSFS_DATA_STRING, \ + .flags = _flags, \ + .data = { .string = { .variable = _string, .max_length = _max_len } } + +#define SYSFS_CUSTOM(_read, _write, _flags) \ + .type = TOI_SYSFS_DATA_CUSTOM, \ + .flags = _flags, \ + .data = { .special = { .read_sysfs = _read, .write_sysfs = _write } } + +#define SYSFS_WRITEONLY 0200 +#define SYSFS_READONLY 0444 +#define SYSFS_RW 0644 + +/* Flags */ +#define SYSFS_NEEDS_SM_FOR_READ 1 +#define SYSFS_NEEDS_SM_FOR_WRITE 2 +#define SYSFS_HIBERNATE 4 +#define SYSFS_RESUME 8 +#define SYSFS_HIBERNATE_OR_RESUME (SYSFS_HIBERNATE | SYSFS_RESUME) +#define SYSFS_HIBERNATING (SYSFS_HIBERNATE | SYSFS_NEEDS_SM_FOR_WRITE) +#define SYSFS_RESUMING (SYSFS_RESUME | SYSFS_NEEDS_SM_FOR_WRITE) +#define SYSFS_NEEDS_SM_FOR_BOTH \ + (SYSFS_NEEDS_SM_FOR_READ | SYSFS_NEEDS_SM_FOR_WRITE) + +int toi_register_sysfs_file(struct kobject *kobj, + struct toi_sysfs_data *toi_sysfs_data); +void toi_unregister_sysfs_file(struct kobject *kobj, + struct toi_sysfs_data *toi_sysfs_data); + +extern struct kset toi_subsys; + +struct kobject *make_toi_sysdir(char *name); +void remove_toi_sysdir(struct kobject *obj); +extern void toi_cleanup_sysfs(void); + +extern int toi_sysfs_init(void); +extern void toi_sysfs_exit(void); diff --git a/kernel/power/tuxonice_ui.c b/kernel/power/tuxonice_ui.c new file mode 100644 index 0000000..d1ae961 --- /dev/null +++ b/kernel/power/tuxonice_ui.c @@ -0,0 +1,261 @@ +/* + * kernel/power/tuxonice_ui.c + * + * Copyright (C) 1998-2001 Gabor Kuti + * Copyright (C) 1998,2001,2002 Pavel Machek + * Copyright (C) 2002-2003 Florent Chabaud + * Copyright (C) 2002-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Routines for TuxOnIce's user interface. + * + * The user interface code talks to a userspace program via a + * netlink socket. 
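+ * Payloads travel as struct userui_msg_params (see tuxonice_ui.h). An
+ * illustrative send, modelled on the calls in tuxonice_userui.c:
+ *
+ *	struct userui_msg_params msg = { .a = this_step,
+ *					 .b = progress_granularity };
+ *
+ *	toi_send_netlink_message(&ui_helper_data, USERUI_MSG_PROGRESS,
+ *				 &msg, sizeof(msg));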
+ * + * The kernel side: + * - starts the userui program; + * - sends text messages and progress bar status; + * + * The user space side: + * - passes messages regarding user requests (abort, toggle reboot etc) + * + */ + +#define __KERNEL_SYSCALLS__ + +#include + +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice.h" +#include "tuxonice_ui.h" +#include "tuxonice_netlink.h" +#include "tuxonice_power_off.h" + +static char local_printf_buf[1024]; /* Same as printk - should be safe */ +extern int toi_wait; +struct ui_ops *toi_current_ui; + +/** + * toi_wait_for_keypress - Wait for keypress via userui or /dev/console. + * + * @timeout: Maximum time to wait. + * + * Wait for a keypress, either from userui or /dev/console if userui isn't + * available. The non-userui path is particularly for at boot-time, prior + * to userui being started, when we have an important warning to give to + * the user. + */ +static char toi_wait_for_keypress(int timeout) +{ + if (toi_current_ui && toi_current_ui->wait_for_key(timeout)) + return ' '; + + return toi_wait_for_keypress_dev_console(timeout); +} + +/* toi_early_boot_message() + * Description: Handle errors early in the process of booting. + * The user may press C to continue booting, perhaps + * invalidating the image, or space to reboot. + * This works from either the serial console or normally + * attached keyboard. + * + * Note that we come in here from init, while the kernel is + * locked. If we want to get events from the serial console, + * we need to temporarily unlock the kernel. + * + * toi_early_boot_message may also be called post-boot. + * In this case, it simply printks the message and returns. + * + * Arguments: int Whether we are able to erase the image. + * int default_answer. What to do when we timeout. This + * will normally be continue, but the user might + * provide command line options (__setup) to override + * particular cases. + * Char *. Pointer to a string explaining why we're moaning. + */ + +#define say(message, a...) printk(KERN_EMERG message, ##a) + +void toi_early_boot_message(int message_detail, int default_answer, + char *warning_reason, ...) +{ +#if defined(CONFIG_VT) || defined(CONFIG_SERIAL_CONSOLE) + unsigned long orig_state = get_toi_state(), continue_req = 0; + unsigned long orig_loglevel = console_loglevel; + int can_ask = 1; +#else + int can_ask = 0; +#endif + + va_list args; + int printed_len; + + if (!toi_wait) { + set_toi_state(TOI_CONTINUE_REQ); + can_ask = 0; + } + + if (warning_reason) { + va_start(args, warning_reason); + printed_len = vsnprintf(local_printf_buf, + sizeof(local_printf_buf), + warning_reason, + args); + va_end(args); + } + + if (!test_toi_state(TOI_BOOT_TIME)) { + printk("TuxOnIce: %s\n", local_printf_buf); + return; + } + + if (!can_ask) { + continue_req = !!default_answer; + goto post_ask; + } + +#if defined(CONFIG_VT) || defined(CONFIG_SERIAL_CONSOLE) + console_loglevel = 7; + + say("=== TuxOnIce ===\n\n"); + if (warning_reason) { + say("BIG FAT WARNING!! %s\n\n", local_printf_buf); + switch (message_detail) { + case 0: + say("If you continue booting, note that any image WILL" + "NOT BE REMOVED.\nTuxOnIce is unable to do so " + "because the appropriate modules aren't\n" + "loaded. You should manually remove the image " + "to avoid any\npossibility of corrupting your " + "filesystem(s) later.\n"); + break; + case 1: + say("If you want to use the current TuxOnIce image, " + "reboot and try\nagain with the same kernel " + "that you hibernated from. 
If you want\n" + "to forget that image, continue and the image " + "will be erased.\n"); + break; + } + say("Press SPACE to reboot or C to continue booting with " + "this kernel\n\n"); + if (toi_wait > 0) + say("Default action if you don't select one in %d " + "seconds is: %s.\n", + toi_wait, + default_answer == TOI_CONTINUE_REQ ? + "continue booting" : "reboot"); + } else { + say("BIG FAT WARNING!!\n\n" + "You have tried to resume from this image before.\n" + "If it failed once, it may well fail again.\n" + "Would you like to remove the image and boot " + "normally?\nThis will be equivalent to entering " + "noresume on the\nkernel command line.\n\n" + "Press SPACE to remove the image or C to continue " + "resuming.\n\n"); + if (toi_wait > 0) + say("Default action if you don't select one in %d " + "seconds is: %s.\n", toi_wait, + !!default_answer ? + "continue resuming" : "remove the image"); + } + console_loglevel = orig_loglevel; + + set_toi_state(TOI_SANITY_CHECK_PROMPT); + clear_toi_state(TOI_CONTINUE_REQ); + + if (toi_wait_for_keypress(toi_wait) == 0) /* We timed out */ + continue_req = !!default_answer; + else + continue_req = test_toi_state(TOI_CONTINUE_REQ); + +#endif /* CONFIG_VT or CONFIG_SERIAL_CONSOLE */ + +post_ask: + if ((warning_reason) && (!continue_req)) + machine_restart(NULL); + + restore_toi_state(orig_state); + if (continue_req) + set_toi_state(TOI_CONTINUE_REQ); +} +#undef say + +/* + * User interface specific /sys/power/tuxonice entries. + */ + +static struct toi_sysfs_data sysfs_params[] = { +#if defined(CONFIG_NET) && defined(CONFIG_SYSFS) + { TOI_ATTR("default_console_level", SYSFS_RW), + SYSFS_INT(&toi_bkd.toi_default_console_level, 0, 7, 0) + }, + + { TOI_ATTR("debug_sections", SYSFS_RW), + SYSFS_UL(&toi_bkd.toi_debug_state, 0, 1 << 30, 0) + }, + + { TOI_ATTR("log_everything", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_LOGALL, 0) + }, +#endif + { TOI_ATTR("pm_prepare_console", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_PM_PREPARE_CONSOLE, 0) + } +}; + +static struct toi_module_ops userui_ops = { + .type = MISC_HIDDEN_MODULE, + .name = "printk ui", + .directory = "user_interface", + .module = THIS_MODULE, + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +int toi_register_ui_ops(struct ui_ops *this_ui) +{ + if (toi_current_ui) { + printk(KERN_INFO "Only one TuxOnIce user interface module can " + "be loaded at a time."); + return -EBUSY; + } + + toi_current_ui = this_ui; + + return 0; +} + +void toi_remove_ui_ops(struct ui_ops *this_ui) +{ + if (toi_current_ui != this_ui) + return; + + toi_current_ui = NULL; +} + +/* toi_console_sysfs_init + * Description: Boot time initialisation for user interface. 
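+ *
+ * Registers the fallback "printk ui" module; toi_ui_exit() below
+ * undoes this. A richer interface (tuxonice_userui.c) can take over
+ * at runtime via toi_register_ui_ops().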
+ */ + +int toi_ui_init(void) +{ + return toi_register_module(&userui_ops); +} + +void toi_ui_exit(void) +{ + toi_unregister_module(&userui_ops); +} + +#ifdef CONFIG_TOI_EXPORTS +EXPORT_SYMBOL_GPL(toi_current_ui); +EXPORT_SYMBOL_GPL(toi_early_boot_message); +EXPORT_SYMBOL_GPL(toi_register_ui_ops); +EXPORT_SYMBOL_GPL(toi_remove_ui_ops); +#endif diff --git a/kernel/power/tuxonice_ui.h b/kernel/power/tuxonice_ui.h new file mode 100644 index 0000000..13adaed --- /dev/null +++ b/kernel/power/tuxonice_ui.h @@ -0,0 +1,104 @@ +/* + * kernel/power/tuxonice_ui.h + * + * Copyright (C) 2004-2007 Nigel Cunningham (nigel at tuxonice net) + */ + +enum { + DONT_CLEAR_BAR, + CLEAR_BAR +}; + +enum { + /* Userspace -> Kernel */ + USERUI_MSG_ABORT = 0x11, + USERUI_MSG_SET_STATE = 0x12, + USERUI_MSG_GET_STATE = 0x13, + USERUI_MSG_GET_DEBUG_STATE = 0x14, + USERUI_MSG_SET_DEBUG_STATE = 0x15, + USERUI_MSG_SPACE = 0x18, + USERUI_MSG_GET_POWERDOWN_METHOD = 0x1A, + USERUI_MSG_SET_POWERDOWN_METHOD = 0x1B, + USERUI_MSG_GET_LOGLEVEL = 0x1C, + USERUI_MSG_SET_LOGLEVEL = 0x1D, + USERUI_MSG_PRINTK = 0x1E, + + /* Kernel -> Userspace */ + USERUI_MSG_MESSAGE = 0x21, + USERUI_MSG_PROGRESS = 0x22, + USERUI_MSG_POST_ATOMIC_RESTORE = 0x25, + + USERUI_MSG_MAX, +}; + +struct userui_msg_params { + unsigned long a, b, c, d; + char text[255]; +}; + +struct ui_ops { + char (*wait_for_key) (int timeout); + unsigned long (*update_status) (unsigned long value, + unsigned long maximum, const char *fmt, ...); + void (*prepare_status) (int clearbar, const char *fmt, ...); + void (*cond_pause) (int pause, char *message); + void (*abort)(int result_code, const char *fmt, ...); + void (*prepare)(void); + void (*cleanup)(void); + void (*post_atomic_restore)(void); + void (*message)(unsigned long section, unsigned long level, + int normally_logged, const char *fmt, ...); +}; + +extern struct ui_ops *toi_current_ui; + +#define toi_update_status(val, max, fmt, args...) \ + (toi_current_ui ? (toi_current_ui->update_status) (val, max, fmt, ##args) : \ + max) + +#define toi_ui_post_atomic_restore(void) \ + do { if (toi_current_ui) \ + (toi_current_ui->post_atomic_restore)(); \ + } while (0) + +#define toi_prepare_console(void) \ + do { if (toi_current_ui) \ + (toi_current_ui->prepare)(); \ + } while (0) + +#define toi_cleanup_console(void) \ + do { if (toi_current_ui) \ + (toi_current_ui->cleanup)(); \ + } while (0) + +#define abort_hibernate(result, fmt, args...) \ + do { if (toi_current_ui) \ + (toi_current_ui->abort)(result, fmt, ##args); \ + else { \ + set_abort_result(result); \ + } \ + } while (0) + +#define toi_cond_pause(pause, message) \ + do { if (toi_current_ui) \ + (toi_current_ui->cond_pause)(pause, message); \ + } while (0) + +#define toi_prepare_status(clear, fmt, args...) \ + do { if (toi_current_ui) \ + (toi_current_ui->prepare_status)(clear, fmt, ##args); \ + else \ + printk(fmt, ##args); \ + } while (0) + +#define toi_message(sn, lev, log, fmt, a...) 
\ +do { \ + if (toi_current_ui && (!sn || test_debug_state(sn))) \ + toi_current_ui->message(sn, lev, log, fmt, ##a); \ +} while (0) + +__exit void toi_ui_cleanup(void); +extern int toi_ui_init(void); +extern void toi_ui_exit(void); +extern int toi_register_ui_ops(struct ui_ops *this_ui); +extern void toi_remove_ui_ops(struct ui_ops *this_ui); diff --git a/kernel/power/tuxonice_userui.c b/kernel/power/tuxonice_userui.c new file mode 100644 index 0000000..ea29367 --- /dev/null +++ b/kernel/power/tuxonice_userui.c @@ -0,0 +1,674 @@ +/* + * kernel/power/user_ui.c + * + * Copyright (C) 2005-2007 Bernard Blackham + * Copyright (C) 2002-2007 Nigel Cunningham (nigel at tuxonice net) + * + * This file is released under the GPLv2. + * + * Routines for TuxOnIce's user interface. + * + * The user interface code talks to a userspace program via a + * netlink socket. + * + * The kernel side: + * - starts the userui program; + * - sends text messages and progress bar status; + * + * The user space side: + * - passes messages regarding user requests (abort, toggle reboot etc) + * + */ + +#define __KERNEL_SYSCALLS__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tuxonice_sysfs.h" +#include "tuxonice_modules.h" +#include "tuxonice.h" +#include "tuxonice_ui.h" +#include "tuxonice_netlink.h" +#include "tuxonice_power_off.h" + +static char local_printf_buf[1024]; /* Same as printk - should be safe */ + +static struct user_helper_data ui_helper_data; +static struct toi_module_ops userui_ops; +static int orig_kmsg; + +static char lastheader[512]; +static int lastheader_message_len; +static int ui_helper_changed; /* Used at resume-time so don't overwrite value + set from initrd/ramfs. */ + +/* Number of distinct progress amounts that userspace can display */ +static int progress_granularity = 30; + +static DECLARE_WAIT_QUEUE_HEAD(userui_wait_for_key); + +/** + * ui_nl_set_state - Update toi_action based on a message from userui. + * + * @n: The bit (1 << bit) to set. + */ +static void ui_nl_set_state(int n) +{ + /* Only let them change certain settings */ + static const int toi_action_mask = + (1 << TOI_REBOOT) | (1 << TOI_PAUSE) | + (1 << TOI_SLOW) | (1 << TOI_LOGALL) | + (1 << TOI_SINGLESTEP) | + (1 << TOI_PAUSE_NEAR_PAGESET_END); + + toi_bkd.toi_action = (toi_bkd.toi_action & (~toi_action_mask)) | + (n & toi_action_mask); + + if (!test_action_state(TOI_PAUSE) && + !test_action_state(TOI_SINGLESTEP)) + wake_up_interruptible(&userui_wait_for_key); +} + +/** + * userui_post_atomic_restore - Tell userui that atomic restore just happened. + * + * Tell userui that atomic restore just occured, so that it can do things like + * redrawing the screen, re-getting settings and so on. + */ +static void userui_post_atomic_restore(void) +{ + toi_send_netlink_message(&ui_helper_data, + USERUI_MSG_POST_ATOMIC_RESTORE, NULL, 0); +} + +/** + * userui_storage_needed - Report how much memory in image header is needed. + */ +static int userui_storage_needed(void) +{ + return sizeof(ui_helper_data.program) + 1 + sizeof(int); +} + +/** + * userui_save_config_info - Fill buffer with config info for image header. + * + * @buf: Buffer into which to put the config info we want to save. 
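+ *
+ * Layout written (must stay in step with userui_load_config_info()
+ * below):
+ *
+ *	[int progress_granularity][ui_helper_data.program bytes]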
+ */ +static int userui_save_config_info(char *buf) +{ + *((int *) buf) = progress_granularity; + memcpy(buf + sizeof(int), ui_helper_data.program, + sizeof(ui_helper_data.program)); + return sizeof(ui_helper_data.program) + sizeof(int) + 1; +} + +/** + * userui_load_config_info - Restore config info from buffer. + * + * @buf: Buffer containing header info loaded. + * @size: Size of data loaded for this module. + */ +static void userui_load_config_info(char *buf, int size) +{ + progress_granularity = *((int *) buf); + size -= sizeof(int); + + /* Don't load the saved path if one has already been set */ + if (ui_helper_changed) + return; + + if (size > sizeof(ui_helper_data.program)) + size = sizeof(ui_helper_data.program); + + memcpy(ui_helper_data.program, buf + sizeof(int), size); + ui_helper_data.program[sizeof(ui_helper_data.program)-1] = '\0'; +} + +/** + * set_ui_program_set: Record that userui program was changed. + * + * Side effect routine for when the userui program is set. In an initrd or + * ramfs, the user may set a location for the userui program. If this happens, + * we don't want to reload the value that was saved in the image header. This + * routine allows us to flag that we shouldn't restore the program name from + * the image header. + */ +static void set_ui_program_set(void) +{ + ui_helper_changed = 1; +} + +/** + * userui_memory_needed - Tell core how much memory to reserve for us. + */ +static int userui_memory_needed(void) +{ + /* ball park figure of 128 pages */ + return (128 * PAGE_SIZE); +} + +/** + * userui_update_status - Update the progress bar and (if on) in-bar message. + * + * @value: Current progress percentage numerator. + * @maximum: Current progress percentage denominator. + * @fmt: Message to be displayed in the middle of the progress bar. + * + * Note that a NULL message does not mean that any previous message is erased! + * For that, you need toi_prepare_status with clearbar on. + * + * Returns an unsigned long, being the next numerator (as determined by the + * maximum and progress granularity) where status needs to be updated. + * This is to reduce unnecessary calls to update_status. + */ +static unsigned long userui_update_status(unsigned long value, + unsigned long maximum, const char *fmt, ...) +{ + static int last_step = -1; + struct userui_msg_params msg; + int bitshift; + int this_step; + unsigned long next_update; + + if (ui_helper_data.pid == -1) + return 0; + + if ((!maximum) || (!progress_granularity)) + return maximum; + + if (value < 0) + value = 0; + + if (value > maximum) + value = maximum; + + /* Try to avoid math problems - we can't do 64 bit math here + * (and shouldn't need it - anyone got screen resolution + * of 65536 pixels or more?) 
*/ + bitshift = fls(maximum) - 16; + if (bitshift > 0) { + unsigned long temp_maximum = maximum >> bitshift; + unsigned long temp_value = value >> bitshift; + this_step = (int) + (temp_value * progress_granularity / temp_maximum); + next_update = (((this_step + 1) * temp_maximum / + progress_granularity) + 1) << bitshift; + } else { + this_step = (int) (value * progress_granularity / maximum); + next_update = ((this_step + 1) * maximum / + progress_granularity) + 1; + } + + if (this_step == last_step) + return next_update; + + memset(&msg, 0, sizeof(msg)); + + msg.a = this_step; + msg.b = progress_granularity; + + if (fmt) { + va_list args; + va_start(args, fmt); + vsnprintf(msg.text, sizeof(msg.text), fmt, args); + va_end(args); + msg.text[sizeof(msg.text)-1] = '\0'; + } + + toi_send_netlink_message(&ui_helper_data, USERUI_MSG_PROGRESS, + &msg, sizeof(msg)); + last_step = this_step; + + return next_update; +} + +/** + * userui_message - Display a message without necessarily logging it. + * + * @section: Type of message. Messages can be filtered by type. + * @level: Degree of importance of the message. Lower values = higher priority. + * @normally_logged: Whether logged even if log_everything is off. + * @fmt: Message (and parameters). + * + * This function is intended to do the same job as printk, but without normally + * logging what is printed. The point is to be able to get debugging info on + * screen without filling the logs with "1/534. ^M 2/534^M. 3/534^M" + * + * It may be called from an interrupt context - can't sleep! + */ +static void userui_message(unsigned long section, unsigned long level, + int normally_logged, const char *fmt, ...) +{ + struct userui_msg_params msg; + + if ((level) && (level > console_loglevel)) + return; + + memset(&msg, 0, sizeof(msg)); + + msg.a = section; + msg.b = level; + msg.c = normally_logged; + + if (fmt) { + va_list args; + va_start(args, fmt); + vsnprintf(msg.text, sizeof(msg.text), fmt, args); + va_end(args); + msg.text[sizeof(msg.text)-1] = '\0'; + } + + if (test_action_state(TOI_LOGALL)) + printk(KERN_INFO "%s\n", msg.text); + + toi_send_netlink_message(&ui_helper_data, USERUI_MSG_MESSAGE, + &msg, sizeof(msg)); +} + +/** + * wait_for_key_via_userui - Wait for userui to receive a keypress. + */ +static void wait_for_key_via_userui(void) +{ + DECLARE_WAITQUEUE(wait, current); + + add_wait_queue(&userui_wait_for_key, &wait); + set_current_state(TASK_INTERRUPTIBLE); + + interruptible_sleep_on(&userui_wait_for_key); + + set_current_state(TASK_RUNNING); + remove_wait_queue(&userui_wait_for_key, &wait); +} + +/** + * userui_prepare_status - Display high level messages. + * + * @clearbar: Whether to clear the progress bar. + * @fmt...: New message for the title. + * + * Prepare the 'nice display', drawing the header and version, along with the + * current action and perhaps also resetting the progress bar. + */ +static void userui_prepare_status(int clearbar, const char *fmt, ...) +{ + va_list args; + + if (fmt) { + va_start(args, fmt); + lastheader_message_len = vsnprintf(lastheader, 512, fmt, args); + va_end(args); + } + + if (clearbar) + toi_update_status(0, 1, NULL); + + if (ui_helper_data.pid == -1) + printk(KERN_EMERG "%s\n", lastheader); + else + toi_message(0, TOI_STATUS, 1, lastheader, NULL); +} + +/** + * toi_wait_for_keypress - Wait for keypress via userui. + * + * @timeout: Maximum time to wait. + * + * Wait for a keypress from userui. + * + * FIXME: Implement timeout? 
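+ *
+ * One possible shape for that (sketch only; "key_pressed" would be a
+ * new flag, set wherever userui_wait_for_key is currently woken):
+ *
+ *	key_pressed = 0;
+ *	wait_event_interruptible_timeout(userui_wait_for_key,
+ *					 key_pressed, timeout * HZ);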
+ */ +static char userui_wait_for_keypress(int timeout) +{ + char key = '\0'; + + if (ui_helper_data.pid != -1) { + wait_for_key_via_userui(); + key = ' '; + } + + return key; +} + +/** + * userui_abort_hibernate - Abort a cycle & tell user if they didn't request it. + * + * @result_code: Reason why we're aborting (1 << bit). + * @fmt: Message to display if telling the user what's going on. + * + * Abort a cycle. If this wasn't at the user's request (and we're displaying + * output), tell the user why and wait for them to acknowledge the message. + */ +static void userui_abort_hibernate(int result_code, const char *fmt, ...) +{ + va_list args; + int printed_len = 0; + + set_result_state(result_code); + + if (test_result_state(TOI_ABORTED)) + return; + + set_result_state(TOI_ABORTED); + + if (test_result_state(TOI_ABORT_REQUESTED)) + return; + + va_start(args, fmt); + printed_len = vsnprintf(local_printf_buf, sizeof(local_printf_buf), + fmt, args); + va_end(args); + if (ui_helper_data.pid != -1) + printed_len = sprintf(local_printf_buf + printed_len, + " (Press SPACE to continue)"); + + toi_prepare_status(CLEAR_BAR, local_printf_buf); + + if (ui_helper_data.pid != -1) + userui_wait_for_keypress(0); +} + +/** + * request_abort_hibernate - Abort hibernating or resuming at user request. + * + * Handle the user requesting the cancellation of a hibernation or resume by + * pressing escape. + */ +static void request_abort_hibernate(void) +{ + if (test_result_state(TOI_ABORT_REQUESTED)) + return; + + if (test_toi_state(TOI_NOW_RESUMING)) { + toi_prepare_status(CLEAR_BAR, "Escape pressed. " + "Powering down again."); + set_toi_state(TOI_STOP_RESUME); + while (!test_toi_state(TOI_IO_STOPPED)) + schedule(); + if (toiActiveAllocator->mark_resume_attempted) + toiActiveAllocator->mark_resume_attempted(0); + toi_power_down(); + } + + toi_prepare_status(CLEAR_BAR, "--- ESCAPE PRESSED :" + " ABORTING HIBERNATION ---"); + set_abort_result(TOI_ABORT_REQUESTED); + wake_up_interruptible(&userui_wait_for_key); +} + +/** + * userui_user_rcv_msg - Receive a netlink message from userui. + * + * @skb: skb received. + * @nlh: Netlink header received. 
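+ *
+ * Returns 0 when the message is consumed here, a negative errno on
+ * error, and 1 for message types left to the generic netlink code.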
+ */ +static int userui_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) +{ + int type; + int *data; + + type = nlh->nlmsg_type; + + /* A control message: ignore them */ + if (type < NETLINK_MSG_BASE) + return 0; + + /* Unknown message: reply with EINVAL */ + if (type >= USERUI_MSG_MAX) + return -EINVAL; + + /* All operations require privileges, even GET */ + if (security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; + + /* Only allow one task to receive NOFREEZE privileges */ + if (type == NETLINK_MSG_NOFREEZE_ME && ui_helper_data.pid != -1) { + printk(KERN_INFO "Got NOFREEZE_ME request when " + "ui_helper_data.pid is %d.\n", ui_helper_data.pid); + return -EBUSY; + } + + data = (int *) NLMSG_DATA(nlh); + + switch (type) { + case USERUI_MSG_ABORT: + request_abort_hibernate(); + return 0; + case USERUI_MSG_GET_STATE: + toi_send_netlink_message(&ui_helper_data, + USERUI_MSG_GET_STATE, &toi_bkd.toi_action, + sizeof(toi_bkd.toi_action)); + return 0; + case USERUI_MSG_GET_DEBUG_STATE: + toi_send_netlink_message(&ui_helper_data, + USERUI_MSG_GET_DEBUG_STATE, + &toi_bkd.toi_debug_state, + sizeof(toi_bkd.toi_debug_state)); + return 0; + case USERUI_MSG_SET_STATE: + if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(int))) + return -EINVAL; + ui_nl_set_state(*data); + return 0; + case USERUI_MSG_SET_DEBUG_STATE: + if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(int))) + return -EINVAL; + toi_bkd.toi_debug_state = (*data); + return 0; + case USERUI_MSG_SPACE: + wake_up_interruptible(&userui_wait_for_key); + return 0; + case USERUI_MSG_GET_POWERDOWN_METHOD: + toi_send_netlink_message(&ui_helper_data, + USERUI_MSG_GET_POWERDOWN_METHOD, + &toi_poweroff_method, + sizeof(toi_poweroff_method)); + return 0; + case USERUI_MSG_SET_POWERDOWN_METHOD: + if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(int))) + return -EINVAL; + toi_poweroff_method = (*data); + return 0; + case USERUI_MSG_GET_LOGLEVEL: + toi_send_netlink_message(&ui_helper_data, + USERUI_MSG_GET_LOGLEVEL, + &toi_bkd.toi_default_console_level, + sizeof(toi_bkd.toi_default_console_level)); + return 0; + case USERUI_MSG_SET_LOGLEVEL: + if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(int))) + return -EINVAL; + toi_bkd.toi_default_console_level = (*data); + return 0; + case USERUI_MSG_PRINTK: + printk((char *) data); + return 0; + } + + /* Unhandled here */ + return 1; +} + +/** + * userui_cond_pause - Possibly pause at user request. + * + * @pause: Whether to pause or just display the message. + * @message: Message to display at the start of pausing. + * + * Potentially pause and wait for the user to tell us to continue. We normally + * only pause when @pause is set. While paused, the user can do things like + * changing the loglevel, toggling the display of debugging sections and such + * like. + */ +static void userui_cond_pause(int pause, char *message) +{ + int displayed_message = 0, last_key = 0; + + while (last_key != 32 && + ui_helper_data.pid != -1 && + ((test_action_state(TOI_PAUSE) && pause) || + (test_action_state(TOI_SINGLESTEP)))) { + if (!displayed_message) { + toi_prepare_status(DONT_CLEAR_BAR, + "%s Press SPACE to continue.%s", + message ? message : "", + (test_action_state(TOI_SINGLESTEP)) ? + " Single step on." : ""); + displayed_message = 1; + } + last_key = userui_wait_for_keypress(0); + } + schedule(); +} + +/** + * userui_prepare_console - Prepare the console for use. + * + * Prepare a console for use, saving current kmsg settings and attempting to + * start userui. Console loglevel changes are handled by userui. 
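+ * The original kmsg_redirect value is saved here and restored by
+ * userui_cleanup_console(), so the two must always run as a pair.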
+ */ +static void userui_prepare_console(void) +{ + orig_kmsg = kmsg_redirect; + kmsg_redirect = fg_console + 1; + + ui_helper_data.pid = -1; + + if (!userui_ops.enabled) { + printk("TuxOnIce: Userui disabled.\n"); + return; + } + + if (*ui_helper_data.program) + toi_netlink_setup(&ui_helper_data); + else + printk(KERN_INFO "TuxOnIce: Userui program not configured.\n"); +} + +/** + * userui_cleanup_console - Cleanup after a cycle. + * + * Tell userui to cleanup, and restore kmsg_redirect to its original value. + */ + +static void userui_cleanup_console(void) +{ + if (ui_helper_data.pid > -1) + toi_netlink_close(&ui_helper_data); + + kmsg_redirect = orig_kmsg; +} + +/* + * User interface specific /sys/power/tuxonice entries. + */ + +static struct toi_sysfs_data sysfs_params[] = { +#if defined(CONFIG_NET) && defined(CONFIG_SYSFS) + { TOI_ATTR("enable_escape", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_CAN_CANCEL, 0) + }, + + { TOI_ATTR("pause_between_steps", SYSFS_RW), + SYSFS_BIT(&toi_bkd.toi_action, TOI_PAUSE, 0) + }, + + { TOI_ATTR("enabled", SYSFS_RW), + SYSFS_INT(&userui_ops.enabled, 0, 1, 0) + }, + + { TOI_ATTR("progress_granularity", SYSFS_RW), + SYSFS_INT(&progress_granularity, 1, 2048, 0) + }, + + { TOI_ATTR("program", SYSFS_RW), + SYSFS_STRING(ui_helper_data.program, 255, 0), + .write_side_effect = set_ui_program_set, + }, +#endif +}; + +static struct toi_module_ops userui_ops = { + .type = MISC_MODULE, + .name = "userui", + .shared_directory = "user_interface", + .module = THIS_MODULE, + .storage_needed = userui_storage_needed, + .save_config_info = userui_save_config_info, + .load_config_info = userui_load_config_info, + .memory_needed = userui_memory_needed, + .sysfs_data = sysfs_params, + .num_sysfs_entries = sizeof(sysfs_params) / + sizeof(struct toi_sysfs_data), +}; + +static struct ui_ops my_ui_ops = { + .post_atomic_restore = userui_post_atomic_restore, + .update_status = userui_update_status, + .message = userui_message, + .prepare_status = userui_prepare_status, + .abort = userui_abort_hibernate, + .cond_pause = userui_cond_pause, + .prepare = userui_prepare_console, + .cleanup = userui_cleanup_console, + .wait_for_key = userui_wait_for_keypress, +}; + +/** + * toi_user_ui_init - Boot time initialisation for user interface. + * + * Invoked from the core init routine. + */ +static __init int toi_user_ui_init(void) +{ + int result; + + ui_helper_data.nl = NULL; + strncpy(ui_helper_data.program, CONFIG_TOI_USERUI_DEFAULT_PATH, 255); + ui_helper_data.pid = -1; + ui_helper_data.skb_size = sizeof(struct userui_msg_params); + ui_helper_data.pool_limit = 6; + ui_helper_data.netlink_id = NETLINK_TOI_USERUI; + ui_helper_data.name = "userspace ui"; + ui_helper_data.rcv_msg = userui_user_rcv_msg; + ui_helper_data.interface_version = 7; + ui_helper_data.must_init = 0; + ui_helper_data.not_ready = userui_cleanup_console; + init_completion(&ui_helper_data.wait_for_process); + result = toi_register_module(&userui_ops); + if (!result) + result = toi_register_ui_ops(&my_ui_ops); + if (result) + toi_unregister_module(&userui_ops); + + return result; +} + +#ifdef MODULE +/** + * toi_user_ui_ext - Cleanup code for if the core is unloaded. 
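+ *
+ * The reverse of toi_user_ui_init(): the ui_ops are removed, then the
+ * module itself is unregistered.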
+ */ +static __exit void toi_user_ui_exit(void) +{ + toi_remove_ui_ops(&my_ui_ops); + toi_unregister_module(&userui_ops); +} + +module_init(toi_user_ui_init); +module_exit(toi_user_ui_exit); +MODULE_AUTHOR("Nigel Cunningham"); +MODULE_DESCRIPTION("TuxOnIce Userui Support"); +MODULE_LICENSE("GPL"); +#else +late_initcall(toi_user_ui_init); +#endif diff --git a/kernel/printk.c b/kernel/printk.c index 89011bf..ee96793 100644 --- a/kernel/printk.c +++ b/kernel/printk.c @@ -33,6 +33,7 @@ #include #include #include +#include #include @@ -93,9 +94,12 @@ static DEFINE_SPINLOCK(logbuf_lock); * The indices into log_buf are not constrained to log_buf_len - they * must be masked before subscripting */ -static unsigned long log_start; /* Index into log_buf: next char to be read by syslog() */ -static unsigned long con_start; /* Index into log_buf: next char to be sent to consoles */ -static unsigned long log_end; /* Index into log_buf: most-recently-written-char + 1 */ +/* Index into log_buf: next char to be read by syslog() */ +static unsigned long POSS_NOSAVE log_start; +/* Index into log_buf: next char to be sent to consoles */ +static unsigned long POSS_NOSAVE con_start; +/* Index into log_buf: most-recently-written-char + 1 */ +static unsigned long POSS_NOSAVE log_end; /* * Array of consoles built from command line options (console=) @@ -118,10 +122,11 @@ static int console_may_schedule; #ifdef CONFIG_PRINTK -static char __log_buf[__LOG_BUF_LEN]; -static char *log_buf = __log_buf; -static int log_buf_len = __LOG_BUF_LEN; -static unsigned long logged_chars; /* Number of chars produced since last read+clear operation */ +static POSS_NOSAVE char __log_buf[__LOG_BUF_LEN]; +static POSS_NOSAVE char *log_buf = __log_buf; +static POSS_NOSAVE int log_buf_len = __LOG_BUF_LEN; +/* Number of chars produced since last read+clear operation */ +static POSS_NOSAVE unsigned long logged_chars; static int __init log_buf_len_setup(char *str) { @@ -885,6 +890,7 @@ void suspend_console(void) acquire_console_sem(); console_suspended = 1; } +EXPORT_SYMBOL(suspend_console); void resume_console(void) { @@ -893,6 +899,7 @@ void resume_console(void) console_suspended = 0; release_console_sem(); } +EXPORT_SYMBOL(resume_console); /** * acquire_console_sem - lock the console system for exclusive use. diff --git a/kernel/timer.c b/kernel/timer.c index 26671f4..70c2e8a 100644 --- a/kernel/timer.c +++ b/kernel/timer.c @@ -37,6 +37,8 @@ #include #include #include +#include +#include #include #include @@ -868,6 +870,59 @@ unsigned long avenrun[3]; EXPORT_SYMBOL(avenrun); +#ifdef CONFIG_PM +static unsigned long avenrun_save[3]; +/* + * save_avenrun - Record the values prior to starting a hibernation cycle. + * We do this to make the work done in hibernation invisible to userspace + * post-suspend. Some programs, including some MTAs, watch the load average + * and stop work until it lowers. Without this, they would stop working for + * a while post-resume, unnecessarily. 
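+ *
+ * These hooks run off the PM notifier chain (avenrun_pm_callback
+ * below), so the hibernation core drives them roughly as follows
+ * (sketch):
+ *
+ *	pm_notifier_call_chain(PM_HIBERNATION_PREPARE);	save_avenrun()
+ *	... write the image, power down, resume ...
+ *	pm_notifier_call_chain(PM_POST_HIBERNATION);	restore_avenrun()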
+ */ + +static void save_avenrun(void) +{ + avenrun_save[0] = avenrun[0]; + avenrun_save[1] = avenrun[1]; + avenrun_save[2] = avenrun[2]; +} + +static void restore_avenrun(void) +{ + if (!avenrun_save[0]) + return; + + avenrun[0] = avenrun_save[0]; + avenrun[1] = avenrun_save[1]; + avenrun[2] = avenrun_save[2]; + + avenrun_save[0] = 0; +} + +static int avenrun_pm_callback(struct notifier_block *nfb, + unsigned long action, + void *ignored) +{ + switch (action) { + case PM_HIBERNATION_PREPARE: + save_avenrun(); + return NOTIFY_OK; + case PM_POST_HIBERNATION: + restore_avenrun(); + return NOTIFY_OK; + } + + return NOTIFY_DONE; +} + +static void register_pm_notifier_callback(void) +{ + pm_notifier(avenrun_pm_callback, 0); +} +#else +static inline void register_pm_notifier_callback(void) { } +#endif + /* * calc_load - given tick count, update the avenrun load estimates. * This is called while holding a write_lock on xtime_lock. @@ -1358,6 +1413,7 @@ void __init init_timers(void) BUG_ON(err == NOTIFY_BAD); register_cpu_notifier(&timers_nb); open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL); + register_pm_notifier_callback(); } /** diff --git a/lib/vsprintf.c b/lib/vsprintf.c index 7b481ce..54d569a 100644 --- a/lib/vsprintf.c +++ b/lib/vsprintf.c @@ -355,6 +355,29 @@ static char *number(char *buf, char *end, unsigned long long num, int base, int return buf; } +/* + * vsnprintf_used + * + * Functionality : Print a string with parameters to a buffer of a + * limited size. Unlike vsnprintf, we return the number + * of bytes actually put in the buffer, not the number + * that would have been put in if it was big enough. + */ +int snprintf_used(char *buffer, int buffer_size, const char *fmt, ...) +{ + int result; + va_list args; + + if (!buffer_size) + return 0; + + va_start(args, fmt); + result = vsnprintf(buffer, buffer_size, fmt, args); + va_end(args); + + return result > buffer_size ? buffer_size : result; +} + /** * vsnprintf - Format a string and place it in a buffer * @buf: The buffer to place the result into diff --git a/mm/Makefile b/mm/Makefile index 5c0b0ea..a0ea700 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -11,7 +11,7 @@ obj-y := bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \ page_alloc.o page-writeback.o pdflush.o \ readahead.o swap.o truncate.o vmscan.o \ prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \ - page_isolation.o $(mmu-y) + dyn_pageflags.o page_isolation.o $(mmu-y) obj-$(CONFIG_BOUNCE) += bounce.o obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o thrash.o diff --git a/mm/dyn_pageflags.c b/mm/dyn_pageflags.c new file mode 100644 index 0000000..30d95f0 --- /dev/null +++ b/mm/dyn_pageflags.c @@ -0,0 +1,801 @@ +/* + * lib/dyn_pageflags.c + * + * Copyright (C) 2004-2007 Nigel Cunningham + * + * This file is released under the GPLv2. + * + * Routines for dynamically allocating and releasing bitmaps + * used as pseudo-pageflags. + * + * We use bitmaps, built out of order zero allocations and + * linked together by kzalloc'd arrays of pointers into + * an array that looks like... + * + * pageflags->bitmap[node][zone_id][page_num][ul] + * + * All of this is transparent to the caller, who just uses + * the allocate & free routines to create/destroy bitmaps, + * and get/set/clear to operate on individual flags. + * + * Bitmaps can be sparse, with the individual pages only being + * allocated when a bit is set in the page. + * + * Memory hotplugging support is work in progress. A zone's + * start_pfn may change. 
If it does, we need to reallocate + * the zone bitmap, adding additional pages to the front to + * cover the bitmap. For simplicity, we don't shift the + * contents of existing pages around. The lock is only used + * to avoid reentrancy when resizing zones. The replacement + * of old data with new is done atomically. If we try to test + * a bit in the new area before the update is completed, we + * know it's zero. + * + * TuxOnIce knows the structure of these pageflags, so that + * it can serialise them in the image header. TODO: Make + * that support more generic so that TuxOnIce doesn't need + * to know how dyn_pageflags are stored. + */ + +/* Avoid warnings in include/linux/mm.h */ +struct page; +struct dyn_pageflags; +int test_dynpageflag(struct dyn_pageflags *bitmap, struct page *page); + +#include +#include +#include + +static LIST_HEAD(flags_list); +static DEFINE_SPINLOCK(flags_list_lock); + +static void* (*dyn_allocator)(unsigned long size, unsigned long flags); + +static int dyn_pageflags_debug; + +#define PR_DEBUG(a, b...) \ + do { if (dyn_pageflags_debug) printk(a, ##b); } while (0) +#define DUMP_DEBUG(bitmap) \ + do { if (dyn_pageflags_debug) dump_pagemap(bitmap); } while (0) + +#if BITS_PER_LONG == 32 +#define UL_SHIFT 5 +#else +#if BITS_PER_LONG == 64 +#define UL_SHIFT 6 +#else +#error Bits per long not 32 or 64? +#endif +#endif + +#define BIT_NUM_MASK ((sizeof(unsigned long) << 3) - 1) +#define PAGE_NUM_MASK (~((1 << (PAGE_SHIFT + 3)) - 1)) +#define UL_NUM_MASK (~(BIT_NUM_MASK | PAGE_NUM_MASK)) + +/* + * PAGENUMBER gives the index of the page within the zone. + * PAGEINDEX gives the index of the unsigned long within that page. + * PAGEBIT gives the index of the bit within the unsigned long. + */ +#define PAGENUMBER(zone_offset) ((int) (zone_offset >> (PAGE_SHIFT + 3))) +#define PAGEINDEX(zone_offset) ((int) ((zone_offset & UL_NUM_MASK) >> UL_SHIFT)) +#define PAGEBIT(zone_offset) ((int) (zone_offset & BIT_NUM_MASK)) + +#define PAGE_UL_PTR(bitmap, node, zone_num, zone_pfn) \ + ((bitmap[node][zone_num][PAGENUMBER(zone_pfn)])+PAGEINDEX(zone_pfn)) + +#define pages_for_zone(zone) \ + (DIV_ROUND_UP((zone)->spanned_pages, (PAGE_SIZE << 3))) + +#define pages_for_span(span) \ + (DIV_ROUND_UP(span, PAGE_SIZE << 3)) + +/* __maybe_unused for testing functions below */ +#define GET_BIT_AND_UL(pageflags, page) \ + struct zone *zone = page_zone(page); \ + unsigned long pfn = page_to_pfn(page); \ + unsigned long zone_pfn = pfn - zone->zone_start_pfn; \ + int node = page_to_nid(page); \ + int zone_num = zone_idx(zone); \ + int pagenum = PAGENUMBER(zone_pfn) + 2; \ + int page_offset = PAGEINDEX(zone_pfn); \ + unsigned long **zone_array = ((pageflags)->bitmap && \ + (pageflags)->bitmap[node] && \ + (pageflags)->bitmap[node][zone_num]) ? \ + (pageflags)->bitmap[node][zone_num] : NULL; \ + unsigned long __maybe_unused *ul = (zone_array && \ + (unsigned long) zone_array[0] <= pfn && \ + (unsigned long) zone_array[1] >= (pagenum-2) && \ + zone_array[pagenum]) ? zone_array[pagenum] + page_offset : \ + NULL; \ + int bit __maybe_unused = PAGEBIT(zone_pfn); + +#define for_each_online_pgdat_zone(pgdat, zone_nr) \ + for_each_online_pgdat(pgdat) \ + for (zone_nr = 0; zone_nr < MAX_NR_ZONES; zone_nr++) + +/** + * dump_pagemap - Display the contents of a bitmap for debugging purposes. + * + * @pagemap: The array to be dumped. 
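+ *
+ * Output goes to the kernel log. Booting with the
+ * "dyn_pageflags_debug" parameter (see the __setup hook at the end of
+ * this file) also makes DUMP_DEBUG() dump bitmaps automatically.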
+ */ +void dump_pagemap(struct dyn_pageflags *pagemap) +{ + int i = 0; + struct pglist_data *pgdat; + unsigned long ****bitmap = pagemap->bitmap; + + printk(" --- Dump bitmap %p ---\n", pagemap); + + printk(KERN_INFO "%p: Sparse flag = %d\n", + &pagemap->sparse, pagemap->sparse); + printk(KERN_INFO "%p: Bitmap = %p\n", + &pagemap->bitmap, bitmap); + + if (!bitmap) + goto out; + + for_each_online_pgdat(pgdat) { + int node_id = pgdat->node_id, zone_nr; + printk(KERN_INFO "%p: Node %d => %p\n", + &bitmap[node_id], node_id, + bitmap[node_id]); + if (!bitmap[node_id]) + continue; + for (zone_nr = 0; zone_nr < MAX_NR_ZONES; zone_nr++) { + printk(KERN_INFO "%p: Zone %d => %p%s\n", + &bitmap[node_id][zone_nr], zone_nr, + bitmap[node_id][zone_nr], + bitmap[node_id][zone_nr] ? "" : + " (empty)"); + if (!bitmap[node_id][zone_nr]) + continue; + + printk(KERN_INFO "%p: Zone start pfn = %p\n", + &bitmap[node_id][zone_nr][0], + bitmap[node_id][zone_nr][0]); + printk(KERN_INFO "%p: Number of pages = %p\n", + &bitmap[node_id][zone_nr][1], + bitmap[node_id][zone_nr][1]); + for (i = 2; i < (unsigned long) bitmap[node_id] + [zone_nr][1] + 2; i++) + printk(KERN_INFO + "%p: Page %2d = %p\n", + &bitmap[node_id][zone_nr][i], + i - 2, + bitmap[node_id][zone_nr][i]); + } + } +out: + printk(KERN_INFO " --- Dump of bitmap %p finishes\n", pagemap); +} +EXPORT_SYMBOL_GPL(dump_pagemap); + +/** + * clear_dyn_pageflags - Zero all pageflags in a bitmap. + * + * @pagemap: The array to be cleared. + * + * Clear an array used to store dynamically allocated pageflags. + */ +void clear_dyn_pageflags(struct dyn_pageflags *pagemap) +{ + int i = 0, zone_idx; + struct pglist_data *pgdat; + unsigned long ****bitmap = pagemap->bitmap; + + for_each_online_pgdat_zone(pgdat, zone_idx) { + int node_id = pgdat->node_id; + struct zone *zone = &pgdat->node_zones[zone_idx]; + + if (!populated_zone(zone) || + (!bitmap[node_id] || !bitmap[node_id][zone_idx])) + continue; + + for (i = 2; i < pages_for_zone(zone) + 2; i++) + if (bitmap[node_id][zone_idx][i]) + memset((bitmap[node_id][zone_idx][i]), 0, + PAGE_SIZE); + } +} +EXPORT_SYMBOL_GPL(clear_dyn_pageflags); + +/** + * Allocators. + * + * During boot time, we want to use alloc_bootmem_low. Afterwards, we want + * kzalloc. These routines let us do that without causing compile time warnings + * about mismatched sections, as would happen if we did a simple + * boot ? alloc_bootmem_low() : kzalloc() below. + */ + +/** + * boot_time_allocator - Allocator used while booting. + * + * @size: Number of bytes wanted. + * @flags: Allocation flags (ignored here). + */ +static __init void *boot_time_allocator(unsigned long size, unsigned long flags) +{ + return alloc_bootmem_low(size); +} + +/** + * normal_allocator - Allocator used post-boot. + * + * @size: Number of bytes wanted. + * @flags: Allocation flags. + * + * Allocate memory for our page flags. + */ +static void *normal_allocator(unsigned long size, unsigned long flags) +{ + if (size == PAGE_SIZE) + return (void *) get_zeroed_page(flags); + else + return kzalloc(size, flags); +} + +/** + * dyn_pageflags_init - Do the earliest initialisation. + * + * Very early in the boot process, set our allocator (alloc_bootmem_low) and + * allocate bitmaps for slab and buddy pageflags. + */ +void __init dyn_pageflags_init(void) +{ + dyn_allocator = boot_time_allocator; +} + +/** + * dyn_pageflags_use_kzalloc - Reset the allocator for normal use. + * + * Reset the allocator to our normal, post boot function. 
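+ *
+ * Expected boot-time ordering (sketch):
+ *
+ *	dyn_pageflags_init();		very early; alloc_bootmem_low
+ *	...				slab comes up
+ *	dyn_pageflags_use_kzalloc();	get_zeroed_page/kzalloc from here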
+ */ +void __init dyn_pageflags_use_kzalloc(void) +{ + dyn_allocator = (void *) normal_allocator; +} + +/** + * try_alloc_dyn_pageflag_part - Try to allocate a pointer array. + * + * Try to allocate a contiguous array of pointers. + */ +static int try_alloc_dyn_pageflag_part(int nr_ptrs, void **ptr) +{ + *ptr = (*dyn_allocator)(sizeof(void *) * nr_ptrs, GFP_ATOMIC); + + if (*ptr) + return 0; + + printk(KERN_INFO + "Error. Unable to allocate memory for dynamic pageflags."); + return -ENOMEM; +} + +static int populate_bitmap_page(struct dyn_pageflags *pageflags, int take_lock, + unsigned long **page_ptr) +{ + void *address; + unsigned long flags = 0; + + if (take_lock) + spin_lock_irqsave(&pageflags->struct_lock, flags); + + /* + * The page may have been allocated while we waited. + */ + if (*page_ptr) + goto out; + + address = (*dyn_allocator)(PAGE_SIZE, GFP_ATOMIC); + + if (!address) { + PR_DEBUG("Error. Unable to allocate memory for " + "dynamic pageflags page."); + if (pageflags) + spin_unlock_irqrestore(&pageflags->struct_lock, flags); + return -ENOMEM; + } + + *page_ptr = address; +out: + if (take_lock) + spin_unlock_irqrestore(&pageflags->struct_lock, flags); + return 0; +} + +/** + * resize_zone_bitmap - Resize the array of pages for a bitmap. + * + * Shrink or extend a list of pages for a zone in a bitmap, preserving + * existing data. + */ +static int resize_zone_bitmap(struct dyn_pageflags *pagemap, struct zone *zone, + unsigned long old_pages, unsigned long new_pages, + unsigned long copy_offset, int take_lock) +{ + unsigned long **new_ptr = NULL, ****bitmap = pagemap->bitmap; + int node_id = zone_to_nid(zone), zone_idx = zone_idx(zone), + to_copy = min(old_pages, new_pages), result = 0; + unsigned long **old_ptr = bitmap[node_id][zone_idx], i; + + if (new_pages) { + if (try_alloc_dyn_pageflag_part(new_pages + 2, + (void **) &new_ptr)) + return -ENOMEM; + + if (old_pages) + memcpy(new_ptr + 2 + copy_offset, old_ptr + 2, + sizeof(unsigned long) * to_copy); + + new_ptr[0] = (void *) zone->zone_start_pfn; + new_ptr[1] = (void *) new_pages; + } + + /* Free/alloc bitmap pages. */ + if (old_pages > new_pages) { + for (i = new_pages + 2; i < old_pages + 2; i++) + if (old_ptr[i]) + free_page((unsigned long) old_ptr[i]); + } else if (!pagemap->sparse) { + for (i = old_pages + 2; i < new_pages + 2; i++) + if (populate_bitmap_page(NULL, take_lock, + (unsigned long **) &new_ptr[i])) { + result = -ENOMEM; + break; + } + } + + bitmap[node_id][zone_idx] = new_ptr; + kfree(old_ptr); + return result; +} + +/** + * check_dyn_pageflag_range - Resize a section of a dyn_pageflag array. + * + * @pagemap: The array to be worked on. + * @zone: The zone to get in sync with reality. + * + * Check the pagemap has correct allocations for the zone. This can be + * invoked when allocating a new bitmap, or for hot[un]plug, and so + * must deal with any disparities between zone_start_pfn/spanned_pages + * and what we have allocated. In addition, we must deal with the possibility + * of zone_start_pfn having changed. + */ +int check_dyn_pageflag_zone(struct dyn_pageflags *pagemap, struct zone *zone, + int force_free_all, int take_lock) +{ + int node_id = zone_to_nid(zone), zone_idx = zone_idx(zone); + unsigned long copy_offset = 0, old_pages, new_pages; + unsigned long **old_ptr = pagemap->bitmap[node_id][zone_idx]; + + old_pages = old_ptr ? (unsigned long) old_ptr[1] : 0; + new_pages = force_free_all ? 
0 : pages_for_span(zone->spanned_pages); + + if (old_pages == new_pages && + (!old_pages || (unsigned long) old_ptr[0] == zone->zone_start_pfn)) + return 0; + + if (old_pages && + (unsigned long) old_ptr[0] != zone->zone_start_pfn) + copy_offset = pages_for_span((unsigned long) old_ptr[0] - + zone->zone_start_pfn); + + /* New/expanded zone? */ + return resize_zone_bitmap(pagemap, zone, old_pages, new_pages, + copy_offset, take_lock); +} + +#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE +/** + * dyn_pageflags_hotplug - Add pages to bitmaps for hotplugged memory. + * + * Seek to expand bitmaps for hotplugged memory. We ignore any failure. + * Since we handle sparse bitmaps anyway, they'll be automatically + * populated as needed. + */ +void dyn_pageflags_hotplug(struct zone *zone) +{ + struct dyn_pageflags *this; + + list_for_each_entry(this, &flags_list, list) + check_dyn_pageflag_zone(this, zone, 0, 1); +} +#endif + +/** + * free_dyn_pageflags - Free an array of dynamically allocated pageflags. + * + * @pagemap: The array to be freed. + * + * Free a dynamically allocated pageflags bitmap. + */ +void free_dyn_pageflags(struct dyn_pageflags *pagemap) +{ + int zone_idx; + struct pglist_data *pgdat; + unsigned long flags; + + DUMP_DEBUG(pagemap); + + if (!pagemap->bitmap) + return; + + for_each_online_pgdat_zone(pgdat, zone_idx) + check_dyn_pageflag_zone(pagemap, + &pgdat->node_zones[zone_idx], 1, 1); + + for_each_online_pgdat(pgdat) { + int i = pgdat->node_id; + + if (pagemap->bitmap[i]) + kfree((pagemap->bitmap)[i]); + } + + kfree(pagemap->bitmap); + pagemap->bitmap = NULL; + + pagemap->initialised = 0; + + if (!pagemap->sparse) { + spin_lock_irqsave(&flags_list_lock, flags); + list_del_init(&pagemap->list); + pagemap->sparse = 1; + spin_unlock_irqrestore(&flags_list_lock, flags); + } +} +EXPORT_SYMBOL_GPL(free_dyn_pageflags); + +/** + * allocate_dyn_pageflags - Allocate a bitmap. + * + * @pagemap: The bitmap we want to allocate. + * @sparse: Whether to make the array sparse. + * + * The array we're preparing. If sparse, we don't allocate the actual + * pages until they're needed. If not sparse, we add the bitmap to the + * list so that if we're supporting memory hotplugging, we can allocate + * new pages on hotplug events. + * + * This routine may be called directly, or indirectly when the first bit + * needs to be set on a previously unused bitmap. 
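+ *
+ * Minimal usage sketch (hypothetical caller, mirroring the self-test
+ * at the end of this file):
+ *
+ *	static struct dyn_pageflags pages_to_save;
+ *
+ *	if (allocate_dyn_pageflags(&pages_to_save, 0))
+ *		return -ENOMEM;
+ *	set_dynpageflag(&pages_to_save, page);
+ *	...
+ *	free_dyn_pageflags(&pages_to_save);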
+ */ +int allocate_dyn_pageflags(struct dyn_pageflags *pagemap, int sparse) +{ + int zone_idx, result = -ENOMEM; + struct zone *zone; + struct pglist_data *pgdat; + unsigned long flags; + + if (!sparse && (pagemap->sparse || !pagemap->initialised)) { + spin_lock_irqsave(&flags_list_lock, flags); + list_add(&pagemap->list, &flags_list); + spin_unlock_irqrestore(&flags_list_lock, flags); + } + + spin_lock_irqsave(&pagemap->struct_lock, flags); + + pagemap->initialised = 1; + pagemap->sparse = sparse; + + if (!pagemap->bitmap && try_alloc_dyn_pageflag_part((1 << NODES_WIDTH), + (void **) &pagemap->bitmap)) + goto out; + + for_each_online_pgdat(pgdat) { + int node_id = pgdat->node_id; + + if (!pagemap->bitmap[node_id] && + try_alloc_dyn_pageflag_part(MAX_NR_ZONES, + (void **) &(pagemap->bitmap)[node_id])) + goto out; + + for (zone_idx = 0; zone_idx < MAX_NR_ZONES; zone_idx++) { + zone = &pgdat->node_zones[zone_idx]; + + if (populated_zone(zone) && + check_dyn_pageflag_zone(pagemap, zone, 0, 0)) + goto out; + } + } + + result = 0; + +out: + spin_unlock_irqrestore(&pagemap->struct_lock, flags); + return result; +} +EXPORT_SYMBOL_GPL(allocate_dyn_pageflags); + +/** + * test_dynpageflag - Test a page in a bitmap. + * + * @bitmap: The bitmap we're checking. + * @page: The page for which we want to test the matching bit. + * + * Test whether the bit is on in the array. The array may be sparse, + * in which case the result is zero. + */ +int test_dynpageflag(struct dyn_pageflags *bitmap, struct page *page) +{ + GET_BIT_AND_UL(bitmap, page); + return ul ? test_bit(bit, ul) : 0; +} +EXPORT_SYMBOL_GPL(test_dynpageflag); + +/** + * set_dynpageflag - Set a bit in a bitmap. + * + * @bitmap: The bitmap we're operating on. + * @page: The page for which we want to set the matching bit. + * + * Set the associated bit in the array. If the array is sparse, we + * seek to allocate the missing page. + */ +void set_dynpageflag(struct dyn_pageflags *pageflags, struct page *page) +{ + GET_BIT_AND_UL(pageflags, page); + + if (!ul) { + /* + * Sparse, hotplugged or unprepared. + * Allocate / fill gaps in high levels + */ + if (allocate_dyn_pageflags(pageflags, 1) || + populate_bitmap_page(pageflags, 1, (unsigned long **) + &pageflags->bitmap[node][zone_num][pagenum])) { + printk(KERN_EMERG "Failed to allocate storage in a " + "sparse bitmap.\n"); + dump_pagemap(pageflags); + BUG(); + } + set_dynpageflag(pageflags, page); + } else + set_bit(bit, ul); +} +EXPORT_SYMBOL_GPL(set_dynpageflag); + +/** + * clear_dynpageflag - Clear a bit in a bitmap. + * + * @bitmap: The bitmap we're operating on. + * @page: The page for which we want to clear the matching bit. + * + * Clear the associated bit in the array. It is not an error to be asked + * to clear a bit on a page we haven't allocated. + */ +void clear_dynpageflag(struct dyn_pageflags *bitmap, struct page *page) +{ + GET_BIT_AND_UL(bitmap, page); + if (ul) + clear_bit(bit, ul); +} +EXPORT_SYMBOL_GPL(clear_dynpageflag); + +/** + * get_next_bit_on - Get the next bit in a bitmap. + * + * @pageflags: The bitmap we're searching. + * @counter: The previous pfn. We always return a value > this. + * + * Given a pfn (possibly max_pfn+1), find the next pfn in the bitmap that + * is set. If there are no more flags set, return max_pfn+1. 
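+ *
+ * Typical iteration over every set bit (sketch; handle_pfn() is
+ * illustrative):
+ *
+ *	unsigned long pfn = max_pfn + 1;
+ *
+ *	while ((pfn = get_next_bit_on(bitmap, pfn)) <= max_pfn)
+ *		handle_pfn(pfn);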
+ */ +unsigned long get_next_bit_on(struct dyn_pageflags *pageflags, + unsigned long counter) +{ + struct page *page; + struct zone *zone; + unsigned long *ul = NULL; + unsigned long zone_offset; + int pagebit, zone_num, first = (counter == (max_pfn + 1)), node; + + if (first) + counter = first_online_pgdat()->node_zones->zone_start_pfn; + + page = pfn_to_page(counter); + zone = page_zone(page); + node = zone->zone_pgdat->node_id; + zone_num = zone_idx(zone); + zone_offset = counter - zone->zone_start_pfn; + + if (first) + goto test; + + do { + zone_offset++; + + if (zone_offset >= zone->spanned_pages) { + do { + zone = next_zone(zone); + if (!zone) + return max_pfn + 1; + } while (!zone->spanned_pages); + + zone_num = zone_idx(zone); + node = zone->zone_pgdat->node_id; + zone_offset = 0; + } +test: + pagebit = PAGEBIT(zone_offset); + + if (!pagebit || !ul) { + ul = pageflags->bitmap[node][zone_num] + [PAGENUMBER(zone_offset)+2]; + if (ul) + ul += PAGEINDEX(zone_offset); + else { + PR_DEBUG("Unallocated page. Skipping from zone" + " offset %lu to the start of the next " + "one.\n", zone_offset); + zone_offset = roundup(zone_offset + 1, + PAGE_SIZE << 3) - 1; + PR_DEBUG("New zone offset is %lu.\n", + zone_offset); + continue; + } + } + + if (!ul || !(*ul & ~((1 << pagebit) - 1))) { + zone_offset += BITS_PER_LONG - pagebit - 1; + continue; + } + + } while (!ul || !test_bit(pagebit, ul)); + + return zone->zone_start_pfn + zone_offset; +} +EXPORT_SYMBOL_GPL(get_next_bit_on); + +#ifdef SELF_TEST +#include + +static __init int dyn_pageflags_test(void) +{ + struct dyn_pageflags test_map; + struct page *test_page1 = pfn_to_page(1); + unsigned long pfn = 0, start, end; + int i, iterations; + + memset(&test_map, 0, sizeof(test_map)); + + printk("Dynpageflags testing...\n"); + + printk(KERN_INFO "Set page 1..."); + set_dynpageflag(&test_map, test_page1); + if (test_dynpageflag(&test_map, test_page1)) + printk(KERN_INFO "Ok.\n"); + else + printk(KERN_INFO "FAILED.\n"); + + printk(KERN_INFO "Test memory hotplugging #1 ..."); + { + unsigned long orig_size; + GET_BIT_AND_UL(&test_map, test_page1); + orig_size = (unsigned long) test_map.bitmap[node][zone_num][1]; + /* + * Use the code triggered when zone_start_pfn lowers, + * checking that our bit is then set in the third page. + */ + resize_zone_bitmap(&test_map, zone, orig_size, + orig_size + 2, 2); + DUMP_DEBUG(&test_map); + if ((unsigned long) test_map.bitmap[node][zone_num] + [pagenum + 2] && + (unsigned long) test_map.bitmap[node][zone_num] + [pagenum + 2][0] == 2UL) + printk(KERN_INFO "Ok.\n"); + else + printk(KERN_INFO "FAILED.\n"); + } + + printk(KERN_INFO "Test memory hotplugging #2 ..."); + { + /* + * Test expanding bitmap length. + */ + unsigned long orig_size; + GET_BIT_AND_UL(&test_map, test_page1); + orig_size = (unsigned long) test_map.bitmap[node] + [zone_num][1]; + resize_zone_bitmap(&test_map, zone, orig_size, + orig_size + 2, 0); + DUMP_DEBUG(&test_map); + pagenum += 2; /* Offset for first test */ + if (test_map.bitmap[node][zone_num][pagenum] && + test_map.bitmap[node][zone_num][pagenum][0] == 2UL && + (unsigned long) test_map.bitmap[node][zone_num][1] == + orig_size + 2) + printk(KERN_INFO "Ok.\n"); + else + printk(KERN_INFO "FAILED ([%d][%d][%d]: %p && %lu == " + "2UL && %p == %lu).\n", + node, zone_num, pagenum, + test_map.bitmap[node][zone_num][pagenum], + test_map.bitmap[node][zone_num][pagenum] ? 
+
+#ifdef SELF_TEST
+#include <linux/jiffies.h>
+
+static __init int dyn_pageflags_test(void)
+{
+        struct dyn_pageflags test_map;
+        struct page *test_page1 = pfn_to_page(1);
+        unsigned long pfn = 0, start, end;
+        int i, iterations;
+
+        memset(&test_map, 0, sizeof(test_map));
+
+        printk(KERN_INFO "Dynpageflags testing...\n");
+
+        printk(KERN_INFO "Set page 1...");
+        set_dynpageflag(&test_map, test_page1);
+        if (test_dynpageflag(&test_map, test_page1))
+                printk("Ok.\n");
+        else
+                printk("FAILED.\n");
+
+        printk(KERN_INFO "Test memory hotplugging #1 ...");
+        {
+                unsigned long orig_size;
+                GET_BIT_AND_UL(&test_map, test_page1);
+                orig_size = (unsigned long) test_map.bitmap[node][zone_num][1];
+                /*
+                 * Use the code triggered when zone_start_pfn lowers,
+                 * checking that our bit is then set in the third page.
+                 */
+                resize_zone_bitmap(&test_map, zone, orig_size,
+                                orig_size + 2, 2);
+                DUMP_DEBUG(&test_map);
+                if ((unsigned long) test_map.bitmap[node][zone_num]
+                                        [pagenum + 2] &&
+                    (unsigned long) test_map.bitmap[node][zone_num]
+                                        [pagenum + 2][0] == 2UL)
+                        printk("Ok.\n");
+                else
+                        printk("FAILED.\n");
+        }
+
+        printk(KERN_INFO "Test memory hotplugging #2 ...");
+        {
+                /*
+                 * Test expanding bitmap length.
+                 */
+                unsigned long orig_size;
+                GET_BIT_AND_UL(&test_map, test_page1);
+                orig_size = (unsigned long) test_map.bitmap[node]
+                                [zone_num][1];
+                resize_zone_bitmap(&test_map, zone, orig_size,
+                                orig_size + 2, 0);
+                DUMP_DEBUG(&test_map);
+                pagenum += 2; /* Offset for first test */
+                if (test_map.bitmap[node][zone_num][pagenum] &&
+                    test_map.bitmap[node][zone_num][pagenum][0] == 2UL &&
+                    (unsigned long) test_map.bitmap[node][zone_num][1] ==
+                                orig_size + 2)
+                        printk("Ok.\n");
+                else
+                        printk("FAILED ([%d][%d][%d]: %p && %lu == "
+                                "2UL && %p == %lu).\n",
+                                node, zone_num, pagenum,
+                                test_map.bitmap[node][zone_num][pagenum],
+                                test_map.bitmap[node][zone_num][pagenum] ?
+                                test_map.bitmap[node][zone_num][pagenum][0] : 0,
+                                test_map.bitmap[node][zone_num][1],
+                                orig_size + 2);
+        }
+
+        free_dyn_pageflags(&test_map);
+
+        allocate_dyn_pageflags(&test_map, 0);
+
+        start = jiffies;
+
+        iterations = 25000000 / max_pfn;
+
+        for (i = 0; i < iterations; i++) {
+                for (pfn = 0; pfn < max_pfn; pfn++)
+                        set_dynpageflag(&test_map, pfn_to_page(pfn));
+                for (pfn = 0; pfn < max_pfn; pfn++)
+                        clear_dynpageflag(&test_map, pfn_to_page(pfn));
+        }
+
+        end = jiffies;
+
+        free_dyn_pageflags(&test_map);
+
+        printk(KERN_INFO "Dyn: %d iterations of setting & clearing all %lu "
+                        "flags took %lu jiffies.\n",
+                        iterations, max_pfn, end - start);
+
+        start = jiffies;
+
+        for (i = 0; i < iterations; i++) {
+                for (pfn = 0; pfn < max_pfn; pfn++)
+                        set_bit(7, &(pfn_to_page(pfn))->flags);
+                for (pfn = 0; pfn < max_pfn; pfn++)
+                        clear_bit(7, &(pfn_to_page(pfn))->flags);
+        }
+
+        end = jiffies;
+
+        printk(KERN_INFO "Real flags: %d iterations of setting & clearing "
+                        "all %lu flags took %lu jiffies.\n",
+                        iterations, max_pfn, end - start);
+
+        iterations = 25000000;
+
+        start = jiffies;
+
+        for (i = 0; i < iterations; i++) {
+                set_dynpageflag(&test_map, pfn_to_page(1));
+                clear_dynpageflag(&test_map, pfn_to_page(1));
+        }
+
+        end = jiffies;
+
+        printk(KERN_INFO "Dyn: %d iterations of setting & clearing one "
+                        "flag took %lu jiffies.\n", iterations, end - start);
+
+        start = jiffies;
+
+        for (i = 0; i < iterations; i++) {
+                set_bit(7, &(pfn_to_page(1))->flags);
+                clear_bit(7, &(pfn_to_page(1))->flags);
+        }
+
+        end = jiffies;
+
+        printk(KERN_INFO "Real pageflag: %d iterations of setting & clearing "
+                        "one flag took %lu jiffies.\n",
+                        iterations, end - start);
+        return 0;
+}
+
+late_initcall(dyn_pageflags_test);
+#endif
+
+static int __init dyn_pageflags_debug_setup(char *str)
+{
+        printk(KERN_INFO "Dynamic pageflags debugging enabled.\n");
+        dyn_pageflags_debug = 1;
+        return 1;
+}
+
+__setup("dyn_pageflags_debug", dyn_pageflags_debug_setup);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9512a54..63d698d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -77,6 +77,8 @@ static int __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	}
 	memmap_init_zone(nr_pages, nid, zone_type, phys_start_pfn, MEMMAP_HOTPLUG);
+
+	dyn_pageflags_hotplug(zone);
 	return 0;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b2838c2..9ca9b5c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1722,6 +1722,26 @@ static unsigned int nr_free_zone_pages(int offset)
 	return sum;
 }
 
+static unsigned int nr_unallocated_zone_pages(int offset)
+{
+	/* Just pick one node, since fallback list is circular */
+	pg_data_t *pgdat = NODE_DATA(numa_node_id());
+	unsigned int sum = 0;
+
+	struct zonelist *zonelist = pgdat->node_zonelists + offset;
+	struct zone **zonep = zonelist->zones;
+	struct zone *zone;
+
+	for (zone = *zonep++; zone; zone = *zonep++) {
+		unsigned long high = zone->pages_high;
+		unsigned long left = zone_page_state(zone, NR_FREE_PAGES);
+		if (left > high)
+			sum += left - high;
+	}
+
+	return sum;
+}
+
 /*
  * Amount of free RAM allocatable within ZONE_DMA and ZONE_NORMAL
  */
@@ -1732,6 +1752,15 @@ unsigned int nr_free_buffer_pages(void)
 EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
 
 /*
+ * Amount of free RAM within ZONE_DMA and ZONE_NORMAL above the zones'
+ * high watermarks, i.e. pages the allocator does not currently need.
+ */
+unsigned int nr_unallocated_buffer_pages(void)
+{
+	return nr_unallocated_zone_pages(gfp_zone(GFP_USER));
+}
+EXPORT_SYMBOL_GPL(nr_unallocated_buffer_pages);
+
+/*
  * Amount of free RAM allocatable within all zones
  */
 unsigned int nr_free_pagecache_pages(void)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e5a9597..5e1fccf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -760,6 +760,28 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	return nr_taken;
 }
 
+/* return_lru_pages puts a list of pages back on a zone's lru lists. */
+static void return_lru_pages(struct list_head *page_list, struct zone *zone,
+		struct pagevec *pvec)
+{
+	while (!list_empty(page_list)) {
+		struct page *page = lru_to_page(page_list);
+		VM_BUG_ON(PageLRU(page));
+		SetPageLRU(page);
+		list_del(&page->lru);
+		if (PageActive(page))
+			add_page_to_active_list(zone, page);
+		else
+			add_page_to_inactive_list(zone, page);
+		if (!pagevec_add(pvec, page)) {
+			spin_unlock_irq(&zone->lru_lock);
+			__pagevec_release(pvec);
+			spin_lock_irq(&zone->lru_lock);
+		}
+	}
+}
+
 /*
  * clear_active_flags() is a helper for shrink_active_list(), clearing
  * any active bits from the pages in the list.
@@ -795,7 +817,6 @@ static unsigned long shrink_inactive_list(unsigned long max_scan,
 	lru_add_drain();
 	spin_lock_irq(&zone->lru_lock);
 	do {
-		struct page *page;
 		unsigned long nr_taken;
 		unsigned long nr_scan;
 		unsigned long nr_freed;
@@ -855,21 +876,7 @@ static unsigned long shrink_inactive_list(unsigned long max_scan,
 		/*
 		 * Put back any unfreeable pages.
 		 */
-		while (!list_empty(&page_list)) {
-			page = lru_to_page(&page_list);
-			VM_BUG_ON(PageLRU(page));
-			SetPageLRU(page);
-			list_del(&page->lru);
-			if (PageActive(page))
-				add_page_to_active_list(zone, page);
-			else
-				add_page_to_inactive_list(zone, page);
-			if (!pagevec_add(&pvec, page)) {
-				spin_unlock_irq(&zone->lru_lock);
-				__pagevec_release(&pvec);
-				spin_lock_irq(&zone->lru_lock);
-			}
-		}
+		return_lru_pages(&page_list, zone, &pvec);
 	} while (nr_scanned < max_scan);
 	spin_unlock(&zone->lru_lock);
 done:
@@ -1478,6 +1485,72 @@ out:
 	return nr_reclaimed;
 }
 
+struct lru_save {
+	struct zone *zone;
+	struct list_head active_list;
+	struct list_head inactive_list;
+	struct lru_save *next;
+};
+
+struct lru_save *lru_save_list;
+
+void unlink_lru_lists(void)
+{
+	struct zone *zone;
+
+	for_each_zone(zone) {
+		struct lru_save *this;
+		unsigned long moved, scanned;
+
+		if (!zone->spanned_pages)
+			continue;
+
+		this = kzalloc(sizeof(struct lru_save), GFP_ATOMIC);
+
+		BUG_ON(!this);
+
+		this->next = lru_save_list;
+		lru_save_list = this;
+
+		this->zone = zone;
+
+		spin_lock_irq(&zone->lru_lock);
+		INIT_LIST_HEAD(&this->active_list);
+		INIT_LIST_HEAD(&this->inactive_list);
+		moved = isolate_lru_pages(zone_page_state(zone, NR_ACTIVE),
+				&zone->active_list, &this->active_list,
+				&scanned, 0, ISOLATE_BOTH);
+		__mod_zone_page_state(zone, NR_ACTIVE, -moved);
+		moved = isolate_lru_pages(zone_page_state(zone, NR_INACTIVE),
+				&zone->inactive_list, &this->inactive_list,
+				&scanned, 0, ISOLATE_BOTH);
+		__mod_zone_page_state(zone, NR_INACTIVE, -moved);
+		spin_unlock_irq(&zone->lru_lock);
+	}
+}
+
+void relink_lru_lists(void)
+{
+	while (lru_save_list) {
+		struct lru_save *this = lru_save_list;
+		struct zone *zone = this->zone;
+		struct pagevec pvec;
+
+		pagevec_init(&pvec, 1);
+
+		lru_save_list = this->next;
+
+		spin_lock_irq(&zone->lru_lock);
+		return_lru_pages(&this->active_list, zone, &pvec);
+		return_lru_pages(&this->inactive_list, zone, &pvec);
+		spin_unlock_irq(&zone->lru_lock);
+		pagevec_release(&pvec);
+
+		kfree(this);
+	}
+}
+
 /*
  * The background pageout daemon, started as a kernel thread
 * from the init process.
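+
+/*
+ * Illustrative sketch (the exact call sites live in the TuxOnIce core,
+ * not in this hunk): a hibernation cycle would be expected to bracket
+ * its atomic copy with the helpers above, so that LRU pages stay off
+ * the zone lists while the image is being handled:
+ *
+ *        unlink_lru_lists();
+ *        ...make the atomic copy of the pagesets...
+ *        relink_lru_lists();
+ *
+ * relink_lru_lists() frees each lru_save entry as it puts the saved
+ * pages back via return_lru_pages().
+ */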
@@ -1563,6 +1636,9 @@ void wakeup_kswapd(struct zone *zone, int order)
 	if (!populated_zone(zone))
 		return;
 
+	if (freezer_is_on())
+		return;
+
 	pgdat = zone->zone_pgdat;
 	if (zone_watermark_ok(zone, order, zone->pages_low, 0, 0))
 		return;
@@ -1576,6 +1652,108 @@ void wakeup_kswapd(struct zone *zone, int order)
 }
 
 #ifdef CONFIG_PM
+static unsigned long shrink_ps1_zone(struct zone *zone,
+		unsigned long total_to_free, struct scan_control sc)
+{
+	unsigned long freed = 0;
+
+	while (total_to_free > freed) {
+		unsigned long nr_slab = global_page_state(NR_SLAB_RECLAIMABLE);
+		struct reclaim_state reclaim_state;
+
+		if (nr_slab > total_to_free)
+			nr_slab = total_to_free;
+
+		/*
+		 * Slab frees are accounted against current->reclaim_state,
+		 * so hook our local reclaim_state up before shrinking.
+		 */
+		reclaim_state.reclaimed_slab = 0;
+		current->reclaim_state = &reclaim_state;
+		shrink_slab(nr_slab, sc.gfp_mask, nr_slab);
+		current->reclaim_state = NULL;
+
+		if (!reclaim_state.reclaimed_slab)
+			return freed;
+
+		freed += reclaim_state.reclaimed_slab;
+	}
+
+	return freed;
+}
+
+unsigned long shrink_ps2_zone(struct zone *zone, unsigned long total_to_free,
+		struct scan_control sc)
+{
+	int prio;
+	unsigned long freed = 0;
+
+	if (!populated_zone(zone) || zone_is_all_unreclaimable(zone))
+		return 0;
+
+	for (prio = DEF_PRIORITY; prio >= 0; prio--) {
+		unsigned long to_free, just_freed, orig_size;
+		unsigned long old_nr_active;
+
+		to_free = min(zone_page_state(zone, NR_ACTIVE) +
+				zone_page_state(zone, NR_INACTIVE),
+				total_to_free - freed);
+
+		if (!to_free)
+			return freed;
+
+		/* Guard against unsigned underflow when inactive > to_free */
+		sc.swap_cluster_max = (to_free >
+				zone_page_state(zone, NR_INACTIVE)) ?
+			to_free - zone_page_state(zone, NR_INACTIVE) : 0;
+
+		do {
+			old_nr_active = zone_page_state(zone, NR_ACTIVE);
+			zone->nr_scan_active = sc.swap_cluster_max - 1;
+			shrink_active_list(sc.swap_cluster_max, zone, &sc,
+					prio);
+			zone->nr_scan_active = 0;
+
+			sc.swap_cluster_max = (to_free >
+					zone_page_state(zone, NR_INACTIVE)) ?
+				to_free - zone_page_state(zone, NR_INACTIVE) :
+				0;
+
+		} while (sc.swap_cluster_max > 0 &&
+			 zone_page_state(zone, NR_ACTIVE) > old_nr_active);
+
+		to_free = min(zone_page_state(zone, NR_ACTIVE) +
+				zone_page_state(zone, NR_INACTIVE),
+				total_to_free - freed);
+
+		do {
+			orig_size = zone_page_state(zone, NR_ACTIVE) +
+				zone_page_state(zone, NR_INACTIVE);
+			zone->nr_scan_inactive = to_free;
+			sc.swap_cluster_max = to_free;
+			shrink_inactive_list(to_free, zone, &sc);
+			just_freed = (orig_size -
+					(zone_page_state(zone, NR_ACTIVE) +
+					 zone_page_state(zone, NR_INACTIVE)));
+			zone->nr_scan_inactive = 0;
+			freed += just_freed;
+		} while (just_freed > 0 && freed < total_to_free);
+	}
+
+	return freed;
+}
+
+void shrink_one_zone(struct zone *zone, unsigned long total_to_free,
+		int ps_wanted)
+{
+	unsigned long freed = 0;
+	struct scan_control sc = {
+		.gfp_mask = GFP_KERNEL,
+		.may_swap = 0,
+		.may_writepage = 1,
+		.swappiness = vm_swappiness,
+	};
+
+	if (!total_to_free)
+		return;
+
+	if (is_highmem(zone))
+		sc.gfp_mask |= __GFP_HIGHMEM;
+
+	if (ps_wanted & 2)
+		freed = shrink_ps2_zone(zone, total_to_free, sc);
+	if (ps_wanted & 1)
+		shrink_ps1_zone(zone, total_to_free - freed, sc);
+}
+
 /*
  * Helper function for shrink_all_memory(). Tries to reclaim 'nr_pages' pages
  * from LRU lists system-wide, for given pass and priority, and returns the