MySQL does not support CREATE INDEX IF NOT EXISTS; it only supports plain CREATE INDEX, and running the same CREATE INDEX statement twice makes the second run fail with an error.
Here is the problem I ran into: I want an upgrade script to create an index on a table, and the script may be run repeatedly. The following pattern is safe:
if(!pdo_fieldexists('goods', 'cover_content')) {
    pdo_query("ALTER TABLE goods ADD `cover_content` text");
}
CREATE TABLE IF NOT EXISTS `express` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  ...
);
Indexes, however, do not support this check-then-create pattern: there is no CREATE INDEX IF NOT EXISTS, and MySQL offers no clean interface for testing whether an index already exists. The workaround given in the MySQL documentation is far too complicated: http://dev.mysql.com/doc/refman/5.0/en/create-index.html
So I came up with a crude workaround:
if(!pdo_fieldexists('dummy_table', 'new_column')) {
    pdo_query("ALTER TABLE dummy_table ADD `new_column` int");
    pdo_query("CREATE INDEX my_index_on_goods xxxxx");
}
Problem solved! The cost is one unnecessary column, which I will call a Guard Column. If this need comes up often, you could keep a dedicated throwaway table just for this purpose.
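Alternatively, a cleaner existence check can be built on information_schema (a sketch, assuming MySQL 5.0 or later where information_schema.statistics is available; the table and index names are just the ones from the example above):
SELECT COUNT(*)
FROM information_schema.statistics
WHERE table_schema = DATABASE()
  AND table_name = 'goods'
  AND index_name = 'my_index_on_goods';
-- the upgrade script issues CREATE INDEX only when this count comes back 0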
Why doesn't MySQL support CREATE INDEX IF NOT EXISTS? It seems a bit odd. I will ask on Monday whether OceanBase supports it.
VCLZip Native Delphi Zip/UnZip Component!
(VCLZip Lite: Version 2.23 April 14th, 2002)
(VCLZip Pro: Version 3.10 Build 1 - November 25th, 2007)
IMPORTANT: If installing the registered version, please be sure to always re-install/rebuild the components (VCLZip and VCLUnZip) to the component palette (or rebuild the design time package) so that the ThisVersion property and any other new properties will be properly updated. If your application still does not run without the IDE, open up VCLZip's package, click on options and look at the Directories/Conditionals tab. If KPDEMO is defined, remove it and recompile the package.
***IMPORTANT: Please remember not to install these components into a package by the name of either VCLZip or VCLUnZip. You will receive an error if you do.
PLEASE TAKE A LOOK AT THE "WHAT's NEW IN THIS VERSION" LINK IN THE HELP FILE AS IT HAS CONVENIENT LINKS TO ALL OF THE NEW TOPICS.
====================
Version 3.10 Build 1
- Several bug fixes.
- Added support for Delphi 2006, 2007
- Added support for BCB 2006, 2007
- Improved memory performance when working with archives containing an extremely high number of compressed files.
====================
Version 3.06 Build 2
Made Delphi 2005 compatible
Other assorted fixes
====================
Version 3.05 Build 1
Fixed a lot of incompatibilities between VCLZip and WinZip
Other assorted fixes
====================
Version 3.04 Build 1
New ZLib methods for optimized compression and decompression of single entities of data in standard ZLib format, without the overhead of the PKZip format. This is excellent for compression of data to be sent across the net, compressing web pages (http compliant compression), blobs, etc.
- ZLibCompressStream
- ZLibDecompressStream
- ZLibCompressBuffer
- ZLibDecompressBuffer
- ZLibCompressString
- ZLibDecompressString
Overloaded TStream Methods for Delphi 4,5, BCB 4, and 5
- UnZipToStream
- UnZipToStreamByIndex
- ZipFromStream
Special OnGetNextTStream Event for Delphi 4,5, BCB 4, and 5
- Allows zipping multiple TStreams in one process
- More efficient than calling ZipFromStream multiple times
Capability to use the latest version of ZLib 1.2.1.
- VCLZip currently uses 1.1.4 by default.
- By defining ZLIB121, VCLZip will use the latest version of ZLib which is included with the registered version.
Some optimization improvements which should show some improvement in zipping and unzipping speed when using TkpStreams with D4, D5, BCB4, and BCB5.
============
Version 3.03 (VCLZip Pro)
- Please test your application thoroughly with this new version of VCLZip Pro. While it has been tested and has even been used in at least two production applications for several months now prior to initial release, there are so many combinations of property settings, environment differences, and ways to use VCLZip that you should always test VCLZip completely in your application before deploying.
*** New Zip64 capabilities, properties, methods and events:
- Uncompressed, Compressed, and Archive file sizes can be up to 2^63-1 bytes in length.
- You can compress up to 2147483647 files into an archive. This is compatible with PKZip's Zip64 format.
- If a file does not extend beyond any of the original limitations (filesizes of 4 gig or 65535 files) then no Zip64 format information is included in the archive.
- property isZip64 - tells you when you are working with a zip file that is using Zip64 format.
Much faster processing due to linking to Zlib object files for compression and decompression routines.
Blocked Zip Files (spanned zip archives split onto hard drive)
- Now completely compatible with PKZip and WinZip split archives file naming format.
- For backwards compatibility you can tell VCLZip to use the old VCLZip file naming format by using the BlockMode property.
- New method OnFileNameForSplitPart called just before each split filepart is created. VCLZip supplies a default implementation of this method so for most purposes you won't need your own.
- method DefaultFileNameForSplitPart - VCLZip calls this internally if you don't define your own OnFileNameForSplitPart. You can also call it from your own OnFileNameForSplitPart if you wish to add some processing to the default behavior.
- property BlockMode - determines whether VCLZip uses PKZip/WinZip standard naming convention or VCLZip classic method.
- method DefaultGetNextDisk - VCLZip calls this internally if you don't define your own OnGetNextDisk. You can also call it from your own OnGetNextDisk event if you wish to add some processing to the default behavior.
- Properties for controlling which files are zipped...
- IncludeHiddenFiles - default False;
- IncludeSysFiles: - default False;
- IncludeReadOnlyFiles: - default True;
- IncludeArchiveFiles: - default True;
- Event OnGetNextStream - Allows you to zip from multiple streams when using the ZipFromStream method. This improves performance since repeated calls to ZipFromStream causes the archive to be updated on each subsequent call.
- property ThisBuild - Tells you the current build. See also ThisVersion
- property OnHandleMessage - Handles interactive messages with VCLZip. There is a default, so you don't need to define your own unless you wish to eliminate interactive messages and handle them on your own. This is helpful if you are using VCLZip as a service or on a webserver for instance.
******** Upgrading existing applications that use VCLZip 2.X **********
For the most part, existing applications will work as-is. Just install VCLZip 3.X and recompile your code. Here are some things to be aware of though...
1) If your app currently creates mmBlock archives (spanned directly to hard drive) and you define your own OnGetNextDisk in VCLZip 2.X, you should move your code from this event that handles mmBlock events to the new event OnFileNameForSplitPart. However, if you simply rely on VCLZip's default OnGetNextDisk then you don't have to worry about this.
2) If your app creates mmBlock archives, the default naming convention has changed to match the PKZip/WinZip standard. If you wish to keep the same naming convention then set BlockMode := mbClassic.
3) OnGetNextDisk and OnPrepareNextDisk events are called for the 1st disk now. VCLZip 2.X only calls these events starting with the 2nd disk.
4) properties CompressedSize[Index], UncompressedSize[Index], ZipSize are now Int64 types.
5) Delphi 4, Delphi 5, BCB 4, and BCB5 are all capable of using the Zip64 format. However they use the TkpHugeStream descendants which act just like TStreams except they handle files/stream sizes larger than 2gig. There is a TkpHugeFileStream and a TkpHugeMemoryStream which should handle 99% of all necessary actions. If you currently work with VCLZip 2.X with TBlobStreams or some other type of streams, you can either define your own TkpBlobStream for instance which inherits from TkpHugeStream, or use the TkpHugeStream.CopyFrom(TStream, Count) and the TkpHugeStream.GetStream: TStream methods to give VCLZip your stream and get it back. Of course when using regular TStream descendants in D4, D5, BCB4, and BCB5, you cannot create Zip64 archives. If you use Delphi 6, 7, or BCB 6, you don't have to worry about any of this as the normal TStream is used by VCLZip and handles large file/stream sizes.
============
Version 2.23 (VCLZip Lite)
Added the OEMConvert property. Filenames stored in a PKZip compatible archive normally go through an OEM conversion to make them ASCII compatible. When opening the zip file the conversion is undone. If you do not plan on having other zip utilities opening up your archives this conversion process is not really necessary. Setting this property to False will eliminate this process. The default value for this property is True for normal PKZip compatibility.
Added OnEncrypt and OnDecrypt events. These allow you to replace the standard pkzip encryption with your own. Data is passed to these events a buffer at a time. Use this with care as this is still somewhat experimental and I'm not sure how useful it is yet. You must make all changes within the buffer sent in to you. Treat the entire file as a stream. Byte for byte replacement only. No additional keys can be saved.
Added OnRecursingFile event. Sometimes when using wildcards and recursing directories, there was no reporting of progress. This will be fired each time a file matches as the file list is being built while recursing directories.
Added the EncryptBeforeCompress boolean property. The default for this property is False and if left like this VCLZip will behave like normal. If set to True, VCLZip will encrypt each buffer prior to compressing it instead of afterwards. This will cause files to not be decryptable by normal zip utilities thereby adding a bit of extra security.
Bugs Fixed:
IMPORTANT!!! Behavior of freeing the ArchiveStream (compressed stream) has been modified. VCLZip will now no longer try to free ArchiveStream, you must free it yourself. This was due to a problem where it would be freed automatically if there was a problem with the ArchiveStream when trying to open it as a zip file (possibly corrupt). Best practice is that ArchiveStream should always point toward a TMemoryStream that you create anyway.
Modified the SFX code (the code used to create the SFX stub distributed with VCLZip) so that it handles filenames that have been run through an OEM Conversion. The SFX was losing accented characters. This modification means that if you are creating zip files to be used as SFX's you will want to leave the OEMConvert property mentioned above set to its default value of True.
Modified so that when cursor is changed to hourglass by VCLZip, previous cursor is saved correctly instead of just changing it back to default cursor.
Now saves Central Directory Extra Fields correctly.
Fixed the SFX code so that it works properly if you use Copy /B to concatenate a zip file to the stub.
Due to strange Delphi behavior, path names for directory-only entries would sometimes become corrupted.
Removed reference to QConsts, replaced with RTLConsts.
Sometimes a GPF would result if a corrupt zip file was opened.
Using a wildcard in pathname added to FilesList did not work.
Using '*.*' as a wildcard in files added to FilesList now is the same as using '*'.
VCLZip will now check for CancelTheOperation during initial building of the fileslist instead of just during compression processing.
Added a final call to OnTotalPercentDone with 100% because this didn't always happen.
Attributes were not getting set correctly for directory-only entries.
Fixed a problem that was not allowing ZipComment's to be added correctly to spanned or blocked zip files. Not the same fix as in 2.22.
Directories (directory-only entries) were not being restored properly unless DoAll was True.
You were unable to delete a directory from which files were recursively zipped until exiting your application.
============
Version 2.22
Now Delphi 6 compatible.
New event called OnRecursingFile which gets called as VCLZip recurses directories searching for files that match a wildcard that is entered in the FilesList. This gets called each time a file matches the wildcard.
Fixed a bug which kept diskettes from being labeled when creating spanned zip files on WIN31.
Fixed a bug which sometimes did not allow zip comments to be added to blocked zip sets.
Fixed a bug which caused VCLZip to not properly handle the IncompleteZip exception on spanned zip sets unless you called ReadZip prior to calling UnZip.
Version 2.21 (Changes are shown in the build stages as they were implemented)
Pre-Release Build 5:
When working with temporary files, VCLZip will now rename, instead of copy, the temp file if the destination is on the same drive. This will speed up the adding of files to an existing zip file when the resulting zip file is very large.
Pre-Release Build 4:
New event called OnPrepareNextDisk which is an event that will allow you, when creating spanned zip files across diskettes, to do things like format a diskette that has just been inserted, or to add or delete files from the diskette before continuing with the zipping process.
Fixed a problem that was causing the CancelTheOperation Method to not work properly.
Pre-Release Build 3:
Fixed bug which caused VCLZip to miscalculate space needed for zfc file if wildcards are put into the FilesList.
Fixed bug so you could have FilePercentDone without needing TotalPercentDone when creating spanned zip files
Fixed so relative_offset set correctly for spanned zips. Side effect of removing needless write of header.
Added code to read local fileheaders if exception thrown when reading a central fileheader.
Fixed problem where directories couldn't be created from directory entries because the fullpath wasn't known yet. Result of having moved this code to earlier.
Fixed typo in creation of LOC header values which could cause error if reading local headers.
Changed so Zip Comment starting position is calculated based on end of central record instead of end of file.
Pre-Release Build 2:
IMPORTANT: Changed default for FileOpenMode back to fmShareDenyNone as it had been for all but version 2.20.
Fixed a problem where drivepart (i.e. C:\) was not being stripped when saving relative paths.
Added a BufferedStreamSize property which can increase the speed of creating zips to floppy (and other slow media) dramatically. The new default for this should increase the speed by as much as 3 times, but you can now tweak this especially for your application!
Added an ImproperZip property which gets set when VCLZip detects an inconsistency with the zip. This can be useful for detecting when VCLZip was able to open the zip in spite of an inconsistency found. There was no way to know this in the past.
Fixed a problem where zip comments in zfc files were not being read correctly.
Added a setZipSignatures procedure which allows you to modify the signatures of your zip file. This will cause other zip utilities to not be able to recognize or read your zip files created with VCLZip. Useful if you want to add further security to your zip files.
Pre-Release Build 1:
Some zip files would not open correctly, throwing an incomplete zip file exception due to an erroneous "extra field length" identifier in headers of some compressed files. These zip files are rare, but a very few people seemed to have several of them. This problem would not affect zip files created by VCLZip, and this problem should only occur in VCLZip 2.20, not in any previous version.
If you had Range Checking turned on, VCLZip would get a range check error when using a wildcard that ended with a * as in 'somefile.*'.
Under certain circumstances, drive information would not be stripped from path information if zipping recursively (including subdirectories)
"Retrying" to zip a file that could not be opened using the OnSkippingFile event would not always work correctly.
Creating spanned zip set to floppy should be faster now due to removing a needless header write to disk for each file.
VCLZip would not compile correctly with MAKESMALL defined.
Added code to make VCLZip work with BCB5. Haven't tested this yet though since I don't have BCB5 myself yet.
Added readonly boolean ImproperZip property which will be set to True when some sort of problem is found when opening the zip file, even if recoverable. This property will be enhanced and refined in the future.
If KeepZipOpen is set to True, when putting in the wrong disk in a spanned zip set, VCLZip would not always properly close the file on the old diskette before trying to open the file on the next diskette.
Added ECantWriteUCF exception which will be thrown if VCLZip runs out of room to write the uncompressed file when unzipping.
Timestamp was not being set properly when unzipping readonly files. Moved setting of the timestamp to before the attributes get set.
============
Version 2.20
Changes have been made in the following areas:
--Performance
There are a few code optimizations that should speed up the zipping process slightly.
--Spanned Zip Files
A new feature, turned on with the SaveZipInfoOnFirstDisk property, allows VCLZip to create and read spanned zip files starting with the first disk instead of the normally required last disk of the spanned disk set, by saving a Zip Configuration File on the first disk. This feature can be used even if creating the spanned zip file directly to your hard drive.
A new property, SaveOnFirstDisk, allows you to save room on the first disk when creating a spanned zip file, to allow room for other files, such as setup programs, data files, or a Zip Configuration File.
Spanned zip files can now be directed toward disks greater than 2 gig in size as long as you are using Delphi 5 or BCB 4.
--UnZipping
The new Selected indexed property offers another way to flag files to be unzipped. Files that have the Selected property set to True can be unzipped using the UnZipSelected method. The Selected property will be cleared (set to False) for each file as it is unzipped, but you can also call the ClearSelected method to clear them all. At anytime the NumSelected property can be checked to see how many files have been selected.
Also, the UnZipToBufferByIndex and UnZipToStreamByIndex methods allow you to unzip files specified by their index instead of by name or wildcard.
The BufferLength property allows buffered output (buffer smaller than the total uncompressed filesize) when unzipping directly to memory (see UnZipToBuffer and UnZipToBufferByIndex). This will cause the OnGetNextBuffer Event to be called every time BufferLength bytes have been output by VCLZip.
Modified to work in all ways with zip files that have "extra fields" in their headers. These tend to be quite rare, but they do show up from time to time.
--Zipping
Added a property called FileOpenMode which allows you to define the file open mode for files when they are opened to be zipped.
Added a Retry parameter to the OnSkippingFile Event that can be used to re-attempt to open a file for zipping that is open by another process. This gives the chance to close the file and continue with the zipping process rather than having to start over again.
Added a ENotEnoughRoom exception which will be thrown if there is not enough room to write to the archive, i.e. out of disk space.
The new OnUpdate Event gets fired when updating or freshening an existing archive. It is triggered for each file that already exists in the archive as it is either replaced or kept in the updated archive.
The AddDirEntriesOnRecurse property will cause separate directory entries to be included in archives when doing recursive zips through subdirectories.
--Integrity Checking
A new method, CheckArchive, will perform an integrity check on all files in an archive. This is much faster than using FileIsOK on each file if testing all files in an archive with VERY MANY files.
Further improved checking for corrupted zip files when opening zip files.
--Encryption
The following new properties and methods allow lower level work with password encrypted archives:
DecryptHeader Gets the decryption header for a particular compressed file in an archive
GetDecryptHeaderPtr Same as DecryptHeader but easier to use in BCB.
DecryptHeaderByte Method Tests a password against the decryption header found in the DecryptHeader property.
GetDecryptHeaderByteByPtr Same as DecryptHeaderByte but easier to use in BCB.
--Self Extracting Executables
Changes were made to the ZIPSFX32.BIN stub itself:
- Modified to work with zip files containing "extra fields" in their headers.
- Modified to change mouse cursor to an hour glass during processing.
- Check for correct file size is now done automatically
- Now uses the end of central and central headers to find the first local header.
- Added a progress meter
- Better checking for corrupted zip files.
- Added an information window that can optionally be shown when the sfx is initially started up.
- Added an AutoRun option to make the sfx stub run automatically when double clicked with no other interaction from the user.
For the new modified sfx stub, ZIPSFX32.BIN, instead of using kpSFXOpt, you should now use the TSfxConfig component to set the options for the sfx stub.
The new sfx can be found in the sfx\ subdirectory as usual and is called ZIPSFX32.BIN and the original sfx can be found in the same subdirectory except it is now called ORGSFX32.bin. Just rename it if you prefer that one (use KPSFXOPT instead of TSfxConfig with the old stub).
--Miscellaneous
The installation is now easier, at least for first-time installers of the source code. The .DPK files for Delphi and .CPP files for BCB are now included. Now these files simply have to be compiled and that's it. There is a separate option in the installation for installing to the different versions of Delphi and BCB.
Added a property called FlushFilesOnClose which will cause all files opened for write by VCLZip to have their disk buffers flushed to disk when closed.
Added the capability to delete Selected files from an archive using the DeleteEntries Method.
The behavior of the OnInCompleteZip Event has been greatly improved. You can now use this event to ask the user to insert the last disk of a spanned disk set rather than having to handle this situation from outside VCLZip.
The register procedures were changed so that the components now get installed to the "VCLZip" tab on the palette. I found that for all but Delphi 1 I had to actually manually move the components to the "VCLZip" tab. You may find that you have to do this too if you have already installed VCLZip before.
The components now use new bitmaps in place of the old ones on the component palette.
Separated many compiler defines into a new file called KPDEFS.INC.
====================================
Version 2.18:
1) Thanks to the hard work of a fellow registered user, added the capability to remove all dependencies on the Dialogs, Forms, Controls, and FileCtrl units by defining the conditional MAKESMALL, which results in a smaller footprint. This can be quite useful when putting VCLZip into a DLL for instance. In order to make this work, go into your Project | Options and select the Directories/Conditionals tab and enter MAKESMALL in the conditional defines text box. In Delphi you can add this conditional define to the project options of your application that uses VCLZip and then do a "build all". In BCB you will have to add this to the project options of the package that contains VCLZip and then rebuild the package.
If you define MAKESMALL, the only things you lose are:
a) ZIP file open dialog box that appears when the ZipName is set to "?"
b) Select Directory dialog box that appears when the DestDir is set to "?"
c) Changing the cursor to an hour glass during some operations.
d) No long filename support in Delphi 1
2) Made VCLZip completely BCB4 compatible.
3) Added some exception handling to KPUNZIPP and KPINFLT, mainly to handle unexpected situations when wrong passwords are entered. This fixes the problem with PRP, the password recovery program.
4) For Borland C++ Builder, changed any COMP types to double, getting rid of the compiler warnings for unsupported comp type. This affects the OnStartZipInfo and OnStartUnZipInfo events, so you'll have to change the comp parameter to double in these events if you use them (in both your header files and in the CPP files).
5) Modified OnStartUnZip event so that FName (the filename of the file that is about to be unzipped along with complete path) is now a VAR parameter and can be modified. This allows you to change the path and name of a file that is about to be unzipped. This is especially helpful in applications like Install Programs.
NOTE: You will need to change your current code to add the VAR to the event definition and implementation if you already use this event in your application. (In BCB, add a & just before the parameter instead of VAR)
6) Moved many type definitions to VCLUNZIP.PAS so that kpZipObj won't have to be included in your USES list.
7) Fixed bug that caused GPF when setting Zip Comment to '' (empty string).
8) Moved strings in VCLZip/VCLUnZip into a string table, making the code size a little smaller as well as making it much easier to localize string information. However you have the option of not using the new string table, for whatever reason, by defining NO_RES in your project options (in the conditional defines text box on the Directories/Conditionals tab).
9) Removed the need for several files. No longer included are kpstrm.res, kpstrm.rc, kpsconst.res, kpsconst.rc, kpstres.pas, and for Delphi 1, kpdrvs.pas. In some cases the need for these files was eliminated and in other cases just rolled into the newly included kpzcnst.rc, kpzcnst.pas, and kpzcnst.res. Defining NO_RES in your project options will eliminate the need for these new files but will make your code size slightly larger and you won't be able to localize your application without changing VCLZip source code.
10) Modified the OnFilePercentDone and OnTotalPercentDone progress events to work better when creating spanned disk sets and blocked zip sets. They no longer report 100% when the compressed file still has to be copied to disk.
11) Added the ReplaceReadOnly property. Setting this to true will allow files with the ReadOnly attribute to be replaced during the unzip process.
12) Added the ifNewer and ifOlder options to the OverwriteMode property. (This had somehow made it into the help file but not into VCLUnZip)
13) Added the SFXToZip method which will convert an SFX file to a regular zip file. The header pointers will be properly adjusted during the conversion.
14) Fixed a problem where the OnGetNextDisk event would always revert to the DefaultGetNextDisk method instead of what you entered into the Object Inspector each time your project was re-opened.
15) Fixed a bug that caused CRC errors when unzipping files from spanned disk sets if they were STORED (no compression) and spanned across disks.
16) Added the OnZipComplete and OnUnZipComplete events. If defined, these will fire at the very end of a zip or unzip operation (after all files have been processed, not after each file). These events will rarely be used since, normally you will be able to do the same thing at the point that the call to Zip or UnZip returns, but these events can be useful when using VCLZip in threads where in certain circumstances the return from the Zip or UnZip methods are not seen.
17) Creation of SFX files has never been easier!!! The addition of the MakeNewSFX method allows you to create Self Extracting Executables without the need to create a zip file first. The files that you specify in the FilesList property will be zipped, using all the normal VCLZip property settings, and the SFX will be created, all in one step! In addition, you can create configurable SFX files using this method, and you can do this especially easy by adding the new unit kpSFXOpt to your application's USES list and using the new 32bit SFX stub that is now distributed with VCLZip. This allows you to easily set things like SFX Dialog caption, default target extraction directory, file to launch after extraction, etc.
18) Fixed a memory leak that only affects applications using VCLZip that are compiled with Delphi 2, and that use wildcard specifications in the FilesList property.
Version 2.17a:
1) Fixed a bug that was keeping VCLZip from reading truncated zip files or sfx files that did not have their headers adjusted.
2) Fixed a bug that was causing a directory to be created on the C drive when doing integrity checking with the FileIsOK property.
3) Added {$V-} to kpZipObj.PAS
4) Moved two AssignTo methods to public instead of private in kpZipObj.PAS
Version 2.17:
1) Added Memory zipping and unzipping capabilities through the UnZipToBuffer and ZipFromBuffer methods. See the documentation for these methods in the Help File for more information.
2) New FileIsOK Property allows you to check for the integrity of individual files within an archive without actually unzipping the file.
3) Fixed a bug that kept checking of volume labels from working on WIN31 when working with spanned disk sets.
4) Removed all references to ChDirectory so that VCLZip will be more thread safe allowing separate instances of VCLZip in separate threads to be performing zip/unzip operations at the same time.
5) A new public property PreserveStubs allows you to make modifications to sfx archives and have the archive remain an SFX rather than revert back to a normal zip file.
6) Added a default OnGetNextDisk event. If one is not defined, then the default event will be called when the situation arises that a new disk is needed when zipping or unzipping a spanned or blocked zip archive.
7) Added more power to the wildcard capabilities. Now you can qualify the * wildcard character, for instance:
* would satisfy any number of contiguous characters as long as they are all a thru e.
* would satisfy any number of contiguous characters as long as none of them were a thru e.
This allows you to do things like include files in specific directories into your ExcludeList. For instance:
VCLZip1.ExcludeList.Add('c:\test\*.txt')
would exclude the zipping of all .txt files in the test directory but not in any subdirectories.
8) Fixed other minor bugs and made other code enhancements.
Version 2.16:
***Please be aware that if you currently use the OnSkippingFile event in any of your applications, version 2.16 will require a small modification as this event has an added parameter and one of the current parameters is used a little differently when being called by the zip operation. Please see the help file for more information.
1) The OnSkippingFile Event has been changed slightly, adding a parameter for the filename.
2) OnSkippingFile is now called when a file to be zipped is skipped because it is locked by another application. See the Help File for more information.
3) Fixed a bug with the Exclude and NoCompressList where they were ignoring entries with anything before the extension (i.e. 'somefile.*' as opposed to '*.zip') if you were saving directory information.
4) Fixed a bug that caused an error if you added a wildcard with a non-existent directory to the FilesList.
5) A few other minor bug fixes.
Modifications for 2.15 include:
1) PackLevel can now be set to 0 (zero) which means no compression at all (STORED only).
2) New property ExcludeList is a new stringlist that you can add filenames and wildcards to in order to specify files that you do not wish to be included in an archive.
3) New property NoCompressList is a new stringlist that you can add filenames and wildcards to in order to specify files that you wish to be STORED with a PackLevel of 0 (zero), no compression.
4) All compiler warnings and hints were removed.
Modifications for 2.14 include:
1) Delphi 4 compatibility.
2) Added ability to use complex wildcards when specifying which files are to be zipped. This includes wildcard characters not only in the filename but also in the pathname. This allows you to specify directories using wildcards, for instance:
VCLZip1.FilesList.add('c:\test\w*\mycode*.pas');
would get all PAS files beginning with mycode in subdirectories under TEST that begin with the letter w. Wildcards may be much more complex than this. Please see the help file for more information.
3) Added the ability to override the RECURSE property setting when specifying files to be zipped. By adding the following characters to the beginning of the filenames being added, you can override whatever the current setting is for the RECURSE property:
'>' will force recursion into subdirectories
'|' will force NO-recursion
For instance:
VCLZip1.FilesList.add('>c:\windows\*.ini');
will get all .ini files in and below the windows directory regardless of what the recurse property setting is.
VCLZip1.FilesList.add('|c:\windows\sys*\*.dll');
will get all .dll files in subdirectories of the windows directories that start with 'sys' but will not recurse into any directories below the sys* directories.
4) The [ and ] characters previously used as special wildcard characters have been changed to since [ and ] are valid filename characters. If you still need to use the previous characters for backward compatibility, I can show registered users how to easily modify a couple of constants in the source code in order to go back to the old style. See "Using Wildcards" in the help file for more information.
5) A few bug fixes.
Modifications for 2.13 include:
1) New property ResetArchiveBitOnZip causes each file's archive bit to be turned off after being zipped.
2) New Property SkipIfArchiveBitNotSet causes files whose archive bit is not set to be skipped during zipping operations.
3) A few modifications were made to allow more compatibility with BCB 1.
4) Cleaned up the Help File some.
5) KWF file now works for Delphi 1 and Delphi 2 again. Still can't get context sensitive help in Delphi 3.
6) Cleaned up some of the code that was causing compiler warnings and hints.
Modifications for 2.12 include:
1) Added a TempPath property to allow the temporary files path to be different from the Windows default.
2) Modified VCLZip so that any temporary files that are created receive a unique temporary filename so as not to clash with any other files in the temporary directory. This also allows working with zip files residing in the temporary directory.
3) Fixed a bug in the relative path feature.
4) Fixed a bug that caused a "list out of bounds" error if a file in the FilesList did not actually exist.
Modifications for 2.11 include:
1) Fixed password encryption bug for 16 bit.
2) Fixed "invalid pointer operation" when closing application bug.
3) Fixed path device truncation bug which caused inability to modify existing archives in 16 bit.
4) Fixed a bug that made it impossible to cancel during wildcard expansion.
5) Added capability to better handle corrupted timestamps.
6) Added capability to open and work with SFX files that were created with the COPY/B method (header files not adjusted).
7) Other small bug fixes.
I'm still working on a bug which causes a GPF when continually unzipping the same file thousands to millions of times. This mainly affects programs like the Password Recovery Program (PRP) which uses the brute force method of searching for an archive's password.
Modifications for 2.10 include:
1) Capability for 16bit VCLZip to store long file/path names when running on a 32bit OS.
2) New property (Store83Names) which allows you to force DOS 8.3 file and path names to be stored.
3) Better UNC path support.
4) Fixed a bug to allow files to be added to an empty archive.
Modifications for 2.03 include:
1) Volume labels now get written correctly to spanned disk sets in Delphi 1 for all versions of Windows.
2) Delphi 1 VCLZip now correctly recognizes when it is running on Windows NT.
3) Fixed a problem with zipping files in the root directory when StorePaths = True.
4) File and Zip Comments are now read correctly from spanned/blocked zip archives.
5) Fixed a bug that was causing "Duplicate Object" errors.
Modifications for 2.02 include:
1) Fix for file comments which were supposed to be fixed in version 2.01 but weren't.
2) Fix for stream zipping. Version 2.01 would not create a new archive if using a stream. (The Stream Demo now allows creating new zip files to streams too)
3) A few other minor modifications to further solidify the code.
4) A modification to the Zip Utility Demo which allows unzipping from Blocked zip files as if they were single zip files.
5) Added a read-only, published ThisVersion property which reflects the version of the VCLZip/VCLUnZip that you are currently working with.
Modifications for 2.01 include:
1) Fixes for exceptions that were caused when CANCELING a zip or unzip of a spanned zip file.
2) Fix for a possible problem when zipping or unzipping a spanned zip file when one or more of the compressed files resided on more than 2 of the spanned parts.
3) Fix for file comments which were broken in version 2.00.
Additional features for version 2.00 include:
1) Modify/Add internal file details (filename, pathname, timestamp, comment) for any file while zipping, in the OnStartZip event.
2) Add an Archive Comment while zipping in the OnStartZipInfo event.
3) Delphi 1 compatibility for VCLZip.
4) Stream to Stream Zipping - Archives themselves can now be TStreams!
5) New Relative Path Information option.
6) Unzip archives that weren't zipped with the Relative Path option turned on as if they had been, by determining how much path information to use with the Rootpath property.
7) Modify timestamps for files in existing archives (you could already modify filenames and pathnames for files in existing archives)
8) The OnBadPassword event now allows you to supply a new password and try the same file again when unzipping.
9) Source code has been cleaned up so that it will compile under Borland C++ Builder with no modifications.
Also some bugs were fixed, most importantly:
1) An empty file, that had been compressed into an archive would cause any file added to the archive to cause the archive to approximately double in size. Any archives containing empty files are not corrupted, they are OK. This was simply a fix to the way the archive was processed.
2) After creating an SFX file, you had to close the zip file before you could modify it in any way, otherwise a stream read error was encountered.
See the Help file for more information on new features.
This zip file is part of a self contained installation program. Just run it and the installation program will begin.
Contact vclzip@bigfoot.com for further information
Thanks!
Kevin Boylan
Overview
Lesson 1: Concepts – Locks and Lock Manager
Lesson 2: Concepts – Batch and Transaction
Lesson 3: Concepts – Locks and Applications
Lesson 4: Information Collection and Analysis
Lesson 5: Concepts – Formulating and Implementing Resolution
Module 4: Troubleshooting Locking and Blocking
At the end of this module, you will be able to:
Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.
Describe the various transaction types and how transactions differ from batches.
Describe how to troubleshoot blocking and locking issues.
Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.
Formulate hypotheses to resolve locking and blocking issues.
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
Describe locking architecture used by SQL Server.
Identify the various lock modes used by SQL Server.
Discuss lock compatibility and concurrent access.
Identify different types of lock resources.
Discuss dynamic locking and lock escalation.
Differentiate locks, latches, and other SQL Server internal “locking” mechanisms such as spinlocks and other synchronization objects.
Recommended Reading
Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney
SOX000821700049 – SQL 7.0 How to interpret lock resource Ids
SOX000925700237 – TITLE: Lock escalation in SQL 7.0
SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction
Locking Concepts
Delivery Tip
Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file.
Transactions in SQL Server provide the ACID properties:
Atomicity
A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation.
Consistency
An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent.
Isolation
Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing a doubly linked list while another transaction is inserting into it would see the list either before or after the insert, but it will see only complete changes.
Durability
After a transaction commits, its effects will persist even if there are system failures.
Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation.
Definition of Dependency:
A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2.
Bad Dependencies
These include lost updates, dirty reads, non-repeatable reads, and phantoms.
ANSI SQL Isolation Levels
An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes.
Prior to SQL Server 7.0, REPEATABLE READ and SERIALIZABLE isolation levels were synonymous. There was no way to prevent non-repeatable reads while not preventing phantoms.
By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement.
To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:
USE pubs
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
DBCC USEROPTIONS
Multigranular Locking
Multigranular Locking
In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. The intent lock is the answer to this problem.
Intent Lock
Intent Lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode.
An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.
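A quick way to see intent locks at work (a sketch, not from the original material; it assumes the pubs sample database, but any small table will do):
USE pubs
BEGIN TRAN
UPDATE dbo.authors SET phone = phone WHERE au_id = '172-32-1176'
EXEC sp_lock @@SPID   -- typically shows an X lock on the modified key plus IX intent locks on its page and table
ROLLBACK TRAN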
Lock Mode
The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table:
SELECT * FROM master.dbo.spt_values WHERE type = N'L'
However, the req_mode column of master.dbo.syslockinfo holds a lock mode code that is one less than the code values shown here. For example, a req_mode value of 3 represents the Shared lock mode rather than the Schema Modification lock mode.
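For example, a sketch (assuming SQL Server 2000 system tables) that joins the two views so each syslockinfo row is labeled with its mode name, accounting for the off-by-one:
SELECT l.req_spid, l.rsc_type, v.name AS lock_mode
FROM master.dbo.syslockinfo AS l
JOIN master.dbo.spt_values AS v
  ON v.type = N'L'
 AND v.number = l.req_mode + 1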
Lock Compatibility
These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level.
SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX.
Another type of table lock is a schema stability lock (Sch-S) and is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.
Locking Resources
Delivery Tip
Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides.
SQL Server can lock these resources:
Item Description
DB A database.
File A database file
Index An entire index of a table.
Table An entire table, including all data and indexes.
Extent A contiguous group of data pages or index pages.
Page An 8-KB data page or index page.
Key Row lock within an index.
Key-range A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions.
RID A Row Identifier. Used to individually lock a single row within a table.
Application A lock resource defined by an application.
The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked.
Some of the resources have “sub-resources.” The following sub-resources are displayed in the sp_lock output:
Database Lock Sub-Resources:
Full Database Lock (default)
[BULK-OP-DB] – Bulk Operation Lock for Database
[BULK-OP-LOG] – Bulk Operation Lock for Log
Table Lock Sub-Resources:
Full Table Lock (default)
[UPD-STATS] – Update statistics Lock
[COMPILE] – Compile Lock
Index Lock sub-Resources:
Full Index Lock (default)
[INDEX_ID] – Index ID Lock
[INDEX_NAME] – Index Name Lock
[BULK_ALLOC] – Bulk Allocation Lock
[DEFRAG] – Defragmentation Lock
For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids
Lock Resource Block
The resource type has the following resource block format:
Resource Type (Code) Content
DB (2) Data 1: sub-resource; Data 2: 0; Data 3: 0
File (3) Data 1: File ID; Data 2: 0; Data 3: 0
Index (4) Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
Table (5) Data 1: Object ID; Data 2: sub-resource; Data 3: 0.
Page (6) Data 1: Page Number; Data 3: 0.
Key (7) Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
Extent (8) Data 1: Extent ID; Data 3: 0.
RID (9) Data 1: RID; Data 3: 0.
Application (10) Data 1: Application resource name
The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode value from this column using the information above, let us assume we have the following value:
0x000705001F83D775010002014F0BEC4E
With byte swapping within each field, this can be decoded as:
Byte 0: Flag – 0x00
Byte 1: Resource Type – 0x07 (Key)
Byte 2-3: DBID – 0x0005
Byte 4-7: ObjectID – 0x75D7831F (1977058079)
Byte 8-9: IndexID – 0x0001
Byte 10-15: Hash Key value – 0x02014F0BEC4E
For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806.
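To pull raw resource blocks like this one for your own session (a sketch against the SQL Server 2000 system tables; rsc_type 7 is the Key resource from the table above):
SELECT req_spid, rsc_dbid, rsc_objid, rsc_indid, rsc_bin
FROM master.dbo.syslockinfo
WHERE rsc_type = 7      -- Key
  AND req_spid = @@SPID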
Key Range Locking
Key Range Locking
To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as
WHERE salary BETWEEN 30000 AND 50000
SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans.
Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level.
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 were indexed, the hash would contain contributions from c1, c2 and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification.
Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE:
SELECT *
FROM members
WHERE first_name BETWEEN 'Al' AND 'Carl'
If 'Al', 'Bob', and 'Dave' are index keys in the table, key range locks would be acquired covering 'Al', 'Bob', and the gap up to the next key, 'Dave'. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause.
Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key Range locking lets SQL Server lock only a much more restrictive area of the table.
Impact
Key-range locking ensures that these scenarios are SERIALIZABLE:
Range scan query
Singleton fetch of nonexistent row
Delete operation
Insert operation
However, the following conditions must be satisfied before key-range locking can occur:
The transaction-isolation level must be set to SERIALIZABLE.
The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.
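A sketch that surfaces the resulting key-range locks (it assumes the pubs sample database and its nonclustered index on au_lname; on a very small table the optimizer may still choose a scan and take coarser locks, as the second condition above explains):
USE pubs
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT au_lname FROM dbo.authors WHERE au_lname BETWEEN 'Bennet' AND 'Dull'
EXEC sp_lock @@SPID   -- KEY resources requested in RangeS-S mode cover the scanned range
ROLLBACK TRAN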
Key Range Lock Mode
Again, the req_mode column of master.dbo.syslockinfo holds a lock mode code that is one less than the code values shown here.
Dynamic Locking
When modifying individual rows, SQL Server typically would take row locks to maximize concurrency (for example, OLTP, order-entry application). When scanning larger volumes of data, it would be more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting).
Locking Decision
The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock.
A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.
Lock Escalation
When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the Lock Manager will attempt escalation again.
The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may be also done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks.
There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation.
Important
Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global and disabling lock escalation could lead to out of memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity so it should not be used on a production system.
For more information, see also… SOX000925700237 “TITLE: Lock escalation in SQL 7.0”
Lock Timeout
Application Lock Timeout
An application can set lock timeout for a session with the SET option:
SET LOCK_TIMEOUT N
where N is a number of milliseconds.
A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.
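For example (a sketch; the value 5000 is arbitrary):
SET LOCK_TIMEOUT 5000     -- wait at most 5 seconds for any lock
SELECT @@lock_timeout     -- returns 5000
-- a blocked statement now fails with error 1222 after 5 seconds instead of waiting indefinitely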
After a lock timeout, the transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed in hotfix build 8.00.266, whereby a lock timeout will only
Internal Lock Timeout
At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeouts typically have a duration value of zero.
Lock Duration
Lock Mode and Transaction Isolation Level
For REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not then the update lock is released, otherwise, we get an exclusive lock and make the modification. Consider the following query:
use northwind
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing
-- and disable escalation
set transaction isolation level repeatable read
begin tran
update dbo.[order details] set discount = convert (real, discount) where discount = 0.0
exec sp_lock
Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that exclusive table lock is not held at the end.
Warning
Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only.
Lock Ownership
Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transactions, cursors and sessions can be owners of locks, and both affect how long locks are held.
When the SCROLL_LOCKS option is used, a cursor lock is held on every fetched row, regardless of the state of the transaction, until the next row is fetched or the cursor is closed.
Locks owned by a session are outside the scope of a transaction. The duration of these locks is bounded by the connection, and the process will continue to hold these locks until the process disconnects. A typical lock owned by a session is the database (DB) lock.
Locking – Read Committed Scan
Under read committed isolation level, when database pages are scanned, shared locks are held while the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the currently acquired shared lock will not be released until the shared lock for the next page is successfully acquired (this is commonly known as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions.
Locking – Repeatable Read Scan
Under repeatable read isolation level, when database pages are scanned, shared locks are held when the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating rows. If the same pages are scanned again, previously scanned rows will not change but new rows may be added by other transactions.
Locking – Serializable Read Scan
Under serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on scanned key range. SQL Server continues to hold these shared locks until the end of transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted.
Prefetch and Isolation Level
Prefetch and Locking Behavior
The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously.
For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so that it is not moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level.
With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because isolation level can be set at the query level via a lock class, but no lower.
Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring you may see a single SPID that holds hundreds of Shared KEY or PAG locks even though the connection’s isolation level is READ COMMITTED. Isolation level can be determined from DBCC PSS output.
For details about this behavior see “SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction”.
Other Locking Mechanism
Lock manager does not manage latches and spinlocks.
Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activities via DBCC SQLPERF(‘WAITSTATS’) command.
Spinlock
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a “sleep” (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. If not, the loop starts again and it is terminated only when the lock is acquired.
The reason for implementing a spinlock is that it is probably less costly to “spin” for a short time rather than yielding the processor. Yielding the processor will force an expensive context switch where:
The old thread’s state must be saved
The new thread’s state must be reloaded
The data stored in the L1 and L2 cache are useless to the processor
On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.
Lesson 2: Concepts – Batch and Transaction
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
Review batch processing and error checking.
Review explicit, implicit and autocommit transactions and transaction nesting level.
Discuss how COMMIT and ROLLBACK TRANSACTION statements issued in stored procedures and triggers affect the transaction nesting level.
Discuss various transaction isolation levels and their impact on locking.
Discuss the difference between aborting a statement, a transaction, and a batch.
Describe how @@error, @@trancount, and @@rowcount can be used for error checking and handling.
Recommended Reading
Chapter 12 “Transactions and Triggers”, Inside SQL Server 2000 by Kalen Delaney
Batch Definition
SQL Profiler Statements and Batches
To help further your understanding of what is a batch and what is a statement, you can use SQL Profiler to study the definition of batch and statement.
Try This: Using SQL Profiler to Analyze Batch
1. Log on to a server with Query Analyzer
2. Start SQL Profiler against the same server
3. Start a trace using the “StandardSQLProfiler” template
4. Execute the following using Query Analyzer:
SELECT @@VERSION
SELECT @@SPID
The ‘SQL:BatchCompleted’ event is captured by the trace. It shows both statements as a single batch.
5. Now execute the following using Query Analyzer
{call sp_who()}
What shows up?
The ‘RPC:Completed’ with the sp_who information. RPC is simply another entry point to the SQL Server to call stored procedures with native data types. This allows one to avoid parsing. The ‘RPC:Completed’ event should be considered the same as a batch for the purposes of this discussion.
Stop the current trace and start a new trace using the “SQLProfilerTSQL_SPs” template. Issue the same command as outlined in step 5 above.
Looking at the output, you can see not only the batch markers but also each statement as it executed within the batch.
Autocommit, Explicit, and Implicit Transaction
Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library.
A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on. When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.
Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.
Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure 'user options', 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an ‘OR’ operation with the existing user option value.)
See SQL Server 2000 Books Online on how to turn on implicit transaction under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).
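A minimal sketch of the session-level setting, run against the pubs sample database:
SET IMPLICIT_TRANSACTIONS ON
UPDATE titles SET price = price   -- the UPDATE implicitly starts a transaction
SELECT @@TRANCOUNT                -- returns 1; nothing is committed yet
COMMIT TRAN                       -- the work only becomes permanent here
SET IMPLICIT_TRANSACTIONS OFF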
Transaction Nesting
Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed.
Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction.
If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transaction, it rolls back all the nested transactions, including the outermost transaction.
The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure if you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0 you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column.
See SQL Server 2000 Books Online topic “Nesting Transactions” (acdata.chm::/ac_8_md_06_66nq.htm) for more information.
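A short illustration of how @@TRANCOUNT tracks nesting:
BEGIN TRAN                -- @@TRANCOUNT = 1
BEGIN TRAN                -- @@TRANCOUNT = 2
SELECT @@TRANCOUNT
COMMIT TRAN               -- inner commit only decrements the count: @@TRANCOUNT = 1
SELECT @@TRANCOUNT
ROLLBACK TRAN             -- rolls back all the work and sets @@TRANCOUNT to 0
SELECT @@TRANCOUNT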
Statement, Transaction, and Batch Abort
One batch can have many statements and one transaction can have multiple statements, also. One transaction can span multiple batches and one batch can have multiple transactions.
Statement Abort
The currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure.
Let us look closely at the following trigger:
CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT
AS
BEGIN
SELECT 1/0
SELECT 'Next command in trigger'
END
To fire the INSERT trigger, the batch could be as simple as ‘INSERT INTO TBL8134 VALUES(1)’. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request.
When the ‘SELECT 1/0’ causes the divide by zero error, a statement abort is issued for the ‘SELECT 1/0’ statement.
Batch and Transaction Abort
On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for ‘SELECT 1/0’ is promoted to an abort of the entire batch. No further statements in the trigger or batch will be executed and a rollback is issued.
On SQL Server 6.5, the statement aborts immediately and results in a transaction abort. However, the rest of the statements within the trigger are executed. This trigger could return ‘Next command in trigger’ as a result set. Once the trigger completes the batch abort promotion takes effect.
Conversely, submitting a similar set of statements in a standalone batch can result in different behavior.
SELECT 1/0
SELECT 'Next command in batch'
Not considering the set option possibilities, a divide by zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and the subsequent SELECT statement can execute. The programmer should add an “if @@ERROR” check immediately after the ‘SELECT 1/0’ statement to control the flow of T-SQL execution correctly.
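A minimal sketch of such a check in a standalone batch:
SELECT 1/0
IF @@ERROR <> 0
BEGIN
    PRINT 'Divide by zero occurred - stopping the batch'
    RETURN
END
SELECT 'Next command in batch'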
Aborting and Set Options
ARITHABORT
If SET ARITHABORT is ON, arithmetic error conditions (overflow, divide-by-zero, and domain errors) cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation.
When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.
XACT_ABORT
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT.
For example:
CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
SET XACT_ABORT OFF
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
SET XACT_ABORT ON
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
/* The SELECT below shows only keys 1 and 3 added.
The key 2 insert failed and was rolled back, but
XACT_ABORT was OFF and the rest of the transaction
succeeded.
The key 5 insert error with XACT_ABORT ON caused
all of the second transaction to roll back.
Also note that 'Continue running batch 2...' is not
returned, indicating that the batch was aborted. */
SELECT *
FROM t2
DROP TABLE t2
DROP TABLE t1
Compile and Run-time Errors
Compile Errors
Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched.
For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.
Optimization Errors
Optimization errors would include rare situations where the statement encounters a problem when attempting to build an optimal execution plan.
Example: “too many tables referenced in the query” error is reported because a “work table” was added to the plan.
Runtime Errors
Runtime errors are those that are encountered during the execution of the query. Consider the following batch:
SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname
If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a “placeholder” in the query’s execution. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement “SELECT * FROM foo” will trigger another compile process to resolve the name again. This time, error message 208 is returned.
Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.
Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error, while on SQL Server 2000 (and SQL Server 7.0) it is a runtime error due to Deferred Name Resolution.
In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the create trigger statement fails because authors2 does not exist at compile time.
When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in pubs database:
Create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT as
begin
select * from authors
select * from authors2
select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
select @@TRANCOUNT
When run in a batch, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:
begin tran
select 'Before'
select * from authors2
select 'After'
select @@TRANCOUNT
One other factor in a compile versus runtime error is implicit data type conversions. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):
create table tblData(dtData datetime)
select 1
insert into tblData values(12/13/99)
On SQL Server 6.5, you get an error before execution of the batch begins so no statements are executed and the batch is aborted.
Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime
On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted, no error returned.
To correct the problem on either version, wrap the date string in quotes.
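For example, using the same table:
insert into tblData values('12/13/99')   -- the string literal is converted to the intended datetime value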
See Bug #56118 (sqlbug_70) for more details about this situation.
Another example of a runtime error is a 605 message.
Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.
A 605 error is always a runtime error. However, the handling of the error can vary depending on the transaction isolation level established by the SPID (e.g. when the NOLOCK lock hint is used).
Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on information or fatal error level.
Transaction Error Checking
Not all errors cause transactions to automatically rollback. Although it is difficult to determine exactly which errors will rollback transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately.
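A minimal sketch of this kind of checking, using a statement from the earlier batch example:
BEGIN TRAN
UPDATE pubs.dbo.authors SET au_lname = au_lname
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
-- ... further statements, each followed by its own @@ERROR check ...
COMMIT TRAN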
Error Handling
Raiserror Details
Raiserror seems to be a source of confusion but is really rather simple.
Raiserror with severity levels of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction will immediately be initiated by SQL Server (except for distributed transactions with DTC involved).
Severity levels lower than 20 will simply result in the error message being returned to the client. They do not affect the transaction scope of the connection.
Consider the following batch:
use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount
With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken.
Important
Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.
Transactions and Triggers (1 of 2)
Basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, rollback the action, and add an audit table entry. Logically, the insert to the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application's error handler that violated the business rule of the trigger.
For more information, see also… SQL Server 2000 Books Online topic “Rollbacks in stored procedure and triggers“ (acdata.chm::/ac_8_md_06_4qcz.htm)
IMPLICIT_TRANSACTIONS ON Behavior
The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read only versions of the rows contain the code ‘RO’ and read/write versions use ‘RW.’ Whenever someone tries to delete a row with a code ‘RO’ the trigger issues the rollback and logs an audit table entry.
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire.
You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should have been allowed, but because the transaction scope was restarted due to the implicit transactions setting, they were.
Transactions and Triggers (2 of 2)
It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set the @@ERROR is using a ‘raiserror’ as the last execution statement in the last trigger to execute.
If you modify the example, this following RAISERROR statement will set @@ERROR to 50000:
CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
ROLLBACK
INSERT INTO tblAudit VALUES (1)
RAISERROR('This is bad', 14,1)
END
However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger the @@ERROR remains 0.
Carrying Forward an Active/Open Transaction
It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transaction on and doing INSERT, UPDATE, or DELETE.
Warning
It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, not calling commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger.
Transaction Count
The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.
Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert).
In our example, if the connection was already in a transaction and called the second INSERT, since implicit transaction is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0 so the transaction remains open/active.
Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution.
If the trigger that performs a rollback has done an explicit begin transaction or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.
Warning
Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level.
When performing an explicit rollback in a trigger, you should immediately issue a return statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run ‘SET IMPLICIT_TRANSACTIONS OFF’ as the first statement of any trigger. Other methods may entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.
Examples
The following examples are based on this table:
create table tbl50000Insert (iID int NOT NULL)
If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence. This command is omitted in these examples to keep the statements simple.
First Example
In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued.
print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT as
begin
select 'Inserted', * from inserted
rollback tran
select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
delete from tbl50000Insert
Second Example
The next example shows that since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back.
print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT as
begin
select 'Inserted', * from inserted
rollback tran
begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
delete from tbl50000Insert
Third Example
In the third example, the raiserror statement is used to set the @@ERROR value and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.
print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT as
begin
select 'Inserted', * from inserted
rollback tran
begin tran -- Increase @@trancount to allow
           -- batch to continue
select @@trancount as 'Trancount'
raiserror('This is from the trigger', 14,1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
delete from tbl50000Insert
Fourth Example
For the fourth example, a second trigger is added to illustrate the fact that @@ERROR value set in the first trigger will not be seen in the second trigger nor will it show up in the batch after the second trigger is fired.
print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT as
begin
select 'Inserted', * from inserted
rollback
begin tran -- Increase @@trancount to
           -- allow batch to continue
select @@TRANCOUNT as 'Trancount'
raiserror('This is from the trigger', 14,1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT as
begin
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
delete from tbl50000Insert
Lesson 3: Concepts – Locks and Applications
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
Explain how lock hints are used and their impact.
Discuss the effect on locking when an application uses Microsoft Transaction Server.
Identify the different kinds of deadlocks including distributed deadlock.
Recommended Reading
Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney
Chapter 16 “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney
Q239753 – Deadlock Situation Not Detected by SQL Server
Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim
Locking Hints
UPDLOCK
If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.
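For example, in the pubs database, a row can be read with the intent to update it later in the same transaction:
BEGIN TRAN
SELECT * FROM titles (UPDLOCK) WHERE title_id = 'BU1032'
-- other readers are not blocked, but no one else can acquire an update or
-- exclusive lock on this row until this transaction ends
UPDATE titles SET price = price * 1.1 WHERE title_id = 'BU1032'
COMMIT TRAN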
READPAST
READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values of 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields values 1, 2, 4, 5.
READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks.
This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order but they do not have to be completely FIFO. If you have 4 worker threads consuming work items from the queue they could each pick up a record using read past locking and then delete the entry from the queue and commit when they're done. If they fail, they could rollback, leaving the entry on the queue for the next worker thread to pick up.
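A minimal sketch of such a worker, assuming a hypothetical WorkQueue table with an integer request_id column:
DECLARE @id int
BEGIN TRAN
SELECT TOP 1 @id = request_id
FROM WorkQueue (UPDLOCK, READPAST)   -- skip rows already locked by other workers
ORDER BY request_id
IF @id IS NOT NULL
BEGIN
    -- ... service the work item here ...
    DELETE FROM WorkQueue WHERE request_id = @id
END
COMMIT TRAN   -- a ROLLBACK instead would leave the row for another worker to pick up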
Caution
The READPAST hint is not compatible with HOLDLOCK.
Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection):
BEGIN TRANSACTION -- Conn 1
UPDATE titles
SET price = price * 0.9
WHERE title_id = 'BU1032'
3. Open a second connection and execute the following statements:
SELECT @@lock_timeout -- Conn 2
SELECT * FROM titles
SELECT * FROM authors
4. Open a third connection and execute the following statements:
SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors
5. Open a fourth connection and execute the following statement:
SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors
How many records were returned?
6. Open a fifth connection and execute the following statement:
SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID < 'C'
Deadlock Detection
By default, SQL Server checks for deadlocks every 5 seconds. Once a deadlock has been found, while the count of recently detected deadlocks is greater than 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds.
Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.
Please note the distinction between application lock and other locks’ deadlock detection. For application lock, we do not rollback the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself.
Deadlock Resolution
How is a deadlock resolved?
SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back or in which process “SET DEADLOCK_PRIORITY LOW” is specified. The victim’s transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim’s client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue.
Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.
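A connection that would rather be chosen as the victim can declare that preference:
SET DEADLOCK_PRIORITY LOW    -- this session volunteers to be the deadlock victim
BEGIN TRAN
-- ... do work that may deadlock ...
COMMIT TRAN
SET DEADLOCK_PRIORITY NORMAL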
Symptoms of deadlocking
Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog.
If the client application does not capture and display error 1205, some of the symptoms of deadlock occurring are:
Clients complain of mysteriously canceled queries when using certain features of an application.
May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.
Triggers and Deadlock
Triggers promote the deadlock priority of the SPID for the life of the trigger execution when DEADLOCK_PRIORITY is not set to LOW. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.
Warning
Bug 235794 is filed against SQL Server 2000 where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging.
See KB article Q288752: “Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim” for more information.
Distributed Deadlock – Scenario 1
Distributed Deadlocks
The term distributed deadlock is ambiguous. There are many types of distributed deadlocks.
Scenario 1
Client application opens connection A, begins a transaction, acquires some locks, opens connection B, connection B gets blocked by A but the application is designed to not commit A’s transaction until B completes.
SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions.
This situation is discussed in scenario #4 in “Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems”.
Distributed Deadlock – Scenario 2
Scenario 2
Distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid 62 is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes his work, but spid 61 is blocked by 62 who is blocked by 60.
This scenario is described in article “Q239753 - Deadlock Situation Not Detected by SQL Server.”
SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle.
Distributed Deadlock – Scenario 3
Scenario 3
Distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by Spid 60.
No version of SQL Server is currently designed to detect this distributed deadlock.
Lesson 4: Information Collection and Analysis
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
Identify specific information needed for troubleshooting issues.
Locate and collect information needed for troubleshooting issues.
Analyze output of DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.
Review information collected from master.dbo.sysprocesses table.
Review information collected from master.dbo.syslockinfo table.
Review output of sp_who, sp_who2, sp_lock.
Analyze Profiler log for query usage pattern.
Review output of trace flags to help troubleshoot deadlocks.
Recommended Reading
Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
Q271509 - INF: How to Monitor SQL Server 2000 Blocking
Q251004 - How to Monitor SQL Server 7.0 Blocking
Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem
Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler
Locking and Blocking
Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles
SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections!
What was the wait type of the blocked process?
Try This: Look at locks held
Assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding?
Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.
Collecting Information
See Module 2 for more about how to gather this information using various tools.
Recognizing Blocking Problems
How to Recognize Blocking Problems
Users complain about poor performance at a certain time of day, or after a certain number of users connect.
SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.
More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
Possibl
Overview
Lesson 1: Gathering and Evaluating Core Information
Lesson 2: Using Performance Monitor
Lesson 3: Using SQL Profiler
Lesson 4: Using Index Tuning Wizard
Lesson 5: Using Other System Tools
Module 2: Tools – Exploring the Conditions
At the end of this module, you will be able to:
List the basic set of information to collect to help narrow down and identify the problem.
Explore and confirm conditions or messages presented or already known.
Describe Performance Monitor and how to use it to troubleshoot performance issues.
Analyze a Performance Monitor log to identify potential performance bottlenecks.
List DBCC commands relevant to performance troubleshooting and describe how they are used.
Analyze DBCC command output to identify potential performance bottlenecks.
List Trace Flags relevant to performance troubleshooting and describe how they are used.
Analyze Trace Flags output, if any, to identify potential performance bottlenecks.
List Profiler Events and their respective data columns relevant to performance troubleshooting and describe how they are used.
Choose and log the necessary events to troubleshoot performance issues.
Analyze Profiler Log to identify potential performance bottlenecks.
Describe the Index Tuning Wizard’s architecture.
List the command arguments for the ITWiz executable.
Discuss considerations when using the Index Tuning Wizard.
List other tools and commands relevant to performance troubleshooting
Describe what information can be collected using TList, Pviewer, Pstat, and Vmstat.
Analyze information collected using TList, Pviewer, Pstat, and Vmstat.
Explain why scripts are used to collect information.
Discuss examples of how scripts can be used.
Describe the use of Microsoft® SQL Server™ Profiler to replay trace and simulate stress.
Describe the use of OStress tool to simulate stress.
List external load simulation tools.
Lesson 1: Gathering a
Lesson 1: Index Concepts
Lesson 2: Concepts – Statistics
Lesson 3: Concepts – Query Optimization
Lesson 4: Information Collection and Analysis
Lesson 5: Formulating and Implementing Resolution
Module 6: Troubleshooting Query Performance
Introduction to Sqlines
Sqlines is an open-source tool that converts SQL statement syntax between multiple databases. openGauss has adapted this tool and added an openGauss database option; it currently supports converting PostgreSQL, MySQL, and Oracle SQL syntax to openGauss.
How to obtain and use it
1. Download the code from the community to any location: openGauss/openGauss-tools-sqlines (gitee.com)
2. Enter the root directory of the code and run the script to compile and install sqlines:
[user@openGauss33 sq
MySQL provides three ways to create an index:
1) Using the CREATE INDEX statement
The CREATE INDEX statement, which is dedicated to creating indexes, can create an index on an existing table, but it cannot create a primary key.
Syntax:
CREATE INDEX <index_name> ON <table_name> (<column_name> [<length>] [ASC | DESC])
The syntax is explained as follows:
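For example, a simple index on a hypothetical customer table (all names are illustrative only):
CREATE INDEX idx_customer_name ON customer (name(20) ASC);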
// Read the JavaScript cookies tutorial at:
// http://www.netspade.com/articles/javascript/cookies.xml

/**
 * Sets a Cookie with the given name and value.
 *
 * name       Name of the cookie
 * value      Value of the cookie
 * [expires]  Expiration date of the cookie (default: end of current session)
 * [path]     Path where the cookie is valid (default: path of calling document)
 * [domain]   Domain where the cookie is valid
 *            (default: domain of calling document)
 * [secure]   Boolean value indicating if the cookie transmission requires a
 *            secure transmission
 */
function setCookie(name, value, expires, path, domain, secure)
{
    document.cookie = name + "=" + escape(value) +
        ((expires) ? "; expires=" + expires.toGMTString() : "") +
        ((path) ? "; path=" + path : "") +
        ((domain) ? "; domain=" + domain : "") +
        ((secure) ? "; secure" : "");
}
/**
 * Gets the value of the specified cookie.
 *
 * name  Name of the desired cookie.
 *
 * Returns a string containing value of specified cookie,
 * or null if cookie does not exist.
 */
function getCookie(name)
{
    var dc = document.cookie;
    var prefix = name + "=";
    var begin = dc.indexOf("; " + prefix);
    if (begin == -1)
    {
        begin = dc.indexOf(prefix);
        if (begin != 0) return null;
    }
    else
    {
        begin += 2;
    }
    var end = document.cookie.indexOf(";", begin);
    if (end == -1) end = dc.length;
    return unescape(dc.substring(begin + prefix.length, end));
}
/**
 * Deletes the specified cookie.
 *
 * name      name of the cookie
 * [path]    path of the cookie (must be same as path used to create cookie)
 * [domain]  domain of the cookie (must be same as domain used to create cookie)
 */
function deleteCookie(name, path, domain)
{
    if (getCookie(name))
    {
        document.cookie = name + "=" +
            ((path) ? "; path=" + path : "") +
            ((domain) ? "; domain=" + domain : "") +
            "; expires=Thu, 01-Jan-70 00:00:01 GMT";
    }
}
--------------------------------------------------------------------------------
function setCookies(name, value)
{
    var Days = 30; // this cookie will be kept for 30 days
    var exp = new Date(); // new Date("December 31, 9998");
    exp.setTime(exp.getTime() + Days * 24 * 60 * 60 * 1000);
    document.cookie = name + "=" + escape(value) + ";expires=" + exp.toGMTString();
}

function getCookies(name)
{
    var arr = document.cookie.match(new RegExp("(^| )" + name + "=([^;]*)(;|$)"));
    if (arr != null) return unescape(arr[2]);
    return null;
}

function delCookies(name)
{
    var exp = new Date();
    exp.setTime(exp.getTime() - 1);
    var cval = getCookies(name);
    if (cval != null) document.cookie = name + "=" + cval + ";expires=" + exp.toGMTString();
}
--------------------------------------------------------------------------------
function getCookie(name)
{
    var dc = document.cookie;
    var prefix = name + "=";
    var begin = dc.indexOf("; " + prefix);
    if (begin == -1)
        begin = dc.indexOf(prefix);
    if (begin != 0) return null;
    begin += 2;
    var end = document.cookie.indexOf(";", begin);
    if (end == -1)
        end = dc.length;
    return unescape(dc.substring(begin + prefix.length, end));
}
This probably won't work, right?
=======================================
// Another way to read a cookie
function GetCookieData(sL)
{
    var sRet = "";
    var sC = "" + document.cookie;
    if (sC.length > 0)
    {
        var aC = sC.split(";", 100);
        var iC = aC.length;
        for (var i = 0; i < iC; i++)
        {
            if (aC[i].indexOf(sL + "=") != -1)
            {
                var aRet = aC[i].split("=");
                sRet = unescape(aRet[1]);
                break;
            }
        }
    }
    return sRet;
}
SQUASHFS 2.2 - A squashed read-only filesystem for Linux
Released under the GPL licence (version 2 or later).
Welcome to Squashfs version 2.2-r2. Please see the CHANGES file for details
of changes.
Squashfs is a highly compressed read-only filesystem for Linux.
It uses zlib compression to compress both files, inodes and directories.
Inodes in the system are very small and all blocks are packed to minimise
data overhead. Block sizes greater than 4K are supported up to a maximum
of 64K.
Squashfs is intended for general read-only filesystem use, for archival
use (i.e. in cases where a .tar.gz file may be used), and in constrained
block device/memory systems (e.g. embedded systems) where low overhead is
needed.
1. SQUASHFS OVERVIEW
--------------------
1. Data, inodes and directories are compressed.
2. Squashfs stores full uid/gids (32 bits), and file creation time.
3. Files up to 2^32 bytes are supported. Filesystems can be up to
2^32 bytes.
4. Inode and directory data are highly compacted, and packed on byte
boundaries. Each compressed inode is on average 8 bytes in length
(the exact length varies on file type, i.e. regular file, directory,
symbolic link, and block/char device inodes have different sizes).
5. Squashfs can use block sizes up to 64K (the default size is 64K).
Using 64K blocks achieves greater compression ratios than the normal
4K block size.
6. File duplicates are detected and removed.
7. Both big and little endian architectures are supported. Squashfs can
mount filesystems created on different byte order machines.
2. USING SQUASHFS
-----------------
Squashfs filesystems should be mounted with 'mount' with the filesystem type
'squashfs'. If the filesystem is on a block device, the filesystem can be
mounted directly, e.g.
%mount -t squashfs /dev/sda1 /mnt
Will mount the squashfs filesystem on "/dev/sda1" under the directory "/mnt".
If the squashfs filesystem has been written to a file, the loopback device
can be used to mount it (loopback support must be in the kernel), e.g.
%mount -t squashfs image /mnt -o loop
Will mount the squashfs filesystem in the file "image" under
the directory "/mnt".
3. MKSQUASHFS
-------------
3.1 Mksquashfs options and overview.
------------------------------------
As squashfs is a read-only filesystem, the mksquashfs program must be used to
create populated squashfs filesystems.
SYNTAX:mksquashfs source1 source2 ... dest [options] [-e list of exclude
dirs/files]
Options are
-version print version, licence and copyright message
-info print files written to filesystem
-b <block_size> set data block to <block_size>. Default 65536 bytes
-2.0 create a 2.0 filesystem
-noI do not compress inode table
-noD do not compress data blocks
-noF do not compress fragment blocks
-no-fragments do not use fragments
-always-use-fragments use fragment blocks for files larger than block size
-no-duplicates do not perform duplicate checking
-noappend do not append to existing filesystem
-keep-as-directory if one source directory is specified, create a root
directory containing that directory, rather than the
contents of the directory
-root-becomes <name> when appending source files/directories, make the
original root become a subdirectory in the new root
called <name>, rather than adding the new source items
to the original root
-all-root make all files owned by root
-force-uid uid set all file uids to uid
-force-gid gid set all file gids to gid
-le create a little endian filesystem
-be create a big endian filesystem
-nopad do not pad filesystem to a multiple of 4K
-check_data add checkdata for greater filesystem checks
-root-owned alternative name for -all-root
-noInodeCompression alternative name for -noI
-noDataCompression alternative name for -noD
-noFragmentCompression alternative name for -noF
-sort <sort_file> sort files according to priorities in <sort_file>. One
file or dir with priority per line. Priority -32768 to
32767, default priority 0
-ef <exclude_file> list of exclude dirs/files. One per line
Source1 source2 ... are the source directories/files containing the
files/directories that will form the squashfs filesystem. If a single
directory is specified (i.e. mksquashfs source output_fs) the squashfs
filesystem will consist of that directory, with the top-level root
directory corresponding to the source directory.
If multiple source directories or files are specified, mksquashfs will merge
the specified sources into a single filesystem, with the root directory
containing each of the source files/directories. The name of each directory
entry will be the basename of the source path. If more than one source
entry maps to the same name, the conflicts are named xxx_1, xxx_2, etc. where
xxx is the original name.
To make this clear, take two example directories. Source directory
"/home/phillip/test" contains "file1", "file2" and "dir1".
Source directory "goodies" contains "goodies1", "goodies2" and "goodies3".
usage example 1:
%mksquashfs /home/phillip/test output_fs
This will generate a squashfs filesystem with root entries
"file1", "file2" and "dir1".
example 2:
%mksquashfs /home/phillip/test goodies output_fs
This will create a squashfs filesystem with the root containing
entries "test" and "goodies" corresponding to the source
directories "/home/phillip/test" and "goodies".
example 3:
%mksquashfs /home/phillip/test goodies test output_fs
This is the same as the previous example, except a third
source directory "test" has been specified. This conflicts
with the first directory named "test" and will be renamed "test_1".
Multiple sources allow filesystems to be generated without needing to
copy all source files into a common directory. This simplifies creating
filesystems.
The -keep-as-directory option can be used when only one source directory
is specified, and you wish the root to contain that directory, rather than
the contents of the directory. For example:
example 4:
%mksquashfs /home/phillip/test output_fs -keep-as-directory
This is the same as example 1, except for -keep-as-directory.
This will generate a root directory containing directory "test",
rather than the "test" directory contents "file1", "file2" and "dir1".
The Dest argument is the destination where the squashfs filesystem will be
written. This can either be a conventional file or a block device. If the file
doesn't exist it will be created, if it does exist and a squashfs
filesystem exists on it, mksquashfs will append. The -noappend option will
write a new filesystem irrespective of whether an existing filesystem is
present.
3.2 Changing compression defaults used in mksquashfs
----------------------------------------------------
There are a large number of options that can be used to control the
compression in mksquashfs. By and large the defaults are the most
optimum settings and should only be changed in exceptional circumstances!
The -noI, -noD and -noF options (also -noInodeCompression, -noDataCompression
and -noFragmentCompression) can be used to force mksquashfs to not compress
inodes/directories, data and fragments respectively. Giving all options
generates an uncompressed filesystem.
The -no-fragments tells mksquashfs to not generate fragment blocks, and rather
generate a filesystem similar to a Squashfs 1.x filesystem. It will of course
still be a Squashfs 2.0 filesystem but without fragments, and so it won't be
mountable on a Squashfs 1.x system.
The -always-use-fragments option tells mksquashfs to always generate
fragments for files irrespective of the file length. By default only small
files less than the block size are packed into fragment blocks. The ends of
files which do not fit fully into a block, are NOT by default packed into
fragments. To illustrate this, a 100K file has an initial 64K block and a 36K
remainder. This 36K remainder is not packed into a fragment by default. This
is because to do so leads to a 10 - 20% drop in sequential I/O performance, as a
disk head seek is needed to seek to the initial file data and another disk seek
is needed to seek to the fragment block. Specify this option if you want file
remainders to be packed into fragment blocks. Doing so may increase the
compression obtained BUT at the expense of I/O speed.
The -no-duplicates option tells mksquashfs to not check the files being
added to the filesystem for duplicates. This can result in quicker filesystem
generation and appending although obviously compression will suffer badly if
there are a lot of duplicate files.
The -b option allows the block size to be selected, this can be either
4096, 8192, 16384, 32768 or 65536 bytes.
3.3 Specifying the UIDs/GIDs used in the filesystem
---------------------------------------------------
By default files in the generated filesystem inherit the UID and GID ownership
of the original file. However, mksquashfs provides a number of options which
can be used to override the ownership.
The options -all-root and -root-owned (both do exactly the same thing) force all
file uids/gids in the generated Squashfs filesystem to be root. This allows
root owned filesystems to be built without root access on the host machine.
The "-force-uid uid" option forces all files in the generated Squashfs
filesystem to be owned by the specified uid. The uid can be specified either by
name (i.e. "root") or by number.
The "-force-gid gid" option forces all files in the generated Squashfs
filesystem to be group owned by the specified gid. The gid can be specified
either by name (i.e. "root") or by number.
3.4 Excluding files from the filesystem
---------------------------------------
The -e and -ef options allow files/directories to be specified which are
excluded from the output filesystem. The -e option takes the exclude
files/directories from the command line, the -ef option takes the
exclude files/directories from the specified exclude file, one file/directory
per line. If an exclude file/directory is absolute (i.e. prefixed with /, ../,
or ./) the entry is treated as absolute, however, if an exclude file/directory
is relative, it is treated as being relative to each of the sources in turn, e.g.
%mksquashfs /tmp/source1 source2 output_fs -e ex1 /tmp/source1/ex2 out/ex3
Will generate exclude files /tmp/source1/ex2, /tmp/source1/ex1, source2/ex1,
/tmp/source1/out/ex3 and source2/out/ex3.
The -e and -ef exclude options are usefully used in archiving the entire
filesystem, where it is wished to avoid archiving /proc, and the filesystem
being generated, i.e.
%mksquashfs / /tmp/root.sqsh -e proc /tmp/root.sqsh
Multiple -ef options can be specified on the command line, and the -ef
option can be used in conjunction with the -e option.
3.5 Appending to squashfs filesystems
-------------------------------------
Running squashfs with the destination directory containing an existing
filesystem will add the source items to the existing filesystem. By default,
the source items are added to the existing root directory.
To make this clear... An existing filesystem "image" contains root entries
"old1", and "old2". Source directory "/home/phillip/test" contains "file1",
"file2" and "dir1".
example 1:
%mksquashfs /home/phillip/test image
Will create a new "image" with root entries "old1", "old2", "file1", "file2" and
"dir1"
example 2:
%mksquashfs /home/phillip/test image -keep-as-directory
Will create a new "image" with root entries "old1", "old2", and "test".
As shown in the previous section, for single source directories
'-keep-as-directory' adds the source directory rather than the
contents of the directory.
example 3:
%mksquashfs /home/phillip/test image -keep-as-directory -root-becomes
original-root
Will create a new "image" with root entries "original-root", and "test". The
'-root-becomes' option specifies that the original root becomes a subdirectory
in the new root, with the specified name.
The append option with file duplicate detection, means squashfs can be
used as a simple versioning archiving filesystem. A squashfs filesystem can
be created with for example the linux-2.4.19 source. Appending the linux-2.4.20
source will create a filesystem with the two source trees, but only the
changed files will take extra room, the unchanged files will be detected as
duplicates.
3.6 Miscellaneous options
-------------------------
The -info option displays the files/directories as they are compressed and
added to the filesystem. The original uncompressed size of each file
is printed, along with DUPLICATE if the file is a duplicate of a
file in the filesystem.
The -le and -be options can be used to force mksquashfs to generate a little
endian or big endian filesystem. Normally mksquashfs will generate a
filesystem in the host byte order. Squashfs, for portability, will
mount different ordered filesystems (i.e. a big endian filesystem can be
mounted on a little endian machine), but these options can be used for
greater optimisation.
The -nopad option informs mksquashfs to not pad the filesystem to a 4K multiple.
This is performed by default to enable the output filesystem file to be mounted
by loopback, which requires files to be a 4K multiple. If the filesystem is
being written to a block device, or is to be stored in a bootimage, the extra
pad bytes are not needed.
4. FILESYSTEM LAYOUT
--------------------
Brief filesystem design notes follow for the original 1.x filesystem
layout. A description of the 2.0 filesystem layout will be written sometime!
A squashfs filesystem consists of five parts, packed together on a byte
alignment:
---------------
| superblock |
|---------------|
| data |
| blocks |
|---------------|
| inodes |
|---------------|
| directories |
|---------------|
| uid/gid |
| lookup table |
---------------
Compressed data blocks are written to the filesystem as files are read from
the source directory, and checked for duplicates. Once all file data has been
written the completed inode, directory and uid/gid lookup tables are written.
4.1 Metadata
------------
Metadata (inodes and directories) are compressed in 8Kbyte blocks. Each
compressed block is prefixed by a two byte length, the top bit is set if the
block is uncompressed. A block will be uncompressed if the -noI option is set,
or if the compressed block was larger than the uncompressed block.
Inodes are packed into the metadata blocks, and are not aligned to block
boundaries, therefore inodes overlap compressed blocks. An inode is
identified by a two field tuple <start address of compressed block : offset
into de-compressed block>.
Inode contents vary depending on the file type. The base inode consists of:
base inode:
Inode type
mode
uid index
gid index
The inode type is 4 bits in size, and the mode is 12 bits.
The uid and gid indexes are 4 bits in length. Ordinarily, this will allow 16
unique indexes into the uid table. To minimise overhead, the uid index is
used in conjunction with the spare bit in the file type to form a 48 entry
index as follows:
inode type 1 - 5: uid index = uid
inode type 5 - 10: uid index = 16 + uid
inode type 11 - 15: uid index = 32 + uid
In this way 48 unique uids are supported using 4 bits, minimising data inode
overhead. The 4 bit gid index is used to index into a 15 entry gid table.
Gid index 15 is used to indicate that the gid is the same as the uid.
This prevents the 15 entry gid table filling up with the common case where
the uid/gid is the same.
The data contents of symbolic links are stored immediately after the symbolic
link inode, inside the inode table. This allows the normally small symbolic
link to be compressed as part of the inode table, achieving much greater
compression than if the symbolic link was compressed individually.
Similarly, the block index for regular files is stored immediately after the
regular file inode. The block index is a list of block lengths (two bytes
each), rather than block addresses, saving two bytes per block. The block
address for a given block is computed by the summation of the previous
block lengths. This takes advantage of the fact that the blocks making up a
file are stored contiguously in the filesystem. The top bit of each block
length is set if the block is uncompressed, either because the -noD option is
set, or if the compressed block was larger than the uncompressed block.
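The following minimal C sketch illustrates this scheme. It is not taken from the squashfs sources; it simply assumes, as described above, that each index entry is a two-byte length whose top bit flags an uncompressed block, and that block addresses are the running sum of the preceding lengths. All values are placeholders.
#include <stdint.h>
#include <stdio.h>

/* Walk a regular file's block index: print each block's start address,
 * stored length, and whether it is compressed. */
static void walk_block_index(uint64_t file_start, const uint16_t *index, int nblocks)
{
    uint64_t addr = file_start;                /* address of the first data block */
    for (int i = 0; i < nblocks; i++) {
        uint16_t len = index[i] & 0x7FFF;      /* low 15 bits: stored length */
        int uncompressed = (index[i] & 0x8000) != 0;  /* top bit: not compressed */
        printf("block %d: addr=%llu len=%u %s\n", i,
               (unsigned long long)addr, (unsigned)len,
               uncompressed ? "(uncompressed)" : "(compressed)");
        addr += len;                           /* next block follows immediately */
    }
}

int main(void)
{
    /* Illustrative index: three blocks, the last stored uncompressed. */
    uint16_t index[] = { 30000, 28000, 0x8000 | 5000 };
    walk_block_index(96, index, 3);            /* 96 = hypothetical start address */
    return 0;
}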
4.2 Directories
---------------
Like inodes, directories are packed into the metadata blocks, and are not
aligned on block boundaries, therefore directories can overlap compressed
blocks. A directory is, again, identified by a two field tuple
<start address of compressed block containing directory start : offset
into de-compressed block>.
Directories are organised in a slightly complex way, and are not simply
a list of file names and inode tuples. The organisation takes advantage of the
observation that in most cases, the inodes of the files in the directory
will be in the same compressed metadata block, and therefore, the
inode tuples will have the same start block.
Directories are therefore organised in a two level list, a directory
header containing the shared start block value, and a sequence of
directory entries, each of which share the shared start block. A
new directory header is written once/if the inode start block
changes. The directory header/directory entry list is repeated as many times
as necessary. The organisation is as follows:
directory_header:
count (8 bits)
inode start block (24 bits)
directory entry: * count
inode offset (13 bits)
inode type (3 bits)
filename size (8 bits)
filename
This organisation saves on average 3 bytes per filename.
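A rough C sketch of this two-level layout follows. It is illustrative only and does not reproduce the exact on-disk bit packing (the real fields are bit-packed as listed above):
#include <stdint.h>

/* Illustrative only: on disk these fields occupy 8/24 and 13/3/8 bits. */
struct dir_header {
    uint8_t  count;              /* number of entries sharing this start block */
    uint32_t inode_start_block;  /* 24 bits on disk */
};

struct dir_entry {
    uint16_t inode_offset;       /* 13 bits on disk: offset into decompressed block */
    uint8_t  inode_type;         /* 3 bits on disk */
    uint8_t  filename_size;
    /* filename_size bytes of name follow the entry on disk */
};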
4.3 File data
-------------
File data is compressed on a block by block basis and written to the
filesystem. The filesystem supports up to 32K blocks, which achieves
greater compression ratios than the Linux 4K page size.
The disadvantage with using greater than 4K blocks (and the reason why
most filesystems do not), is that the VFS reads data in 4K pages.
The filesystem reads and decompresses a larger block containing that page
(e.g. 32K). However, only 4K can be returned to the VFS, resulting in a
very inefficient filesystem, as 28K must be thrown away. Squashfs
solves this problem by explicitly pushing the extra pages into the page
cache.
5. AUTHOR INFO
--------------
Squashfs was written by Phillip Lougher, email phillip@lougher.demon.co.uk,
in Chepstow, Wales, UK. If you like the program, or have any problems,
then please email me, as it's nice to get feedback!
Release date: December 11, 2006
Known bugs: none
Fixes/features added from previous release:
a) added support for multiple filters per process in VsDrvr.dll
b) updated manual
Previous release 1.3.5
Release date: October 11th, 2005
Known bugs: none
Fixes/features added from previous release
a) added VsSetWavelengthStep and VsGetWavelengthStep functions
b) added VsSetWavelengthWavesConfirm() function
c) fixed error-handling of VsSetWavelength()
In earlier revisions, the error status light was cleared after a VsSetWavelength() call failed, so the user did not see the light turn red to alert that an error had occurred. This has been fixed in 1.3.5 so the error light remains lit, and an error code is returned.
d) added range-check to VsDefinePalette()
Previous revisions did not range-check the palette index number, and hard crashes could be produced if out-of-range values were supplied to this routine.
Previous release 1.33b
Release date: February 9, 2005
Known bugs: none
Fixes/features changed from previous release:
a) Fixed installer: programmers' guide (vsdrvr.pdf) installed when SDK is selected.
Previous release 1.33a
Release date: January 10th, 2005
Known bugs:
i) SDK programmers' guide is not installed even if SDK is selected.
Fixes/features added from previous release
a) VsDrvr.dll fixed handling of COMx ports that do not support 460kb
The autobaud sequence tries a variety of baud rates, some of which are not supported by RS-232 interfaces (but are supported on USB virtual COM ports). This was not handled properly, so if a call was made to VsOpen when no VariSpec was present, but a later call was made when a filter was present, the latter would fail.
b) VsGui added check of which COMx ports are present on computer
This program now filters its COMx list and only shows ports which actually exist; it used to show COM1 - COM8 even if not all of these were present.
c) VsGui added automatic filter detection on Configure dialog
This checks all ports in turn, and reports the first detected filter. The search order is determined by the order in which the computer lists ports in the Registry.
d) VsGui changed to recognize filters as present while initializing
In prior revisions, VsGui would report "no filter found" if a filter was present but still going through its power-up initialization. Now, a message box is posted to indicate that a filter was found, and the program checks whether initialization is complete, at 1 second intervals. When the filter is done initializing, the VsGui controls become active and report the filter information (serial number, wavelength range, etc).
e) VsGui added filter status item to Configure dialog
Adjacent to the COMx combo box, there is now a text field that indicates filter status as "Not found", "Initializing", or "Ready". This field is updated whenever the combo box selection is changed.
Previous release 1.32
Release date: July 27th, 2004
Known bugs: the COMx port issue described above as 1.33 fix item a)
Fixes/features added from previous release
a) VsGui added a sweep feature to enable cycling the filter
The wavelength start, stop, and step are adjustable. Cycling can be done a fixed number of times or indefinitely.
Previous release 1.30
Release date: June 23rd, 2004
Known bugs: none
Fixes/features added from previous release
a) New commands VsSetWaveplateAndWaves(), VsGetWaveplateAndWaves(), VsGetWaveplateLimits(), and VsGetWaveplateStages() were added for support of variable retarder models.
b) New commands VsSetRetries() and VsSetLatencyMs() were added for control of serial port latency and automatic retry in case of error.
c) New commands VsSetMode() and VsGetMode() were added for control of the VariSpec filter's triggering and sweep modes
d) New command VsGetSettleMs() was added to learn optics settling time
e) New commands VsIsDiagnostic() and VsIsEngagedInBeam() were added. These are reserved for CRI use and are not supported for use by end users.
f) The command syntax and functionality of the VsSendCommand() function was changed - see description of this command for details
g) The VsGui program was modified to add sweep function, and the associated files were added to the file manifest.
The new functions are assigned higher ordinal numbers than the earlier commands, so the ordinal numbers assigned to routines in the earlier VsDrvr routines are preserved. This means one may use the new VsDrvr.dll file with applications that were developed and linked with the earlier release, without any need to recompile or relink the application.
Of course, to use the new functions one must link the application code with the new .lib file containing these functions.
Previous release: 1.20
Release date December 3rd, 2003
Known bugs:
a) there is a conflict when one uses the implicit palette to set wavelengths, and also defines palette states explicitly using the VsDefinePalette() function. When the explicitly set palette state overwrites a palette state implicitly associated with a certain wavelength, that wavelength will not be accurately set when one issues the VsSetWavelength() command. This is fixed in release 1.30
Fixes/features added from previous release
a) fixes bug with implicit palette in September 8 release
b) incorporates implicit retry for command send/reply if error in transmission
c) recognizes filters with serial numbers > 60000 (normally VariLC numbers)
d) supports binary transfer of >127 bytes
Previous release 1.11
Release date September 8, 2003
Known bugs
a) implicit palette can fail to create palette entry, causing tuning error
b) VsSendBinary() fails if 128 chars or more sent (signed char error)
Fixes/features added from previous release
a) included VsIsPresent() function omitted from function list of 1.10 release
Previous release 1.10
Release date: August 28th, 2003
Known bugs:
a) VsIsPresent function not included - generates "unresolved external" at link-time
Fixes/features added from previous release:
b) added command VsEnableImplicitPalette() to code and documentation
added command VsConnect() to code and documentation
added command VsClose() to code and documentation
added local variable to avoid unnecessary querying of diagnostic status
documented that command VsConnect() will not be supported in future
documented that command VsDisconnect() will not be supported in future
documented that command VsIsConnected() will not be supported in future
changed to Windows Installer from previous ZIP file
added table summary of commands to this manual
Previous release 1.00
Release date: November 5th, 2002
Known bugs:
a) none
Fixes/features added from previous release
b) n/a - initial release
Description
This package provides a set of functions to control the VariSpec filter, which may be called from C or C++ programs. It incorporates all aspects of the filter communication, including low-level serial routines. With these routines, one can address the filter as a virtual object, with little need for detailed understanding of its behavior. This simplifies the programming task for those who want to integrate the VariSpec into larger software packages.
File manifest
All files are contained in a single installer file which includes the following:
vsdrvr.h declaration file
vsdrvr.lib library stub file
vsdrvr.dll run-time library
vsdrvr_r1p30.pdf (this file) release notes and programmer's guide
{sample program using VsDrvr package}
registryAccess.cpp
registryAccess.h
resource.h
stdafx.h
VsConfigDlg.cpp
VsConfigDlg.h
VsGui.cpp
VsGui.h
VsGui.mak
VsGui.rc
VsGuiDlg.cpp
VsGuiDlg.h
VsSweep.cpp
VsSweep.h
Development cycle
In order to use the DLL, one should take the following steps:
a) Add #include "vsdrvr.h" statements to all files that access the VariSpec software
b) Add vsdrvr.lib to the list of modules searched by the linker
c) Place a copy of vsdrvr.dll either in the folder that contains the executable code for the program being developed or, preferably, in the Windows system folder.
Failures in step a) will lead to compiler errors; in step b) to linker errors; in step c) to a run-time error message that "a required .DLL file, vsdrvr.dll, was not found".
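As a minimal sketch of these steps (error handling is abbreviated, and the COM port name is a placeholder, not a recommendation), a program that links against vsdrvr.lib might look like:
#include <stdio.h>
#include "vsdrvr.h"   /* step a): declaration file from the VsDrvr package */

int main(void)
{
    VS_HANDLE vsHnd = 0;
    Int32 errCode = 0;

    /* Open the filter on COM1 (port name is illustrative). */
    if (!VsOpen(&vsHnd, "COM1", &errCode)) {
        printf("VsOpen failed, error code %d\n", (int)errCode);
        return 1;
    }
    /* ... use the filter here ... */
    VsClose(vsHnd);   /* release the communications link */
    return 0;
}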
VariSpec filter configuration
The VariSpec filter communicates via ASCII commands sent over an RS-232 interface or USB. The RS-232 can operate at 9600 or 19,200 baud, while the USB appears as a virtual COMx device. While it appears to be present at either 9600 baud or 115.2 kbaud, the actual data transmission occurs at 12 MBaud over the USB.
Each command is terminated with an end-of-line terminator which can be either a carriage-return <c/r> or line feed <l/f>.
For RS-232 models, the baud rate and terminator character are selected using DIP switches inside the VariSpec electronics module. Default settings are 9600 baud, and the <c/r> character (denoted '\r' in the C language).
For USB devices, the terminator is always <c/r>.
For latest information, or to determine how to alter the settings from the factory defaults, consult the VariSpec manual.
Timing and latency
The VariSpec filter takes a finite time to process commands, which adds an additional delay to that imposed by simple communication delays. In general, the time to process a given command is short except for the following operations:
- filter initialization
- wavelength selection
- palette definition
The first of these is quite lengthy (30 seconds or more) because it involves measurements and exercising of the liquid crystal optics. The latter two are much faster but still can take a significant amount of time (up to 300 ms) on the older RS-232 electronics due to the computations involved. On the newer, USB electronics, the latter two functions are completed in less than 5 ms.
For this reason, the functions that handle these actions offer the option of waiting until the action is complete before returning (so-called synchronous operation); although they can be called in an asynchronous mode where the function returns as soon as all commands have been sent to the VariSpec, without waiting for them to run to completion.
Another option is to use implicit palette tables. If this is enabled, by calling the VsEnableImplicitPalette() function, the driver will define the settings for a given wavelength once, then save the results within the VariSpec for faster access the next time that wavelength is used. Subsequent access times are essentially instantaneous, until either all of the 128 palette states are in use, or the palette is cleared via the VsClearPalette() command.
The VsIsReady() function can be used to determine whether a filter is done processing all commands. Ideally, one should check VsIsReady() using a timer or the like to wait efficiently, so that the host PC is free to do other tasks while waiting for the VariSpec.
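A minimal C sketch of this polling pattern follows. It assumes vsHnd was returned by VsOpen(), that passing FALSE for the confirm argument of VsSetWavelength() gives asynchronous operation as described later in this manual, and that VsIsReady() returns TRUE once all pending commands have completed; the 10 ms interval is arbitrary.
#include <windows.h>
#include "vsdrvr.h"

/* Tune asynchronously, then poll VsIsReady() while leaving the CPU free. */
static void tune_and_wait(VS_HANDLE vsHnd, double wl)
{
    VsSetWavelength(vsHnd, wl, FALSE);  /* FALSE = asynchronous, return at once */
    while (!VsIsReady(vsHnd))
        Sleep(10);                      /* poll at 10 ms intervals */
}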
The VariSpec always processes each command to completion before starting on the next command, and it has a 256 byte input buffer, so there is no problem issuing several commands at once; they will all be executed, and in the order given.
This also indicates another way to coordinate one's program with the VariSpec filter: one can issue any of the VsGetxxx() functions, which query the filter. Since these do not return until the filter has responded, one may be sure that there are no pending commands when the VsGetxxx() function completes.
The VsDrvr package provides for automatic re-try of commands up to 3 times, in the event that communications are garbled, and will wait up to 2 seconds for completion of serial commands. The number of retries can be set from 0 to 10, and the latency adjusted, if desired. However, there should be no need to do so. The hardware and software have been tested and observed to execute several million commands without a single communications error, so in practice the need for the retry protocol is very slight. Communication speed is not improved by reducing the latency, since commands proceed when all characters are received, and the latency time to time-out is only relevant when there is a communications lapse - and as noted, these are very unlikely, so the performance burden of retries should not be a practical consideration.
Multiple Filters and Multiple Processes
These routines only permit one VariSpec per process, and one process per VariSpec. So, these routines cannot control multiple filters at once from a single process; nor can several active processes seek to control the same filter at the same time.
The VsDrvr package anticipates a future upgrade to enable control of multiple filters per process, so it makes use of an integer handle to identify which VariSpec is being controlled, even though (for now) only a single filter can be active. This handle is checked, and the correct handle must be used in all calls.
Program flow and sequence
Typical programs should use the following API calls; a minimal C sketch of this sequence is given after the list.
(all applications, upon initiating link to the filter)
- call VsOpen() to establish communications link (required)
- call VsIsPresent() to confirm a filter is actually present
- call VsIsReady() in case filter is still doing power-up sequence
<wait until no longer busy>
- call VsGetFilterIdentity() to learn wavelength limits and serial number if needed
(if setting wavelengths via implicit palettes; recommended especially with older filters)
- call VsEnableImplicitPalette()
(to set wavelengths, either directly or via implicit palettes)
- call VsSetWavelength() and VsGetWavelength() to select and retrieve tuning
(if setting wavelengths by means of palettes, and managing palettes explicitly)
- call VsDefinePalette() and VsClearPalette() to define palette entries
- call VsSetPalette() and VsGetPalette() to select and retrieve palette state
(all applications, when done with the filter)
- call VsClose() to release the communications link (required)
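The sketch below strings these calls together. It is illustrative only, not part of the VsDrvr package: the port name, wavelength, polling interval, and the absence of real error handling are all placeholders.
#include <windows.h>
#include <stdio.h>
#include "vsdrvr.h"

int main(void)
{
    VS_HANDLE h = 0;
    Int32 err = 0;
    double wl = 0.0;

    if (!VsOpen(&h, "COM1", &err)) return 1;       /* establish the link */
    if (!VsIsPresent(h)) { VsClose(h); return 1; } /* confirm a filter responds */
    while (!VsIsReady(h))                          /* wait out power-up init */
        Sleep(100);

    VsEnableImplicitPalette(h, TRUE);              /* optional: faster re-tuning */
    VsSetWavelength(h, 550.0, TRUE);               /* tune and confirm (550 nm is arbitrary) */
    VsGetWavelength(h, &wl);
    printf("tuned to %.2f nm\n", wl);

    VsClose(h);                                    /* release the link */
    return 0;
}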
Sample program
Source code for a sample program, VsGui, is provided, which illustrates how to control a VariSpec filter using the VsDrvr package. All filter control code lives in the VsGuiDlg.cpp module, specifically in the Connect(), RequestToSetWavelength(), and VsWriteTimerProc() functions. The latter two use a system timer to decouple the GUI from the actual filter control, for more responsive feedback to the user. Such an approach is unnecessary if palettes are used, which is preferable when one wishes the best real-time performance. See the VariSpec manual for further information.
Auxiliary commands
Certain commands are normally only used at the factory when filters are being built and configured, or in specialized configurations. These appear after the normal command set in the listing below.
Obsolescent commands
The VsConnect(), VsIsConnected(), and VsDisconnect() functions are obsolescent. They are supported in this release, but will not necessarily exist in releases after 1.3x.
As they are obsolescent, they are not recommended for new code. These function calls are not documented further in this manual.
Summary of commands
Normal Commands
VsClearError(vsHnd)
VsClearPalette(vsHnd)
VsClearPendingCommands(vsHnd)
VsClose(vsHnd)
VsDefinePalette(vsHnd, palEntry, wl)
VsEnableImplicitPalette(vsHnd, isEnabled)
VsGetError(vsHnd, *pErr)
VsGetFilterIdentity(vsHnd, *pVer, *pSerno, *pminWl, *pmaxWl)
VsGetMode(vsHnd, int *pMode)
VsGetPalette(vsHnd, *ppalEntryNo)
VsGetSettleMs(vsHnd, *psettleMs)
VsGetTemperature(vsHnd, *pTemperature)
VsGetWavelength(vsHnd, *pwl)
VsGetWavelengthAndWaves(vsHnd, double *pWl, double *pwaves)
VsGetWaveplateLimits(vsHnd, double *pminWaves, double *pmaxWaves)
VsGetWaveplateStages(vsHnd, int *pnStages)
VsIsPresent(vsHnd)
VsIsReady(vsHnd)
VsOpen(*pvsHnd, portName, *pErrorCode)
VsSetLatencyMs(vsHnd, nLatencyMs)
VsSetMode(vsHnd, mode)
VsSetPalette(vsHnd, palEntry)
VsSetRetries(vsHnd, nRetries)
VsSetWavelength(vsHnd, wl, confirm)
VsSetWavelengthAndWaves(vsHnd, wl, waveplateVector)
Auxiliary commands
VsGetAllDrive(vsHnd, *pStages, drive[])
VsGetNstages(vsHnd, *pStages)
VsGetPendingReply(vsHnd, reply, nChars, *pQuit, firstMs, subsequentMs)
VsGetReply(vsHnd, reply, nChars, waitMs)
VsIsDiagnostic(vsHnd)
VsIsEngagedInBeam(vsHnd)
VsSendBinary(vsHnd, bin[], nChars, clearEcho)
VsSendCommand(vsHnd, cmd, sendEolChar)
VsSetStageDrive(vsHnd, stage, drive)
VsThermistorCounts(vsHnd, *pCounts)
Alphabetical list of function calls
Syntax
Throughout this manual, the following conventions are used:
VSDRVR_API Int32 VsOpen(
VS_HANDLE *vsHnd,
LPCSTR port,
Int32 *pErrorCode)
Bold text is used for function names
Italics indicate variables whose names (or values) are supplied by the user in their code
Name-mangling
The declaration file vsdrvr.h includes statements that render the API names accurately in a C++ environment, i.e. free of the name-mangling decoration suffix that is normally added by C++ compilers. Thus the functions can be called freely from either C or C++ programs, using the names exactly as shown in this manual or in the VsDrvr.h file.
Call and argument declarations
The call protocol type, VSDRVR_API, is declared in vsdrvr.h, as are the types Int32 and VS_HANDLE.
Errors
All functions return an Int32 status value, which is TRUE if the routine completed successfully and FALSE if there was an error.
If there is an error in the VsOpen() function, the error is returned in *pErrorCode.
If there is an error in communicating with a filter after a successful VsOpen(), one should use the VsGetError() function to obtain the specific error code involved. This function returns VSD_ERR_NOERROR if there is no error pending.
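As a sketch of this error-handling pattern (the constant VSD_ERR_NOERROR and the functions used are those described in this manual; the helper itself and its message are illustrative):
#include <stdio.h>
#include "vsdrvr.h"

/* Check for a pending VariSpec error after a failed call. */
static void report_filter_error(VS_HANDLE vsHnd)
{
    Int32 err = 0;
    if (VsGetError(vsHnd, &err) && err != VSD_ERR_NOERROR) {
        printf("VariSpec error code: %d\n", (int)err);
        VsClearError(vsHnd);   /* clear the error and reset the error LED */
    }
}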
Main and auxiliary functions
The next section provides a description of the main functions, in alphabetic order; followed by the auxiliary functions, also in alphabetical order. In normal use, one will probably have no need for the auxiliary functions, but this list is provided for completeness.
VSDRVR_API Int32 VsClearError(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Purpose: this function clears any pending error on the VariSpec. This resets the error LED on the filter, and sets the pending error to VSD_ERR_NOERROR.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsClearPalette(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: clears all elements of the current filter palette and renders the current palette element undefined.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsClearPendingCommands(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: clears all pending commands including any presently in-process
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsClose(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen(). May also be NULL, in which case all VariSpec filters are disconnected.
Function: Disconnects the filter.
Returns: TRUE if successful, FALSE otherwise
Notes: No other functions will work until VsOpen() is called to re-establish communications with the filter.
VSDRVR_API Int32 VsDefinePalette(
VS_HANDLE vsHnd,
Int32 palEntry,
double wl)
Arguments:
vsHnd handle value returned by VsOpen()
palEntry palette entry to be defined, in the range [0, 127]
wl wavelength associated with this palette entry
Function: creates a palette entry for the entry and wavelength specified. This palette entry can then be accessed using VsSetPalette() and VsGetPalette() functions.
Returns: TRUE if successful, FALSE otherwise
Notes: palettes provide a fast way to define filter settings for wavelengths that are to be repeatedly accessed. The calculations are performed once, at the time the palette element is defined, and the results are saved in a palette table so the filter can tune to that wavelength without repeating the underlying calculations. And, one may cycle through the palette table, once defined, by means of a TTL trigger signal to the filter electronics.
For more information about using palettes, consult the VariSpec user's manual.
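A short sketch of defining and then re-using palette entries follows; the wavelength values and entry numbers are illustrative only.
#include "vsdrvr.h"

/* Define palette entries 0..2 for three wavelengths, then cycle through them. */
static void use_palette(VS_HANDLE vsHnd)
{
    const double wls[3] = { 500.0, 550.0, 600.0 };   /* nm, illustrative */
    for (Int32 i = 0; i < 3; i++)
        VsDefinePalette(vsHnd, i, wls[i]);           /* calculations done once here */
    for (Int32 i = 0; i < 3; i++)
        VsSetPalette(vsHnd, i);                      /* fast tuning to saved settings */
}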
VSDRVR_API Int32 VsEnableImplicitPalette(
VS_HANDLE vsHnd,
BOOL implEnabled)
Arguments:
vsHnd handle value returned by VsOpen()
implEnabled selects whether to use implicit palette definition
Function: enables or disables implicit palette generation when wavelengths are defined using the VsSetWavelength function. If enabled, a new palette entry is created whenever a new wavelength is accessed, and the VsSetWavelength function will use this palette entry whenever that wavelength is accessed again, until the palette is cleared. The result is improved tuning speed; however, it means that the palette contents are altered dynamically, which can be a problem if one relies upon the palette contents remaining fixed.
Clearing the palette with VsClearPalette() will clear all implicit palette entries as well as explicitly defined palette entries. This is useful if one knows that wavelengths used previously will not be used again, or that a new set of wavelengths is about to be defined and one wishes to make sure there is sufficient room in the palette.
Returns: TRUE if successful, FALSE otherwise
Notes: By default, the implicit palette is enabled for VariSpec filters that have RS-232 interface, and is disabled for newer VariSpec filters that have the USB interface. This is because the newer filters perform the filter tuning calculations fast enough that no performance improvement is obtained by using the implicit palette to set wavelength.
For more information about using palettes, consult the VariSpec user's manual.
VSDRVR_API Int32 VsGetError(
VS_HANDLE vsHnd,
Int32 *pErr)
Arguments:
vsHnd handle value returned by VsOpen()
pErr pointer to the int that will receive the most recent error code
Purpose: this function retrieves the most recent error code pending on the VariSpec and puts it to *pErr. If no error is pending, *pErr is set to VSD_ERR_NOERROR.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetFilterIdentity(
VS_HANDLE vsHnd,
Int32 *pVer,
Int32 *pSerno,
double *pminWl,
double *pmaxWl
Arguments:
vsHnd handle value returned by VsOpen()
pVer pointer to variable that receives the filter firmware version
pSerno pointer to variable that receives the filter serial number
pminWl pointer to variable that receives the filter's minimum wavelength
pmaxWl pointer to variable that receives the filter's maximum wavelength
Purpose: this function reads the filter's identifying information from the VariSpec and puts it into the call variables. Any one of the pointers may be NULL, in which case that piece of information is not returned.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetMode(
VS_HANDLE vsHnd,
Int32 *pMode
Arguments:
vsHnd handle value returned by VsOpen()
pMode pointer to variable that receives the filter mode
Purpose: this function enables one to read the filter's present mode. The mode describes how the filter responds to hardware triggers, and is described in the filter manual.
If the pointer *pMode is NULL, no information is returned.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetPalette(
VS_HANDLE vsHnd,
Int32 *ppalEntry
Arguments:
vsHnd handle value returned by VsOpen()
ppalEntry pointer to int that receives the 0-based palette entry number.
This pointer may not be NULL.
Purpose: this function determines what palette entry is currently active and returns it to *ppalEntry. If the present palette entry is undefined, it sets *ppalEntry to ? and returns a successful status code.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetSettleMs(
VS_HANDLE vsHnd,
Int32 *pSettleMs
Arguments:
vsHnd handle value returned by VsOpen()
pSettleMs pointer to variable that receives the filter settling time
Purpose: this function returns the filter's settling time, in milliseconds. This is useful for establishing overall system timing. The settling time is defined as beginning at the moment that the electronics have processed the request to change wavelength, as determined by VsIsReady() or equivalent. At that moment, the new set of drive signals is applied to the optics, and the optics will settle in *pSettleMs milliseconds.
The settling time is defined as a 95% settling time, meaning the filter has settled to 95% of its ultimate transmission value at the new wavelength being tuned to.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetTemperature(
VS_HANDLE vsHnd,
double *pTemperature
Arguments:
vsHnd handle value returned by VsOpen()
pTemperature pointer to double that will receive the filter temperature, in C
This pointer may not be NULL
Purpose: this function determines the filter temperature using the VariSpec temperature query command, and puts the result to *pTemperature.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetWavelength(
VS_HANDLE vsHnd,
double *pwl
Arguments:
vsHnd handle value returned by VsOpen()
pwl pointer to double that will receive the filter wavelength, in nm
This pointer may not be NULL
Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetWavelengthAndWaves(
VS_HANDLE vsHnd,
double *pwl,
double *pwaves
Arguments:
vsHnd handle value returned by VsOpen()
pwl pointer to double that will receive the filter wavelength, in nm.
This pointer may not be NULL
pwaves pointer to double array that will receive one or more waveplate settings. The actual number of settings may be determined by VsGetWaveplateStages().
Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code. If the present wavelength is defined, it also returns the waves of retardance at each of the polarization analysis waveplates in the optics, in the pwaves[] array.
Returns: TRUE if successful, FALSE otherwise
Notes: See the description of the VsGetWaveplateStages() command for more detail on what stages are considered waveplates.
VSDRVR_API Int32 VsGetWaveplateLimits(
VS_HANDLE vsHnd,
double *pminWaves,
double *pmaxWaves
Arguments:
vsHnd handle value returned by VsOpen()
pminWaves pointer to double array that will receive the minimum retardances possible at each of the waveplate stages in the filter optics.
pmaxWaves pointer to double array that will receive the maximum retardances possible at each of the waveplate stages in the filter optics
Purpose: this function determines the range of retardances that are possible at each waveplate stage, in waves, at the present wavelength setting. Note that the retardance range is itself a function of wavelength, so the results will vary as the wavelength is changed.
Returns: TRUE if successful, FALSE otherwise
Notes: See the description of the VsGetWaveplateStages command for more detail on what stages are considered waveplates.
VSDRVR_API Int32 VsGetWaveplateStages(
VS_HANDLE vsHnd,
Int32 *pnwpStages
Arguments:
vsHnd handle value returned by VsOpen()
pnwpStages pointer to Int32 that will receive the number of waveplate stages in the filter optics. This pointer may not be NULL
Purpose: this function determines how many polarization analysis stages are present in the optics and returns this number. Note that although all VariSpec filters operate by means of variable retarder element, optical stages that perform wavelength tuning rather than polarization analysis are not treated as waveplate stages.
For example, most VariSpec filters do not include any polarization analysis stages and thus report no waveplates. VsGetWaveplateStages will return a value of 2 for conventional PolScope optics.
In contrast, VsGetNstages() reports the total number of stages in a filter, including stages that perform polarization analysis and stages that perform wavelength tuning.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsIsPresent(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: determines whether a filter is actually present and responding. This is done using the status-check character '?', as described in the VariSpec manual.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsIsReady(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: determines whether the filter is done processing all commands, and is ready to receive new commands.
Returns: TRUE if successful, FALSE otherwise
Notes: this is useful when sending commands such as VsSetWavelength(), VsInitialize(), VsExercise(), and VsDefinePalette() in asynchronous mode. These commands take a prolonged time, and running them synchronously ties up the processor waiting. Alternatively, one can create a loop that uses CreateWaitableTimer(), SetWaitableTimer(), and WaitForSingleObject() to call VsIsReady() at intervals, checking whether the filter is ready. This approach, though more work for the programmer, leaves most of the processor capacity free for other tasks such as GUI update and the like.
VSDRVR_API Int32 VsOpen (VS_HANDLE *pvsHnd,
LPCSTR port,
Int32 *pErrorCode
Arguments:
pvsHnd pointer to handle. This pointer may not be NULL.
port port name, such as "COM1"
pErrorCode pointer to Int32 to receive an error code if VsOpen() fails
Purpose: establishes a connection to the VariSpec using the port specified, and automatically determines the baud rate and end-of-line character for subsequent communications. It also retrieves the filter's serial number and wavelength range, to confirm that it is a VariSpec and not some other similar device. However, these are retrieved purely as an integrity check, and the values are not returned to the calling application. See VsGetFilterIdentity() to access this information.
If the device responds as a VariSpec does when it is not ready (i.e. still initializing), VsOpen() fails and returns the error code VSD_ERR_BUSY. However, one may not be sure that the device is a VariSpec until VsOpen() completes successfully.
The error codes returned by this function are listed in VsDrvr.h. When VsOpen() runs successfully, *pErrorCode is set to VSD_ERR_NOERROR.
The handle associated with this filter is set by VsOpen() to a nonzero handle value if successful, or to NULL if no connection is established.
The port may refer to COM1 through COM8.
Return: TRUE if successful, FALSE otherwise
Notes: Until this function is called, none of the other functions will work.
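As an illustration of the port scan mentioned in the VsGui release notes earlier in this document, the sketch below tries COM1 through COM8 in turn. It is a hedged example, not part of the package; real code should also consult which COMx ports actually exist on the machine.
#include <stdio.h>
#include "vsdrvr.h"

/* Try COM1..COM8 in turn and return a handle to the first responding filter. */
static VS_HANDLE find_filter(void)
{
    char port[8];
    for (int i = 1; i <= 8; i++) {
        VS_HANDLE h = 0;
        Int32 err = 0;
        sprintf(port, "COM%d", i);
        if (VsOpen(&h, port, &err) && VsIsPresent(h))
            return h;          /* found a VariSpec on this port */
        if (h)
            VsClose(h);        /* opened, but no filter responding; try the next port */
    }
    return 0;                  /* no filter found */
}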
VSDRVR_API Int32 VsSetLatencyMs(
VS_HANDLE vsHnd,
Int32 latencyMs
Arguments:
vsHnd handle value returned by VsOpen()
latencyMs the serial port latency, in ms, in the range [1, 5000]
Purpose: this function sets the latency time for USB or RS-232 commands to the value given by latencyMs. Commands that do not conclude in this time are considered to have timed-out.
Returns: TRUE if successful, FALSE otherwise
Notes: increasing the latency time does not increase the time for commands to complete, nor does it insert any delays in normal processing. It merely defines the window for maximum transmission time, beyond which time an error is reported.
VSDRVR_API Int32 VsSetPalette(
VS_HANDLE vsHnd,
Int32 palEntry
Arguments:
vsHnd handle value returned by VsOpen()
palEntry the palette entry to be set, in the range [0, 127]
Purpose: this function sets the filter to the palette entry specified by palEntry
Returns: TRUE if successful, FALSE otherwise
Notes: palettes are a good way to control the filter in applications where it will be cycled repeatedly to various, fixed wavelength settings. Palettes calculate the filter settings once, and save the results for rapid access later, rather than calculating them each time, as occurs when one sets the wavelength directly with VsSetWavelength(). See the VariSpec manual for more information on palettes.
VSDRVR_API Int32 VsSetRetries(
VS_HANDLE vsHnd,
Int32 nRetries
Arguments:
vsHnd handle value returned by VsOpen()
nRetries the number of serial communications retries, in the range [0, 10]
Purpose: The VsDrvr software automatically detects errors in communication and re-sends if an error is detected. This function sets the number of times to retry sending any single command before reporting a communications failure. The default is 3, which should be adequate, and one should rarely need to change this, if ever. The primary purpose of this function is to enable setting the number of retries to zero, to force single-error events to cause detectable errors (as they would normally be fixed automatically via the retry mechanism).
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsSetWavelength(
VS_HANDLE vsHnd,
double wl,
BOOL confirm
Arguments:
vsHnd handle value returned by VsOpen()
wl wavelength to tune to, in nm
confirm logical flag, indicating whether to confirm actual wavelength value
Purpose: this function sets the filter wavelength to the value in wl. If confirm is TRUE, it waits for the filter to complete the command, and then reads back the actual wavelength to confirm it was implemented successfully. Note that the only time there can be a disparity is when the wavelength requested by wl lies outside the legal range for that filter, or if the wavelength is specified to a finer resolution than the filter recognizes (normally, 0.01 nm).
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetAllDrive(
VS_HANDLE vsHnd,
Int32 *pStages,
Int32 drive[]
Arguments:
vsHnd handle value returned by VsOpen()
pStages pointer to int that will receive the number of stages in the filter
drive[] int array to receive the filter drive levels.
Purpose: this function reports the number of filter stages in *pStages. If this argument is NULL, it is ignored. The function returns the actual drive level at each stage, in counts, in drive[], which must not be NULL.
Returns: TRUE if successful, FALSE otherwise
Notes: The array drive[] must be large enough to receive all the drive levels - if the exact number of stages is not known, call VsGetNstages() first, or allocate enough array elements (12) to accommodate the largest filter design.
VSDRVR_API Int32 VsGetNstages(
VS_HANDLE vsHnd,
Int32 *pStages
Arguments:
vsHnd handle value returned by VsOpen()
pStages pointer to int that will receive the number of stages in the filter
Purpose: this function determines the number of optical stages in the filter and returns it in *pStages, which may not be NULL.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsGetPendingReply(
VS_HANDLE vsHnd,
LPSTR reply,
Int32 nChars,
Int32 *pQuit,
Int32 firstMs,
Int32 subsequentMs
Arguments:
vsHnd handle value returned by VsOpen()
reply pointer to buffer that is to receive the reply
nChars number of characters to receive
pQuit pointer to flag to control this function - see Notes below
firstMs maximum time to wait, in ms, for first character of reply
subsequentMs maximum time to wait, in ms, for each subsequent character
Purpose: this function is used to exploit some of the less-common aspects of the filter, and it is unlikely that most programs will require its use. It receives a reply from the filter that may not arrive for a long time. The routine waits up to firstMs for the first character to arrive. Subsequent characters must arrive within subsequentMs of one another. Typically, this routine is called with a high value for firstMs and a lower value for subsequentMs.
Returns: TRUE if successful, FALSE otherwise
Notes: pQuit can be used to cancel this function while it is waiting for the reply, if that is desired, such as to respond to a user cancellation request. To use this feature, pQuit must be non-NULL and *pQuit must be FALSE at the time VsGetPendingReply() is called. VsGetPendingReply() checks this address periodically, and if it discovers that *pQuit is TRUE, it will cancel and return immediately.
VSDRVR_API Int32 VsGetReply(
VS_HANDLE vsHnd,
LPSTR reply,
Int32 nChars,
Int32 waitMs
Arguments:
vsHnd handle value returned by VsOpen()
reply pointer to buffer that will receive the filter reply
nChars the number of characters sought
waitMs the maximum time, in ms, to wait for the reply
Purpose: this function is used to exploit those filter commands that are not directly provided by other functions, and most programmers will not need to use it. If the reply is not received in the time indicated by waitMs, or if less than nChars are received, the function returns with an unsuccessful status code.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsIsDiagnostic(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: determines whether the filter is in the diagnostic mode that is used at the factory for setup and calibration. This command is reserved for CRI use only.
Returns: TRUE if diagnostic, FALSE otherwise.
VSDRVR_API Int32 VsIsEngagedInBeam(
VS_HANDLE vsHnd
Arguments:
vsHnd handle value returned by VsOpen()
Function: determines whether the filter is engaged in the beam, when configured into certain CRI systems. This function is reserved for CRI use only
Returns: TRUE if engaged in the beam, FALSE otherwise
VSDRVR_API Int32 VsSendBinary(
VS_HANDLE vsHnd,
char *bin,
Int32 nChars,
BOOL clearEcho
Arguments:
vsHnd handle value returned by VsOpen()
bin pointer to a buffer that contains binary data to be sent to the filter
nChars the number of binary characters to be sent
clearEcho flag indicating whether to clear echo characters from the queue
Purpose: this routine sends binary blocks of data to the filter. This is only necessary when programming calibration data to the filter, and it is not anticipated that this function will be necessary in any normal use.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsSendCommand(
VS_HANDLE vsHnd,
LPCSTR cmd,
BOOL sendEolChar)
Arguments:
vsHnd handle value returned by VsOpen()
cmd pointer to the command to be sent to the filter
sendEolChar flag indicating whether to append the end-of-line character or not
Purpose: this function sends the command in cmd to the filter, and appends an end-of-line terminator (or not) based on sendEolChar. It automatically retrieves and discards the character echo of this command by the VariSpec. It does not automatically retrieve the reply, if any, from the VariSpec.
Returns: TRUE if successful, FALSE otherwise
Notes: The parameter sendEolChar should normally be true in all cases, unless one is sending individual single-character commands such as the status-check character described in the VariSpec user's manual.
VSDRVR_API Int32 VsSetStageDrive(
VS_HANDLE vsHnd,
Int32 stage,
Int32 drive
Arguments:
vsHnd handle value returned by VsOpen()
stage stage number whose drive level is to be adjusted
drive drive level, in counts, for that stage
Purpose: this function provides a way to manually adjust the drive levels at each of the filter's optical stages. It is normally used only during manufacture, and is not a function that most software programs will have any reason to use.
Returns: TRUE if successful, FALSE otherwise
Notes: none
VSDRVR_API Int32 VsThermistorCounts(
VS_HANDLE vsHnd,
Int32 *pCounts
Arguments:
vsHnd handle value returned by VsOpen()
pCounts pointer to int that will receive the thermistor signal, in counts
Purpose: this function provides a way to determine the signal level, in counts, at the thermistor. It is normally used only during manufacture, and is not a function that most software programs will have any reason to use.
Returns: TRUE if successful, FALSE otherwise
Notes: none
CVS Reporting Tool
Date: 2004-11-11
I came across the StatCvs-XML tool on the company newsgroup. It is an open source project used mainly to generate reports on CVS changes, including graphical displays, and it works very well.
1. Visit http://statcvs-xml.berlios.de/ and download the tool JAR package: statcvs-xml-0.9.4-full.jar;
2. Steps:
It takes three steps to create reports for a CVS module:
(1). Check out a copy of the module from CVS
(2). Create a CVS log for the module
(3). Run StatCvs-XML
Detail:
(1).Checking out a module from CVS
You can skip this step if you have already checked out a working copy. Typically, the command looks somewhat like this (replace [cvsroot] and [module] with the cvs module and root you want to check out):
cvs -d[cvsroot] checkout [module]
cvs -q update
(2). Creating a CVS log file
Change into the directory where you have checked out the module, and use the cvs log command to create a log file.
cvs log > cvs.log
(3). Running StatCvs
StatCvs is run using the command:
java -jar statcvs-xml-full.jar
This will generate the reports as HTML documents in a directory statcvs-xml-out/. The directory will be created if it does not exist yet. Point your browser to statcvs-xml-out/index.html to access the table of contents of the generated reports.
If you run into an out-of-memory error, use the following instead:
java -Xmx512M -jar statcvs-xml.jar -output-dir cvsreport
SQUASHFS 1.3r3 - A squashed read-only filesystem for Linux
Released under the GPL licence (version 2 or later).
Squashfs is currently at version 1.3 release 3. Please see the CHANGES file
for recent changes to squashfs.
Squashfs is a highly compressed read-only filesystem for Linux.
It uses zlib compression to compress both files, inodes and directories.
Inodes in the system are very small and all blocks are packed to minimise
data overhead. Block sizes greater than 4K are supported up to a maximum
of 32K.
Squashfs is intended for general read-only filesystem use, for archival
use (i.e. in cases where a .tar.gz file may be used), and in constrained
block device/memory systems (e.g. embedded systems) where low overhead is
needed.
The section 'mksquashfs' gives information on using the mksquashfs tool to
create and append to squashfs filesystems. The 'using squashfs' section
gives information on mounting and using squashfs filesystems stored on block
devices and as normal files using the loopback device.
1. Squashfs overview
--------------------
1. Data, inodes and directories are compressed.
2. Squashfs stores full uid/gids (32 bits), and file creation time.
3. Files up to 2^32 bytes are supported. Filesystems can be up to
2^32 bytes.
4. Inode and directory data are highly compacted, and packed on byte
boundaries. Each compressed inode is on average 8 bytes in length
(the exact length varies on file type, i.e. regular file, directory,
symbolic link, and block/char device inodes have different sizes).
5. Squashfs can use block sizes up to 32K (the default size is 32K).
Using 32K blocks achieves greater compression ratios than the normal
4K block size.
6. File duplicates are detected and removed.
7. Both big and little endian architectures are supported. Squashfs can
mount filesystems created on different byte order machines.
2. mksquashfs
-------------
As squashfs is a read-only filesystem, the mksquashfs program must be used to
create populated squashfs filesystems. Beginning with Squashfs 1.2, mksquashfs
will also append directories and files to pre-existing squashfs filesystems, see
the following 'appending to squashfs filesystems' subsection.
SYNTAX: mksquashfs source1 source2 ... dest [options] [-e list of exclude dirs/files]
Options are
-info print files written to filesystem
-b <block size>  size of blocks in filesystem, default 32768
-noappend Do not append to existing filesystem on dest, write a new filesystem
This is the default action if dest does not exist, or if no filesystem is on it
-keep-as-directory If one source directory is specified, create a root directory
containing that directory, rather than the contents of the directory
-root-becomes name When appending source files/directories, make the original
root become a subdirectory in the new root called name, rather
than adding the new source items to the original root
-noI -noInodeCompression do not compress inode table
-noD -noDataCompression do not compress data blocks
-nopad do not pad filesystem to a multiple of 4K
-check_data add checkdata for greater filesystem checks
-le create a little endian filesystem
-be create a big endian filesystem
-ef <exclude file>  file is a list of exclude dirs/files - one per line
-version print version, licence and copyright message
Source1 source2 ... are the source directories/files containing the
files/directories that will form the squashfs filesystem. If a single
directory is specified (i.e. mksquashfs source output_fs) the squashfs
filesystem will consist of that directory, with the top-level root
directory corresponding to the source directory.
If multiple source directories or files are specified, mksquashfs will merge
the specified sources into a single filesystem, with the root directory
containing each of the source files/directories. The name of each directory
entry will be the basename of the source path. If more than one source
entry maps to the same name, the conflicts are named xxx_1, xxx_2, etc. where
xxx is the original name.
To make this clear, take two example directories. Source directory
"/home/phillip/test" contains "file1", "file2" and "dir1".
Source directory "goodies" contains "goodies1", "goodies2" and "goodies3".
usage example 1:
%mksquashfs /home/phillip/test output_fs
This will generate a squashfs filesystem with root entries
"file1", "file2" and "dir1".
example 2:
%mksquashfs /home/phillip/test goodies output_fs
This will create a squashfs filesystem with the root containing
entries "test" and "goodies" corresponding to the source
directories "/home/phillip/test" and "goodies".
example 3:
%mksquashfs /home/phillip/test goodies test output_fs
This is the same as the previous example, except a third
source directory "test" has been specified. This conflicts
with the first directory named "test" and will be renamed "test_1".
Multiple sources allow filesystems to be generated without needing to
copy all source files into a common directory. This simplifies creating
filesystems.
The -keep-as-directory option can be used when only one source directory
is specified, and you wish the root to contain that directory, rather than
the contents of the directory. For example:
example 4:
%mksquashfs /home/phillip/test output_fs -keep-as-directory
This is the same as example 1, except for -keep-as-directory.
This will generate a root directory containing directory "test",
rather than the "test" directory contents "file1", "file2" and "dir1".
The Dest argument is the destination where the squashfs filesystem will be
written. This can either be a conventional file or a block device. If the file
doesn't exist it will be created, if it does exist and a squashfs
filesystem exists on it, mksquashfs will append. The -noappend option will
write a new filesystem irrespective of whether an existing filesystem is present.
The -e and -ef options allow files/directories to be specified which are
excluded from the output filesystem. The -e option takes the exclude
files/directories from the command line, the -ef option takes the
exclude files/directories from the specified exclude file, one file/directory
per line. If an exclude file/directory is absolute (i.e. prefixed with /, ../,
or ./) the entry is treated as absolute, however, if an exclude file/directory
is relative, it is treated as being relative to each of the sources in turn, e.g.
%mksquashfs /tmp/source1 source2 output_fs -e ex1 /tmp/source1/ex2 out/ex3
Will generate exclude files /tmp/source1/ex2, /tmp/source1/ex1, source2/ex1,
/tmp/source1/out/ex3 and source2/out/ex3.
The -e and -ef exclude options are usefully used in archiving the entire
filesystem, where it is wished to avoid archiving /proc, and the filesystem
being generated, i.e.
%mksquashfs / /tmp/root.sqsh -e proc /tmp/root.sqsh
Multiple -ef options can be specified on the command line, and the -ef
option can be used in conjunction with the -e option.
The -info option displays the files/directories as they are compressed and
added to the filesystem. The compression percentage achieved is printed, with
the original uncompressed size. If the compression percentage is listed as
0% it means the file is a duplicate.
The -b option allows the block size to be selected, this can be either
512, 1024, 2048, 4096, 8192, 16384, or 32768 bytes.
The -noI and -noD options (also -noInodeCompression and -noDataCompression)
can be used to force mksquashfs to not compress inodes/directories and data
respectively. Giving both options generates an uncompressed filesystem.
The -le and -be options can be used to force mksquashfs to generate a little
endian or big endian filesystem. Normally mksquashfs will generate a
filesystem in the host byte order. Squashfs, for portability, will
mount different ordered filesystems (i.e. it can mount big endian filesystems
running on a little endian machine), but these options can be used for
greater optimisation.
The -nopad option informs mksquashfs to not pad the filesystem to a 4K multiple.
This is performed by default to enable the output filesystem file to be mounted
by loopback, which requires files to be a 4K multiple. If the filesystem is
being written to a block device, or is to be stored in a bootimage, the extra
pad bytes are not needed.
2.1 appending to squashfs filesystems
-------------------------------------
Beginning with squashfs1.2, mksquashfs can append to existing squashfs
filesystems. Three extra options "-noappend", "-keep-as-directory",
and "root-becomes" have been added.
Running mksquashfs with a destination file that already contains a squashfs
filesystem will add the source items to the existing filesystem. By default,
the source items are added to the existing root directory.
To make this clear... An existing filesystem "image" contains root entries
"old1", and "old2". Source directory "/home/phillip/test" contains "file1",
"file2" and "dir1".
example 1:
%mksquashfs /home/phillip/test image
Will create a new "image" with root entries "old1", "old2", "file1", "file2" and
"dir1"
example 2:
%mksquashfs /home/phillip/test image -keep-as-directory
Will create a new "image" with root entries "old1", "old2", and "test".
As shown in the previous section, for single source directories
'-keep-as-directory' adds the source directory rather than the
contents of the directory.
example 3:
%mksquashfs /home/phillip/test image -keep-as-directory -root-becomes original-root
Will create a new "image" with root entries "original-root", and "test". The
'-root-becomes' option specifies that the original root becomes a subdirectory
in the new root, with the specified name.
The append option, combined with file duplicate detection, means squashfs can be
used as a simple versioning archiving filesystem. A squashfs filesystem can
be created with for example the linux-2.4.19 source. Appending the linux-2.4.20
source will create a filesystem with the two source trees, but only the
changed files will take extra room, the unchanged files will be detected as
duplicates.
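As an illustration (the directory and archive names here are only examples), such a
versioning archive could be built with two runs of mksquashfs, using
-keep-as-directory so that each source tree appears as its own subdirectory in the root:
%mksquashfs linux-2.4.19 sources.sqsh -keep-as-directory
%mksquashfs linux-2.4.20 sources.sqsh -keep-as-directory
The second run appends to the filesystem created by the first, and duplicate
detection ensures that only the files which differ between the two trees take
extra room.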
3. Using squashfs
-----------------
Squashfs filesystems should be mounted with 'mount' with the filesystem type
'squashfs'. If the filesystem is on a block device, the filesystem can be
mounted directly, e.g.
%mount -t squashfs /dev/sda1 /mnt
Will mount the squashfs filesystem on "/dev/sda1" under the directory "/mnt".
If the squashfs filesystem has been written to a file, the loopback device
can be used to mount it (loopback support must be in the kernel), e.g.
%mount -t squashfs image /mnt -o loop
Will mount the squashfs filesystem in the file "image" under
the directory "/mnt".
4. Filesystem layout
--------------------
Brief filesystem design notes follow.
A squashfs filesystem consists of five parts, packed together on a byte alignment:
---------------
| superblock |
|---------------|
| data |
| blocks |
|---------------|
| inodes |
|---------------|
| directories |
|---------------|
| uid/gid |
| lookup table |
---------------
Compressed data blocks are written to the filesystem as files are read from
the source directory, and checked for duplicates. Once all file data has been
written the completed inode, directory and uid/gid lookup tables are written.
4.1 Metadata
------------
Metadata (inodes and directories) are compressed in 8Kbyte blocks. Each
compressed block is prefixed by a two byte length, the top bit is set if the
block is uncompressed. A block will be uncompressed if the -noI option is set,
or if the compressed block was larger than the uncompressed block.
Inodes are packed into the metadata blocks, and are not aligned to block
boundaries, therefore inodes overlap compressed blocks. An inode is
identified by a two field tuple <start address of compressed block : offset
into de-compressed block>.
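To make this concrete, the following is a minimal C sketch of how a reader might
interpret a metadata block header and an inode reference. It is based only on the
description above; the constant, structure and field names are illustrative and are
not taken from the squashfs sources.

#include <stdint.h>
#include <stdio.h>

#define METADATA_BLOCK_SIZE  8192      /* metadata is compressed in 8 Kbyte blocks */
#define UNCOMPRESSED_BIT     0x8000    /* top bit of the two byte length word      */

/* Interpret the two byte length word that prefixes each metadata block. */
static void describe_metadata_header(uint16_t length_word)
{
    unsigned int length = length_word & ~UNCOMPRESSED_BIT;
    int uncompressed = (length_word & UNCOMPRESSED_BIT) != 0;

    printf("block occupies %u bytes on disk, %s\n",
           length, uncompressed ? "stored uncompressed" : "compressed");
}

/* An inode is identified by a two field tuple. */
struct inode_ref {
    uint32_t start_block;   /* start address of the compressed metadata block */
    uint16_t offset;        /* offset into the decompressed block             */
};

int main(void)
{
    struct inode_ref ref = { .start_block = 0x1200, .offset = 96 };

    describe_metadata_header(0x8123);   /* top bit set: 0x123 byte uncompressed block */
    printf("inode at metadata block 0x%x, offset %u\n",
           (unsigned)ref.start_block, (unsigned)ref.offset);
    return 0;
}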
Inode contents vary depending on the file type. The base inode consists of:
base inode:
Inode type
Mode
uid index
gid index
The inode type is 4 bits in size, and the mode is 12 bits.
The uid and gid indexes are 4 bits in length. Ordinarily, this will allow 16
unique indexes into the uid table. To minimise overhead, the uid index is
used in conjunction with the spare bit in the file type to form a 48 entry
index as follows:
inode type 1 - 5: uid index = uid
inode type 6 - 10: uid index = 16 + uid
inode type 11 - 15: uid index = 32 + uid
In this way 48 unique uids are supported using 4 bits, minimising data inode
overhead. The 4 bit gid index is used to index into a 15 entry gid table.
Gid index 15 is used to indicate that the gid is the same as the uid.
This prevents the 15 entry gid table filling up with the common case where
the uid/gid is the same.
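A small C sketch of the index scheme just described follows; the table sizes and
band boundaries are taken from the text, while the function and field names are
purely illustrative.

#include <stdint.h>

struct id_tables {
    uint32_t uid[48];   /* 48 entry uid table */
    uint32_t gid[15];   /* 15 entry gid table */
};

/* The 4 bit uid index is combined with the stored inode type to select
 * one of three bands of 16 uid table entries. */
static uint32_t decode_uid(const struct id_tables *t,
                           unsigned stored_type, unsigned uid_index)
{
    unsigned index;

    if (stored_type <= 5)
        index = uid_index;              /* first band of 16 entries  */
    else if (stored_type <= 10)
        index = 16 + uid_index;         /* second band               */
    else
        index = 32 + uid_index;         /* third band                */

    return t->uid[index];
}

/* gid index 15 means the gid is the same as the uid. */
static uint32_t decode_gid(const struct id_tables *t,
                           unsigned gid_index, uint32_t uid)
{
    return gid_index == 15 ? uid : t->gid[gid_index];
}

int main(void)
{
    struct id_tables t = { .uid = { [16] = 1000 }, .gid = { 100 } };
    uint32_t uid = decode_uid(&t, 7, 0);     /* second band, entry 16 -> 1000 */

    return decode_gid(&t, 15, uid) == uid ? 0 : 1;
}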
The data contents of symbolic links are stored immediately after the symbolic
link inode, inside the inode table. This allows the normally small symbolic
link to be compressed as part of the inode table, achieving much greater
compression than if the symbolic link was compressed individually.
Similarly, the block index for regular files is stored immediately after the
regular file inode. The block index is a list of block lengths (two bytes
each), rather than block addresses, saving two bytes per block. The block
address for a given block is computed by the summation of the previous
block lengths. This takes advantage of the fact that the blocks making up a
file are stored contiguously in the filesystem. The top bit of each block
length is set if the block is uncompressed, either because the -noD option is
set, or if the compressed block was larger than the uncompressed block.
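The following C sketch shows how block addresses could be recovered from the
length list described above; again, the names and the constant are illustrative
rather than drawn from the squashfs sources.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define BLOCK_UNCOMPRESSED_BIT 0x8000   /* top bit of each two byte block length */

/* Blocks of a file are stored contiguously, so the address of block 'n' is the
 * address of the first block plus the sum of the preceding block lengths. */
static uint64_t data_block_address(uint64_t first_block_addr,
                                   const uint16_t *lengths, size_t n)
{
    uint64_t addr = first_block_addr;

    for (size_t i = 0; i < n; i++)
        addr += lengths[i] & ~BLOCK_UNCOMPRESSED_BIT;

    return addr;
}

int main(void)
{
    /* three data blocks of 10000, 12000 and 9000 bytes on disk */
    uint16_t lengths[] = { 10000, 12000, 9000 };

    printf("block 2 starts at offset %llu\n",
           (unsigned long long)data_block_address(4096, lengths, 2));
    return 0;
}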
4.2 Directories
---------------
Like inodes, directories are packed into the metadata blocks, and are not
aligned on block boundaries, therefore directories can overlap compressed
blocks. A directory is, again, identified by a two field tuple
<start address of compressed block containing directory start : offset
into de-compressed block>.
Directories are organised in a slightly complex way, and are not simply
a list of file names and inode tuples. The organisation takes advantage of the
observation that in most cases, the inodes of the files in the directory
will be in the same compressed metadata block, and therefore, the
inode tuples will have the same start block.
Directories are therefore organised in a two level list, a directory
header containing the shared start block value, and a sequence of
directory entries, each of which share the shared start block. A
new directory header is written once/if the inode start block
changes. The directory header/directory entry list is repeated as many times
as necessary. The organisation is as follows:
directory_header:
count (8 bits)
inode start block (24 bits)
directory entry: * count
inode offset (13 bits)
inode type (3 bits)
filename size (8 bits)
filename
This organisation saves on average 3 bytes per filename.
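A C sketch of this layout follows. The field widths come from the list above; how
the 13 bit offset and 3 bit type share a 16 bit word is an assumption made for
illustration, and the names are not taken from the squashfs sources.

#include <stdint.h>

/* Directory header: shared by the entries that follow it. */
struct dir_header {
    uint8_t  count;          /* number of entries that follow (8 bits)     */
    uint32_t start_block;    /* shared inode start block (24 bits on disk) */
};

/* One directory entry; 'name_size' bytes of filename follow on disk. */
struct dir_entry {
    uint16_t offset;         /* inode offset into decompressed block (13 bits) */
    uint8_t  type;           /* inode type (3 bits)                            */
    uint8_t  name_size;      /* filename size (8 bits)                         */
};

/* Assumed packing: the 13 bit offset in the low bits and the 3 bit type
 * in the high bits of a single 16 bit word. */
static void unpack_entry(uint16_t word, struct dir_entry *e)
{
    e->offset = word & 0x1fff;
    e->type   = (word >> 13) & 0x7;
}

int main(void)
{
    struct dir_entry e;

    unpack_entry((2u << 13) | 100, &e);   /* type 2, offset 100 */
    return (e.offset == 100 && e.type == 2) ? 0 : 1;
}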
4.3 File data
-------------
File data is compressed on a block by block basis and written to the
filesystem. The filesystem supports up to 32K blocks, which achieves
greater compression ratios than the Linux 4K page size.
The disadvantage with using greater than 4K blocks (and the reason why
most filesystems do not), is that the VFS reads data in 4K pages.
The filesystem reads and decompresses a larger block containing that page
(e.g. 32K). However, only 4K can be returned to the VFS, resulting in a
very inefficient filesystem, as 28K must be thrown away. Squashfs
solves this problem by explicitly pushing the extra pages into the page
cache.
5. Author info
--------------
Squashfs was written by Phillip Lougher, email phillip@lougher.demon.co.uk,
in Chepstow, Wales, UK. If you like the program, or have any problems,
then please email me, as it's nice to get feedback!
[Multiple choice] When a table does not exist, which of the following SQL commands for creating it are NOT valid? A. CREATE TABLE IF NOT EXIST 'table_name' (columns)  B. CREATE TABLE IF NOT EXISTS 'table_name' (columns)  C. CREATE TABLE NOT EXIST 'table_name' (columns)  D. CREATE TABLE NOT EXISTS 'table_name' (columns)
Option A, CREATE TABLE IF NOT EXIST 'table_name' (columns), is not valid; the correct command is B, CREATE TABLE IF NOT EXISTS 'table_name' (columns).
Options C, CREATE TABLE NOT EXIST 'table_name' (columns), and D, CREATE TABLE NOT EXISTS 'table_name' (columns), are not valid either.
For example, the following uses the correct form of the command to create a table:
CREATE TABLE IF NOT EXISTS students (
    id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER,
    gender CHAR(1)
);
This command creates the table "students" only if it does not already exist, with four columns: id, name, age and gender.