import sqlite

connection = sqlite.connect('cache.db')
cur = connection.cursor()
cur.execute('''create table item
(id integer primary key, itemno text unique,
scancode text, descr text, price real)''')
connection.commit()
cur.close()
I'm catching this exception:
Traceback (most recent call last):
File "cache_storage.py", line 7, in <module>
scancode text, descr text, price real)''')
File "/usr/lib/python2.6/dist-packages/sqlite/main.py", line 237, in execute
self.con._begin()
File "/usr/lib/python2.6/dist-packages/sqlite/main.py", line 503, in _begin
self.db.execute("BEGIN")
_sqlite.OperationalError: database is locked
Permissions for cache.db are ok. Any ideas?
I'm presuming you are actually using sqlite3 even though your code says otherwise. Here are some things to check (a Python sketch of the first two checks follows this list):
- That you don't have a hung process sitting on the file (unix: $ fuser cache.db should say nothing)
- That there isn't a cache.db-journal file in the directory with cache.db; this would indicate a crashed session that hasn't been cleaned up properly.
- Ask the database shell to check itself: $ sqlite3 cache.db "pragma integrity_check;"
- Back up the database: $ sqlite3 cache.db ".backup cache.db.bak"
- Remove cache.db, as you probably have nothing in it (if you are just learning), and try your code again
- See if the backup works: $ sqlite3 cache.db.bak ".schema"
Failing that, read Things That Can Go Wrong and How To Corrupt Your Database Files.
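A minimal sketch of the first two checks from Python (file names follow the question; everything here is standard library):

import os
import sqlite3

# A leftover rollback journal usually means a crashed session still holds a lock.
if os.path.exists('cache.db-journal'):
    print('cache.db-journal exists: a previous session likely died mid-transaction')

# Ask SQLite to check the file; a healthy database prints 'ok'.
conn = sqlite3.connect('cache.db')
print(conn.execute('pragma integrity_check;').fetchone()[0])
conn.close()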
I know this is old, but I'm still getting the problem and this is the first link on Google for it. OP said his issue was that the .db was sitting on an SMB share, which was exactly my situation. My ten minutes' research indicates that this is a known conflict between sqlite3 and smb; I've found bug reports going back to 2007.
I resolved it by adding the "nobrl" option to my smb mount line in /etc/fstab, so that line now looks like this:
//SERVER/share /mnt/point cifs credentials=/path/to/.creds,sec=ntlm,nobrl 0 0
This option prevents your SMB client from sending byte-range locks to the server. I'm not too up on my SMB protocol details, but as best I can tell this setting would mostly be of concern in a multi-user environment, where somebody else might be trying to write to the same db as you. For a home setup, at least, I think it's safe enough.
My relevant versions:
Mint 17.1 Rebecca
SMB v4.1.6-Ubuntu
Python v3.4.0
SQLite v3.8.2
Network share is hosted on a Win12R2 server
The reason mine was showing the "lock" message was actually that I had the database open in an SQLite3 IDE on my Mac, and that was why it was locked. I assume I had been playing around with the DB within the IDE and hadn't saved the changes, and therefore a lock was placed.
To cut a long story short, check that there are no unsaved changes on the db, and also that it is not being used elsewhere.
In Linux you can do something similar, for example, if your locked file is development.db:
$ fuser development.db
This command will show which process is locking the file:
development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
import sqlite3

while True:
    try:
        connection = sqlite3.connect('user.db', timeout=1)
        cursor = connection.cursor()
        cursor.execute("SELECT * FROM queue;")
        result = cursor.fetchall()
        break  # success: stop retrying
    except sqlite3.OperationalError:
        print("database locked")

num_users = len(result)
# ...
Because this is still the top Google hit for this problem, let me add a possible cause. If you're editing your database structure and haven't committed the changes, the database is locked until you commit or revert.
(Probably uncommon, but I'm developing an app so the code and database are both being developed at the same time)
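For instance, from Python: a hedged sketch, assuming Python 3.6+ (where the sqlite3 module no longer implicitly commits before DDL) and reusing the question's item table; the qty column is illustrative:

import sqlite3

conn = sqlite3.connect('cache.db')
conn.execute("BEGIN")                                    # open a transaction explicitly
conn.execute("ALTER TABLE item ADD COLUMN qty integer")  # uncommitted schema change
# Until commit() or rollback(), another connection that tries to write
# to cache.db will get "database is locked".
conn.commit()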
One possible reason for the database being locked that I ran into with SQLite is when I tried to access a row that was being written by one app and read by another at the same time. You may want to set a busy timeout in your SQLite wrapper that will spin and wait for the database to become free (in the original C API the function is sqlite3_busy_timeout). I found that 300 ms was sufficient in most cases.
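In Python's standard sqlite3 module, the same knob is exposed as the timeout argument of connect() (in seconds), or per connection via a pragma; a minimal sketch mirroring the 300 ms above:

import sqlite3

# `timeout` is in seconds and maps to sqlite3_busy_timeout under the hood.
conn = sqlite3.connect('cache.db', timeout=0.3)

# Equivalent per-connection setting, in milliseconds:
conn.execute("PRAGMA busy_timeout = 300")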
But I doubt this is the problem, based on your post. Try other recommendations first.
I had the same problem: sqlite3.IntegrityError
As mentioned in many answers, the problem is that a connection has not been properly closed.
In my case I had try/except blocks. I was accessing the database in the try block, and when an exception was raised I wanted to do something else in the except block.
try:
    conn = sqlite3.connect(path)
    cur = conn.cursor()
    cur.execute('''INSERT INTO ...''')
except:
    conn = sqlite3.connect(path)
    cur = conn.cursor()
    cur.execute('''DELETE FROM ...''')
    cur.execute('''INSERT INTO ...''')
However, when the exception was raised, the connection from the try block had not been closed. I solved it by using with statements inside the blocks.
try:
    with sqlite3.connect(path) as conn:
        cur = conn.cursor()
        cur.execute('''INSERT INTO ...''')
except:
    with sqlite3.connect(path) as conn:
        cur = conn.cursor()
        cur.execute('''DELETE FROM ...''')
        cur.execute('''INSERT INTO ...''')
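One caveat: with on a sqlite3 connection commits on success and rolls back on exceptions, but it does not close the connection; it is the commit/rollback that releases the lock. If you also want the connection closed, contextlib.closing can wrap it. A minimal sketch, reusing the question's item table with an illustrative value:

import sqlite3
from contextlib import closing

# closing() guarantees conn.close(); the inner `with conn` commits on
# success and rolls back on exception, which is what releases the lock.
with closing(sqlite3.connect('cache.db')) as conn:
    with conn:
        conn.execute("INSERT INTO item (itemno) VALUES (?)", ('A100',))  # 'A100' is illustrative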
Oh, your traceback gave it away: you have a version conflict. You have some old version of sqlite installed in your local dist-packages directory, while sqlite3 is already included in your python2.6 distribution; you don't need, and probably can't use, the old sqlite version. First try:
$ python -c "import sqlite3"
and if that doesn't give you an error, uninstall your dist-package:
easy_install -mxN sqlite
and then import sqlite3 in your code instead, and have fun.
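To confirm which bindings you ended up with, the standard module reports both its own version and the linked library's (standard attributes on Pythons of this era):

import sqlite3

print(sqlite3.version)         # version of the Python sqlite3 bindings
print(sqlite3.sqlite_version)  # version of the SQLite library itself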
I had this problem while working with PyCharm and with a database that was originally given to me by another user.
So, this is how I solved it in my case:
- Closed all tabs in PyCharm that operate on the problematic database.
- Stopped all running processes with the red square button in the top right corner of PyCharm.
- Deleted the problematic database from the directory.
- Uploaded the original database again.
And it worked again.
In my case, the error happened when a lot of concurrent processes were trying to read from and write to the same table. I used retries to work around the issue (the decorator's keyword arguments match the retrying package):

from retrying import retry

def _retry_if_exception(exception):
    return isinstance(exception, Exception)

@retry(retry_on_exception=_retry_if_exception,
       wait_random_min=1000,
       wait_random_max=5000,
       stop_max_attempt_number=5)
def execute(cmd, commit=True):
    # `c` is a cursor created elsewhere, carrying a reference to its connection;
    # the call is retried on any exception, including "database is locked".
    c.execute(cmd)
    if commit:
        c.conn.commit()
Even when I had just one writer and one reader, my issue was that one of the reads was taking too long: longer than the stipulated timeout of 5 seconds. So the writer timed out and caused the error.
So be careful when reading all entries from a database, especially one whose table grows over time.
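One way to keep reads short is to page through the table in small chunks, so no single statement holds the read lock for long. A sketch under assumed names (the queue table is from the earlier snippet; the id and payload columns are illustrative):

import sqlite3

conn = sqlite3.connect('user.db', timeout=5)

last_id = 0
while True:
    # Each small SELECT finishes quickly, so the shared read lock is
    # released between pages and a writer can get in.
    rows = conn.execute(
        "SELECT id, payload FROM queue WHERE id > ? ORDER BY id LIMIT 1000",
        (last_id,),
    ).fetchall()
    if not rows:
        break
    last_id = rows[-1][0]
    # ... process rows ...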
Easy solution: check whether you have the database open in another window or another terminal; that also locks your database. In my case, I closed all the other terminals that were locking the database (terminal tabs in PyCharm). Check each terminal tab of your IDE as well, in case one of them left the database open. Calling exit() in all of those terminals should unlock the database.
I found this worked for my needs (thread locking):
conn = sqlite3.connect(database, timeout=10)
sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])
When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).