Closed 10 years ago. As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
I guess this has been brought up many times, but I'm bringing it up again!!!
Anyway... In Ruby on Rails, SQLite3 is already set up and no extra picking and choosing is needed, but after numerous readings here and elsewhere, some say it's not scalable while others say it can actually be quite good at that. Some say MySQL is much better for bigger projects, while others say just go with PostgreSQL.
I'm interested in hearing your opinion on this, in two scenarios: one where you're starting a small news-publishing website like CNN, and another where you're creating a website similar to Twitter.
Highly depends on your application.
Generally speaking, any write operation into a SQLite database is slow. Even a plain :update_attribute or :create may take up to 0.5 seconds. But if your app doesn't write much (the killer argument against SQLite: writing to the DB on every request!), SQLite is a solid choice for most web apps out there. It is proven to handle small to medium amounts of traffic. It is also a very good choice during development, since it needs zero configuration, and it performs very well in your test suite with the in-memory mode (unless you have thousands of migrations, since it rebuilds from scratch every time). Switching from SQLite to, e.g., MySQL later is also mostly seamless if its performance is no longer enough.
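To make the zero-configuration and in-memory points concrete, here is a minimal sketch outside of Rails (assuming the activerecord and sqlite3 gems are installed; the Post model and posts table are made up for illustration):

    require "active_record"

    # in-memory SQLite: nothing to configure, rebuilt from scratch on every run
    ActiveRecord::Base.establish_connection(
      adapter:  "sqlite3",
      database: ":memory:"
    )

    # switching to MySQL later mostly means swapping this connection
    # (or the matching entry in config/database.yml), e.g. adapter: "mysql2"

    ActiveRecord::Schema.define do
      create_table :posts do |t|
        t.string :title
      end
    end

    class Post < ActiveRecord::Base; end

    Post.create(title: "hello")   # the plain :create write path mentioned above
    puts Post.count               # => 1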
MySQL is currently a rock-solid choice. Time will tell what happens to MySQL under Oracle.
PostgreSQL is the fastest one as far as I know, but I haven't used it in production yet. Maybe others can say more.
For websites, SQLite3 will suffice and scale fine for anything up to moderately heavy traffic. So, unless you start getting hit by millions of requests per hour, there's no need to worry about SQLite3's performance or scalability.
That said, SQLite3 doesn't support all of the typical features a dedicated SQL server would. Access control is limited to whatever file permissions you can set for UNIX accounts on the machine holding your database file, there's no daemon to speak of, and the set of built-in functions is rather small. There are also no stored procedures of any kind, although you could emulate those with views and triggers.
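To illustrate the trigger workaround, here is a minimal sketch using the sqlite3 gem directly (the articles table and touch_article trigger are invented names, just for illustration):

    require "sqlite3"

    db = SQLite3::Database.new(":memory:")

    db.execute <<~SQL
      CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, updated_at TEXT)
    SQL

    # SQLite has no stored procedures, but a trigger can stand in for one,
    # e.g. keeping updated_at current on every change
    db.execute <<~SQL
      CREATE TRIGGER touch_article AFTER UPDATE ON articles
      BEGIN
        UPDATE articles SET updated_at = datetime('now') WHERE id = NEW.id;
      END
    SQL

    db.execute("INSERT INTO articles (title) VALUES (?)", ["hello"])
    db.execute("UPDATE articles SET title = ? WHERE id = 1", ["hello again"])
    puts db.execute("SELECT title, updated_at FROM articles WHERE id = 1").inspect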
If you're worried about any of those points, you should go with PostgreSQL. MySQL has (indirectly) been bought by Oracle, and considering they also had their own database before acquiring MySQL, I wouldn't put it past them to just drop it somewhere along the line. I've also had a far smoother experience maintaining PostgreSQL in the past, and, anecdotally, it has always felt a bit snappier and more reliable.
My opinion is completely biased, as I have used MySQL since it first came out.
Your question brings in another argument about how your development environment should be set up. A number of people will argue that you should use the same DBMS in development as you do in testing/production. That depends entirely on what you're doing in the first place. SQLite will work fine in development in most cases.
I've personally been involved with more sites using MySQL and MSSQL than PostgreSQL.
I was involved in a project that scrubbed the National Do-Not-Call list against client numbers. We stored that data locally, and some area codes easily have over 5 million records. The app was initially written in .NET using MSSQL and it was not-so-fast. I changed it to use PHP and MySQL (sad days, before I found out about Ruby). It would insert/digest 5 million rows in about 3 seconds, which was infinitely faster than processing it through MSSQL. We also stored call log data in tables that would grow to 20 million records in less than a day. MySQL handled everything we threw at it like a champ. Processing naturally took a hit when we set up replication, but it was such a small one that we ignored it.
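For what it's worth, the kind of batched insert that makes a bulk load like that fast in MySQL looks roughly like this (a hedged sketch with the mysql2 gem, not necessarily how that project did it; the table name, file name, and connection details are invented for illustration):

    require "mysql2"

    client = Mysql2::Client.new(host: "localhost", username: "app",
                                password: "secret", database: "dnc")

    numbers = File.foreach("area_code_505.txt").map(&:strip)

    # one multi-row INSERT per batch is far faster than one INSERT per row
    numbers.each_slice(10_000) do |batch|
      values = batch.map { |n| "('#{client.escape(n)}')" }.join(", ")
      client.query("INSERT INTO do_not_call (phone) VALUES #{values}")
    end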
It really comes down to your project and what solution fits the need of the project.