• Official comment
    Krzysztof Ksiazek



    It is hard to tell, as DROP TABLE performance changes from one MySQL version to another; newer versions are usually faster. By default, DROP TABLE is executed as TOI (Total Order Isolation), so it is a locking operation. There are a couple of things you can try to minimize the impact.

    First of all, you can try using RSU (Rolling Schema Upgrade) to drop the table. Codership does not recommend it, but as long as you no longer write to that table, it should work fine. You need to test it, though, to make sure it will work for you.
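
    A minimal sketch of the RSU approach, assuming a Galera-based cluster (wsrep_OSU_method is Galera's variable; the table name old_logs is a placeholder):

    ```sql
    -- Switch this session to Rolling Schema Upgrade: the DDL runs
    -- locally on this node only, instead of blocking the whole
    -- cluster as TOI would.
    SET SESSION wsrep_OSU_method = 'RSU';

    DROP TABLE old_logs;  -- placeholder table name

    -- Restore the default so later DDL replicates normally.
    SET SESSION wsrep_OSU_method = 'TOI';
    ```

    Keep in mind that under RSU the statement is not replicated, so you have to repeat it on every node in the cluster, and nothing should write to the table in the meantime.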

    Another way to minimize the impact is to delete the data in the table first, using, for example, DELETE FROM table LIMIT x;. Delete 100 or 1,000 rows at a time, repeating the query as long as there is data left in the table. You can script this process in a way that lets you throttle it and minimize the impact on the cluster's operations. Once you get rid of the data, you can drop the now-empty table.
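
    The batched delete could be sketched like this (old_logs and the batch size are placeholders; the SLEEP call is one simple way to throttle between batches, typically driven from an outer script that stops once no rows are affected):

    ```sql
    -- Each small transaction keeps the replicated writeset small and
    -- lets the cluster breathe between batches.
    DELETE FROM old_logs LIMIT 1000;
    SELECT ROW_COUNT();   -- 0 affected rows means the table is empty
    SELECT SLEEP(0.5);    -- throttle before the next batch
    ```

    If the table has a usable primary key, deleting by key range instead of a bare LIMIT avoids rescanning the same rows on every pass.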


    As I said, testing is necessary here - only that way can you estimate the impact and decide on the best approach.

  •
    Salvatore Davide Rapisarda



    This is the reason why I wrote to this support: I don't have a test platform where I can try this operation.

    I know that with a script I can delete 1k rows at a time, but I would like to know whether you have any previous experience running DROP TABLE on a production cluster.

    Thank you.


  •
    Krzysztof Ksiazek

    I'm afraid that no one will give you a green light on such an operation without proper testing. MySQL's performance, in general, depends on everything: CPU, I/O, MySQL version, filesystem type and much more. It's hardly possible to compare the performance of a VM in the cloud with, say, a physical host on premises running a PCIe SSD card. From my experience I can tell you that yes, this may affect your workload. Whether it _will_ affect it, and to what extent, is something you can only verify in your test environment.

