I hope you are all doing well. We have a huge MySQL table called "posts". It currently holds about 70,000 records and is about 10 GB in size.
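For reference, a query along the following lines (a sketch using the standard information_schema.TABLES view, assuming the table lives in the current schema) is how these figures can be checked:

SELECT TABLE_NAME,
       TABLE_ROWS,
       ROUND(DATA_LENGTH  / 1024 / 1024 / 1024, 1) AS data_gb,
       ROUND(INDEX_LENGTH / 1024 / 1024 / 1024, 1) AS index_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'posts';

TABLE_ROWS is exact for MyISAM tables, so the 70,000 count is reliable; DATA_LENGTH plus INDEX_LENGTH gives the on-disk footprint.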
My boss says we need to do something to make this table easier to work with: if it ever gets corrupted, restoring it would take us a long time, and queries against it are sometimes slow.
What are the possible ways to make handling this table simpler, in every respect?
The structure of the table is as follows:
CREATE TABLE IF NOT EXISTS `posts` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `thread_id` int(11) unsigned NOT NULL,
  `content` longtext CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  `first_post` mediumtext CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  `publish` tinyint(1) NOT NULL,
  `deleted` tinyint(1) NOT NULL,
  `movedToWordPress` tinyint(1) NOT NULL,
  `image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `video_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
  `video_image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
  `thread_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  `section_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  `urlToPost` varchar(280) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
  `posts` int(11) DEFAULT NULL,
  `views` int(11) DEFAULT NULL,
  `forum_name` varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
  `subject` varchar(150) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
  `visited` int(11) DEFAULT '0',
  `replicated` tinyint(4) DEFAULT '0',
  `createdOn` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `urlToPost` (`urlToPost`,`forum_name`),
  KEY `thread_id` (`thread_id`),
  KEY `publish` (`publish`),
  KEY `createdOn` (`createdOn`),
  KEY `movedToWordPress` (`movedToWordPress`),
  KEY `deleted` (`deleted`),
  KEY `forum_name` (`forum_name`),
  KEY `subject` (`subject`),
  FULLTEXT KEY `first_post` (`first_post`,`thread_title`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=78773 ;
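To put the corruption concern in concrete terms: with MyISAM, recovering a crashed table essentially means rebuilding its indexes while the table is unavailable. A minimal sketch of what that looks like today, using the table name from the definition above:

CHECK TABLE posts;    -- verify the .MYD data file and .MYI index file
REPAIR TABLE posts;   -- rebuild the indexes; on a 10 GB table this can take a long time

The same rebuild can be done with myisamchk from the command line; either way the table is locked for the duration.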
Thank you.
UPDATED
Note: I appreciate all the answers, but almost all of them are about optimizing the current database rather than about how to handle large tables in general. I can certainly optimize the database based on the responses received, but that does not really answer the question of how to manage a huge table. We are at about 70,000 records now, but we will keep growing over the next few months, if not weeks, and each record can be around 300 KB in size, so the table could easily end up several times its current 10 GB.