I'd like to know the best / fastest (most efficient) way to structure Solr requests when setting up Solr / MySQL / my app. I have a MySQL database with one large main table and several smaller tables in a relational schema. I've also built an application that works with a hierarchy and builds menus from data in the related tables.
I started out doing this in MySQL only, but quickly discovered (with gigabytes of data) that MySQL is quite slow at calculating counts over these related tables when joining against the main table, etc. (even with indexes). Right now the approach I'm taking is to index the main table in Solr and keep the smaller related tables in MySQL. For each menu item I query Solr for a count at runtime, which seems slow.
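To be concrete, the runtime counting I'm doing now looks roughly like this (a minimal sketch in Python; the core name "products", the field "brand_id" and the use of the requests library are just assumptions for illustration):

```python
import requests

SOLR_SELECT = "http://localhost:8983/solr/products/select"  # assumed core name

def count_for(field, value):
    """Ask Solr how many docs in the main index match one menu item."""
    params = {
        "q": f"{field}:{value}",
        "rows": 0,            # only the count is needed, not the documents
        "wt": "json",
    }
    resp = requests.get(SOLR_SELECT, params=params)
    resp.raise_for_status()
    return resp.json()["response"]["numFound"]

# One HTTP round trip per menu item -- this is the part that feels slow
# when there are hundreds or thousands of brands.
brand_ids = [12, 34, 56]  # normally loaded from the MySQL brands table
counts = {bid: count_for("brand_id", bid) for bid in brand_ids}
```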
Which would be faster / better:
1) Keep the related tables in MySQL, and also define a facet for each row in those tables, then somehow link the two together when I query the main table? This sounds like the fastest option, but it could be tricky in my application (I'd have to map two different result sets onto each other).
2) Keep the related tables in MySQL and query / count the data in the main indexed table for each related item at runtime? For example, for a brand menu I would need a count for each brand, which means sending one request to Solr per menu item (to get the counts). I understand each individual request is fairly fast, but there could be several hundred or thousands of brands.
3) Just put all the data in Solr and use facets? But then how do I define each facet and look up the corresponding information from the MySQL tables for each facet value? Each entry in the related MySQL tables has a title, description, formatted URL and metadata; should that information also be stored in Solr? In another index/core? In that case, should I get rid of MySQL altogether? (A rough sketch of the faceted approach I have in mind is below.)
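For reference, the faceted approach I'm picturing for option 3 (and the "mapping" problem from option 1) is roughly this, again just a sketch with assumed names (core "products", facet field "brand_id"):

```python
import requests

SOLR_SELECT = "http://localhost:8983/solr/products/select"  # assumed core name

# One request gets the counts for every brand at once via faceting.
params = {
    "q": "*:*",
    "rows": 0,
    "wt": "json",
    "facet": "true",
    "facet.field": "brand_id",   # assumed field holding the FK to the brands table
    "facet.mincount": 1,
    "facet.limit": -1,           # return all facet values, not just the top N
}
resp = requests.get(SOLR_SELECT, params=params).json()

# Solr returns field facet counts as a flat [value, count, value, count, ...] list.
flat = resp["facet_counts"]["facet_fields"]["brand_id"]
counts = {flat[i]: flat[i + 1] for i in range(0, len(flat), 2)}

# Then map the counts back onto the rows loaded from MySQL
# (title, description, formatted URL, metadata stay in MySQL), e.g.:
# brands = [{"id": "12", "title": "...", "url": "..."}, ...]
# menu = [dict(b, count=counts.get(b["id"], 0)) for b in brands]
```

That would be one round trip instead of one per brand, but it still leaves the question of where the titles / URLs / metadata should live.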
Any thoughts on which is the better (most practical) option would be much appreciated, as would any suggestions beyond what I've already considered.
Cheers ke