This is an old question, but I've seen it come up in several places around the Internet, so I thought I'd share a trick I used.
Problem
If you have a large collection in your database, possibly containing hundreds of thousands of records, it may not be feasible to fetch all of them. If you want to filter results by location in addition to other criteria, you are stuck with something like:
- Run a GeoFire query
- Loop over each returned GeoFire key and fetch the corresponding record from the database
- Check each fetched record against your other criteria
Unfortunately, that is a lot of network requests, which is quite slow.
More specifically, let's say we want all users within, for example, 100 miles of a given location who are male and between 20 and 25 years old. If 10,000 users are within 100 miles, that means 10,000 network requests to fetch each user's data and compare their gender and age.
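For concreteness, here is a minimal sketch of that naive flow. It assumes the JavaScript GeoFire library for the Realtime Database; the node names, coordinates, and database URL are placeholders of my own, not from the original answer:

```ts
// Sketch of the naive approach: one GeoFire query, then one fetch per key.
// Assumes the "geofire" JS library (v5+, named export) and the v8-style
// namespaced Firebase SDK; import shape may vary by version.
import firebase from 'firebase/app';
import 'firebase/database';
import { GeoFire } from 'geofire';

firebase.initializeApp({ databaseURL: 'https://<your-project>.firebaseio.com' });

const geoFire = new GeoFire(firebase.database().ref('geofire'));

// GeoFire radii are in kilometers; 100 miles is roughly 161 km.
const query = geoFire.query({ center: [37.77, -122.42], radius: 161 });

query.on('key_entered', (key: string) => {
  // One extra network round trip for every key the query returns.
  firebase.database().ref(`users/${key}`).once('value', (snapshot) => {
    const user = snapshot.val();
    if (user && user.gender === 'male' && user.age >= 20 && user.age <= 25) {
      console.log('match:', key);
    }
  });
});
```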
Workaround
You can store the data needed for your comparisons in the GeoFire key itself, separated by a delimiter. Then you can simply split the keys returned by the GeoFire query to access the data. You still have to filter them client-side, but this is much faster than sending hundreds or thousands of extra requests.
For example, you can use the format:
UserID*gender*age, which might look something like facebook:1234567*male*24 (a short encode/decode sketch follows the list below). The important points are:
- Separate data points using a delimiter
- Use a valid character for the delimiter. Firebase keys "can include any Unicode characters except . $ # [ ] / and ASCII control characters 0-31 and 127."
- Use a character that won't be found elsewhere in your data. I used *, but this may not work for you. Avoid the characters -0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz, as they are all fair game for keys generated by Firebase's push()
- Choose a consistent data order: in this case, UserID first, then gender, then age
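To make the splitting concrete, here is a minimal encode/decode sketch following those rules. The function names, the interface, and the sample values are my own illustrations:

```ts
// Encode and decode the composite key format described above
// (UserID*gender*age), using * as the delimiter.
const DELIMITER = '*';

interface UserKeyData {
  userId: string;
  gender: string;
  age: number;
}

function buildGeoFireKey(data: UserKeyData): string {
  // Consistent field order: UserID first, then gender, then age.
  return [data.userId, data.gender, String(data.age)].join(DELIMITER);
}

function parseGeoFireKey(key: string): UserKeyData {
  const [userId, gender, age] = key.split(DELIMITER);
  return { userId, gender, age: Number(age) };
}

// Usage: filter a key returned by the GeoFire query without any extra fetch.
const key = buildGeoFireKey({ userId: 'facebook:1234567', gender: 'male', age: 24 });
const user = parseGeoFireKey(key);
const matches = user.gender === 'male' && user.age >= 20 && user.age <= 25;
console.log(key, matches); // facebook:1234567*male*24 true
```

With this in place, the key_entered callback can filter on gender and age directly from the key, and only the users who actually match need a follow-up fetch, if they need one at all.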
You can store up to 768 bytes of data in a Firebase key, which is quite a lot of room.
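If your composite keys could grow long, a quick guard against that limit might look like this (Buffer.byteLength counts UTF-8 bytes and assumes a Node environment):

```ts
// Reject composite keys that exceed the Realtime Database's 768-byte key limit.
function assertKeyFits(key: string): void {
  if (Buffer.byteLength(key, 'utf8') > 768) {
    throw new Error("composite key exceeds Firebase's 768-byte key limit");
  }
}
```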
Hope this helps!