It looks like * will work as a wildcard for Googlebot, so Google will honor your rule; however, wildcards are not supported by all other spiders. You can search Google for robots.txt wildcard examples for more information, and I would also look at http://seogadget.co.uk/wildcards-in-robots-txt/.
Pattern matching
Googlebot (but not all search engines) respects some pattern matching.
To match a sequence of characters, use an asterisk (*). For example, to block access to all subdirectories that begin with private:
User-agent: Googlebot
Disallow: /private*/
To block access to all URLs that include a question mark (?) (more precisely, any URL that begins with your domain name, followed by any string, followed by a question mark, followed by any string):
User-agent: Googlebot
Disallow: /*?
To indicate a match for the end of the URL, use $. For example, to block any URLs that end in .xls:
User-agent: Googlebot
Disallow: /*.xls$
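These wildcard rules behave roughly like anchored regular expressions. As a rough illustration only (not Google's actual implementation), here is a small Python sketch that translates the patterns above into regexes and tests a few sample URL paths; the paths are made up for the example:

```python
import re

def pattern_to_regex(pattern):
    """Illustrative translation of a robots.txt path pattern into a regex.

    '*' matches any sequence of characters, '$' anchors the end of the URL;
    everything else is matched literally.
    """
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.compile(regex)

# The Disallow patterns from the examples above, tested against sample paths.
for pattern, path in [
    ("/private*/", "/private-files/report.html"),  # blocked: subdirectory starts with private
    ("/*?",        "/search?q=robots"),            # blocked: URL contains a ?
    ("/*.xls$",    "/reports/q1.xls"),             # blocked: URL ends in .xls
    ("/*.xls$",    "/reports/q1.xls?download=1"),  # not blocked: .xls is not at the end
]:
    matched = bool(pattern_to_regex(pattern).match(path))
    print(f"{pattern!r} vs {path!r}: {'blocked' if matched else 'allowed'}")
```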
You can use this pattern matching in combination with the Allow directive. For example, if a ? indicates a session ID, you may want to exclude all URLs that contain one so that Googlebot doesn't crawl duplicate pages. However, URLs that end in a ? may be the version of the page that you do want included. In this situation, you can set up your robots.txt file as follows:
User-agent: *
Allow: /*?$
Disallow: /*?
The Disallow: /*? directive will block any URL that includes a ? (more specifically, it will block any URL that begins with your domain name, followed by any string, followed by a question mark, followed by any string).
The Allow: /*?$ directive will allow any URL that ends in a ? (more specifically, it will allow any URL that begins with your domain name, followed by a string, followed by a ?, with no characters after the ?).
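As a rough sketch of how that Allow/Disallow pair plays out, assuming the commonly documented behavior that the most specific (longest) matching pattern decides, you could model it like this (illustrative only, not Google's implementation; ties between equally long rules are not handled):

```python
import re

def to_regex(pattern):
    # Same illustrative translation as above: '*' -> '.*', '$' anchors the end.
    out = ""
    for ch in pattern:
        out += ".*" if ch == "*" else ("$" if ch == "$" else re.escape(ch))
    return re.compile(out)

# Rule set corresponding to the robots.txt example above.
rules = [("Allow", "/*?$"), ("Disallow", "/*?")]

def is_allowed(path):
    # Assumption: when several rules match, the longest pattern wins.
    matches = [(len(p), verdict) for verdict, p in rules if to_regex(p).match(path)]
    if not matches:
        return True                      # no rule matches: crawling is allowed
    return max(matches)[1] == "Allow"    # longest matching pattern decides

print(is_allowed("/page?"))              # True: ends in ?, so Allow: /*?$ wins
print(is_allowed("/page?sessionid=123")) # False: only Disallow: /*? matches
print(is_allowed("/page"))               # True: no rule matches at all
```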
Save your robots.txt file by downloading the file or copying the contents into a text file and saving it as robots.txt. Save the file to the highest-level directory of your site. The robots.txt file must reside in the root of the domain and must be named "robots.txt". A robots.txt file located in a subdirectory isn't valid, because bots only check for this file in the root of the domain. For example, http://www.example.com/robots.txt is a valid location, but http://www.example.com/mysite/robots.txt is not.
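Once the file is in place at the domain root, you can do a quick sanity check with Python's standard urllib.robotparser. Note that this parser implements the original robots.txt spec and may not honor Google-style wildcards, so treat it only as a "is the file being picked up" test; the example.com URLs below are placeholders:

```python
from urllib import robotparser

# Parsers always look for the file at the root of the domain.
rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()  # fetches and parses the live file

# Ask whether a given user agent may fetch a given URL.
print(rp.can_fetch("Googlebot", "http://www.example.com/private-files/report.html"))
print(rp.can_fetch("Googlebot", "http://www.example.com/index.html"))
```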