You can force the Web crawler to visit specific URLs as soon as possible.
If you need to refresh the crawl space with information from certain Web sites, open the crawler monitor, select the URLs to visit or revisit option, and then specify the URLs or URL patterns of the pages that need to be crawled or recrawled.
For example, if your Communications department adds a Web page to your intranet or revises a page to reflect an important policy change, you can specify the URL of the new or changed page. If the crawler is running, the crawler queues the specified URL for crawling the next time that it checks for pages that are waiting to be visited (typically every ten minutes). If the crawler is not running, it queues the specified URL so that it can be crawled the next time that the crawler is started.
Ensure that the crawling rules include a rule that allows the crawler to visit the URLs that you specify. Specifying a URL only makes the crawler visit it sooner than it normally would; for the URL to be crawled at all, a crawling rule that allows it must exist.
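The interplay between a specified URL and the crawling rules can be sketched as follows. This is a minimal illustration only: the rule format, pattern syntax, and function names are assumptions for the sketch, not the product's actual implementation.

```python
import re

# Hypothetical crawling rules: ordered (action, pattern) pairs.
# The first pattern that matches a URL decides whether it may be crawled.
CRAWL_RULES = [
    ("forbid", r"^https://intranet\.example\.com/private/"),
    ("allow",  r"^https://intranet\.example\.com/"),
]

def is_allowed(url):
    """Return True if a crawling rule permits the crawler to visit this URL."""
    for action, pattern in CRAWL_RULES:
        if re.match(pattern, url):
            return action == "allow"
    return False  # no matching rule: the URL is never crawled

def queue_for_revisit(url, queue):
    """Queue a URL for the crawler's next check, but only if a rule allows it.

    Mirrors the behavior described above: a specified URL is picked up the
    next time the crawler checks for waiting pages, yet it is still subject
    to the crawling rules.
    """
    if is_allowed(url):
        queue.append(url)
        return True
    return False
```

For example, queuing a revised policy page under the allowed intranet pattern succeeds, while a URL under the forbidden pattern is rejected even if you specify it explicitly.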
The newly crawled data becomes available for searching the next time that the main index build occurs.