Baidu blocks Google, Bing from scraping content amid demand for data used in AI projects
- Wikipedia-style service Baidu Baike recently barred the search engine crawlers of Google and Bing from indexing its online content
A recent update to Baidu Baike’s robots.txt – the file that tells search engine crawlers which uniform resource locators, commonly known as web addresses, they may access on a site – blocks the Googlebot and Bingbot crawlers outright from indexing content on the Chinese platform.
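A minimal sketch of how such a rule works, using a hypothetical robots.txt and Python's standard urllib.robotparser module, which applies the directives the way a well-behaved crawler would; the directives, user agents and URL below are illustrative assumptions, not the contents of Baidu Baike's actual file:

```python
# Illustrative only: a hypothetical robots.txt that shuts out Googlebot and
# Bingbot while leaving other crawlers free to access the whole site.
from urllib.robotparser import RobotFileParser

EXAMPLE_ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())  # interpret the rules above

page = "https://baike.baidu.com/item/anything"  # hypothetical entry URL
for bot in ("Googlebot", "Bingbot", "Baiduspider"):
    verdict = "allowed" if parser.can_fetch(bot, page) else "blocked"
    print(f"{bot}: {verdict}")
# Prints: Googlebot and Bingbot blocked, Baiduspider allowed.
```

Compliance with robots.txt is voluntary: the file signals a site's wishes to crawlers, and reputable operators such as Google and Microsoft honour it, but it does not technically prevent access.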
The update appears to have been made sometime on August 8, according to records on the internet archive service the Wayback Machine. Those records also show that earlier the same day, Baidu Baike still allowed Google and Bing to browse and index its online repository of nearly 30 million entries, with only part of the website designated as off limits.
The move follows a similar step by US social news aggregation platform and forum Reddit, which in July blocked various search engines other than Google from indexing its online posts and discussions. Google has a multimillion-dollar deal with Reddit that gives it the right to scrape the social media platform for data to train its AI services.