In short, proxies are the tool of choice for web scrapers. One could use a VPN for the task, but that is like using a sledgehammer to push in a thumbtack: it can be done, but it is overkill and not the right tool for the job. A more detailed explanation follows.

Virtual Private Networks (VPNs) can route all network ports and protocols (network-nerd speak for all apps and all traffic from those apps), no traffic at all, or anything in between. The overhead of setting up and tearing down a VPN connection is not insignificant. A VPN connection may route all existing traffic over it, or it may only grant access to certain resources or subnets on the network to which you are connecting; either behavior is defined by routing rules set up after the VPN connection is established.

Most often, web scraping takes the form of repetitive small actions that acquire just what one needs (certain pages or resources), not the whole website. Web scraping is more akin to special ops than to sending in an army brigade. Proxies are optimal for lightweight web requests of this kind; it is common to automate such requests and to want different requests to originate from unique IP addresses. In earlier days, proxies were used by humans (i.e., without automation) as a means to surf the web while obscuring their network attribution, but there are now better tools for that purpose (e.g., VPNs, Tor).
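To make the automation point concrete, here is a minimal sketch of per-request proxy rotation in Python. The proxy addresses are hypothetical placeholders (drawn from the documentation-reserved 203.0.113.0/24 range), and the `requests` usage shown in the trailing comment assumes you have a working proxy pool; the rotation logic itself is the point.

```python
import itertools

# Hypothetical proxy pool -- replace these placeholder endpoints
# with real proxies before scraping anything.
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

# Cycle endlessly through the pool so each request gets the next proxy.
_rotation = itertools.cycle(PROXY_POOL)

def next_proxies() -> dict:
    """Return a requests-style proxies mapping, advancing the rotation."""
    proxy = next(_rotation)
    return {"http": proxy, "https": proxy}

# Typical use with the `requests` library (not executed here):
#   import requests
#   for url in urls_to_scrape:
#       resp = requests.get(url, proxies=next_proxies(), timeout=10)
```

Each call to `next_proxies()` hands back the next endpoint in the pool, so successive requests leave from different IP addresses; this is exactly the lightweight, per-request behavior that a whole-machine VPN tunnel does not give you.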

Both are powerful networking tools and can be configured in creative, conventional, and unconventional ways, but the uses above are the common ones.