I love Dungeons and Dragons. I am also a data nerd, so naturally I wanted to collect data on the monsters listed on DnD Beyond.

Step 1: Use rvest to get the monster page URLs

From each page of the monster listing, we pull the links to the individual monster pages. The html_attr("href") call specifies that we want the href attribute of each link. (Anatomy of an HTML link: <a href='https://www.something.com'>Link text seen on page</a>.) Converting these relative links to absolute URLs gives the vector abs_links. Finally, we can loop through all pages of results to get the URLs for the hundreds of individual monster pages:
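A rough sketch of that loop; the listing URL, the page query parameter, the number of pages, and the link selector here are all placeholder assumptions:

```r
library(rvest)

# Placeholder listing URL and page count
base_url <- "https://www.dndbeyond.com/monsters?page="
n_pages <- 50

all_monster_urls <- unlist(lapply(seq_len(n_pages), function(i) {
    listing <- read_html(paste0(base_url, i))
    # Placeholder CSS selector for the monster links in each table
    rel_links <- listing %>% html_nodes(".monster-name a") %>% html_attr("href")
    abs_links <- paste0("https://www.dndbeyond.com", rel_links)
    abs_links
}))
```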

Step 2: Use RSelenium to access pages behind login

In Step 1, we looped through pages of tables to get the URLs for pages that contain detailed information on individual monsters. Great! We can visit each of these pages and just do some more rvest
work to scrape the details! Well… not immediately. Most of these monster pages can only be seen if you have paid for the corresponding digital books and are logged in. DnD Beyond uses Twitch for authentication, which involves a redirect. This redirect made it much harder for me to figure out what to do. It was like being thrown into the magical, mysterious, and deceptive realm of the Feywild, where I frantically invoked Google magicks, chased many dashed glimmers of hope, and luckily found a solution in the end.
What did not work
It’s helpful for me to record the things I tried that failed so I can remember my thought process. Hopefully, it saves you some wasted effort if you’re ever in a similar situation.
- Using rvest’s page navigation abilities did not work. I tried navigating to a monster page with an rvest session (roughly the sketch shown after this list), but I ran into an error.
- Using rvest’s basic authentication abilities did not work. I found a tutorial on how to send a username and password to a form with rvest. I tried hardcoding the extremely long URL that takes you to a Twitch authentication page, sending my username and password as described in the tutorial, and following a Stack Overflow suggestion to create a fake login button, since the authentication page had an unnamed, unlabeled “Submit” input that did not seem to conform to rvest’s capabilities. I got a 403 error.
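Roughly, the navigation attempt looked like this (a sketch; html_session() and jump_to() are the older rvest names, superseded by session() and session_jump_to()):

```r
library(rvest)

# Open a session on the site and try to jump to the first monster page
monster_session <- html_session("https://www.dndbeyond.com")
monster_page <- jump_to(monster_session, all_monster_urls[1])
```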
What did work
Only when I stumbled upon this Stack Overflow post did I learn about the RSelenium
package. Selenium is a tool for automating web browsers, and the RSelenium
package is the R interface for it.
I am really grateful to the posters on that Stack Overflow question and this blog post for getting me started with RSelenium
. The only problem is that the startServer function used in both posts is now defunct. When you call startServer, the message text points you to the rsDriver function instead.
Step 2a: Start automated browsing with rsDriver
The amazing feature of the rsDriver
function is that you do not need to worry about downloading and installing other software like Docker or phantomjs. This function works right out of the box! To start the automated browsing, use the following:
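A minimal version, following the rsDriver help-page example (the client object is named rem_dr to match the code below):

```r
library(RSelenium)

# Start a Selenium server and open an automated Chrome browser
driver <- rsDriver(browser = "chrome")
rem_dr <- driver[["client"]]
```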
When you first run rsDriver
, status messages will indicate that required files are being downloaded. After that you will see the status text “Connecting to remote server” and a Chrome browser window will pop open. The browser window will have a message beneath the search bar saying “Chrome is being controlled by automated test software.” This code comes straight from the example in the rsDriver
help page.

Step 2b: Browser navigation and interaction
The rem_dr
object is what we will use to navigate and interact with the browser. This navigation and interaction is achieved by accessing and calling functions that are part of the rem_dr
object. We can navigate to a page using the $navigate()
function. We can select parts of the webpage with the $findElement()
function. Once these selections are made, we can interact with the selections by
- Sending text to those selections with
$sendKeysToElement()
- Sending key presses to those selections with
$sendKeysToElement()
- Sending clicks to those selections with
$clickElement()
All of these are detailed in the RSelenium Basics vignette, and further examples are in the Stack Overflow and blog post I mentioned above.
The code below shows this functionality in action:
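A sketch of that workflow; the sign-in URL and the CSS selectors are placeholders rather than the actual DnD Beyond/Twitch page structure:

```r
# Navigate to the login page (placeholder URL)
rem_dr$navigate("https://www.dndbeyond.com/login")

# Select the username and password fields (placeholder selectors) and type into them
username_field <- rem_dr$findElement(using = "css selector", "input#username")
username_field$sendKeysToElement(list("my_username"))

password_field <- rem_dr$findElement(using = "css selector", "input#password")
password_field$sendKeysToElement(list("my_password"))

# Click the submit button (placeholder selector)
login_button <- rem_dr$findElement(using = "css selector", "button[type='submit']")
login_button$clickElement()
```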
Note: Once the Chrome window opens, you can finish the login process programmatically as above or manually interact with the browser window as you would normally. This can be safer if you don’t want to have a file with your username and password saved anywhere.
Step 2c: Extract page source
Now that we have programmatic control over the browser, how do we interface with rvest
? Once we navigate to a page with $navigate()
, we will need to extract the page’s HTML source code to supply to rvest::read_html
. We can extract the source with $getPageSource()
:
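For example (assuming rem_dr from Step 2a and url holding one of the monster page URLs):

```r
library(rvest)

# Navigate to the page, pull out its HTML source, and hand it to rvest
rem_dr$navigate(url)
page <- read_html(rem_dr$getPageSource()[[1]])
```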
The subset [[1]]
is needed after calling rem_dr$getPageSource()
because $getPageSource()
returns a list of length 1. The HTML source that is read in can be directly input to rvest::read_html
.
Excellent! Now all we need is a function that scrapes the details of a monster page, and a loop! In the following, we put everything together in a loop that iterates over the vector of URLs (all_monster_urls) generated in Step 1.
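A sketch of that loop; I'm assuming here that scrape_monster_page (Step 3) takes the rvest document as its argument:

```r
library(rvest)

monster_data <- vector("list", length(all_monster_urls))

for (i in seq_along(all_monster_urls)) {
    rem_dr$navigate(all_monster_urls[i])

    # If we don't own the source book, DnD Beyond redirects us to a different page
    if (unlist(rem_dr$getCurrentUrl()) != all_monster_urls[i]) {
        monster_data[[i]] <- NA
    } else {
        page <- read_html(rem_dr$getPageSource()[[1]])
        monster_data[[i]] <- scrape_monster_page(page)
    }

    # Brief pause to avoid overloading the machine or hitting rate limits
    Sys.sleep(2)
}
```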

Within the loop we call the custom scrape_monster_page
function to be discussed below in Step 3. We also include a check for purchased content. If you try to access a monster page that is not part of books that you have paid for, you will be redirected to a new page. We perform this check with the $getCurrentUrl()
function, filling in a missing value for the monster information if we do not have access. The Sys.sleep
at the end can be useful to avoid overloading your computer or if rate limits are a problem.
Step 3: Write a function to scrape an individual page
The last step in our scraping endeavor is to write the scrape_monster_page
function to scrape data from an individual monster page. You can view the full function on GitHub. I won’t go through every aspect of this function here, but I’ll focus on some principles that appear in this function that I’ve found to be useful in general when working with rvest
.
Principle 1: Use SelectorGadget AND view the page’s source
As useful as SelectorGadget is for finding the correct CSS selector, I never use it alone. I always open up the page’s source code and do a lot of Ctrl-F to quickly find specific parts of a page. For example, when I was using SelectorGadget to get the CSS selectors for the Armor Class, Hit Points, and Speed attributes, it suggested the .mon-stat-block__attribute selector. I wanted to know if there were further subdivisions within the areas that this selector had highlighted, so I searched the source code for “Armor Class” and found the following:
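The relevant chunk of source looked roughly like this (the attribute values shown are placeholders):

```html
<div class="mon-stat-block__attribute">
  <span class="mon-stat-block__attribute-label">Armor Class</span>
  <span class="mon-stat-block__attribute-data-value">17</span>
  <span class="mon-stat-block__attribute-data-extra">(natural armor)</span>
</div>
```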
Looking at the raw source code allowed me to see that each line was subdivided by spans with classes mon-stat-block__attribute-label
, mon-stat-block__attribute-data-value
, and sometimes mon-stat-block__attribute-data-extra
.
With SelectorGadget, you can actually type a CSS selector into the text box to highlight the selected parts of the page. I did this with the mon-stat-block__attribute-label
class to verify that there should be 3 regions highlighted.
Because SelectorGadget requires hovering your mouse over potentially small regions, it is best to verify your selection by looking at the source code.
Principle 2: Print often
Continuing the example of the Armor Class, Hit Points, and Speed attributes, I was curious what I would obtain if I simply selected the whole line for each attribute (as opposed to the three subdivisions). Here is roughly what that looked like when printed to the screen:
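The selection itself was along these lines (assuming page is the rvest document for a single monster):

```r
# Select each full attribute line and extract its text; this returns a
# length-3 character vector with label, value, and extra text run together
page %>%
    html_nodes(".mon-stat-block__attribute") %>%
    html_text()
```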
A mess! A length-3 character vector containing the information I wanted but not in a very tidy format. Because I want to visualize and explore this data later, I want to do a little tidying up front in the scraping process.
What if I just access the three subdivisions separately and rbind
them together? This is not a good idea because of missing elements as shown below:
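Selecting the three span classes separately looks something like this; the lengths noted in the comments are the problem described below:

```r
page %>% html_nodes(".mon-stat-block__attribute-label") %>% html_text()       # length 3
page %>% html_nodes(".mon-stat-block__attribute-data-value") %>% html_text()  # length 3
page %>% html_nodes(".mon-stat-block__attribute-data-extra") %>% html_text()  # length 2: "Speed" has no extra span
```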
For attribute-label
, I get a length-3 vector. For attribute-data-value
, I get a length-3 vector. For attribute-data-extra
, I only get a length-2 vector! Through visual inspection, I know that the third line “Speed” is missing the span with the data-extra
class, but I don’t want to rely on visual inspection for these hundreds of monsters! Printing these results warned me directly that this could happen! Awareness of these missing items motivates the third principle.
Principle 3: You will need loops
For the Armor Class, Hit Points, and Speed attributes, I wanted to end up with a data frame that looks like this:
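Something with one row per attribute, roughly this shape (the values are placeholders):

```r
#         label    value               extra
# 1 Armor Class    <AC>   <armor type or NA>
# 2  Hit Points    <HP>    <hit dice or NA>
# 3       Speed  <speed>                 NA
```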
This data frame has properly encoded missingness. To do this, I needed to use a loop as shown below.
The code below makes use of two helper functions that I wrote to cut down on code repetition:
- select_text to cut down on the repetitive page %>% html_nodes %>% html_text pattern
- replace_if_empty to replace empty text with NA
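Assumed versions of those two helpers, matching the descriptions above (the real definitions are in the full function on GitHub):

```r
library(rvest)

# Assumed helper: select nodes by CSS selector and pull out their text
select_text <- function(node, css) {
    node %>% html_nodes(css) %>% html_text()
}

# Assumed helper: turn an empty (length-0) result into an explicit NA
replace_if_empty <- function(x) {
    if (length(x) == 0) NA_character_ else x
}
```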
I first select the three lines corresponding to these three attributes, as sketched below.
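A sketch of that selection (page is the document read in Step 2c):

```r
attribute_lines <- page %>% html_nodes(".mon-stat-block__attribute")
```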
This creates a list of three nodes (pieces of the webpage/branches of the HTML tree) corresponding to the three lines of data.
We can chain together a series of calls to html_nodes
. I do this in the subsequent lapply
statement. I know that each of these nodes contains up to three further subdivisions (label, value, and extra information). In this way I can make sure that these three pieces of information are aligned between the three lines of data.
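Putting it together, a sketch of the loop over those three nodes, using the assumed helpers from above:

```r
# Build one row per attribute line so that label, value, and extra stay aligned,
# with NA filled in wherever a span is missing
attribute_rows <- lapply(attribute_lines, function(line) {
    data.frame(
        label = replace_if_empty(select_text(line, ".mon-stat-block__attribute-label")),
        value = replace_if_empty(select_text(line, ".mon-stat-block__attribute-data-value")),
        extra = replace_if_empty(select_text(line, ".mon-stat-block__attribute-data-extra")),
        stringsAsFactors = FALSE
    )
})
ac_hp_speed <- do.call(rbind, attribute_rows)
```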
Nearly all of the code in the scrape_monster_page
function repeats these three principles, and I’ve found that I routinely use similar ideas in other scraping I’ve done with rvest
.
Summary
This is a long post, but a few short take-home messages suffice to wrap ideas together:
- rvest is remarkably effective at scraping what you need with fairly concise code. Following the three principles above has helped me a lot when I’ve used this package.
- rvest can’t do it all. For scraping tasks where you wish that you could automate clicking and typing in the browser (e.g. authentication settings), RSelenium is the package for you. In particular, the rsDriver function works right out of the box (as far as I can tell) and is great for people like me who are loath to install external dependencies.
Happy scraping!
