Saturday, September 11, 2021

Fixing a formula in multiple Google Sheets

Recently we realized that there was a calculation error in a Google Sheet template, and unfortunately we had already made a copy of it for each teacher.

Rather than opening each Sheet and manually making the correction, I wrote a Google Apps Script to automatically make the replacements. Here is the code in case it is helpful for anyone else:

function loopThem() {
  var folderId = 'aaaaaaaaa'; // the folder containing all of the sheets to be corrected

  var folder = DriveApp.getFolderById(folderId);
  var files = folder.getFiles();
  while (files.hasNext()) {
    var file = files.next();
    var id = file.getId();
    fixIt(id);
  }
}

function fixIt(id) {
  var rangeToFix = 'P1:P50';  // the range in each sheet that contains the error
  var replaceThis = '$A';
  var replaceWithThis = '$C';

  var ss = SpreadsheetApp.openById(id);
  var range = ss.getRange(rangeToFix);
  var formulas = range.getFormulas();
  for (var i=0; i<formulas.length; i++) {
    var formula = formulas[i][0];
    var newFormula = formula.replaceAll(replaceThis, replaceWithThis);
    formulas[i][0] = newFormula;
  }
  range.setFormulas(formulas); // write the corrected formulas back to the sheet
}

Sunday, August 29, 2021

Small Coding Challenge 1

This year I'm going to design some coding challenges, and hopefully release them at least once a month.

The first coding challenge is up at

Saturday, March 13, 2021

Quick Exploratory Data Analysis with SweetViz

I recently came across a Python library that is useful for quick exploration of a dataset (or two) in a Jupyter notebook, SweetViz.


!pip install sweetviz --user
import sweetviz
import pandas as pd
df = pd.read_csv('')
analysis = sweetviz.analyze(df)
analysis.show_notebook() # or export with show_html()

Here is a brief notebook demonstration.

Saturday, January 9, 2021

Getting Student Submission Data from Brightspace with Python and Selenium

As a teacher with students in multiple Brightspace courses, I was looking for a dashboard to show which students have unsubmitted assignments. While Brightspace does have an API available, I decided that it wasn't going to work for a few reasons. There are also commercial (non-free) plugins that can do most of what I was looking for, but this was a good opportunity to explore scraping of content from dynamic web pages with Python.

You may be familiar with the Python Requests and Beautiful Soup libraries, which are great, but since Brightspace requires a Microsoft login I needed to go with Selenium. Selenium is designed for automating web browser interactions, which means we can use it to log in to a site and scrape pages.

While Selenium can be installed and run locally, it also works in a Colab notebook:

!apt update
!apt install chromium-chromedriver
!pip install selenium
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
browser = webdriver.Chrome(options=options)

Once that is set up, we need to log in to our Brightspace server:

email = ''
base_url = ''
import getpass # so you don't show your password in the sourcecode
password = getpass.getpass()
browser.get(base_url)
# wait for the Microsoft login form, enter the email address, and click "Next"
email_field = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, 'i0116')))
email_field.send_keys(email)
browser.find_element_by_id('idSIButton9').click()
# then enter the password and click "Sign in"
password_field = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, 'i0118')))
password_field.send_keys(password)
browser.find_element_by_id('idSIButton9').click()

From there it's a matter of scraping the course progress pages as we loop through the course IDs and student IDs. I may update this post later with an automated way to scrape these IDs, but for now we need to look them up on Brightspace and code them in:

courses = {'LA':11111, 'Math':11111, 'Science':11111, 'Social':11111}
students = {'First Student':111111111, 'Second Student':111111111}
import pandas as pd
df = pd.DataFrame(index=students.keys())
for course in courses:
    course_id = courses[course]
    submissions = []
    for student in students:
        student_id = students[student]
        url = base_url+'/d2l/le/classlist/userprogress/'+str(student_id)+'/'+str(course_id)+'/Summary'
        browser.get(url)
        WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "span[class^='d2l-textblock']")))
        elements = browser.find_elements_by_css_selector("span[class^='d2l-textblock']")
        submitted = elements[8].text # the ninth matching element contains the submissions count
        submissions.append(submitted)
    df[course] = submissions
df.to_csv('submissions.csv')

This will give us a Pandas DataFrame (and a CSV file) with information about how many assignments each student has submitted in each course. We can manipulate this a bit to create some visualizations, but perhaps that's a post for later. For now, here's part of a chart that we could generate:

Let me know if you try this or if you come across anything that should be corrected.

Saturday, September 19, 2020

Bookmarklet for Generating a Link to Copy a Google Doc, Sheet, Slides, or Drawing

If you want students to create a copy of a Google Doc, Sheet, Slides, or Drawing, you can replace the /edit at the end of the link with /copy.

To make that easier, I've created a bookmarklet. To set it up for yourself, drag the following link to your bookmark bar or menu:


Then when you have a Doc, Sheet, Slides, or Drawing open (and you've set the sharing permissions) you can click the bookmarklet and it will generate a link that you can copy and send to your students. When they click the link it will prompt them to make a copy.
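The URL rewrite the bookmarklet performs can be sketched in plain JavaScript (the function name here is just for illustration):

```javascript
// Rewrite a Google Docs/Sheets/Slides/Drawing link so it prompts for a copy
function makeCopyLink(url) {
  // replace the trailing /edit (and anything after it) with /copy
  return url.replace(/\/edit.*$/, '/copy');
}

// A bookmarklet version might look something like:
// javascript:(function(){prompt('Copy link:',location.href.replace(/\/edit.*$/,'/copy'));})();
```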

Wednesday, July 8, 2020

Creating, Editing, and Sharing Jupyter Notebooks with GitHub and Anaconda

For the past year, I have been working on the Callysto project, which involves fostering computational thinking and data science within the regular curriculum for grades 5 to 12. One of the services provided by Callysto is the Callysto Hub, a free online environment hosting an open-source instance of Jupyter Hub that allows you to run, edit, and create Jupyter notebooks such as these.

While some commercial platforms have good sharing options, there isn't currently a great open-source solution for sharing Jupyter notebooks with collaborators and students. I can show you my current workflow, though, for creating, editing, and sharing Jupyter notebooks.

There are two programs I have installed: Anaconda for running Jupyter locally, and GitHub Desktop for synchronizing with GitHub.

We'll start with GitHub, the site many software developers use to share code and collaborate. You'll need to create a free account.

Next we'll use the GitHub Desktop application. Download, install, and run it, then log in with the GitHub account you just created. Now when you visit a GitHub repository, such as Callysto Curriculum Notebooks, you can click the green Code button near the top right of the page and then Open with GitHub Desktop. It should pop you back into the GitHub Desktop program, and you can click the Clone button to download the code from that repository.

Then install Anaconda. Run Anaconda Navigator and click the Launch button under Jupyter Notebook (or JupyterLab if you prefer). That will start up the Jupyter server on your computer, and launch your web browser to a page like http://localhost:8888/tree. From there you can browse to the folder where GitHub Desktop downloaded the repository for you in the previous paragraph. You can run and edit any Jupyter notebooks, as well as create new ones.

If you encounter errors running notebooks, you likely need to install some Python libraries with commands like !pip install pandas. Post in the comments (or reach out on Twitter) if you need help with that.

At this point you're basically set up, but it gets a little more complicated if you want to share notebooks you create. You'll want to create a new GitHub repository, add your notebook file to that repository folder on your computer, and use GitHub Desktop to commit and push it to GitHub.
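For reference, the commit step that GitHub Desktop performs can be sketched on the command line; the folder and file names below are hypothetical, and the push is commented out since it needs a real remote:

```shell
# Stand-in for your local repository folder and notebook file
cd "$(mktemp -d)"
git init -q my-notebooks && cd my-notebooks
git config user.email "you@example.com"
git config user.name "Your Name"
echo '{"cells": []}' > my-notebook.ipynb   # your notebook file would go here
git add my-notebook.ipynb
git commit -q -m "Add my notebook"
# git push origin main   # GitHub Desktop's "Push origin" button does this part
```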

Once the notebook file is on GitHub, you can either have your students or colleagues go through this setup process to download your new repository, or you can send them a Callysto nbgitpuller link.

Hopefully that's not too complicated, but feel free to reach out if you have questions or would like help with this.

Tuesday, June 9, 2020

Streaming OBS Recordings to YouTube

Currently OBS Studio can only stream to a single service, such as Facebook or YouTube, but we are going to set up a way to stream to another service at the same time. Assuming that you are already comfortable streaming to Facebook, YouTube will be our second service.
You'll need to install FFmpeg and Python 3.
The following Python code can be saved as a script and run whenever you want to start the secondary stream.
Replace xxxx-xxxx-xxxx-xxxx with your stream key from YouTube Studio, and /home/username/Videos with the path to the folder where OBS records your videos. You may also need to include the ffmpeg_path.
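A minimal sketch of such a script might look like this; the ffmpeg arguments and the RTMP ingest URL are my assumptions, so adjust them to match your setup:

```python
import glob
import os
import subprocess

stream_key = 'xxxx-xxxx-xxxx-xxxx'           # your stream key from YouTube Studio
recordings_folder = '/home/username/Videos'  # the folder where OBS records your videos
ffmpeg_path = 'ffmpeg'                       # full path to ffmpeg if it isn't on your PATH

def newest_recording(folder):
    """Return the most recently modified file in the OBS recordings folder."""
    files = glob.glob(os.path.join(folder, '*'))
    return max(files, key=os.path.getmtime)

def stream_to_youtube(path, key):
    """Re-encode the recording and send it to YouTube's RTMP ingest."""
    subprocess.run([ffmpeg_path, '-re', '-i', path,
                    '-c:v', 'libx264', '-preset', 'veryfast',
                    '-c:a', 'aac', '-f', 'flv',
                    'rtmp://a.rtmp.youtube.com/live2/' + key])

if __name__ == '__main__' and os.path.isdir(recordings_folder):
    # only start if the recordings folder exists
    stream_to_youtube(newest_recording(recordings_folder), stream_key)
```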
This code finds the most recent file in your OBS recordings folder and streams that file to YouTube. You may want to enable the setting "Automatically record when streaming" in OBS, otherwise you'll need to click "Start Streaming" and "Start Recording" each time.
Start recording in OBS then run the code, and it should start streaming the recording to YouTube without interfering with your primary stream. You will, of course, need enough upload bandwidth for both streams.
Potentially you could have another copy of this Python script running to stream the recording to a third service, such as Twitch.

Hopefully that helps get you started with secondary streams from OBS Studio. Let me know if any of this doesn't work for you.