Monday, November 27, 2023

Countdown Timer in Google Sheets

If you've ever wanted to display a Google Sheet on screen together with a countdown timer, for example if you have a pivot table leaderboard for a hackathon, you can use Google Apps Script.

From your Google Sheet, click on the Extensions menu and choose Apps Script to open the script editor.

Paste the following code into the editor:


// Runs automatically when the spreadsheet is opened; adds a custom Timer menu
function onOpen(e) {
  var ui = SpreadsheetApp.getUi();
  ui.createMenu('Timer')
    .addItem('Countdown Timer in Sidebar', 'showSidebar')
    .addToUi();
}
 
// Loads the countdown.html file and displays it in the sidebar
function showSidebar() {
  var html = HtmlService.createHtmlOutputFromFile('countdown');
  html.setTitle('Countdown Timer');
  SpreadsheetApp.getUi().showSidebar(html);
}


Then click the + button near the top left to add a new file, and choose HTML. Type countdown as the file name, then paste the following code into the editor:

<!DOCTYPE html>
<html>
  <head>
    <base target="_top">
  </head>
  <body>
    <div id="buttons">
      <button id="thirtyMinutes" onclick="thirtyMinutes()">30 minutes</button><br>
      <button id="fifteenMinutes" onclick="fifteenMinutes()">15 minutes</button><br>
      <button id="tenMinutes" onclick="tenMinutes()">10 minutes</button><br>
      <button id="twoMinutes" onclick="twoMinutes()">2 minutes</button><br>
      <button id="oneMinute" onclick="oneMinute()">1 minute</button><br>
    </div>
    <div id="countdown" style="text-align:center;font-size:50px"></div>
    <script>
      function thirtyMinutes() {startCountdown(30);}
      function fifteenMinutes() {startCountdown(15);}
      function tenMinutes() {startCountdown(10);}
      function twoMinutes() {startCountdown(2);}
      function oneMinute() {startCountdown(1);}
 
      function startCountdown(minutes) {
        document.getElementById("buttons").style.display = "none"; // hide the buttons div
        let count = parseInt(minutes) * 60;
        if (!(count > 0)) { // fall back to one minute if the value is missing or invalid
          count = 60;
        }
        let displayTime = "0:00";
        const timer = setInterval(function() {
          count--;
          displayTime = Math.floor(count / 60) + ":" + ("0" + count % 60).slice(-2);
          document.getElementById("countdown").innerHTML = displayTime;
          if (count < 60) { // turn the display red for the final minute
            document.getElementById("countdown").style.color = "red";
          }
          if (count <= 0) {
            clearInterval(timer);
            document.getElementById("countdown").innerHTML = "0";
            //alert("Time's up!");
          }
        }, 1000);
      }
    </script>
  </body>
</html>
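The m:ss display in the script above comes from a single line of string arithmetic. Pulled out on its own (the helper name formatTime is mine, just for illustration), it works like this:

```javascript
// Convert a count of seconds into an m:ss display string.
// ("0" + seconds).slice(-2) left-pads the seconds to two digits,
// so 5 seconds becomes "05" rather than "5".
function formatTime(count) {
  return Math.floor(count / 60) + ":" + ("0" + count % 60).slice(-2);
}

console.log(formatTime(90));   // → "1:30"
console.log(formatTime(600));  // → "10:00"
```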

You can also edit the code to add or remove buttons, and change the style of the buttons and text.
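Each button follows the same pattern: a button element in the buttons div that calls a small named handler, which in turn passes a number of minutes to startCountdown. For example, to add a five-minute button (this button and handler are not in the file above, just a sketch of the pattern), you would add one line inside the buttons div:

```html
<button id="fiveMinutes" onclick="fiveMinutes()">5 minutes</button><br>
```

and a matching one-line handler inside the existing script block:

```html
<script>
  function fiveMinutes() {startCountdown(5);}
</script>
```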

Then everything should be set up. The next time you load that Google Sheet there will be a custom menu called Timer that will show a sidebar with your new timer in it.

Tuesday, October 3, 2023

I've changed my mind about AI governance

Recently, the Canadian government launched a voluntary code of conduct related to generative AI. Many organizations have agreed to this code of conduct, although support has not been universal.

This was the topic of our last biweekly "AI Issues and Ethics" discussion: what role should there be for regulation of artificial intelligence? It was a great conversation with plenty of dissent, as always, and I found my opinion changing as a result of the wisdom of the participants.

I used to be skeptical about the idea of government involvement in regulating artificial intelligence, assuming that policymakers lacked the expertise and agility to make informed decisions. This is a rapidly evolving and technically complicated field and, as others have argued, regulation could stifle innovation. I often argued for fewer regulations and restrictions on technologies, or at least suggested that existing rules and laws related to privacy or safety were sufficient to address these new technologies.

However, my views have evolved, and I now firmly believe that some level of governance is essential to ensure the responsible and ethical development and deployment of AI technologies.

Lawmakers don't need to understand the complexities of internal combustion engines, or electric vehicles for that matter, in order to set speed limits on our roads. They rely on experts, data, and an understanding of societal needs to establish safe parameters. Speed limits are not designed to stifle our driving.

And to use a similar analogy, there was a time in Alberta when seatbelt use in cars was not compulsory. I'll admit that I didn't always wear a seatbelt, even though I knew it was safer to do so. States with fewer requirements for motorcycle helmets have increased rates of injuries and deaths. Humans don't always make the best decisions about our own safety.

That's not to say that the government will always make the best decisions either. There will be instances where regulations fall short or overreach, but we shouldn't abandon the idea of AI governance entirely. Through an iterative process that is open to feedback and fosters collaboration among stakeholders, experts, and policymakers, legislation can be amended and refined. Regulations should be adaptive and responsive to evolving needs and norms of society.

I would argue that we don't require an absence of regulations in order to innovate, nor should we rely solely on the judgement of technology organizations to work for the good of humankind. A voluntary code of conduct is a good start, and by fostering collaboration, making informed decisions, and committing to ethical principles, I hope that we can create an ethical and safe suite of artificial intelligence tools for everyone.

And let me know if you'd be interested in participating in some of these "AI Issues and Ethics" discussions.

Tuesday, February 21, 2023

Ethical Questions Related to Discriminative and Generative Artificial Intelligence

It suddenly seems that everyone is talking about artificial intelligence: computers doing things that look like they require intelligence. Tools that generate images or text have gotten fairly good and easy to use.

These generative AI tools are trained to create new content based on their input data sets. Discriminative AI tools, on the other hand, are trained to differentiate among different classes of input and predict which class a new observation should belong to.

My impression is that people are more comfortable with discriminative AI, with applications in autonomous vehicles and facial recognition, than with generative AI that seems to intrude on our uniquely human creativity. Of course there is a spectrum of opinions from excited techno-optimism to worries that this will bring about the end of civilization.

Lately I've been a part of many discussions about ethical questions surrounding artificial intelligence, particularly in education. For many of them there aren't, or aren't yet, good answers, but they certainly make for interesting debate. These are in no particular order, and feel free to use them in your own conversations.

  • Is it plagiarism or cheating to use generative AI tools?
  • Can we accurately detect if students are using these tools? Is this something we should worry about?
  • What are we training students for? Is school about job training?
  • Why do we make students write?
  • What are uniquely human skills and competencies we need to foster?
  • If we block AI tools on our educational networks and devices, does the problem go away?
  • Are we comfortable with doctors or drivers using AI?
  • Can AI take over some of the mundane parts of our jobs (or lives)?
  • How would we feel about a facial recognition attendance system?
  • Is it ethical to have AI help us draft emails, or blog posts?
  • What does education look like if teachers use AI to help generate questions and students use it to help generate answers?
  • Are we okay with corporations using student questions and responses to help train their models?
  • Will bias in the training data lead to increased societal polarization?
  • Are there analogies to historical inventions that we can learn from?
  • Will AI disintermediate students and learning?
  • Does over-reliance on AI lead to skill loss?
  • Will we lose jobs? Will this change jobs?
  • Does AI exacerbate inequality?
  • Will AI lead to homogenization of culture?
  • Do we like spell check, autocorrect, autofill, predictive text, and autoreplies?
  • Will generative AI answers spell the end of internet search?
  • If we enjoy doing things, should we use AI or machines to do those things? Should we try to prevent AI or machines from doing those things?
  • Is AI worth the environmental impact?
  • What might AI tools look like in six months? How about in five years?

Hopefully these questions can help spark some thoughtful and spirited discourse.