Reuben Brasher

2023 Challenge

Generative AI has inspired an explosion of interest. ML had long been a field requiring advanced knowledge of mathematics and computer science, but tools such as GPT, Midjourney, and Stable Diffusion have opened up AI by providing intuitive natural language interfaces. The power of these models hides a danger. While they can generate text and images that …

2022 Challenge

The 2022 challenge focuses on air pollution measurement and prediction. Wildfires in the US West raised clouds of smoke that lingered for months. This dramatic incident only highlighted existing problems caused by decades of industrial and other pollution. Air is life, and polluted air harms us all, especially those of us with respiratory problems. Polluted …

Reading Club Session 23 February 2021

Everyone is talking about GPT-3, so we talked about GPT-3. This seems to be the paper that started it all, “Language Models are Few-Shot Learners.” The paper is long at 75 pages, but there are 31 authors, so that works out to ~2.42 pages per author. That should make it easy. Probably we will only have time …

Brahmanandam

There are 220 million speakers of Dravidian languages in the world. Most of these are speakers of Tamil or Telugu who live in South and South-Central India. That population is comparable to the number of native English speakers worldwide, around 360 million. As such, it seems reasonable to expect that speakers …

Reading Club session for 9 February 2021

We began a new thread on ethics in AI, focusing on papers by Buolamwini and Gebru. AI is now routinely used to make decisions that were once made by people, in areas ranging from hiring to policing to social matchmaking. It seems fair to scrutinize these applications for ethics and fairness. Particularly …

Reading Club session 26 January 2021

Google Research recently open-sourced TaPas, a system for running natural language queries against tabular data. The model is fully differentiable and based on BERT. We read the paper, “TAPAS: Weakly Supervised Table Parsing via Pre-training.” We ran the code from the Google Research repo on a virtual machine and saw both the power of the …
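
The session used the original Google Research code, but for a quick local taste of the same model, here is a hedged sketch using the Hugging Face port instead; the checkpoint name google/tapas-base-finetuned-wtq and the table contents are illustrative assumptions, not what we ran in the session.

import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Illustrative checkpoint: TAPAS fine-tuned for QA on WikiTableQuestions.
model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every cell to be a string; these values are made up for illustration.
table = pd.DataFrame({"Language": ["Tamil", "Telugu"],
                      "Speakers (millions)": ["75", "83"]})
queries = ["How many Telugu speakers are there?"]

inputs = tokenizer(table=table, queries=queries,
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Decode predicted cell coordinates and an aggregation operator (NONE/SUM/COUNT/AVERAGE).
coordinates, aggregations = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach())
for cells in coordinates:
    print([table.iat[row, col] for row, col in cells])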

Reading Club session 14 January 2021

For this first session of 2021, we did almost a pure tutorial session. We covered BERT again, but this time went through the experience using TensorFlow 2.3 and TensorFlow Hub on a virtual machine. Along with the tutorial, we created a small repo with requirements and some instructions.
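
As a rough sketch of what the tutorial walks through, the following loads BERT from TensorFlow Hub; it assumes tensorflow (2.3+), tensorflow_hub, and tensorflow_text are installed, and the tfhub.dev handles below are the standard public BERT checkpoints rather than necessarily the exact versions pinned in our repo.

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the ops used by the BERT preprocessing model

# Preprocessing model: turns raw strings into BERT's input format.
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
# The BERT encoder itself (12 layers, hidden size 768).
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=False)

sentences = tf.constant(["The reading club met on 14 January."])
outputs = encoder(preprocess(sentences))
print(outputs["pooled_output"].shape)    # (1, 768): one embedding per sentence
print(outputs["sequence_output"].shape)  # (1, 128, 768): one embedding per token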

Matsunosuke Onoe

Science is about asking questions and looking for answers. It is not a bad thing when you get exactly the answer you were expecting to exactly the question you were asking. On the other hand, it seems even better to get a completely surprising answer to a question that is almost, but not quite, the exact question …

autotools

We have prepared a tutorial on setting up a C++ autotools project. Here is the autotools skeleton project with the resources to get started.
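
For orientation, a minimal skeleton might look something like the following two files; this is a generic sketch of autotools conventions, not necessarily the layout of our skeleton repo, and the project and file names are placeholders.

# configure.ac: processed by autoconf to generate ./configure
AC_INIT([skeleton], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am: processed by automake to generate Makefile.in
bin_PROGRAMS = skeleton
skeleton_SOURCES = main.cpp

From a checkout with these two files plus a main.cpp, the usual bootstrap is autoreconf --install, then ./configure and make.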

Running BERT in EC2 GPU instance

On 14 January, we will begin with a tutorial on ML in a VM with a GPU. I am pretty happy with the experience that I have had using TensorFlow 2.3 and TensorFlow Hub on a virtual machine. We have talked about the BERT paper and other language models, but I want to do a refresher with …
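
Before the session, it is worth one sanity check that TensorFlow actually sees the instance's GPU; a minimal sketch, assuming TensorFlow 2.3 with GPU support is installed on the VM:

import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means CPU-only.
print(tf.config.list_physical_devices("GPU"))

# Log where ops are placed; the matmul below should land on /GPU:0.
tf.debugging.set_log_device_placement(True)
x = tf.random.normal([1024, 1024])
print(tf.matmul(x, x).shape)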