This image describes the basic characteristics of Artificial Intelligence
- Subject: Educational Technology
- Material Type: Lesson, Teaching/Learning Strategy
- Author: Thiyagu K
- Date Added: 08/25/2024
A guidebook for the use of AI in the Computer Science classroom
Data Science and AI in Psychology is an interactive eTextbook that provides an introduction to data science, big data, and machine learning in psychology. It covers current trends in data science and big data in the field of psychology (Chapter 1), applications of AI in the field of psychology (Chapter 2), the psychology of data visualization (Chapter 3), data ethics (Chapter 4), an introduction to how machines learn (Chapter 5), a hands-on guide for reading and critiquing machine learning research articles that are relevant to psychological topics (Chapters 6 and 7), and an introduction to coding in Python (Chapter 8). This eTextbook also includes an introduction to ChatGPT and tips for using ChatGPT to assist with writing and coding without plagiarizing (Chapters 6 and 8). This is an interactive resource that provides students with opportunities to engage with their peers and develop critical thinking skills through problem-based, active learning.
Digital Scholarship and Data Science Essentials for Library Professionals is an open and collaboratively curated training reference resource. It aims to make it easier for LIBER library professionals to gain a concise overview of the new technologies that underpin digital scholarship and data science practice in research libraries today, and to find trusted recommendations for training materials to start their professional learning journey.
The onset of new, more accessible artificial intelligence (AI) technologies marks a significant turning point for libraries, ushering in a period rich with both unparalleled opportunities and complex challenges. In this era of swift technological transformation, libraries stand at a critical intersection. To navigate this transition effectively, two quick polls were conducted among members of the Association of Research Libraries (ARL).
The first poll, which ran in April 2023, provided an initial snapshot of the AI landscape in libraries. The second poll, carried out in December 2023, continued this inquiry, offering a comparative perspective on the evolving dynamics of AI use and possibilities in library services. This study analyzes and juxtaposes the outcomes of these two surveys to better understand how library leaders are managing the complexities of integrating AI into their operations and services. It specifically seeks to capture changing perspectives on the potential impact of AI, assess the extent of AI exploration and implementation within libraries, and identify AI applications relevant to the current library environment.
The insights derived from this comparative analysis shed light on the role of libraries in an increasingly AI-driven era, providing strategic directions and highlighting practices in research libraries.
A discussion with Taylor & Francis VP External Affairs and Policy, Priya Madina, on AI and academic publishing. This session provided an overview of AI and opportunities and challenges of utilizing AI, illustrated by academic publisher use cases of AI. The presentation was followed by a question and answer period. The speaker was introduced by Erin Fields, Open Education and Scholarly Communications Librarian, UBC.
This guide focuses on inference, not training, and as such is only a small part of the entire machine-learning process. In our case, the model's weights have been pre-trained, and we use the inference process to generate output. This runs directly in your browser.
The model showcased here is part of the GPT (generative pre-trained transformer) family, which can be described as a "context-based token predictor". OpenAI introduced this family in 2018, with notable members such as GPT-2, GPT-3, and GPT-3.5 Turbo, the latter being the foundation of the widely used ChatGPT. GPT-4 may also belong to this family, but its specific architectural details have not been made public.
This guide was inspired by the minGPT GitHub project, a minimal GPT implementation in PyTorch created by Andrej Karpathy. His YouTube series Neural Networks: Zero to Hero and the minGPT project have been invaluable resources in the creation of this guide. The toy model featured here is based on one found within the minGPT project.
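The "context-based token predictor" idea described above can be sketched in a few lines of Python. The bigram table and token names below are hypothetical stand-ins for real pre-trained weights; the point is only to show the autoregressive inference loop, in which each predicted token is appended to the context and fed back in, as the guide describes:

```python
# Toy "context-based token predictor": inference with fixed, pre-trained
# weights. The bigram table is a hypothetical stand-in for model weights.
BIGRAM = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "dog": {"ran": 1.0},
}

def predict_next(tokens):
    """Greedy decoding: return the highest-probability next token."""
    candidates = BIGRAM.get(tokens[-1], {})
    if not candidates:
        return None  # nothing to predict; stop generating
    return max(candidates, key=candidates.get)

def generate(prompt, max_new_tokens=5):
    """Autoregressive loop: append each new token to the context
    and feed the whole context back in, as transformer inference does."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # → the cat sat down
```

A real GPT replaces the lookup table with a transformer that scores every vocabulary token given the full context, and typically samples from that distribution rather than always taking the maximum, but the surrounding loop is the same.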
A list of Microsoft Generative AI Resources. Not an exhaustive list but it'll get you started with generative AI on the Microsoft Azure Platform.
Host Brenna Clarke Gray (Thompson Rivers University) and guest Autumm Caines (University of Michigan - Dearborn) explore the pedagogical implications of generative AI in this conversation in honour of Open Education Week. They ask such questions as:
- What happens when we leap into new technologies without first pausing to imagine harms, such as surveillance, bias, and discrimination?
- Can recentering the core values of the open education movement—equity, inclusion, transparency, and social justice—in our pedagogy help us move forward in a good way?
- How do we introduce these considerations to our students and empower them to make informed decisions with new technologies?
Generative AI has forced universities to contend with complex ethical and social questions—namely because writing is so deeply entrenched as an institutional gatekeeping mechanism. For many students, particularly those from marginalized backgrounds or for whom English is not a first language, the pressure to translate ideas into “proper” English contributes to attrition rates and exacerbates feelings of inadequacy, alienation, and exclusion from many academic communities.
From an equity and inclusion perspective, AI has the potential to disrupt institutional barriers by offering accessible tools that level the grammatical playing field. By functioning as virtual tutors or co-writers, AI systems can assist students in producing more polished and coherent prose, thus challenging the traditional hierarchies that privilege certain grammatical and stylistic norms. Instead of attempting to ban these tools (which is, to say the least, impractical), I side with a growing number of technology scholars who argue that we should focus on teaching students how to use generative AI responsibly and effectively. However, I do so with the caveat that teaching responsible AI use means critically engaging the complex and often messy processes that make AI what it is.
In this presentation, I draw from Indigenous theorists and authors to situate generative AI and large language models (LLMs) within a long colonial history of extraction. Just as colonial states declared Indigenous lands terra nullius, allowing settlers to exploit resources through mining, clear-cutting, and other forms of extraction, generative AI similarly depends on the unchecked extraction of data, including Indigenous knowledge and cultural resources, often without consent. The late Gregory Younging referred to this process as gnaritas nullius, the colonial rendering of Indigenous knowledge into public property. The unchecked extraction of writing, including, but not limited to, Indigenous knowledge, represents a new frontier for colonial capitalism, where cultural and intellectual property are commodified by those with the most access and power. As Nando de Freitas notes, the future of AI development depends on scale: those who control the largest datasets will have the greatest advantage and profit the most from AI.
The numerous high-profile copyright cases against companies like OpenAI and Meta show that the question of how this data is collected is treated as a secondary issue. This unbridled, dehumanizing race for data mirrors the extractive practices that have driven capitalist-colonial expansion for centuries. Building on these ideas, I mobilize the insights of Indigenous authors like Younging, Scott Lyons, and Cherie Dimaline to highlight strategies for resisting colonial extraction and challenging capitalist systems through rhetorical sovereignty and the concept of incommensurability. The goal is not to discourage the use of generative AI but, in the Faustian sense, to reveal the costs of embracing it, especially when it is employed to subvert oppressive institutional structures. The speaker was introduced by Erin Fields, Open Education and Scholarly Communications Librarian, UBC.