Evaluative Information Literacy Rubric for AI Tools
dc.contributor.author | Caico, Marissa | |
dc.contributor.author | Harris, Laura | |
dc.contributor.author | O'Shea, Sarah | |
dc.contributor.author | Mitchell, Emily | |
dc.date.accessioned | 2024-06-27T16:14:57Z | |
dc.date.available | 2024-06-27T16:14:57Z | |
dc.date.issued | 2024-06 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12648/14992 | |
dc.description.abstract | As creators, consumers, and curators of information, students and scholars need to be able to assess AI research tools. The makers of these tools claim they can do everything from locating sources, to reading and explaining them, to writing new papers that synthesize those sources. These tools promise great things, but it is not always obvious how they work, what data they use, and what data they gather from and about users. If we accept that students (and other writers) will use these tools, how can we help them look behind the curtain? The Evaluative Information Literacy Rubric for AI Tools breaks down larger concepts from ACRL's Framework for Information Literacy for Higher Education and poses questions users need to consider when assessing AI research tools. | en_US |
dc.language.iso | en_US | en_US |
dc.rights | Attribution-NonCommercial 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | * |
dc.title | Evaluative Information Literacy Rubric for AI Tools | en_US |
dc.type | Learning Object | en_US |
dc.description.version | NA | en_US |
refterms.dateFOA | 2024-06-27T16:14:59Z | |
dc.description.institution | SUNY Oswego | en_US |
dc.description.department | Penfield Library | en_US |
dc.description.degreelevel | N/A | en_US |