(12 Feb 2024) Moving from reaction to action, higher education stakeholders are currently exploring the opportunities AI affords for teaching, learning, and work while remaining cautious about the vast array of risks AI-powered technologies pose. To aid in these efforts, we present this inaugural EDUCAUSE AI Landscape Study, in which we summarize the higher education community’s current sentiments and experiences related to strategic planning and readiness, policies and procedures, workforce, and the future of AI in higher education.
Key Findings include:
Strategic Planning and Readiness:
- Most institutions are working on AI-related strategy. Only 11% of respondents said that nobody at their institution is working on AI-related strategy.
- Institutions are concerned about falling behind. Most respondents said that the rise in students’ use of AI in their courses and the risks of inappropriate AI use (73% and 68%, respectively) were primary motivators for AI-related strategic planning.
- The goals of AI-related strategic planning are primarily related to supporting students. The three highest-ranking goals of AI-related strategic planning are preparing students for the future workforce, exploring new methods of teaching and learning, and improving higher education for the greater good (selected by 64%, 63%, and 41% of respondents, respectively). Further, most respondents (76%) said that their AI-related strategy focuses somewhat or to a great extent on enhancing educational experiences and student services.
- Institutions are primarily operationalizing these goals by providing training for faculty, staff, and students (56%, 49%, and 39%, respectively).
Strategic Leaders and Partners:
- Leaders are cautiously optimistic about AI. Most executive respondents reported that leaders at their institution are either approaching AI with a mix of caution and enthusiasm or feel optimistic about AI (52% and 29%, respectively).
- Stakeholders across institutions lack awareness of one another’s AI-related sentiments, strategies, and policies, likely a result of institutional silos.
- More than half of respondents (56%) indicated that they have personally been given responsibilities related to AI strategy.
- Most respondents indicated that all functional areas are at least somewhat responsible for AI-related strategy.
- More than half of respondents (57%) indicated that their institution is not working with third-party partners to develop AI strategy, or that they don’t know whether it is.
Policies and Procedures:
- AI is making the biggest impact on policies for teaching and learning, technology, and cybersecurity and data privacy (reported by 95%, 79%, and 72% of respondents, respectively, as “already impacted” or “soon to be impacted”).
- Academic integrity is still top of mind. A majority of respondents (78%) indicated that AI has impacted academic integrity.
- Data governance practices are shifting in response to AI. Nearly half of executive leaders (47%) said that their institution is preparing data to be AI-ready.
- Data privacy and security are central concerns. Privacy and security professionals are most concerned with data security (82%), compliance with federal regulations (74%), ethical data governance (56%), compliance with local regulations (56%), and the impacts of biases in data (52%).
- Only 18% of respondents said their AI-related policies are somewhat or extremely restrictive—for example, banning student or faculty use.
Workforce and the Future of AI in Higher Education:
- Although many faculty and staff are being tasked with AI-related job duties, few job roles have been formally created or restructured to accommodate such duties. More than half of respondents (56%) reported that they have been personally given AI-related responsibilities, but few respondents were aware of new jobs being created or existing jobs being formally modified (11% and 14%, respectively).
- Stakeholders feel that there are some appropriate uses for AI-powered technologies in higher education: personalized student support; acting as a teaching, research, or administrative assistant; conducting learning analytics; and supporting digital literacy training.
- Respondents also identified inappropriate uses, such as using outputs without human oversight, failing to disclose or cite AI as a resource, and failing to properly protect data security and individuals’ privacy.
The full report is available on the EDUCAUSE website.