1 of 27

GenAI in Flux - Framing The Conversation

Policy Shifts, Open-Source Disruption, Adoption Surge, and Ethical Implications for Healthcare, Education & USF

Freddie Seba, MBA | MA | EdD Candidate, Working in GenAI Ethics | USF Faculty and Program Director | Serial Entrepreneur | Former Corporate Executive

© 2025 Freddie Seba. All Rights Reserved

2 of 27

Opening: Framing the Conversation

  • Welcome to the 2nd USF GenAI Symposium.
  • At our first symposium in 2024, we asked: Is GenAI a fad or a force?
  • Since then, GenAI has disrupted every technological, policy, and educational assumption.
  • Today, I offer a framework for navigating this two-day event through the lens of ethics, equity, and institutional mission alignment.


3 of 27

Positionality & Transparency

  • MBA, MIPS, EdD Candidate focusing on GenAI Ethics at USF
  • USF Faculty, Program Director, Serial Entrepreneur, Former Corporate Executive
  • A longtime advocate of ethical, inclusive GenAI
  • Engaged in USF-wide GenAI efforts since Fall 2022
  • Practitioner-scholar using GenAI critically for research, teaching, praxis
  • Transparent about bias: committed to reflective and values-aligned adoption (GenAI Ethics for Leadership Framework)


4 of 27

GenAI Rapid Adoption Continues

  • Rapid adoption continues: ChatGPT hits 400M+ users (Reuters, 2025)
  • 55% of Americans use AI regularly (Pew Research Center, 2024)
  • 56% of U.S. college students used GenAI in coursework (Nam, 2023)
  • 70% of USF students used GenAI academically (ETS, 2025)
  • 76% of employers seek GenAI-skilled workers (Wells, 2024)

Benefits and Challenges: efficiency, rapid analysis, cost reduction — but also growing ethical risks


5 of 27

GenAI: Big Tech Dominance Shifting

  • Big Tech’s grip on the market is being challenged
  • DeepSeek, a small Chinese startup, launches a highly competitive model in under two months for less than $6 million, using open-source models and older NVIDIA chips; billions in market valuation evaporate (CNN, 2025; Stanford Cyber Policy Center, 2025; Reuters, 2025)
  • Open-source teams at Stanford et al. launch a model with a $50 investment (Zhang et al., 2024)
  • This cracks open the door to more cost-effective choices for the market and higher education


6 of 27

GenAI Policy Shifting: U.S.

  • The Biden White House’s AI Bill of Rights was promising but not enforceable, and was repealed by the new administration (White House, 2024)
  • U.S. Congress released a bipartisan AI task force report (U.S. Congress, 2024)
  • California’s AI accountability bill (SB 1047) was vetoed in Sept 2024, but its author plans to reintroduce it (CA Legislature, 2024; ACM.org, 2025)
  • 700 bills submitted, only 20% approved (Tech Policy Press, 2025; BSA, 2025)


7 of 27

GenAI Global Policy

  • EU AI Act (2nd Amendment): mandates transparency, oversight, and ethical compliance across law, healthcare, education, finance, and IT (European Commission, 2025)
  • China: accelerated state-sponsored AI innovation and policy (CSET, 2024)


8 of 27

Navigating GenAI Ethical and Operational Tensions

Ethical & Operational Challenges

  1. Bias in training data and outputs
  2. Data monopoly & access control
  3. Dependency and erosion of autonomy
  4. Deskilling of learners and professionals
  5. Global imbalance in AI access and influence
  6. Human replacement anxieties
  7. Inequality and access gaps
  8. Intellectual property confusion
  9. Manipulation of user behavior
  10. Market dominance by few actors
  11. Misinformation and hallucinations
  12. Over-reliance on AI tools
  13. Plagiarism and academic misconduct
  14. Political influence and policy gaps
  15. Privacy violations and surveillance risks
  16. Transparency and algorithmic opacity
  17. Unintended consequences of AI deployment
  18. User consent inadequacies

Sources: AAC&U, 2024; AMIA, 2024; Bowen & Watson, 2024; European Commission, 2025; JAMA Pediatrics, 2024; Liang et al., 2023; Pew Research Center, 2024; Stanford HAI, 2024; U.S. Department of Education, 2024; U.S. Congress, 2024; White House, 2024; World Health Organization, 2024

Ethical Opportunities & Pathways

  • Accessibility-first frameworks
  • AI literacy embedded in curriculum
  • Critical thinking + digital discernment
  • Data sovereignty and ethical stewardship
  • Equity-by-design in tools and policy
  • Ethical impact reviews (AI IRB-style)
  • GenAI-compatible honor code structures
  • Human-in-the-loop feedback systems
  • Legal frameworks for GenAI + IP clarity
  • Mission-aligned digital governance
  • Open-source foundational model alternatives
  • Participatory co-design with students & stakeholders
  • Redefining rigor through reflective design
  • Sustainability and long-term digital planning
  • Transparent algorithms & traceability tools
  • Upstream & downstream monitoring of AI use

9 of 27

GenAI Ethics in Higher Education

  • Who is accountable when GenAI misguides a diagnosis or student paper?
  • Are detection tools unfair to multilingual students? (Liang et al., 2023)
  • Can bias be traced, corrected, and mitigated?
  • Are institutions ready for this surge?

Higher education policy continues to shift from prohibition (detection and punishment) to faculty-centered policies, and now toward institutional critical assessment and adoption, especially given that detection tools are unreliable and introduce new ethical concerns (Bender et al., 2021).


10 of 27

GenAI in Education: Use Cases

  • GenAI detection falsely flags multilingual students (Liang et al., 2023)
  • New pedagogy: require GenAI use + critical reflection (Bowen & Watson, 2024)
  • Institutional mission framing: trust, inclusion, student-centered support
  • Focus on fairness, transparency, and digital equity


11 of 27

GenAI in Healthcare: Case Studies

  • AMIA reports widespread AI tool use in diagnostics (AMIA, 2025)
  • JAMA Pediatrics: 83% LLM misdiagnosis rate in pediatric triage (2024)
    1. Implication: Efficiency without ethics = harm
    2. Response: human-in-the-loop, privacy, values-first innovation


12 of 27

USF GenAI Efforts – Non-Exhaustive List

  • GenAI literacy efforts: 160+ faculty/staff completed USF ETS GenAI Certificate
  • USF’s AAC&U Interdisciplinary AI Institute launched, allowing us to connect, learn, and share best practices with campuses across North America (Howell, Seba, Ramos, Azarm, Munnich)
  • Surveys: 1,000+ students & 60+ faculty shared GenAI usage data


13 of 27

USF GenAI Efforts – Non-Exhaustive List

  • Multiple groups springing up on campus (for example, AI and Faith)
  • UAC “Bias & Ethics in AI” Workshop Series created and launched to interrogate GenAI detection and equity (Seba & Howell, 2024)
  • Curriculum redesign: new and enhanced GenAI-centric courses
  • SOM conference taking place this Friday


14 of 27

From Flux to Framework – Where do we go from here?

  • Policy: Clear institutional guidelines with student voices
  • Practice: AI fluency for all—educators, clinicians, students
  • Values: Equity, human dignity, and justice at the center


15 of 27

UAC Workshop Sessions – Howell & Seba

  • First day: “high level” review of GenAI and the role of bias in the technology.
  • Second day: “dive in” to detection tools and hands-on exercises
  • Third day: “monolingualism and/or lingualism”
    • standard language ideology: language attitudes that deem standardized English superior and all other Englishes and their users inferior,
    • tacit English-only policies,
    • The Myth of Linguistic Homogeneity, and
    • The Myth of Linguistic Uniformity, Stability, and Separateness
  • Fourth day: connecting language bias to LLMs and the way language is used for gatekeeping. Detection platforms…just don’t. What do we think we are detecting, and why do we think we need to do it?


16 of 27

UAC Workshop Thoughts – Faculty’s Critical Role

  • Call to Action: Let’s continue the conversation and collaborate on how we can collectively shape GenAI’s role in education at our institution.
  • For Students: International and first-generation students are disproportionately impacted, leading to accusations of academic dishonesty. This can damage reputations and result in penalties like failing grades or disciplinary actions.
  • For Faculty: Faculty members, relying on faulty tools, have faced challenges in fairly assessing student work. In some cases, faculty trust in AI detection tools has led to disputes between instructors and students, complicating the student-teacher relationship (Liang et al., 2023).


17 of 27

GenAI Ethics Framework

  • More than tools – teach discernment
  • More than results – teach process
  • More than detecting GenAI – teach how to critically and ethically use tools
  • More than keeping students’ GenAI use underground – teach them to bring it into the light, understand its challenges (critical thinking, bias, deskilling), and help them gain the skills required for citizenship and workforce integration in the AI era!


Sources: AI capabilities derived with reference to an analysis of the MAGE framework, based on ChatGPT-4 as of October 2023. See Zaphir, L., Lodge, J. M., Lisec, J., McGrath, D., & Khosravi, H. (2024). How critically can an AI think? A framework for evaluating the quality of thinking of generative artificial intelligence. arXiv preprint arXiv:2406.14769.

18 of 27

More Use Cases – Curriculum Design & Advocacy

  • HS 633: Exploring GenAI Ethics (from scratch): Fall 2024
  • HS 632: Consumer Health Informatics enhanced w/ GenAI integration: Spring 2025
  • AAC&U Leadership Conference Podium Presentation: April 2025
  • AMIA Clinical Health Informatics Podium Presentation: May 2025
  • UIC & USF Joint GenAI Curriculum Research: Fall 2024 to Spring 2025
  • USF SOE Dissertation – GenAI Ethics Leadership Within Rapidly Shifting Technology and Policy Context: A Case Study of Higher Education Stakeholders' Perspectives and Concerns (expected Fall 2025)


19 of 27

Recap & Reflection

  • GenAI disruption is not a moment—it is a movement
  • We design with GenAI ethics and foresight
  • Mission values are not add-ons—they are our goal
  • Institutional support, open conversation, and faculty and student empowerment are critical in the GenAI era

Please reflect on how to critically engage with this transformational force to shape the future of society, higher education, our roles as educators, and students’ lives… How are you going to lead?


20 of 27

To Conclude: Encouraging Progress

  • Policy engagement by global, state, and civil-society actors is encouraging
  • Open-source and more cost-effective capable models (DeepSeek, Stanford’s $50 model, et al.) challenging Big Tech and providing more access
  • Anthropic making progress on understanding GenAI’s “black box”
  • Workforce seeking more GenAI skilled candidates
  • We know more about GenAI now than ever before
  • Higher education:
    1. moving from prohibition, to per-course/faculty policies, to selected cases, to cross-program and campus-wide curriculum adoption (Florida University, et al.)
    2. GenAI enthusiast groups popping up on campuses across the country
    3. Critical faculty and leaders more aware!


21 of 27

Reading – 2024 GenAI Symposium List

  • Ethan Mollick (2024). Co-Intelligence: Living and Working with AI
  • Reid Hoffman (2023). Impromptu: Amplifying Our Humanity Through AI (free download)
  • Benjamin, R. (2019). Race After Technology
  • Lee, P., Goldberg, C., & Kohane, I. (2023). The AI Revolution in Medicine: GPT-4 and Beyond
  • David Epstein (2019). Range: Why Generalists Triumph in a Specialized World
  • Clive Thompson (2019). Coders: The Making of a New Tribe and the Remaking of the World


22 of 27

Useful Reading

  • As If Human – Shadbolt & Hampson (2023)
  • Genesis – Kissinger, Schmidt, Huttenlocher (2021)
  • Teaching with AI – Bowen & Watson (2024)
  • Holley & Mathur (2024) – LLMs and Generative AI for Healthcare
  • Seba’s GenAI Ethics for Leaders Newsletter: LinkedIn | Substack | freddieseba.com


23 of 27

Gratitude

  • My students, co-designers, and thought partners at USF
  • Colleagues at USF’s AAC&U Institute on AI, Pedagogy & the Curriculum
  • Faculty, researchers, staff, leadership, and practitioners committed to responsible innovation
  • USF leadership, faculty, and staff!

Let’s shape a more equitable GenAI-powered present and future!

Connecting & Collaborating: seba@usfca.edu | linkedin.com/in/freddiesebaprofile | freddieseba.com


24 of 27

Transparency

  • This presentation reflects my academic and practical perspective as a faculty member, researcher, and doctoral candidate focused on GenAI ethics.
  • I have used GenAI tools in drafting and refining this presentation.
  • I strive to model transparency, responsibility, and ethical curiosity in GenAI use and welcome critical reflection and dialogue on this evolving topic.


25 of 27

References

  • AAC&U. (2024). Institute on AI, Pedagogy & the Curriculum. https://www.aacu.org
  • AI & Society Journal. (2023). The ethics of AI in academic integrity: Detection tools and the future of assessment. AI & Society Journal.
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. https://doi.org/10.1145/3442188.3445922
  • Bloomberg. (2025). How DeepSeek and Open-Source AI Models Are Disrupting Big Tech. https://www.bloomberg.com
  • Bowen, J., & Watson, C. (2024). Teaching with AI. Routledge.
  • Brock University. (2023). International students and AI: Misclassification and discrimination in academic integrity algorithms. https://brocku.ca
  • California State Legislature. (2024). SB 1047 – Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. https://leginfo.legislature.ca.gov
  • Chronicle of Higher Education. (2023). We should normalize open disclosure of AI use. https://www.chronicle.com
  • CNN. (2025). What is DeepSeek? The Chinese AI Startup That Shook the Tech World. https://www.cnn.com
  • European Commission. (2025). EU AI Act. https://digital-strategy.ec.europa.eu
  • Goldman, D. (2025). DeepSeek AI: China’s Challenger to Big Tech. CNN Business. https://cnn.com
  • Holley, K., & Mathur, R. (2024). LLMs and generative AI for healthcare: Ethical implications. In Proceedings of the 2024 AMIA Symposium.


26 of 27

References

  • Howell, N., & Seba, F. (2024). Bias in GenAI detection. USF-UAC Workshop.
  • JAMA Pediatrics. (2024). Misdiagnosis in pediatric LLM applications.
  • Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.10638.
  • McFaul, S., et al. (2024). Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem. Center for Security and Emerging Technology (CSET). https://cset.georgetown.edu
  • Mok, C. (2025). Taking Stock of the DeepSeek Shock. Stanford Cyber Policy Center. https://cyber.fsi.stanford.edu/publication/taking-stock-deepseek-shock
  • Nam, H. (2023). GenAI in higher ed survey. EdTech Journal.
  • National Conference of State Legislatures. (2024). Artificial Intelligence 2024 Legislation. https://www.ncsl.org
  • New York Times. (2023). ChatGPT in schools: Bans, challenges, and new directions. https://www.nytimes.com
  • Pew Research Center. (2024). AI in U.S. education and workforce. https://www.pewresearch.org
  • Seba, F. (2024). GenAI Ethics for Leaders Newsletter. https://freddieseba.com
  • Silicon UK. (2024). Stanford Team Builds Powerful AI Model for $50. https://www.silicon.co.uk
  • Stanford HAI. (2024). Human-Centered AI Principles. https://hai.stanford.edu
  • Stanford University. (2024). AI in education: Guidelines and frameworks. https://hai.stanford.edu
  • U.S. Congress. (2024). Bipartisan Task Force on AI. https://www.congress.gov
  • U.S. Department of Education. (2024). Guiding Principles for AI in Education. https://www.ed.gov
  • USF ETS. (2025). Certificate in Ethical GenAI Use. University of San Francisco.


27 of 27

References

  • White House. (2024). AI Bill of Rights. https://www.whitehouse.gov
  • World Health Organization. (2024). Ethics and Governance of Artificial Intelligence for Health. https://www.who.int
  • Zaphir, L., Lodge, J. M., Lisec, J., McGrath, D., & Khosravi, H. (2024). How critically can an AI think? A framework for evaluating the quality of thinking of generative artificial intelligence. arXiv preprint arXiv:2406.14769. https://arxiv.org/abs/2406.14769
