1 of 28

Sociotechnical Synergy & AI (Artificial Intelligence)

Ram Tenkasi & Chris Malek for MDC Learning Labs

2 of 28

Executive Summary: The STS Imperative


  • AI's Impact is Conditional and Contextual: AI adoption offers a modest, not guaranteed, financial benefit that varies significantly by sector. The greatest gains were seen in Software/IT and Consumer Goods.
  • The "How" Matters More Than the "What": The quantitative results confirm that merely adopting AI is insufficient; success hinges on the strategic design of its implementation.
  • Theoretical Pivot: From Description to Action: The study shifts the lens from descriptive models like Actor-Network Theory (ANT) to the prescriptive framework of Sociotechnical Systems (STS) Theory.
  • Core Mandate - Joint Optimization: Effective AI deployment requires the joint optimization of the technical system (the AI model) and the social system (people, roles, and culture). AI must be treated as a sociotechnical artifact, not a purely technical tool.
  • Actionable Framework: Organizational leaders must use variance analysis to proactively identify and mitigate eight critical vulnerabilities (e.g., Machine Bias, Automation Bias, Accountability Gaps) that emerge when the social and technical systems fail to align.
  • The Goal: Cultivate Sociotechnical Capital: The ultimate strategic objective for OD is to build the organizational capacity to continuously align, adapt, and optimize human-technology systems over time.

3 of 28

Section I

Introduction & Research Foundations


4 of 28

The Next Evolution of Work: Artificial Intelligence (AI) and Sociotechnical Synergy - Repositioning AI as an Actionable Socio-Technical Artifact

  • Presenters: Ramkrishnan (Ram) V. Tenkasi, Ph.D. & Christopher (Chris) Malek, Ph.D., M.B.A.
  • Affiliation: Benedictine University, Lisle, Illinois, USA
  • Forum: Presented at Academy of Management, Management Consulting Division, Learning Labs | January 14, 2026
  • Audience Focus: Management, Leadership, and Organization Development Scholars, Management Consultants, and Practitioners
  • Keywords: Artificial Intelligence, Sociotechnical Systems Theory, Joint Optimization, Organization Development
  • Impact: Given the contemporary technological environment, this is the most crucial period yet for OD scholars and management consultants to define the future of work.


5 of 28

Context: The Intersection of OD and AI

  • Since the early 20th century, OD has focused on achieving competitive advantage through the strategic integration of technology.
  • The nature of work has consistently evolved through the interplay between human labor and technological innovation. This dynamic gave rise to Socio-Technical Systems (STS) theory.
  • The Opportunity: AI promises massive gains in efficiency, data processing, and innovation—an engine for competitive advantage.
  • The Risk: AI poses fundamental organizational and humanistic challenges:
    • Ethical Erosion: Bias, accountability gaps, data privacy.
    • Displacement: Labor shifts, skill obsolescence, and deskilling.
    • Organizational Misalignment: Integration failures leading to resistance and performance decline.
  • OD's Imperative: Our role is to provide robust frameworks to guide the nuanced process of AI integration, ensuring approaches are not only effective but also ethical and human-centered.
  • The goal is to leverage AI in a manner that upholds fundamental humanistic values.


6 of 28

Defining the Core Research Questions

  • This study investigates the evolving intersection between AI and OD, building on prior dissertation research (Malek, 2024).
  • Three Central Research Questions (CRQs):
    • CRQ1 - Performance: Does AI adoption function as a meaningful intervention, enhancing organizational financial performance relative to firms that have not adopted AI?
    • CRQ2 - Variance: Does the financial impact of AI vary across different industrial sectors?
    • CRQ3 - Strategy/Theory: What constitutes an effective AI implementation or deployment strategy? (Specifically, should AI be understood purely as a technical artifact or as a socio-technical construct?)


7 of 28

Section II

Theoretical Framing


8 of 28

Traditional Theories on AI and Performance

  • The adoption of AI can lead to superior financial performance through established strategic frameworks:
    • Resource-Based View (RBV): AI technologies (e.g., advanced analytics) are strategic resources that are valuable, rare, and inimitable, supporting superior performance.
    • Dynamic Capabilities: AI enables organizations to rapidly sense market trends, analyze behavior, and adapt operations, thereby sustaining performance.
    • Knowledge-Based View (KBV): AI facilitates knowledge generation, advanced analytics, and dissemination, enhancing organizational learning and innovation.
  • STS Theory: Emphasizes that successful AI integration requires a balanced focus on the technology and the human/structural elements (the social system).


9 of 28

Conceptualizing AI: Agency, Structure, and Context

  • Adaptive Structuration Theory (AST): Technologies introduce "structures" (e.g., protocols), but users often appropriate them in ways that deviate from their intended use (ironic appropriation).
  • STS & Institutional Contexts: STS argues effectiveness requires the joint optimization of three components: technologies, human agents, and institutions (formal rules, cultural norms).
  • Actor-Network Theory (ANT): A theoretical foundation initially utilized in this study.
    • ANT conceptualizes humans and non-humans (AI, algorithms) as "actants" that influence outcomes.
    • It offers an agnostic view of agency, suitable for analyzing AI systems that learn and adapt.


10 of 28

Section III

Research Methodology & Results


11 of 28

Research Design: A Mixed-Methods Approach

  • Phase 1 - Quantitative Component (Cohort Study): Assesses financial performance (CRQs 1 & 2); a minimal analysis sketch follows this list.
    • Sample: 46 AI-adopting companies (exposed group) matched against 46 non-AI counterparts (comparison group) across 10 industrial sectors.
    • Data: Financial data gathered over 11 fiscal quarters (6/2020 to 12/2023).
    • Dependent Variables: Stock Price, Total Revenue, Operating Expenses, Pre-Tax Operating Income (PTOI), and Combined Average of Stock Price and PTOI.
  • Phase 2 - Qualitative Component (Discourse Analysis): CRQ3
    • Analyzed language used in quarterly earnings calls by 10 AI-adopting companies.
    • Compared corporate narratives to academic discourse (specifically ANT language) to see if linguistic alignment correlated with superior financial performance.
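To make the Phase 1 design concrete, here is a minimal analysis sketch assuming a hypothetical firm-quarter panel with an AI-adoption indicator. The file name, column names, and pooled-OLS model form are illustrative assumptions, not the study's actual data or code.

```python
# Illustrative sketch of the Phase 1 cohort comparison (CRQ1).
# Assumptions: a CSV with one row per firm-quarter (92 firms x 11 quarters),
# an ai_adopter flag (1 = exposed group, 0 = comparison group), and the
# study's dependent variables as columns. Not the authors' actual code.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_quarters.csv")  # hypothetical file

dependent_vars = ["stock_price", "total_revenue",
                  "operating_expenses", "ptoi", "combined_avg"]

for dv in dependent_vars:
    # Pooled OLS: does the AI-adoption indicator predict the outcome?
    model = smf.ols(f"{dv} ~ ai_adopter", data=panel).fit()
    print(f"{dv}: R^2 = {model.rsquared:.3f}, "
          f"p(ai_adopter) = {model.pvalues['ai_adopter']:.3f}")
```

As a design note, firm-wide financial outcomes are distal metrics, so even a well-specified comparison of this kind leaves many confounders uncontrolled.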


12 of 28

Quantitative Findings

Modest and Directional Impact (CRQ1)

  • The quantitative phase presents a moderately positive picture of AI’s impact on financial performance.
  • Regression Analysis (Overall):
    • AI increased Stock Price: hypothesis cautiously accepted at p < .10 (R² = 4.9%).
    • AI increased Pre-Tax Operating Income (PTOI): hypothesis accepted at p < .10 (R² = 6.0%), the stronger evidence of impact.
    • Hypotheses for Total Revenue, Operating Expenses, and the Combined Average were not supported.
  • Caution: Firm-wide performance measures are distal metrics, meaning results are influenced by many uncontrolled confounders.

Sector-Specific Benefits (CRQ2)

  • The financial impact of AI is highly context-dependent; industry context matters.
  • T-tests showed statistically significant differences (p < .05) in 42 out of 50 data points across sectors (a comparison sketch follows this list).
  • Industries with Strong, Consistent Advantages for AI Firms:
    • Software & IT
    • Consumer Goods
    • Entertainment
  • Mixed or Contrary Results: Finance, Hardware, Manufacturing, and Healthcare showed varying outcomes. Non-AI companies even showed stronger PTOI performance in the Restaurants sector.
  • Longitudinal Analysis (MANOVA): Only the Consumer Goods sector showed a statistically significant and sustained performance benefit for AI-adopting firms over the 11 quarters.
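A minimal sketch of the sector-level comparisons described above, assuming the same hypothetical firm-quarter panel as earlier: for each sector and dependent variable, AI adopters are compared with non-adopters using a Welch t-test. The grouping and column names are illustrative assumptions, and the longitudinal MANOVA step is omitted for brevity.

```python
# Illustrative sector-level comparison (CRQ2): for each sector and dependent
# variable, compare AI adopters vs. non-adopters with a Welch t-test.
# Column names and data are hypothetical, not the study's dataset.
import pandas as pd
from scipy import stats

panel = pd.read_csv("firm_quarters.csv")  # hypothetical file
dependent_vars = ["stock_price", "total_revenue",
                  "operating_expenses", "ptoi", "combined_avg"]

for sector, grp in panel.groupby("sector"):
    for dv in dependent_vars:
        ai = grp.loc[grp["ai_adopter"] == 1, dv].dropna()
        non_ai = grp.loc[grp["ai_adopter"] == 0, dv].dropna()
        t, p = stats.ttest_ind(ai, non_ai, equal_var=False)
        flag = "*" if p < .05 else ""
        print(f"{sector:15s} {dv:18s} t = {t:6.2f}  p = {p:.3f} {flag}")
```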


13 of 28

Qualitative Findings: The ANT Disconnect

  • Linguistic Divergence: Corporate CEOs emphasized the "machine" aspects of AI in earnings calls, while academic texts gave greater attention to the "human" and "network" dimensions, aligning with ANT principles.
  • Predictive Value: An analysis of ten AI-focused firms found no consistent pattern where language aligning with academic (ANT-based) discourse predicted superior financial outcomes.
    • High-performing firms (e.g., Amazon, Apple) did not exhibit notable linguistic alignment with academic discourse.
  • Correlation Analysis: While "Human" language showed a strong positive correlation with Stock Price (.77), "Machine" and "Network" language showed consistently negative correlations with Stock Price, PTOI, and the combined average (a counting-and-correlation sketch follows this list).
  • Conclusion: The emphasis on language components (human, machine, or network) was not a reliable indicator of financial success.
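A minimal sketch of the counting-and-correlation step, assuming toy earnings-call transcripts and simple keyword lists for the "human", "machine", and "network" categories. The keyword lists, transcripts, and financial metric below are illustrative placeholders, not the study's coding scheme or data.

```python
# Illustrative discourse-analysis sketch: count category language in
# earnings-call transcripts, then correlate with a financial metric.
import re
import pandas as pd

categories = {  # toy keyword lists, not the study's coding scheme
    "human":   ["employee", "customer", "team", "people", "talent"],
    "machine": ["algorithm", "model", "automation", "compute", "platform"],
    "network": ["ecosystem", "partner", "integration", "interoperability"],
}

def category_share(text: str, words: list[str]) -> float:
    """Share of transcript words that match a category keyword list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(tokens.count(w) for w in words)
    return hits / max(len(tokens), 1)

# Placeholder transcripts and a placeholder financial metric per firm.
firms = {
    "FIRM_A": "our people and our team serve every customer on the platform",
    "FIRM_B": "the algorithm and automation drive compute efficiency",
}
metric = pd.Series({"FIRM_A": 180.0, "FIRM_B": 95.0})  # e.g., stock price

scores = pd.DataFrame({
    cat: {f: category_share(txt, words) for f, txt in firms.items()}
    for cat, words in categories.items()
})
print(scores.corrwith(metric))  # Pearson correlation of each category share
```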


14 of 28

Section IV

Theoretical Pivot: From ANT to STS


15 of 28

The Limitations of ANT and the Need for Intervention

  • ANT is Descriptive: Its strength lies in mapping relationships among actants; it offers little guidance on why outcomes unfold as they do, or on how to make them unfold well and equitably.
  • Interventional Limitation: ANT lacks the interventional perspective required to achieve joint optimization, the cardinal principle for aligning human and technological systems.
  • The Pivot to STS: The inconclusive qualitative findings prompted a theoretical re-examination, shifting the lens to Sociotechnical Systems (STS) theory. STS is a more actionable framework for achieving joint optimization.


16 of 28

Sociotechnical Systems (STS): The Foundation for Success (CRQ3)

  • STS Core Definition: Organizational effectiveness requires the joint optimization of social (people, roles, culture) and technical (tools, machines, processes) systems.
  • Key Principles:
    • Interaction Drives Performance: Performance depends on how social and technical elements interact, whether those interactions are planned or unplanned.
    • Joint Optimization is Crucial: Attempting to optimize only the social or only the technical side can backfire and lead to harmful interactions. Intentional design must support both systems.
  • STS as an Intervention Tool: While ANT helps trace complexity, STS provides tools to intervene and design organizations where people and technology thrive together. This approach enhances sociotechnical capital.
  • Conclusion: AI is not an isolated technical tool, but a sociotechnical artifact embedded within dynamic organizational ecosystems.


17 of 28

Section V

Theoretical Deep Dive: 8 Points of Variance


18 of 28

Variance Analysis: Identifying Alignment Vulnerabilities

  • Variance: The difference between expected and actual system performance, often emerging when social and technical components fail to align.
    • Example: A sophisticated AI tool may underperform if users are poorly trained or resist its integration.
  • Drawing on the Human-Computer Interaction literature, the study identifies eight key variance points in AI system design that can hinder joint optimization and performance if left unaddressed (a worksheet-style sketch follows the list below):


1. Misspecification
2. Machine Bias & Error
3. Human Misinterpretation
4. Performative Behavior
5. Non-Intended Appropriation
6. Dynamic Socio-Technical Changes
7. Downstream Impact
8. Accountability Gaps
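One way to operationalize the list above is a simple variance-analysis worksheet that assigns each point an owner, a detection signal, and a mitigation. The structure below is a hypothetical sketch of such a worksheet, not an instrument prescribed by the study.

```python
# Hypothetical variance-analysis worksheet: one record per variance point,
# capturing who owns it, how misalignment would be detected, and the response.
from dataclasses import dataclass

@dataclass
class VariancePoint:
    name: str
    owner: str = "unassigned"      # accountable role (social system)
    detection_signal: str = "TBD"  # how the misalignment would show up
    mitigation: str = "TBD"        # planned joint-optimization response

worksheet = [
    VariancePoint("Misspecification"),
    VariancePoint("Machine Bias & Error"),
    VariancePoint("Human Misinterpretation"),
    VariancePoint("Performative Behavior"),
    VariancePoint("Non-Intended Appropriation"),
    VariancePoint("Dynamic Socio-Technical Changes"),
    VariancePoint("Downstream Impact"),
    VariancePoint("Accountability Gaps"),
]

# Example: fill in one row during a design review.
worksheet[1].owner = "Model risk committee"
worksheet[1].detection_signal = "Error-rate disparity across user groups"
worksheet[1].mitigation = "Rebalance training data; add human review of edge cases"

for vp in worksheet:
    print(f"{vp.name:32s} owner: {vp.owner}")
```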

19 of 28

8 Key Variance Points: Variances in System Specification & Data

1. Misspecification

  • Errors or omissions in defining the sociotechnical system.
  • Specifications must include the interaction between the AI model and other technical, social, and institutional components.
  • Failing to recognize the AI model as a simplification of reality is a major risk.

2. Machine Bias & Error

  • Machine errors (false positives/negatives) or bias (disproportionate/unfair errors across different groups).
  • These often stem from societal inequalities embedded in the training data or in the model's technical design (see the fairness-check sketch below).
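As one illustration of detecting this variance point, a fairness check can compare error rates across groups. The group labels, toy predictions, and tolerance threshold below are hypothetical.

```python
# Hypothetical machine-bias check: compare false positive rates across groups.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "actual":    [0,   0,   1,   0,   0,   1,   0,   1],
    "predicted": [1,   0,   1,   1,   1,   1,   0,   1],
})

def false_positive_rate(g: pd.DataFrame) -> float:
    negatives = g[g["actual"] == 0]          # cases that should be negative
    if negatives.empty:
        return float("nan")
    return (negatives["predicted"] == 1).mean()

fpr = df.groupby("group").apply(false_positive_rate)
print(fpr)

# Flag a potential machine-bias variance if group error rates diverge.
if fpr.max() - fpr.min() > 0.10:  # illustrative tolerance
    print("Variance flagged: disproportionate error rates across groups")
```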


20 of 28

8 Key Variance Points: Variances in Human-AI Interaction & Behavior

3. Human Misinterpretation

  • Occurs due to a lack of understanding of the model’s conclusions, or automation bias (over-reliance on the output).
  • Highly complex "black-box" models suffer from the "curse of flexibility," producing behavior that exceeds human cognitive capacity to interpret.


4. Performative Behavior

  • The model’s predictions influence actors to behave in a way that confirms the prediction, distorting the intended outcome.
  • Systems must accommodate user autonomy or actively counteract these performative influences.

5. Non-Intended Appropriation

  • Users deviate from the intended use to navigate work complexities, especially if the system lacks flexibility or user trust.
  • This can lead to function creep (using the application for unintended purposes over time).

21 of 28

8 Key Variance Points: Variances in System Dynamics & Propagation

6. Dynamic Socio-Technical Changes

  • Vulnerabilities emerge over time due to shifts in data, concepts, regulations, or organizational strategies.
  • Poor management of updates can lead to runaway feedback loops that exacerbate existing biases (see the drift-check sketch below).
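A minimal sketch of monitoring this kind of drift compares a model input's training-time distribution with its recent production distribution using a two-sample KS test. The simulated feature values and alert threshold are hypothetical.

```python
# Hypothetical data-drift check for one model input: compare the training
# distribution to recent production data with a two-sample KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # baseline
production_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)  # shifted

statistic, p_value = stats.ks_2samp(training_feature, production_feature)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.4f}")

# Flag a dynamic sociotechnical variance if the distributions have diverged;
# the response (retraining, retiring, or redesigning surrounding work
# practices) is a joint social-and-technical decision, not an automatic fix.
if p_value < 0.01:  # illustrative alert threshold
    print("Drift flagged: review data, model, and surrounding work practices")
```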

7. Downstream Impact

  • Errors or biases propagate through secondary processes or interconnected systems that rely on the AI's output.
  • Interactions between multiple models can lead to a loss of control and cascading failures.


22 of 28

8 Key Variance Points: Variances in Accountability & Governance

8. Accountability Gaps

  • Unclear definition of who is responsible for system outcomes, a primary source of vulnerabilities and hazards.
    • Internal: Top management often lacks the technical knowledge to adequately oversee risks, creating gaps in responsibility.
    • External: Lack of transparency about the model and decision-making process means affected individuals cannot identify errors or contest outcomes.


23 of 28

Section VI

Discussion, Implications for Practice, & Conclusion


24 of 28

Core Conclusions

  • AI's Promise is Conditional: AI can enhance financial performance, but the effect is modest, uneven, and sector-specific (strongest in Software and Consumer Goods).
  • The Conceptual Shift: The qualitative findings revealed a disconnect between corporate (mechanistic) and academic (network/human-centered) views of AI.
  • The STS Imperative: Actor-Network Theory (ANT) was limited as an interventional tool; the study reasserts the renewed relevance of STS.
  • AI as a Sociotechnical Construct: Realizing AI's potential requires treating it as an interconnected system of human and technological components, not a purely technical solution.


25 of 28

Implications for Management, Leadership, & The Field of OD

  • Focus on Intentional Design: Organizations must intentionally design AI systems that align with institutional, social, and technical contexts.
  • Beyond Technical Expertise: Success requires not just technical expertise, but a deep understanding of the social, institutional, and organizational contexts.
  • Adopt STS Principles: Practitioners and leaders should prioritize robust variance analysis to identify and address the eight vulnerabilities (e.g., machine bias, accountability gaps, human misinterpretation).
  • Cultivate Sociotechnical Capital: The ultimate goal is the deliberate cultivation of sociotechnical capital—the organizational capability to align, adapt, and optimize human-technology systems over time.


26 of 28

Analogy for Socio-Technical Systems and Variance Analysis


27 of 28

Understanding Sociotechnical Systems (STS) and variance analysis in the context of AI is like building a custom, high-performance race car.

  • The Technical System is the engine, the chassis, and the cutting-edge computer systems (the AI model). You can optimize these perfectly, but if you only focus here, you neglect the environment.
  • The Social System is the driver, the pit crew (users and management), the race rules (institutions), and the team culture.
  • Joint Optimization is making sure the engine is tailored to the specific driver's skills, the pit stops are fast and accurate, and the car follows the rules of the track.
  • Variance Analysis is identifying the weak points where these systems clash: A technical variance might be an engine overheating; a social variance might be the pit crew misunderstanding a radio command (human misinterpretation); and a sociotechnical variance might be the autonomous braking system misfiring because the driver is ignoring its warnings (automation bias).
  • By using variance analysis, you don't just fix the machine; you redesign the entire system—the tools, the training, and the communication protocols—to ensure the human and the machine are performing in perfect synergy.


28 of 28

Thanks for Attending!

Speakers' Contacts:

  • Ramkrishnan (Ram) V. Tenkasi: Tenkasi@msn.com
  • Christopher E. Malek: cem7372559@gmail.com

We invite you to consider:

How can leaders best integrate variance analysis into existing organizational change methodologies to proactively mitigate socio-technical risks?
