An Angry Bibliography on “Machine Behavior”/Social Studies of AI

(for those who think it’s new)

Jacqueline Wernimont and Nikki Stevens

Prompted by the suggestion that studying “machine behavior” is somehow new

Notes: Algorithms != AI

        AI is a big grouping of other smaller fields (i.e. imaginary)

        This is a living document

        The citations are not in a single standard format (got work to do)

In response to a few of the points made in the article:

Historical context (both primary sources from the period when AI was being developed, and recent histories of that period)

Dreyfus, H. L. (1978). What Computers Can’t Do: The Limits of Artificial Intelligence (Revised, Subsequent edition). New York: HarperCollins.

Feigenbaum, E. A., & Feldman, J. (Eds.). (1963). Computers and thought. New York: McGraw-Hill.

Kline, R. R. (2017). The Cybernetics Moment: Or Why We Call Our Age the Information Age (Reprint edition). Baltimore: Johns Hopkins University Press.

McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd edition). Natick, MA: A K Peters/CRC Press.

Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation (1st edition). San Francisco: W.H. Freeman and Company.

On Algos in the wild... examples:

Barry-Jester, A. M., Casselman, B., & Goldstein, D. (2015, August 4). Should Prison Sentences Be Based On Crimes That Haven’t Been Committed Yet? Retrieved from FiveThirtyEight website: https://fivethirtyeight.com/features/prison-reform-risk-assessment/

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.

Related to the claim of “objective understanding”:

Miller, T. (2018). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.

On “why do only computer scientists study AI algorithms?”... they do not:

Legal scholars do:

Levendowski, A. (2017). How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem (SSRN Scholarly Paper No. ID 3024938). Retrieved from Social Science Research Network website: https://papers.ssrn.com/abstract=3024938

Calo, R. (2017). Artificial Intelligence policy: a primer and roadmap. UC Davis Law Review, 51, 399.

Joh, E. E. (2017). Artificial intelligence and policing: First questions. Seattle University Law Review, 41, 1139.

Urban planners do:

Wu, N., & Silva, E. A. (2010). Artificial intelligence solutions for urban land dynamics: a review. Journal of Planning Literature, 24(3), 246-265.

Biomedical researchers/clinicians do:

Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., ... & Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4), 230-243.

Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719.

Computational Social Scientists do:

        https://dgarcia.eu/machine-behavior-datathon-at-eurocss-2019/

There are even journals!

For example: Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science

And Centers/Institutes!

        For example: AI Now https://ainowinstitute.org/ or

the 3A Institute https://3ainstitute.cecs.anu.edu.au/

(anyone heard of Genevieve Bell or Kate Crawford or Meredith Whittaker??)

A more serious (but still angry) bibliography

Attabou, R. and R. Chabot (eds.), Sciences sociales et intelligence artificielle [Social sciences and artificial intelligence]: Technologies, idéologies, pratiques, Aix-en-Provence, 1992, vol. X, no. 2-4.

Barry-Jester, A. M., Casselman, B., & Goldstein, D. (2015, August 4). Should Prison Sentences Be Based On Crimes That Haven’t Been Committed Yet? Retrieved from FiveThirtyEight website: https://fivethirtyeight.com/features/prison-reform-risk-assessment/

Bell, Genevieve. 2011. Unpacking Anthropology at Intel. AnthroNotes, Vol. 32, No. 2. http://anthropology.si.edu/outreach/anthnote/Fall2011web.pdf

Bell, G. and Dourish, P. 2007. Yesterday’s Tomorrows: Notes on Ubiquitous Computing’s Dominant Vision. Personal and Ubiquitous Computing, 11(2), 133-143.

Bird, Sarah, Solon Barocas, Kate Crawford, Fernando Diaz and Hanna Wallach, 2016 'Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI', Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT-ML), New York University. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2846909

Broussard, Meredith Artificial Unintelligence: How Computers Misunderstand the World (Cambridge: MIT Press, 2018)

Buolamwini, J. Gender and Skin Type Bias in AI. MIT News. http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

boyd, danah, Karen Levy, and Alice Marwick. (2014). “The Networked Nature of Algorithmic Discrimination.” Data & Discrimination: Collected Essays (Eds. Seeta Peña Gangadharan and Virginia Eubanks), pp. 43-57.

Calo, R. (2017). Artificial Intelligence policy: a primer and roadmap. UCDL Rev., 51, 399.

Caplan, Robyn and danah boyd. (2018). “Isomorphism Through Algorithms: Institutional Dependencies in the Case of Facebook.” Big Data & Society, 5(1).

Combi, Mariella. “The imaginary, the computer, artificial intelligence: A cultural anthropological approach” (1992) https://doi.org/10.1007/BF02472768

Crawford, Kate. (2016). “Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics.” Science, Technology, & Human Values, 41(1), 77-92. doi: 10.1177/0162243915589635

Daniels, J., Nkonde, M., & Mir, D. (2019). Advancing Racial Literacy in Tech: Why Ethics, Diversity in Hiring & Implicit Bias Trainings Aren’t Enough. Data & Society.

Dreyfus, H. L. (1978). What Computers Can’t Do: The Limits of Artificial Intelligence (Revised, Subsequent edition). New York: HarperCollins.

Edwards, Paul N. "We Have Been Assimilated: Some Principles for Thinking About Algorithmic Systems." Working Conference on Information Systems and Organizations. Springer, Cham, 2018.

Elish, M.C. and danah boyd. (2018). “Situating Methods in the Magic of Big Data and AI.” Communication Monographs, 85(1): 57-80.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.

Forsythe, Diana E. (2001). Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence. Stanford, CA: Stanford University Press.

-- (1993). “The Construction of Work in Artificial Intelligence.” Science, Technology, & Human Values, 18(4), 460-479.

-- (1993). “Engineering Knowledge: The Construction of Knowledge in Artificial Intelligence.” Social Studies of Science, 23, 445-477.

Feigenbaum, E. A., & Feldman, J. (Eds.). (1963). Computers and thought. New York: McGraw-Hill.

Finn, Ed. What Algorithms Want: Imagination in the Age of Computing (Cambridge, MIT Press, 2017)

Gilliard, Chris (2018) “Bad Algorithms are Making Racist Decisions.” Spark, Interview with Nora Young, CBC Radio. 2 November, 2018.

-- (2016) “Digital Redlining, Access, and Privacy.” Common Sense. 24 May 2016

Hicks, Mar. (2017) Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge In Computing (Cambridge, MIT Press)

--  "Hacking the Cis-tem," IEEE Annals of the History of Computing (March 2019) vol 41 issue 1 pp 20-33

Hjorth, L., Horst, H., Galloway, A., & Bell, G. (Eds). 2016. The Routledge Companion to Digital Ethnography. Routledge.

Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., ... & Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4), 230-243.

Joh, E. E. (2017). Artificial intelligence and policing: First questions. Seattle University Law Review, 41, 1139.

King, T., Aggarwal, N., Taddeo, M. and Floridi, L. (2018) "Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions", Science and Engineering Ethics.

Kline, R. R. (2017). The Cybernetics Moment: Or Why We Call Our Age the Information Age (Reprint edition). Baltimore: Johns Hopkins University Press.

Langley, Patrick. (2012). “Intelligent Behavior in Humans and Machines.” Advances in Cognitive Systems, 2, 3–12.

Langley, P., & Choi, D. (2006). “A unified cognitive architecture for physical agents.” Proceedings of the Twenty-First National Conference on Artificial Intelligence. Boston: AAAI Press.

Langley, P., Laird, J. E., & Rogers, S. (2009). “Cognitive architectures: Research issues and challenges.” Cognitive Systems Research, 10, 141–160.

Levendowski, A. (2017). How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem (SSRN Scholarly Paper No. ID 3024938). Retrieved from Social Science Research Network website: https://papers.ssrn.com/abstract=3024938

Lynch, W. (1990). Social Aspects of Human-Computer Interaction. Educational Technology, 30(4), 26-31. Retrieved September 2, 2019 from https://www.learntechlib.org/p/170620/.

Ma, B.; Yang, H.; Wei, J.; Meng, Q. Robot Path Planning Agent for Evaluating Collaborative Machine Behavior. Preprints 2019, 2019050264 (doi: 10.20944/preprints201905.0264.v1).

Malsch, Thomas. (2001). “Naming the Unnamable: Socionics or the Sociological Turn of/to Distributed Artificial Intelligence.” Autonomous Agents and Multi-Agent Systems. https://doi.org/10.1023/A:1011446410198

McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd edition). Natick, MA: A K Peters/CRC Press.

Meaker, Morgan. (2019) “How Digital Virtual Assistants Like Alexa Amplify Sexism” Medium https://onezero.medium.com/how-digital-virtual-assistants-like-alexa-amplify-sexism-8672807cc31d

Robertson, L. J., Abbas, R., Alici, G., Munoz, A., & Michael, K. (2019). Engineering-based design methodology for embedding ethics in autonomous robots. Proceedings of the IEEE, 107(3), 582-599.

Miller, T. (2018). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.

Muhuri, P. K.; Rauniyar, A. Multi-robot coalition formation problem: Task allocation with adaptive immigrants based genetic algorithms. Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9-12 Oct. 2016; pp. 137-142.

Neff, G. (2016) "Talking to Bots: Symbiotic Agency and the Case of Tay", International Journal of Communication, 10, 4915-4931.

Neff, G. and Nagy, P. (2018) "Agency in the digital age: Using symbiotic agency to explain human-technology interaction." In: A Networked Self: Human Augmentics, Artificial Intelligence, Sentience, Papacharissi, Z. (Ed.). Routledge.

Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.

Noble, Safiya. Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018)

Nyquist, E. “Clever Machines Learn How to be Curious” https://www.wired.com/story/clever-machines-learn-how-to-be-curious-and-play-super-mario-bros/

Richardson, Kathleen.(2015) An Anthropology of Robots and AI: Annihilation Anxiety and Machines

Roberts, Sarah T. (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale UP.

Schiebinger, L. and J. Zou. “AI Can Be Sexist and Racist—It’s Time to Make It Fair,” Nature, 559(7714) (2018), 324-326.

Schwartz, R. D. (1989). “Artificial intelligence as a sociological phenomenon.” Canadian Journal of Sociology, 14(2), 179-202.

Seaver, Nick. (2018) "What should an anthropology of algorithms do?." Cultural Anthropology 33.3: 375-385.

Sachs, S. E.  (2019) The algorithm at work? Explanation and repair in the enactment of similarity in art data, Information, Communication & Society, DOI: 10.1080/1369118X.2019.1612933

Stephan, K. D., K. Michael, M. G. Michael, L. Jacob, and E. P. Anesta. (2012). “Social implications of technology: The past, the present, and the future.” Proceedings of the IEEE, 100, 1752–1781.

Strickland, E. (2014, June 10). Medtronic wants to implant sensors in everyone. IEEE Spectrum. http://spectrum.ieee.org/tech-talk/biomedical/devices/medtronic-wants-to-implant-sensors-in-everyone

Suchman, L. (1987). Plans and situated actions: The problem of human–machine communication. Cambridge: Cambridge University Press.

Sweeney, Latanya. (2005) AI Technologies to Defeat Identity Theft Vulnerabilities. AAAI Spring Symposium, AI Technologies for Homeland Security.

-- (2004) Navigating Computer Science Research Through Waves of Privacy Concerns: Discussions among Computer Scientists at Carnegie Mellon University. ACM Computers and Society, 34(1).

-- (2003) That's AI?: A History and Critique of the Field. Technical Report CMU-CS-03-106. Pittsburgh: Carnegie Mellon University, School of Computer Science.

Vidal, Denis. (2007). “Anthropomorphism or sub‐anthropomorphism? An anthropological approach to gods and robots.” Journal of the Royal Anthropological Institute. https://rai.onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9655.2007.00464.x

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation (1st edition). San Francisco: W.H. Freeman and Company.

Whittaker, Meredith, et al. (2018) AI now report 2018. AI Now Institute at New York University.

Winfield, A. F., Michael, K., Pitt, J., & Evers, V. (2019). Machine ethics: the design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509-517

Wolfe, A. (1993). The Human Difference: Animals, Computers, and the Necessity of Social Science. Berkeley: University of California Press.


Woolgar, S. (1985). Why not a Sociology of Machines? The Case of Sociology and Artificial Intelligence. Sociology, 19(4), 557–572. https://doi.org/10.1177/0038038585019004005

-- (2000) “Why Not a Sociology of Machines?” Reprinted in Artificial Intelligence: Critical Concepts, vol. 4, p. 371.

-- (1991) “The turn to technology in social studies of science.” Science, Technology, & Human Values, 16(1), 20-50.

Wu, N., & Silva, E. A. (2010). Artificial intelligence solutions for urban land dynamics: a review. Journal of Planning Literature, 24(3), 246-265.

Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719.