Human rights and new technologies
DOI: https://doi.org/10.37467/revtechno.v11.4486
Keywords: Fundamental Rights, Technologies, Artificial Intelligence, Algorithms, Social Score, Data Protection, Politics
Abstract
The use of numerous technological applications has been shown to pose risks not only to data protection but, more broadly, to fundamental rights of the person such as equality and freedom. This article analyzes ethical biases and dilemmas in the application of artificial intelligence and algorithms, interference in electoral processes, and social control and scoring techniques used for political purposes. It reflects on the dysfunctions and inequities that many of these new tools create for citizens, and proposes solutions that generate well-being for our democratic societies.
License
Authors who publish in this journal accept the following terms:
- Authors retain the moral rights to the work and transfer the commercial rights.
- One year after publication, the work will become open access online on our website, with the authors retaining copyright.
- Authors who wish to assign a Creative Commons (CC) license may request it by writing to publishing@eagora.org