
Emerging Algorithms, Borders, and Belonging


Renata Barreto wrote “Emerging Algorithms, Borders, and Belonging” as part of the 2016 Humanity in Action Diplomacy and Diversity Fellowship

I realized that the line on the left, where I stood, was composed mainly of people of color, slightly accented English, and mixed heritage families…the line on the right was made up of white Americans, with more tempered travel experiences and no foreign relatives accompanying them on the trip.

In August of 2013, while traveling back from a trip to Brazil, I encountered, for the first time, a set of blue and bulky stations at George Bush Intercontinental Airport in Houston. Exhausted after a cramped, 6-hour flight from Rio de Janeiro, I begrudgingly smiled as the machine snapped my photo, scanned my passport, and asked a series of questions. After a short pause, it printed a receipt branded with a bold, red X over the photo of my face. An officer stood in the middle of the vast room, directing the flow of human traffic; she briefly glanced at the X on my paper and pointed at the line on the left, urging me to join my fellow cohort of travelers bearing the scarlet letter. As I looked around, one distinct pattern emerged from the human data points: we were all United States (US) citizens—one of the prerequisites for using the Automated Passport Control (APC) kiosks in the first place—but the two lines had distinct demographic characteristics. Upon further observation, I realized that the line on the left, where I stood, was composed mainly of people of color, speakers of slightly accented English, and mixed-heritage families who had spent significant time abroad. Meanwhile, from what I could see, the line on the right was made up of white Americans with more tempered travel histories and no foreign relatives accompanying them on the trip, all holding clean receipts.

Earlier that year, the US government had rolled out these kiosks, which rely on an algorithm, a series of predetermined steps and calculations programmed into software, at major airports to assess whether a passenger should be referred for additional manual inspection. According to US Customs and Border Protection (CBP), an agency within the Department of Homeland Security, these self-service kiosks are meant to help travelers “experience shorter wait times, less congestion, and faster processing.” (1) What they fail to mention is that this technology is opaque and, from the user’s perspective, unpredictable. When it was finally my turn to speak with a CBP officer, I explicitly brought up the issue. “Why was I flagged as needing an extra security check?” His face went blank. “Well, these kiosks are new here, so it could be that it had never encountered a last name like yours, hyphenated,” he suggested.

Technology can also be used to process, classify, and control a population, resulting in a potentially biased outcome.

Often, technological progress is assumed to go hand-in-hand with social progress. But, as this anecdote highlights, technology can also be used to process, classify, and control a population, resulting in a potentially biased outcome. Throughout this piece, I will use the term technology to capture the rapid growth of information technology systems, such as databases and algorithms, and our consequent reliance on, and need to govern, them. How do these new interfaces affect the immigrant experience, and moreover, what values do these technologies reflect? By interviewing academics, coders, and policy experts in this area, as well as engaging with the current debates in algorithmic governance, I identify three major trends at the intersection of technology and immigration: discrimination, surveillance, and resistance.

The specifics of these methods of classification remain unknown to the general public – the algorithms shielded behind proprietary laws and an unsubstantiated fear of gaming the system. (3)

A number of other countries, from Canada, Germany, and Spain to the United Arab Emirates, have implemented automated systems at customs, also known as Automated Border Control (ABC) or e-gates. (2) The process is strikingly similar across these nations, reflecting an industry push for standardization; a traveler appears before a machine, which takes a photo of the passenger and uses a facial recognition algorithm to compare the picture to the one on the passport. However, the specifics of these methods of classification remain unknown to the general public – the algorithms shielded behind proprietary laws and an unsubstantiated fear of gaming the system. (3) Bruno Latour, a heavily-cited philosopher of science, argues that technology operates as a black box and that “when a machine runs efficiently…one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.” (4) In other words, so long as technology works, we turn a blind eye to the intricate inner processes. Only at the point of rupture, such as the inefficiency of false positives, do we begin to question the system itself.
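
To make Latour’s point concrete, the sketch below shows, in rough terms, the kind of comparison an e-gate performs: reduce each photo to a numerical representation and accept the traveler only if the two representations are similar enough. It is a minimal illustration, not the code of any real ABC system; the embedding model, the similarity measure, the threshold value, and every function name here are assumptions made for the sake of exposition.

    import numpy as np

    def face_embedding(photo: np.ndarray) -> np.ndarray:
        # Stand-in for a trained face-recognition model that maps a photo to a
        # fixed-length feature vector. Real e-gate models are proprietary; this
        # placeholder simply flattens and normalizes the pixel values.
        vec = photo.astype(float).ravel()
        return vec / (np.linalg.norm(vec) + 1e-9)

    def faces_match(live_photo: np.ndarray, passport_photo: np.ndarray,
                    threshold: float = 0.8) -> bool:
        # Compare the two photos by the cosine similarity of their embeddings.
        # The threshold is a design choice: raise it and more genuine travelers
        # are rejected; lower it and more mismatches slip through.
        similarity = float(np.dot(face_embedding(live_photo),
                                  face_embedding(passport_photo)))
        return similarity >= threshold

From the traveler’s side, only the yes-or-no output is visible; the training data behind the embedding model and the chosen threshold, the places where bias can enter, stay inside the box.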

Ultimately, human designers lie behind the inscrutable machine decisions, encoding specific values into the categories and process behind ABC systems.

Indeed, the black box is the default modus operandi of ABC systems, leaving the user and casual observer with unanswered questions. For example, how does this algorithm decide to filter the original population into two distinct groups? What criteria are used in designating an individual for further inspection? Ultimately, human designers lie behind the inscrutable machine decisions, encoding specific values into the categories and processes behind ABC systems.
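
As a purely hypothetical illustration of how such designer choices harden into a binary outcome, consider the sketch below. None of these criteria are documented CBP rules; the field names and conditions are invented to show how easily assumptions about what counts as “normal” can be written into a sorting step.

    def needs_manual_inspection(traveler: dict) -> bool:
        # Hypothetical secondary-screening rules. Each line is a value judgment
        # made by a human designer, not an objective fact about risk.
        flags = [
            not traveler.get("face_match", True),       # biometric mismatch
            traveler.get("hyphenated_surname", False),  # name format treated as unusual
            traveler.get("days_abroad", 0) > 90,        # long stays treated as riskier
            traveler.get("database_hit", False),        # hit in a background database
        ]
        return any(flags)

    # A traveler with a hyphenated surname and a long stay abroad is routed to
    # the secondary-inspection line even though the face match succeeded.
    print(needs_manual_inspection(
        {"face_match": True, "hyphenated_surname": True, "days_abroad": 120}))  # True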

Facial recognition algorithms have a notorious history of misidentifying people of color.

According to reports by the German Federal Office for Information Security, in 2012 approximately 500 users passed through EasyPASS, the German equivalent of ABC, with an 88% success rate, defined as a smooth border crossing that did not require manual inspection, and a 12% operational rejection rate, defined as referral for additional manual inspection by a border guard. (5) Put more concretely, 1 in 8 passengers required additional screening by an officer. About 5% of passengers were rejected due to a failure in facial recognition, although whether this was due to the user or the algorithm remains unclear, and approximately 7% were rejected for other reasons, such as “non-compliant behavior, document check failed, or hits from background database checks.” (6) Not all countries are as forthcoming with their data, so the German case may not be representative of the larger population of ABC systems. However, it does raise some red flags regarding the accountability, transparency, and fairness of algorithmic decision-making. In particular, the facial recognition algorithm has a narrow set of requirements in order to adequately and precisely match the live photo to the one on the passport. The non-ideal conditions that can compromise the integrity of the facial recognition algorithm include “low quality images, non-International Civil Aviation Organization (ICAO) compliant photography, inhomogeneous illumination, lack of neutral expressions and poses, skin conditions, aging, inhomogeneous background and object occlusion, extreme temperature and humidity, scalability problems, and non-ICAO compliant performance and efficiency.” (7)

At first glance, these requirements appear neutral and, although highly technical, do not seem to raise issues of discrimination. However, facial recognition algorithms have a notorious history of misidentifying people of color. In June of 2015, Google Photos, an app that boasts the ability to organize and automatically label pictures, received backlash for incorrectly labeling two Black users as gorillas. (8) Jacky Alciné, a web developer, took to Twitter to alert Google of this egregious bug, uploading a screenshot of the misclassification, and a Google employee quickly responded, apologized, and sought to fix the problem. Ultimately, Google’s patch consisted of eliminating the gorilla label altogether, so the underlying algorithmic logic may have been left untouched. Was this simply an error in the code, or was something more nefarious at work?

These types of biases are not uncommon in machine learning, a specific type of algorithm that’s used to make out-of-sample predictions. Here, the training datasets—the photos of faces, animals, landscapes, and objects fed to the algorithm in order to help it along its process of learning to correctly identify and categorize images—are vital to its overall success. If, for example, the data lacks a substantial number of people with a darker complexion, then the algorithm will most likely classify these groups inaccurately.
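
A minimal way to see this effect is to train a toy classifier on an imbalanced dataset and then measure its accuracy separately for each group, as in the sketch below. The data, features, and group labels are entirely synthetic placeholders rather than a model of any real facial recognition system; the point is only that a model trained on far more examples of one group tends to make more errors on the group it rarely saw.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Synthetic stand-ins for photos: feature vectors whose distribution,
        # and whose correct decision rule, differ between the two groups.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 8))
        y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 8).astype(int)
        return X, y

    # Training set: 950 examples from group A, only 50 from group B.
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on balanced held-out sets, one per group. Accuracy is typically
    # markedly lower for group B, the group underrepresented in training.
    for name, shift in [("group A", 0.0), ("group B", 1.5)]:
        X_test, y_test = make_group(500, shift)
        print(name, "accuracy:", round(model.score(X_test, y_test), 2))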

“This is not to say that facial recognition algorithms are ‘racist’ or that racial bias has been intentionally introduced into how they operate…[but rather] the engineer that develops an algorithm may program it to focus on facial features that are more easily distinguishable in some races than in others.” (11)

Likewise, the facial recognition algorithms that undergird the e-gates have difficulties identifying people who deviate from the norm, as defined by the baseline set by the original dataset. Innocuous variables such as glasses or hairstyles can throw the facial recognition algorithm off target, although as the programmers become familiar with the setting in which these algorithms operate, the code in turn becomes more sophisticated in identifying the weary, haggard faces of travelers passing customs. (9) Commercial-grade facial recognition software, such as that used by Nikon and Hewlett-Packard, has previously failed to recognize Asian and Black users, as their features fell outside the scope of the algorithm’s prowess. (10) As Clare Garvie and Jonathan Frankle, Fellows at the Center on Privacy and Technology at Georgetown Law, explain in a piece in The Atlantic, “this is not to say that facial recognition algorithms are ‘racist’ or that racial bias has been intentionally introduced into how they operate…[but rather] the engineer that develops an algorithm may program it to focus on facial features that are more easily distinguishable in some races than in others—the shape of a person’s eyes, the width of the nose, the size of the mouth or chin.” (11) These instances illustrate just how easily algorithms can perpetuate social inequalities and the tangible trade-off between the perceived efficiency and the fairness of a technological system.

The algorithms that govern ABC are proliferating, not only in the US but in Europe as well, where the technology is employed by Frontex, the agency charged with keeping the external borders of the European Union (EU) intact. Recognizing the fragility of the system, Frontex has advised a two-step verification of ABC that adds biometric data, like fingerprints or iris recognition. The EU, however, is taking concrete steps to mitigate the risks involved in incorporating algorithms into virtually every domain, from border control to financial regulation. On April 14, 2016, the EU adopted a holistic set of regulations governing the “collection, storage, and use of personal information,” known as the General Data Protection Regulation (GDPR). (12) These laws are intended to give users more control over the harvesting of their personal information. The US, by contrast, has no such stringent legal protections. What implications does algorithmic decision-making have for the inclusion of immigrants in the social fabric? Thus far, these instances show how border-crossing algorithms can perpetuate inequalities and enact discrimination, whether intentional or not.

This modern method for categorizing people can have long-lasting impacts on the sense of belonging in the American political project for those deemed suspicious.

While the effects of the kiosks might be interpreted as largely symbolic, this modern method for categorizing people can have long-lasting impacts on the sense of belonging in the American political project for those deemed suspicious. By subjecting certain groups to more scrutiny, such as additional security screening, the state continues a broader pattern of surveilling people of color and immigrant communities in the US. Well before the introduction of APC kiosks, David Lyon, Director of the Surveillance Studies Centre and Professor of Sociology at Queen’s University in Ontario, wrote:

The surveillance dimensions of security arrangements have everything to do with ‘social sorting.’ That is, they are coded to categorize personal data such that people thus classified may be treated differently. People from suspect countries of origin or with suspect ethnicities can expect different treatment from others. Although the category citizen is still used, for example in passport and IDs, this term is both broader and narrower than it at first appears. Even citizens with those ‘awkward’ aspects of identity may find themselves in a separate group from majority citizens. (13)

In other words, this technological artifact quickly, but perhaps erroneously, sifted human beings into discrete categories, with very different consequences. I certainly felt singled out and treated differently for having a hyphenated surname that the algorithm did not recognize as normal. Moreover, given the opaque nature of this process, we cannot identify exactly which other personal factors could result in an extra encounter with a customs inspector.

The Danger of Technology: Infringing on Rights Based on Personal Data

2014 Supreme Court case Riley v. California specifies that a mobile phone is protected from being searched under the Fourth Amendment

Since Edward Snowden revealed the vast powers of the American spy apparatus in 2013, public discourse has focused heavily on the right to privacy. Policymakers called for more oversight of intelligence agencies and the way they deployed their resources both at home and overseas. However, one question that’s often sidestepped is whose right to privacy is actually being protected. If the allegations that German Chancellor Angela Merkel’s personal phone communications were tapped by the National Security Agency (NSA) are true, (14) what does that say about these agencies’ willingness to intercept private conversations within immigrant communities? Muslim communities in the US, in particular, have been among the most heavily targeted in these efforts. Although the 2014 Supreme Court case Riley v. California specifies that a mobile phone is protected from being searched under the Fourth Amendment, which prohibits unreasonable search and seizure, (15) the border presents a unique case where these protections are relaxed. According to the Electronic Frontier Foundation, from 2015 to 2016, “the US government reported a five-fold increase in the number of electronic media searches at the border in a single year, from 4,764 to 23,877.” (16) Customs officers are reportedly asking to browse through the mobile phones and social media accounts of individuals crossing the border into the US.

The US border is a litmus test for the values of freedom and fairness being applied to all people.

However, not all people are treated equally at this moment of inspection. Refusing to cooperate with officials in this scenario can result in a range of repercussions, from foreign visitors being denied entrance into the country to immigrant residents facing increased legal complications. (17) US citizens cannot be barred entrance, but this has not stopped border agents from exercising their power unduly. (18) Increasingly, in the news and in the communities I inhabit, from recent immigrants to third-generation Americans who do not pass as white, I hear firsthand accounts describing the direct monitoring of people of color, regardless of their citizenship status. For example, US-born NASA engineer Sidd Bikkannavar had his phone confiscated as he returned to his hometown of Pasadena, California, from a brief trip to Santiago, Chile. Kaveh Waddell, a former staff writer for The Atlantic, captures the tension in these moments: “Everything started to go wrong just after 5 a.m., when Sidd Bikkannavar scanned his passport, placed his hand on a fingerprint reader, and watched as the automated customs kiosk spat out a receipt with a black X drawn across it.” (19) Despite being a US citizen, a member of Global Entry, and a NASA employee, a status that requires frequent background checks, he was still required to hand over his phone and his passcode, endangering his privacy and the secrecy of NASA documentation in the process. If the privacy rights of someone with this level of privilege and status are blatantly disregarded, what are the implications for those going through customs who cannot speak English or do not know their rights? The US border is a litmus test for the values of freedom and fairness being applied to all people. So far, it seems to be at risk of failing. Alyssa Vance, a machine learning programmer who resides in San Francisco, describes these algorithmic governance tools as “no different than the tracking and blacklisting of people of Middle Eastern descent at airports since the attacks on the World Trade Center buildings.” (20)

On January 27, 2017, President Donald J. Trump signed an executive order that prevented entry into the US by people from seven majority-Muslim countries: Iraq, Iran, Syria, Yemen, Somalia, Libya, and Sudan. (21) The Muslim Ban, as it has come to be known, was so far-reaching that even green-card holders who had been living in the country for several years were turned away at customs. (22) While the discriminatory ban has received righteous outrage and extensive media coverage, less has been said about a provision in a companion order signed two days earlier. Section 14 of Executive Order 13768, “Enhancing Public Safety in the Interior of the United States,” specifies that “agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not US citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.” (23) At first blush, this policy could threaten the Privacy Shield, a data-transfer agreement between the US and the EU meant to ensure that data protection requirements are compliant and consistent with the laws of each region. Some political commentators speculate that the order cannot supersede the Privacy Act of 1974, but that remains to be seen. (24) Either way, it signals a preoccupation with treating immigrants under a separate set of laws.

Executive Order 13768 threatens the digital privacy rights of a number of undocumented immigrants in the US as well as legal immigrants who do not fit into the legal permanent resident category.  

The Privacy Act of 1974 extends protections to personally identifiable information about individuals that is collected and maintained by federal agencies; specifically, it establishes a standard set of practices for dealing with sensitive information and explicitly prohibits the sharing of data across agencies without the written permission of the individual it concerns. (25) By explicitly limiting the protections of the Privacy Act to American citizens and green-card holders, Executive Order 13768 threatens the digital privacy rights of undocumented immigrants in the US as well as legal immigrants who do not fit into the lawful permanent resident category. Along with the rapid mobilization of Immigration and Customs Enforcement (ICE) to conduct raids on immigrant communities and the heavy campaign rhetoric that vilified people of color, (26) the Trump administration has a long record, both in word and in deed, of targeting this community; the scaling back of privacy rights for non-US citizens represents one instance in this broader trend.

After the results of the 2016 election became apparent, cities across the US jump-started efforts to protect the data collected on their undocumented populations. In New York, for example, the de Blasio administration promised to delete the records of IDNYC, a program that has provided municipally accredited identification to more than 900,000 New Yorkers since it began in 2014. (27) This program especially served the undocumented community, allowing cardholders to report domestic abuse, for example. Under Local Law 35, which created IDNYC, the city was to destroy records collected as a byproduct of the application process every two years. (28) These provisions, reinforced by a number of executive orders, (29) helped ease privacy concerns for undocumented immigrants who feared retribution if the data fell into the wrong hands. But they were recently overturned in a local court case, leaving room for this data to be used against immigrants.

Data in the Hands of Governments: Protecting or Facilitating Xenophobic Immigration Policies?

Given the vast increase in computing power and memory, collecting and storing data has become easier and more precise. However, privacy rights advocates agree that data collection should be kept to a minimum; (30) excess information, even if initially well intended, can be used for nefarious purposes depending on the motives of the institution or government in place. But some might push further, asking why privacy is so important in this context.

IBM played a significant role in collecting, organizing, and interpreting personal data about the Jewish population in Germany. (31)

The history of data collection in aiding the state-sponsored genocide of different ethnic groups is sufficient cause for concern. Two examples in particular highlight the dangerous use of data as a technological infrastructure for mass atrocities. First, the International Business Machines Corporation (IBM) played a significant role in collecting, organizing, and interpreting personal data about the Jewish population in Germany. (31) Although computers had not yet been invented in 1933, the punch card system, a precursor to the personal computer (PC), served as a way of making the Nazi apparatus run with unprecedented precision and efficiency. (32) According to historian Edwin Black, whose work explores the tight-knit relationship between IBM and the Third Reich, “people and asset registration was only one of the many uses Nazi Germany found for high-speed data sorters. Food allocation was organized around databases, allowing the Nazi government in Germany to starve the Jews. Slave labor was identified, tracked, and managed largely through punch cards. Punch cards even made the trains run on time and cataloged their human cargo.” (33)

Yet the Holocaust was not the only time in history when data has been employed to identify and target marginalized populations for the purposes of annihilation. During the Rwandan genocide in 1994, the use of identity cards with clearly demarcated ethnic categories allowed the Hutu government to turn that data against the Tutsis. The practice of including these socially constructed racial categories started in 1933, when the Belgian colonial government superimposed its racial paradigm on Rwandan social structures. (34) Indeed, “the prior existence of ethnic ID cards was one of the most important factors facilitating the speed and magnitude of the 100 days of mass killing in Rwanda.” (35)

Data is not a neutral collection of information; it is often politically motivated and, if left unchecked, can wreak havoc on the people it monitors and surveils. Today that threat continues, especially in light of President Trump’s promise to create a Muslim registry. (36) This type of behavior is nothing new – it falls in line with the use of US Census data to round up Japanese-Americans during World War II and send them to internment camps. (37)

Within the Middle Eastern community in the US, there is a very real concern about personal data being used against its members as well. The census does not include a Middle Eastern ethnic category; on the one hand, this means that people of Middle Eastern descent are not being counted, making it harder to argue for representation of this community’s needs; on the other hand, it gives some cover from the government surveillance apparatus. A Muslim registry would recreate the architecture that facilitates the identification of minorities for nefarious purposes. Moreover, it would build on the work of a deactivated but robust database that keeps the records of “non-citizen, non-resident visitors from over 25 countries—all of them Muslim-majority countries, except for North Korea.” (38) In the aftermath of the September 11 attacks, the US Department of Homeland Security implemented the National Security Entry-Exit Registration System (NSEERS), a program that collected a slew of information, from photographs and biometrics to interviews, on people who fit the profile, namely immigrants from one of the blacklisted countries. Although the program was suspended in 2011, NSEERS’s regulatory framework remains, buried behind bureaucratic oversight and ready to resurge at a moment’s notice.

“Many of us have become enraptured by the Age of Computerization and the Age of Information…but now…as we look back and examine technology’s wake…unless we understand how the Nazis acquired the names, more lists will be compiled against more people.” (41)

Recognizing these potential problems with data retention, California legislators have stepped up to enact legal protections that keep this type of data about undocumented immigrants, in particular, from being accessed by federal authorities. (39) Introduced by Senator Kevin de León, one of the leaders in California promoting immigrants’ rights, Senate Bill No. 54 prevents the oversharing of data among agencies, thereby limiting the danger posed to vulnerable populations. California lawmakers have also proposed a bill to limit the collection of data that would allow federal agencies to build a registry based on religion. (40) As Edwin Black warns, “many of us have become enraptured by the Age of Computerization and the Age of Information…but now…as we look back and examine technology’s wake…unless we understand how the Nazis acquired the names, more lists will be compiled against more people.” (41) Only through active resistance at the political and social level can we keep the crimes of the past from being repeated.

Conclusion

Technology can take many different forms, from the data structures that collect information on whole populations to the autonomous drones that roam the sky. The synthesis of technology and surveillance is not new, but we are reaching an unprecedented power asymmetry between individuals and the government. In light of this, how do we recapture the spirit of fairness and inclusion in our digital age? First, we have to be clear about the values that are embedded in these technological systems; we will never find technology that is value-free. If a value is not visible at the surface, then it is hidden at a deeper layer. According to Kranzberg’s first law, “technology is neither good nor bad; nor is it neutral.” (42) The algorithms that govern entry at US customs, for example, are trained on a standard of who is considered “normal” and who is considered a deviation from that mean. Even the well-intentioned apps developed for refugees and immigrants come pre-packaged with a set of values and assumptions about the needs of this particular population.

“Technology is neither good nor bad; nor is it neutral.”

Second, the long history of these strategies in the US and abroad forces us to grapple with the way that technology can be complicit in the oppression of marginalized people. Despite this gloomy tone, there is some cause for celebration on two levels. As of last year, engineers across Silicon Valley’s major companies had signed a pledge, “refus[ing] to participate in the creation of databases of identifying information for the US government to target individuals based on race, religion, or national origin.” (43) Additionally, the legal system is taking account of the impact of technology across different social groups. Congress recently introduced a bill that “would require US [CBP]…to obtain a [probable] cause warrant before searching the digital devices of US citizens and legal residents at the border.” (44) Although this is a step forward, it does not go far enough to protect the most at-risk immigrant populations. Only through concerted effort can we begin to see real change at the border, a space that occupies the public imagination and shapes the lived experience of many people. Technology, as a tool for liberation or for surveillance, remains an important factor in this movement for a more just world.

 

•     •     •

References

  1. “Automated Passport Control (APC),” U.S. Customs and Border Protection, https://www.cbp.gov/travel/us-citizens/automated-passport-control-apc.
  2. Jose Sanchez del Rio, Daniela Moctezuma, Cristina Conde, Isaac Martin Diego, and Enrique Cabello, “Automated Border Control E-Gates and Facial Recognition Systems,” Computers and Security 62 (2016): 49-72.
  3. Joshua Kroll, Joanna Huey, Solon Barocas, Edward Felten, and Joel Reidenberg, “Accountable Algorithms,” University of Pennsylvania Law Review 165 (2017): 1-16.
  4. Bruno Latour, “Pandora’s Hope: Essays on the Reality of Science Studies,” Cambridge, Massachusetts: Harvard University Press, 1999.
  5. Markus Nuppeney, “Automated Border Control Based on (ICAO Compliant) eMRTDs,” edited by Federal Office for Information Security (BSI), Gaithersburg, Germany, 2012.
  6. Ibid.
  7. Jose Sanchez del Rio, Daniela Moctezuma, Cristina Conde, Isaac Martin Diego, and Enrique Cabello, “Automated Border Control E-Gates and Facial Recognition Systems.”
  8. Alistair Barr, “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms,” Wall Street Journal, 2015.
  9. Anne-Marie Oostveen, Mario Kaufman, Erik Krempel, and Gunther Grasemann, “Automated Border Control: A Comparative Usability Study at Two European Airports,” 8th International Conference on Interfaces and Human Computer Interaction, Lisbon, Portugal, 2014.
  10. Maggie Zhang, “Google Photos Tags Two African-Americans as Gorillas through Facial Recognition Software,” Forbes, 2015.
  11. Clare Garvie and Jonathan Frankle, “Facial-Recognition Software Might Have a Racial Bias Problem,” The Atlantic, 2016.
  12. B. Goodman and S. Flaxman, “EU Regulations on Algorithmic Decision-Making and a ‘Right to Explanation,’” in ICML Workshop on Human Interpretability in Machine Learning, New York, NY, 2016.
  13. David Lyon, “Surveillance, Security, and Social Sorting: Emerging Research Priorities,” International Criminal Justice Review 17, no. 3 (2007): 161-170.
  14. Melissa Eddy, “Germany Drops Inquiry into Claims U.S. Tapped Angela Merkel’s Phone,” The New York Times, 2015.
  15. “Riley V. California,” Supreme Court of the United States, Washington, D.C., 2014.
  16. Sophia Cope, Amul Kalia, Seth Schoen, and Adam Schwartz, “Digital Privacy at the U.S. Border: Protecting the Data on Your Devices and in the Cloud,” San Francisco, CA: Electronic Frontier Foundation, 2017.
  17. Ibid.
  18. Ibid.
  19. Kaveh Waddell, “A NASA Engineer Required to Unlock His Phone at the Border,” The Atlantic, 2017
  20. Alyssa Vance, “Machine Learning,”
  21. President Donald J. Trump, “Executive Order: Enhancing Public Safety in the Interior of the United States,” edited by The White House, Washington, D.C.: Office of the Press Secretary, 2017.
  22. News article with this information of people being turned away
  23. Donald Trump, “Executive Order: Enhancing Public Safety.”
  24. Natasha Lomas, “Trump Order Strips Privacy Rights from Non-U.S. Citizens, Could Nix EU-US Data Flows,” TechCrunch, 2017.
  25. Kristi Lane Scott, “Overview of the Privacy Act of 1974 (2015 Edition),” Washington, D.C.: Department of Justice’s Office of Privacy and Civil Liberties (OPCL), 2015.
  26. Phillip Rucker, “Trump Touts Recent Immigration Raids, Calls Them a ‘Military Operation,'” The Washington Post, 2017.
  27. Laura Nahmias, “City Will Retain Substantial Data About IDNYC Card Applicants, Regardless of Lawsuit,” Politico, 2017.
  28. Erin Durkin, “NYC Asks Judge If It Can Dump Personal Info on Municipal ID Cards,” NY Daily News, 2017.
  29. Commissioner Steven Banks, “New York City Identity Card (IDNYC) Program Database Security,” in Executive Order No. E-734, edited by The City of New York Human Resources Administration, New York, NY, 2014.
  30. “Collect the Minimum Amount of Personal Data Necessary,” Unified Compliance Framework, https://www.unifiedcompliance.com/products/search-controls/control/78/.
  31. Edwin Black, “IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation,” New York, NY: Dialog Press, 2012.
  32. Ibid
  33. Ibid
  34. Zara Rahman, “Dangerous Data: The Role of Data Collection in Genocides,” Engine Room, 2016.
  35. “Ten Years Ago in Rwanda This Identity Card Cost a Woman Her Life,” Prevent Genocide International, http://www.preventgenocide.org/edu/pastgenocides/rwanda/indangamuntu.htm#intro.
  36. Kaveh Waddell, “America Already Had a Muslim Registry,” The Atlantic, 2016.
  37. J.R. Minkel, “Confirmed: The U.S. Census Bureau Gave up Names of Japanese-Americans in WWII,” Scientific American, 2007.
  38. Kaveh Waddell, “Muslim Registry.”
  39. Spencer Woodman, “States Move to Protect Their Immigration Data from the Trump Administration,” The Verge, 2017.
  40. Senator de Leon, “Law Enforcement: Sharing Data,” In SB-54, edited by California State Legislature, 2016.
  41. Edwin Black, “IBM and the Holocaust,” pg. 10.
  42. Melvin Kranzberg, “Technology and History: ‘Kranzberg’s Laws,’” Technology and Culture 27, no. 3 (1986): 544-60.
  43. “Never Again Tech,” news release, 2016, http://neveragain.tech/.
  44. Sophia Cope, “Border Search Bill Would Rein In CBP,” Electronic Frontier Foundation, https://www.eff.org/deeplinks/2017/04/border-search-bill-would-rein-cbp.