Commentary On The AI Now Institute 2018 Report

The interdisciplinary New York-based AI Now Institute has released its sizable and informative 2018 report on artificial intelligence.

The paper, authored by the leaders of the institute in conjunction with a team of researchers, puts forth ten policy recommendations concerning artificial intelligence (AI Now's policy recommendations in bold-face, our commentary in standard type).

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. This point is fairly obvious: AI should be regulated on the basis of its functional potential and its actual application(s). This is particularly urgent given the spread of facial recognition (the ability of computers to identify particular individuals from photos and camera feeds), such as the technology Facebook employs to offer tag suggestions to users from nothing more than a picture of another person. The potential for misuse prompted Microsoft’s Brad Smith to call for congressional oversight of facial recognition technologies in a July 2018 blog post. If there is to be serious regulation in America, a state-by-state approach, given its modularity, would be preferable to any one-size-fits-all federal oversight program, and corporate self-regulation should also be incentivized. However, regulation itself is not the key issue, nor is it what principally allows for widespread technological misuse; rather, the problem is the novelty of the technology and the lack of knowledge surrounding it. Few Americans know which companies are using which facial recognition technologies, when, or how, and fewer still understand how precisely or vaguely these technologies work, and thus they cannot effectively guard against them when such systems are malevolently or recklessly deployed (a brief sketch of how readily available such technology already is follows this list). What is truly needed, then, is widespread public knowledge of the creation, deployment, and functionality of these technologies, together with a flourishing culture of technical ethics in these emerging fields, for the best regulation is self-regulation: restraint and dutiful consideration combined with a syncretic fusion of technics and culture. That, above all else, is what should be prioritized.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. [covered above]
  3. The AI industry urgently needs new approaches to governance. Internal governance structures at most technology companies are failing to ensure accountability for AI systems. This is a tricky issue but one which can be addressed in one of two ways: externally or internally. Either oversight can be established outside the company, whether governmental or public (investigatory committees, etc.), or the companies can themselves establish new norms and policies for AI oversight. Outside consumer pressure on corporations, if sufficiently widespread and sustained (whether through critique, complaint, or outright boycott), can be leveraged to incentivize corporations to change both the ways they presently use AI and their policies for prospective development and application. Again, this is an issue which can be mitigated both by enfranchisement and by the elevation of public knowledge.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Anti-black-boxing is an excellent suggestion with which I have no contention. If one is going to make something which is not just widely utilized but infrastructurally necessary, then its operation should be made clear to the public in as concise a manner as possible.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. As whistleblowing is a wholly context-dependent enterprise, it is difficult to say much about any kind of rigid policy; indeed, AI Now’s stance seems a little too rigid in this regard. If the information is leaked merely to damage the company and is accompanied by spin, the whistleblower may appear to the public as a hero when in reality he may be nothing more than a base rogue. Such things must be evaluated case by case.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. Yes, they should.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. When one hears “exclusion and discrimination” one instantly registers an ideological scent, familiar and disconcerting in its passive-aggressive hegemony. The questions of what or who is being excluded and why, and what or who is being discriminated against and for what reason, ought to be asked, else the whole issue is moot and, if pursued, will merely be the plaything of (generally well-meaning) demagogues. The paper makes particular mention of actions which “exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability”; obviously, harassing people is unproductive and should be discouraged, but what about practices which “systemically undervalue”? Again, that depends upon the purpose of the company. If a company wants to hire only on the basis of gender, race, sexuality, or disability, it will more often than not find itself floundering, running into all kinds of problems it would not otherwise have; the case of James Damore springs to mind. Damore was fired for arguing that Google’s diversity policies were discriminatory towards those who were not women or ‘people of color’ (sometimes referred to as POC, which sounds like a medical condition) and that the low representation of women in some of the company’s engineering and leadership positions was due to biological proclivities (which they almost invariably were and are). All diversity is acceptable to Google except ideological diversity, because that would mean accepting various facts of biology which would put the company’s executives in hot water; as such, their policies are best avoided.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” By “full stack supply chain” the authors mean the complete set of component parts of an AI supply chain: training and test data, models, application programming interfaces (APIs), and various infrastructural components, all of which the authors advise incorporating into an auditing process. This would serve to better educate both governmental officials and the general public on the total operational processes of any given AI system and, as such, is an excellent suggestion (a sketch of what such an audit record might contain follows this list).
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. Given that AI development is concentrated within such a small segment of the population, and given the relative novelty of the technology, this is clearly true.
  10. University AI programs should expand beyond computer science and engineering disciplines. Whilst I am extremely critical of the university system in its present iteration, the idea is a good one, as critical thought on the broad-spectrum applications of current and potential AI technologies requires a vigorous and burgeoning class of theorists, speculative designers, and policy makers, in addition to engineers and computer scientists; through such a syncretism, the creative can be incorporated into the technical.
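
As noted under point 1, part of the problem is how little the public understands about how accessible and mechanically simple face matching has become. The following is a minimal sketch, not drawn from the report, using the open-source Python face_recognition library; the image file names are hypothetical placeholders.

```python
# Minimal sketch of face matching with the open-source `face_recognition`
# library. The image files named here are hypothetical placeholders.
import face_recognition

# Compute a face encoding (a 128-dimensional vector) for a known individual.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Load a new photo, e.g. an uploaded picture or a frame from a camera feed.
unknown_image = face_recognition.load_image_file("unknown.jpg")

# Compare every face found in the new photo against the known encoding.
for encoding in face_recognition.face_encodings(unknown_image):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {match}, distance: {distance:.3f}")
```

A few lines against an off-the-shelf library suffice to identify a particular person in an arbitrary photograph, which is precisely why public knowledge of where and how such systems are deployed matters.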
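
As for point 8, the “full stack supply chain” is, in practice, a structured record of everything an AI system is built from. Below is a minimal sketch of such an audit record; the field names and example values are hypothetical, not taken from the report.

```python
# Minimal sketch of an audit record covering the "full stack supply chain"
# components named in point 8. Field names and values are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class SupplyChainRecord:
    system_name: str
    training_data: List[str]   # provenance of training datasets
    test_data: List[str]       # provenance of evaluation datasets
    model: str                 # model architecture / version identifier
    apis: List[str]            # APIs the system exposes or depends upon
    infrastructure: List[str]  # compute, storage and deployment components

record = SupplyChainRecord(
    system_name="example-tag-suggestion",
    training_data=["user_photos_2017_snapshot"],
    test_data=["held_out_photos_2018"],
    model="face-embedding-model-v2",
    apis=["suggest_tags/v1"],
    infrastructure=["gpu-cluster-a", "object-store-b"],
)
print(record)
```

An auditor, regulator, or curious member of the public reading such a record learns at a glance what data the system was trained and tested on, what model it runs, and where it lives, which is the transparency the recommendation aims at.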

A PDF of the report is provided below under a Creative Commons license.


AI_Now_2018_Report
