Space

XXVIII.1 January–February 2021

Human Perceptron


Authors:
RYBN.ORG


In 2016, we were invited by PACT Zollverein in Essen, Germany, to conduct a workshop on the UNESCO World Heritage site of Zollverein during the IMPACT 16 symposium. The workshop included a presentation of our artistic works, as well as our research topics: algorithms, high-frequency trading, market microstructures, cybernetics, and complex computation processes.

Figure: Bonus Bureau, Computing Division. November 24, 1924.

PACT is an international center dedicated to contemporary dance practices. Its location on the site of Zollverein, a former coal mine complex transformed into a cultural hub, echoed our research on the history of computation, and especially the computation factory designed by Gaspard de Prony in 1793 [1]. From the combination of these factors emerged the idea of experimenting with the embodiment of a full algorithmic procedure. For this workshop, we chose the Perceptron algorithm as our object of study, to address the reemerging discussion of artificial intelligence, underpinned in large part by neural networks.

The workshop was the starting point of a larger research project about the crossed histories of computation, cybernetics, work physiology, work division, and pseudo-AI. We developed work around these themes at the Preservation & Art - Media Archaeology Lab (PAMAL), Avignon, in 2017 and 2018, and in a research residency at PACT in 2018.

The workshop was conducted several times, notably at the Zentrum für Kunst und Medientechnologie (ZKM), Karlsruhe, within the Open Codes exhibition workshop program (December 2017), and at Oradea's summer lab, Attempts, Failures, Trials, and Errors (July 2018).

Perceptron

The Human Perceptron workshop is built around the Perceptron, an algorithm invented by Frank Rosenblatt in 1958 [2]. Through supervised learning, the Perceptron performs a series of iterative calculations to separate two classes of objects, each identified by a set of data, until all objects have been correctly classified. Based on this learning phase, the Perceptron is then able to make predictions and recognize any given object as belonging to one of its recorded classes.
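For readers who prefer code, here is a minimal sketch of this learning loop in Python. The function name, data layout, and the choice of +1/−1 class labels are illustrative assumptions, not Rosenblatt's notation:

# Minimal Perceptron sketch: the separator (a, b, c) encodes the line
# a*x + b*y + c = 0; labels are +1 for one class, -1 for the other.
def train_perceptron(samples, labels, rate=0.1, max_epochs=100):
    w = [0.0, 0.0, 0.0]
    for _ in range(max_epochs):
        errors = 0
        for (x, y), label in zip(samples, labels):
            side = 1 if w[0] * x + w[1] * y + w[2] > 0 else -1
            if side != label:  # wrong zone: correct the separator
                w[0] += rate * label * x
                w[1] += rate * label * y
                w[2] += rate * label * 1
                errors += 1
        if errors == 0:  # every object classified: learning is done
            break
    return w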

Known as the first neural network, the Perceptron (Figure 1) exemplifies the logic at work in more recent artificial intelligence computing processes. Supervised machine learning operates on sets of data derived from the quantification of objects, making them computable and predictable. The complexity of neural networks has greatly improved and evolved since the Perceptron (multilayer Perceptrons, deep convolutional neural networks, generative adversarial networks, etc.). This has been enabled by the increase in available computing power, and by the development of facilities that provide prepared data, such as Amazon Mechanical Turk (MTurk). We presume, however, that the differences between the Perceptron and more recent neural networks are mostly quantitative and do not fundamentally affect the nature of the logic at work.

Figure 1. Diagram of a Perceptron.

The Perceptron also has the advantage of perfectly embodying the cybernetics narrative and its analogies between computing and the nervous system. The model is simple enough to be executed manually to completion, through a limited number of steps. It remains understandable at each successive step, providing a base on which to build an accurate understanding of artificial intelligence. This understanding can in turn be mobilized with regard to ongoing issues such as digital labor, manual training, the construction of datasets, algorithmic biases, and so on.

Methodology

We began with an analysis of Rosenblatt's Perceptron procedure. We then decomposed this procedure into single manual operations: data collection, construction of datasets, calculation of the separator, conditions that retrigger the computation of the separator, and end of the cycle. We defined a logical sequence of operations and decided that the process would continue until the completion of the algorithm. As in de Prony's computation factory, two teams would execute the computations in parallel and compare results, thereby reducing mistakes.

We then defined the objects to be classified: We chose tables and chairs for their availability in various types and for their identifiable and differing characteristics. We defined two values to collect for each object: height and area.

We also defined the spatial setup: The graph was set on a paperboard; the separator line was a red thread fixed on the paperboard with pins, allowing it to be placed in various positions.

We beta-tested the procedure several times to validate the workflow.

We also programmed a simple Perceptron that included a visualization of its results. This was presented during a preliminary introduction and, as a concluding step, run on the data manually collected by the participants.
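The original program is not reproduced here, but a comparable sketch in Python, using matplotlib for the visualization, might look like the following (all names and plotting choices are illustrative assumptions):

import matplotlib.pyplot as plt

# Plot the measured furniture and the separator a*x + b*y + c = 0;
# labels follow the +1 (table) / -1 (chair) convention sketched above.
def plot_result(samples, labels, w):
    for (x, y), label in zip(samples, labels):
        plt.scatter(x, y, marker="s" if label == 1 else "^")
    a, b, c = w
    xs = [0.0, 1.5]
    ys = [-(a * x + c) / b for x in xs]  # assumes b != 0
    plt.plot(xs, ys, color="red")        # the "red thread"
    plt.xlabel("height (m)")
    plt.ylabel("area (m2)")
    plt.show()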

Prerequisites

Technical requirements

  • Paperboard
  • Red thread and pins
  • Several tables and chairs of various forms, randomly arranged in the room
  • Rulers and pens
  • Optional: a video projector and a computer to show a Perceptron at work.

Attribution of roles / functions to the participants

Two (or more) teams with, in each team:

  • One person (or more) to measure the objects
  • One person (or more) to index the data
  • One person (or more) to make the calculations.

In addition, one person to be the oracle, verifying the separator line position, and repositioning it if necessary.

Procedure

Data collection. Data collection is executed manually by the participants, with the help of a ruler. For each piece of furniture, participants collect two values, height and area, along with the type of object: table or chair.

Participants construct a dataset using pieces of paper, listing all of their collected measurements. For instance, a paper could have this information:

Table: height=0.8 (m), area=1 (m2)

Or, in another example:

Chair: height=0.5 (m), area=0.25 (m2)

All pieces of paper (the dataset) are put in a bag.
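In code, the bag of paper slips translates naturally into a list of labeled tuples; the encoding below is illustrative, not the workshop's own notation:

import random

dataset = [
    ("table", 0.8, 1.0),   # type, height (m), area (m2)
    ("chair", 0.5, 0.25),
]
obj_type, height, area = random.choice(dataset)  # drawing from the bag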

Graph and separator positioning. On the graph, the x-axis is assigned to height, and the y-axis to the area of the measured furniture pieces.

A line (the separator) is materialized by a red thread and placed on the graph, separating it into two zones; the line is placed in an arbitrary position at the start of the process.

Each area on both sides of the line is arbitrarily assigned one of the two types of furniture: a zone for chairs and a zone for tables (Figure 2).

Figure 2. PACT workshop spatial setup, and human computers at work, 2016.
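This zone assignment can be expressed as a sign test on the line equation. In the sketch below, tables sit on the positive side of the line, an assumption made here to match the example calculation later in this article:

# The separator is the triple (a, b, c) of the line a*x + b*y + c = 0.
def zone(line, height, area):
    a, b, c = line
    value = a * height + b * area + c
    return "table" if value > 0 else "chair"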

Training phase. The most important part of elaborating a neural network is the training phase, during which the network learns to recognize patterns and entities.

Here is the transposition for a human Perceptron: During the training phase, one piece of paper (representing a piece of furniture) is randomly picked from the bag. Participants verify whether its position on the graph (with coordinates x = height and y = area) is located in the correct zone.

If the piece of furniture is in the right zone, its paper goes back in the bag and another is drawn.

If the point is in the wrong zone, the participants have to recompute the position of the separator, with the help of the function "recalculate line":

if furniture is in the wrong zone:
    if furniture is a chair:
        D = −1
    if furniture is a table:
        D = +1
    line = line + (height, area, 1) * 0.1 * D
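Written as runnable Python, the same rule might read as follows, with the line stored as an (a, b, c) triple and 0.1 as the learning rate:

# One update step of the separator; furniture is "chair" or "table".
def recalculate_line(line, height, area, furniture):
    d = -1 if furniture == "chair" else 1  # the D of the rule above
    a, b, c = line
    rate = 0.1
    return (a + height * rate * d,
            b + area * rate * d,
            c + 1 * rate * d)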

Calculation after calculation, the position of the separator is corrected, moving a little closer to its final position each time (Figure 3). The training cycle is designed to recalculate the position of the separator until it separates the chairs from the tables with as much precision as possible.

Figure 3. Position of the separator, at the beginning, in intermediary steps, and at the end of the process.

Example Calculation for a New Position of the Separator

We start with an arbitrary line x = y; the line passes through the points (0, 0) and (1, 1).

So, the line can be defined as x − y = 0, equivalent to 1x − 1y + 0 = 0, which we write as the triple (1, −1, 0).

So, the line has the coordinates:

line = (1, −1, 0)

We pick an imaginary chair; its height = 0.5 and its area = 0.25 (so on the graph, x = 0.5 and y = 0.25).

With the separator at (1, −1, 0), evaluating the line at this point gives 1 × 0.5 − 1 × 0.25 + 0 = 0.25 > 0, placing the chair in the zone assigned to tables: The chair is in the wrong zone.

So, D = −1

We recalculate the separator position:

New line position = (1, −1, 0) + (0.5, 0.25, 1) * 0.1 * −1

= (1, −1, 0) + (−0.05, −0.025, −0.1)

= (0.95, −1.025, −0.1)

We now have the new equation for the separator line:

0.95x − 1.025y − 0.1 = 0

Calculate the coordinates of two points of the line to reposition it on the graph:

if x = 0, y = −0.1/1.025 = −0.098

if x = 1, y = 0.85/1.025 = 0.83

Reposition the line on the graph, according to these two points.
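A few lines of Python can verify this arithmetic (a pure numerical check of the steps above):

old = (1.0, -1.0, 0.0)      # the line x - y = 0
point = (0.5, 0.25, 1.0)    # the chair, extended with a constant 1
d, rate = -1, 0.1
new = tuple(o + p * rate * d for o, p in zip(old, point))
print(new)                  # (0.95, -1.025, -0.1)
a, b, c = new
print(-c / b)               # y at x = 0: about -0.098
print(-(a + c) / b)         # y at x = 1: about 0.83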

Conclusions

As stated in our introduction, we believe the Perceptron, despite its age, perfectly embodies and exemplifies the logic at work within recent AI systems, through:

  • Data reductionism: the quantification of objects into simple sets of data that cannot render the complexity of the real, and the impossibility for such systems to account for the imponderable or the unquantifiable.
  • The inevitable manual training required for the machine to learn and recognize, also called the human in the loop. This human input is largely externalized and invisibilized in most systems, while humans remain necessary for cognitive tasks.
  • Predictive systems trained on past data, which are thus unable to forecast singular "black swan" outlier events.
  • Multiple algorithmic biases, coming either from the algorithmic design or from errors introduced at all phases of the process. The transfer of imperfect and fallible tools from laboratories to the world transforms, in return, the world into a laboratory.

Participants were particularly interested in the reduction of physical objects to a simple set of values. The experiment brings to light the reductionism at work in digitization processes. Participants also noted that errors can slip in during the data-collection and manual-training phases. Moreover, the various subjectivities at work can generate, or transfer, perception biases. For example, the workshop showed that unusual forms of tables and chairs (such as small tables or very long chairs) generate false positives in the computing process. Participants remarked that false positives and approximation are part of the process, and that no automated classification can ever be perfectly optimal. This allowed the participants to maintain a critical distance from the belief in a fully working digitization procedure.

The bodies at work, the repetitiveness of simple operations, the factory setup—all participate in framing digital procedures such as database construction as manual labor. Once this is established, the conversation can open up to address digital labor issues, labor externalization, and labor invisibilization in modern computing. Thus, the body reveals itself as a fantastic medium to incorporate, digest, remember, and counter the narrative of dematerialization.

We publish this workshop as documentation, as a tutorial to be reproduced or forked [3]. We hope that it will provide an interesting platform to initiate further discussions on artificial intelligence, neural networks, computational logic, algorithmic biases, and digital labor.

Developments

We continued our research, first by conducting a media-archaeological investigation to index the systems and apparatuses of labor metrics. We focused on early apparatuses invented by physiologists during the industrial era to measure and record human physical and intellectual workload and fatigue at work, and to optimize worker performance, but also to measure pupils' orientation in classrooms, the efficiency of soldiers, and the productivity of society as a whole.

In the project Human Computers, these apparatuses and machines are placed in resonance with the history of management, from the conceptualization of the division of labor by Adam Smith to the present-day systems imagined by Apple, Amazon, Facebook, Microsoft, and Google. These companies have massively patented such systems, including motorized cages for workers, GPS monitoring bracelets, and other devices that demonstrate their obsession with control, surveillance, and optimization [4].

Building on this historical research, in 2019 we programmed an algorithm, AAI Chess [5], designed to analyze the response time of workers on Amazon Mechanical Turk (Figure 4) during a simple task. The system is camouflaged within a game of chess in which workers play against each other. Their response time and other recorded data are used to define their wages, creating a miniature pricing market (Figure 5).

Figure 4. Capture of the movements of fingers over a keyboard during the execution of a task on Amazon MTurk. This image is an attempt to produce a chronocyclegraph of an Amazon MTurk task.
Figure 5. AAI Chess board game and pricing graph.

More recently, in June 2020, we collaborated again with PACT within the 1000 Scores series, proposing an online project called Double Negative Captchas [6]. This project invites the visitor to experiment with a reverse Captcha system: The visitor has to behave like a robot to pass the system.

The project Human Computers, by engaging with various incarnations of algorithmic standardization (the Perceptron, neural networks, Amazon MTurk, Captchas) and by operating in parallel with the history of metrics apparatuses, proposes a genealogy that clearly positions AI as a new step in the rationalization and optimization of labor. By offering a concrete experience of the invisibilized part of labor within our technological systems, we propose to elaborate a history of artificial intelligence as labor, a possible counter-narrative to algorithmic governmentality.

References

1. On de Prony, see Roegel, D. The great logarithmic and trigonometric tables of the French Cadastre: A preliminary investigation; or Peaucelle, J.-L. Le détail du calendrier de calcul des tables de Prony de 1791 à 1802; or the dedicated chapter of When Computers Were Human by David Alan Grier, Princeton University Press, 2005.

2. See Rosenblatt, F. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65, 6 (1958), 386–408; https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.335.3398&rep=rep1&type=pdf

3. We publish the whole procedure following the example of James Bridle's tutorial on reenacting Donald Michie's Matchbox Educable Noughts and Crosses Engine (MENACE) experiment from 1960; https://jamesbridle.com/works/menace

4. For example, "Ultrasonic bracelet and receiver for detecting position in 2D plane," Amazon, 2017; or "System and method for transporting personnel within an active workspace," Amazon, 2016. All the collected patents are indexed in our Industrial Property Curiosity Cabinet project: http://www.rybn.org/IPPI/CC/

5. AAI Chess, 2018; http://rybn.org/human_computers/aaichess.php; see also the algorithmic diagram here: http://rybn.org/human_computers/images/AAICHESS-print.pdf

6. Double Negative Captchas is hosted at https://1000scores.com/portfolio-items/rybn-double-negative-captchas/ or directly at http://rybn.org/projects/1000scores/; see also !Mediengruppe Bitnik's response, 1000 Bots: https://1000scores.com/portfolio-items/mediengruppe-bitnik-1000-bots/

Author

RYBN.ORG is an artist collective founded in 1999 and based in Paris. The group leads investigations within the esoteric realms of offshore finance, high-frequency trading, algorithmic market microstructures, flash crashes, artificial intelligence and pseudo-AI, and algorithmic governmentality. [email protected]

Footnotes

http://www.rybn.org


Copyright held by author. Publication rights licensed to ACM.

