[Diagram: over time, the main process P1 calls thread() and splits into threads T1, T2, and T3, which later merge back into P1 at join()]

Model 1. Threaded.

The multi-threaded APU at work in the human...


The initial line (P1) represents the user's main thought process. At that point, only one process is running. The user could be in a state of deep attention toward one thing. For example, the user focuses on reading a dense academic journal. Then, two more processes start in the background to deal with distractions, such as a phone vibrating upon receipt of a text message. The phone demands attention. The main process splits into three: the first branch of attention continues to focus on the academic journal, the second attends to the phone's now-lit screen, and the third is a realization of the distraction and of the work required either to respond to the message or to redirect attention back toward the academic journal. The divergence of the main line into three separate paths represents this split.

For the APU, this requires multi-threading. Each thread has access to the same underlying system resources, and each has a specific start point founded in the original process. The threads perform their own operations, sometimes modifying the same underlying data. When two threads mutate the same object, the program can err: it cannot trust the integrity of its output, because two different functions use and modify the same resources for different purposes. As the threads conclude, the APU must join them together, an arduous task if it also needs to understand how the threads interacted with the system data while they were on their respective paths. The speed and correctness of the joining phase depend on how well each thread isolated and shared its data. After the joining function succeeds, the processor returns to a solitary thread: a state of deep attention on one item. The deep attention toward the academic journal resumes.
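That failure mode can be made concrete. In this Python sketch (the deliberate pause is an illustrative device to force the unlucky interleaving), three threads each read the shared counter before any of the others writes back, so two of the three increments are silently lost:

```python
import threading
import time

counter = 0  # shared data: every thread reads and writes this one object

def increment():
    global counter
    local = counter      # read the shared value...
    time.sleep(0.05)     # ...pause long enough for the other threads to read it too
    counter = local + 1  # ...then write back a now-stale result

threads = [threading.Thread(target=increment) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the "joining phase": wait for every thread to finish

print(counter)  # 1, not 3 -- two updates were lost
```

All three threads read the counter while it is still 0, so each writes back 1, and the output cannot be trusted.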

This progression--and its problems--mimics that of parallel computing. When threads in a multi-threaded program access the same system data, they must obtain a lock on the data before operating on it. This mechanism prevents other threads from altering the data while the live thread acts. There are several ways to implement data locking, and their performance varies by use case: a mechanism that is fast under light contention can become very slow when many threads compete for the same data.
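One common locking mechanism is the mutual-exclusion lock. A minimal Python sketch: wrapping the read-modify-write in a `threading.Lock` forces each thread to finish its update before the next may begin, so no increment is lost even if a thread pauses mid-update:

```python
import threading
import time

counter = 0
lock = threading.Lock()  # mutual-exclusion lock guarding the shared counter

def safe_increment():
    global counter
    with lock:               # acquire the lock; other threads block here
        local = counter
        time.sleep(0.01)     # even paused mid-update, no other thread can interleave
        counter = local + 1  # the lock is released when the block exits

threads = [threading.Thread(target=safe_increment) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 3: every update completed atomically
```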



Model 2. Forked.

The forked APU at work in the human...

[Diagram: over time, process P1 calls fork() and spawns P2, which in turn calls fork() and spawns P3]

Again, the initial line represents the user's main thought process with one process running. The user focuses on the academic journal (P1). When the user's phone vibrates and a distraction arises that requires the user's attention, the main process of reading pauses and waits in the background while the user attends to the notification. In this sense, the process forks to attend to the distraction. The fork spawns an entirely new process (P2) that does not share resources with the original; it runs as if it were the only thing on the user's mind. The pattern repeats when another distracting process (P3) begins: in this case, the distraction from the distraction, in which the user takes note of his lack of focus. All other processes remain in the background without performing any work. When a forked process concludes, control of system resources is handed back to the previous process. The APU must manage the processes' scheduling and resources.

This forking process is also implemented in computer operating systems. It differs from multi-threading in that it always isolates system resources for each process. Each process acts as its own program. Instead of splitting into multiple branches, as is the case with threading, calling fork() gives birth to an independent program. The new program executes and then returns control to the parent process. Because modern computers have several processing units, they can run forked programs at the same time, similar to the parallel nature of multi-threading. However, spawning large numbers of forked processes is time-consuming because the computer must create a copy of the program's data for each new process. If the programmer needs to perform numerous small, independent actions at once, it is therefore advisable to use threading instead.
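The isolation can be observed directly with the POSIX fork() call, exposed in Python as `os.fork()`. In this sketch (POSIX-only; the list `data` is an illustrative stand-in for the reader's working memory), the child process mutates its own copy of the data while the parent waits, and the parent's copy is untouched:

```python
import os

data = ["reading the journal"]  # the parent process's "working memory"

pid = os.fork()                 # clone this process; the child gets its own copy of data
if pid == 0:
    # Child process: the distraction. It mutates only its private copy.
    data.append("answering the text")
    os._exit(0)                 # child finishes and hands control back
else:
    os.waitpid(pid, 0)          # parent pauses until the child completes
    print(data)                 # ['reading the journal'] -- unaffected by the child
```

The parent resumes exactly where it left off, its data unchanged, mirroring the return to the journal after the distraction ends.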


In each case, attention is a mechanism. It is not a commodity or resource. On the contrary, it manages other resources.

This concept is--in some ways--modeled on Herbert Simon's discussion of the information-processing system (IPS). He describes a computer's primary functionalities as input, storage and memory, computing, and output. For Simon, "the distribution of [a computer's] own attention" among those same features determines how useful computers are in combating information overload. He argues that attention is a resource that acts on information. The computer's purpose is to reduce information so that humans can better manage their own attention resources. However, it is reasonable to depart from this object-oriented ideology. Considering attention as the function rather than the resource clarifies its role: it manages how computers and humans interact with information, not the other way around.

The threaded APU model aligns with Hayles's model of hyper attention. Threading is a powerful computing option when a large number of independent and smaller tasks can be executed at the same time. For example, it is useful for cracking passwords. Passwords are often stored as hashes: seemingly random strings of characters generated by an algorithm that is very difficult to reverse. The only way to recover the password that corresponds to a hash is to throw every possible combination of characters at it. In simpler terms, the computer brute-forces the password by guessing. Each guess does not depend on the previous one; the computer can test 'password' and '12345678' at the same time. Candidate strings do not demand any shared resource; they can be generated on the fly without touching the overall program data.
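A minimal sketch of that brute force in Python. The three-letter alphabet and SHA-256 target are illustrative, and real crackers use far larger search spaces and GPU code rather than Python threads (CPython's global interpreter lock keeps pure-Python hashing from running truly in parallel); still, the structure is the point: a pool of threads tries candidate strings independently, since no guess depends on any other.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# The stored hash of the unknown password (here, secretly "cab").
target = hashlib.sha256(b"cab").hexdigest()
alphabet = "abc"

def attempt(guess):
    # Each guess is independent: hash it and compare against the target.
    return guess if hashlib.sha256(guess.encode()).hexdigest() == target else None

candidates = ("".join(p) for p in product(alphabet, repeat=3))
with ThreadPoolExecutor(max_workers=4) as pool:
    cracked = next(g for g in pool.map(attempt, candidates) if g is not None)

print(cracked)  # cab
```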

Hayles defines the strengths of hyper attention as "negotiating rapidly changing environments in which multiple foci compete for attention." This is also the case for parallel computing. Each thread works from the same source and must quickly negotiate the shared underlying state to solve its part of the problem. Hayles also indicates that hyper attention is instinctual in humans: survival demands quick reactions to a changing environment. Multi-threading fits this description as well. Because each thread accesses the same program data, its environment can change rapidly, and the programmer must understand all of these parameters to parallelize the program's functions effectively. In the threaded model example, the APU attends to each action at the same time. Each distraction runs off the main attention line. In order to manage this hyperactivity, you require hyper attention.

Or do you?

The forked APU model supports Hayles's deep attention concept. "[C]oncentrating on a single object for long periods, ignoring outside stimuli while so engaged, preferring a single information stream, and having a high tolerance for long focus times" describe facets of a forking program's design. Recall that a forked process has total control over its own set of program resources. It accesses and modifies a unique copy of all program data. In this sense, it is not subject to external influence. The forked process remains in this state until it completes, and then it hands control back to the operating system. The forked model treats the attention mechanism as something that directs deep attention. In the forked model example, the APU attends to each action separately, not all three at once, as is the case with the threaded model. While things may appear as distractions from a main task, they end up consuming your mental resources just the same, requiring deep attention.
