Chapter 10: Behaviors in the network

Networks are ultimately theoretical, mathematical objects that offer behavioral scientists a useful framework for studying and characterizing the phenomena they are interested in. As we have seen in the previous chapters, relational data from various subfields of psychology can be represented as networks, and network science provides the tools and measures that enable us to study their micro-, meso-, and macro-level structure. However, what arguably makes networks truly meaningful are the processes that operate on them.

A key reason why we spend so much time discussing measures of network structure is the underlying assumption that network structure has meaningful implications for network processes, and that it is through these processes that complex behaviors emerge from the network. For instance, the presence of echo chambers in social networks and the emergence of epileptic episodes in brain activation networks highlight how it is not just the structure of the network that contributes to these phenomena, but also the mechanisms and processes that operate within that structure. In other words, the mechanism implemented on the network structure represents a key theoretical link between the abstract mathematical representation and the psychological construct.

In this first iteration of this chapter, we explore how spreading activation, a key construct in the cognitive sciences, can be implemented on a language network to provide a computational account of empirical phenomena. In future iterations, I plan to include other approaches to implementing “behaviors” in the network, including random walks and SIR models.

10.1 Introduction to spreadr

First, let’s download and load the spreadr R package.

# install.packages('spreadr')
# remotes::install_github('csqsiew/spreadr') # to get the most updated version from my github 

library(spreadr)
## Loading required package: Rcpp
library(igraph)

spreadr is an R package that enables the user to implement the spreading activation mechanism on a network structure (Siew, 2019). Although the concept of spreading activation is very prominent in psychology research (Collins & Loftus, 1975; Dell, 1986), there are few accessible ways of formally implementing this idea on an explicit cognitive structure. spreadr assumes that activation is a limited cognitive resource that can be passed from one node to another, as long as the two nodes are connected. This “passing” of activation to a neighboring node mimics the idea that an activated node can activate other related nodes. The process proceeds in parallel across all nodes in the network that have a non-zero activation value and are connected to other nodes that can receive activation from them. For details on the algorithm, see Siew (2019).

In order to implement the spreading activation mechanism, we need to specify a number of parameters:

  1. network: the igraph network object to run the simulation on
  2. time: the number of time steps the simulation runs for (not actual time)
  3. start_run: a data frame with two columns (“node” and “activation”) specifying which nodes receive how many activation units at time = 0 (advanced: it is also possible to specify the time at which each node receives its activation)
  4. retention: the proportion of activation that remains in a node (i.e., is not spread) at each time step; the remaining 1 - retention of each node’s activation is spread equally among its neighboring nodes
  5. decay: a number from 0 to 1 (inclusive) representing the proportion of activation that is lost at each time step
  6. suppress: a threshold value; at each time step, nodes whose activation is at or below this value have their activation reset to 0

The examples below also set include_t0 = TRUE, which simply includes the time = 0 activation values in the output.

# use the pnet network that comes in the spreadr package
pnet
## This graph was created by an old(er) igraph version.
## ℹ Call `igraph::upgrade_graph()` on it to use with the current igraph version.
## For now we convert it on the fly...
## IGRAPH a7c0363 UN-- 34 96 -- 
## + attr: name (v/c)
## + edges from a7c0363 (vertex names):
##  [1] spike --speak  spike --spoke  spike --speck  spike --spook  spoke --spook  spoke --speck  speak --spoke  speck --spook  speak --spook  speak --speck  speck --sped   sped  --speed  sped  --spud  
## [14] speed --spud   speed --stead  speed --seed   speed --speech speak --speed  stead --seed   seed  --seek   seek  --sneak  seek  --sleek  speak --seek   seek  --peek   sneak --sleek  speak --sneak 
## [27] speak --sleek  speak --peek   speak --speech speech--peach  peach --peace  peek  --peach  peach --peep   peach --pea    peach --peat   peach --peel   peach --peas   peach --teach  peach --leach 
## [40] peach --beach  peach --each   peach --pouch  peach --pitch  peach --patch  peach --pooch  peach --poach  peach --preach peach --reach  pea   --peas   pea   --peel   peep  --pea    peek  --pea   
## [53] peace --pea    pea   --peat   peep  --peel   peep  --peas   peep  --peat   peace --peep   peek  --peep   peel  --peas   peat  --peel   peace --peel   peek  --peel   peat  --peas   peace --peas  
## [66] peek  --peas   peek  --peat   peek  --peace  peace --peat   teach --leach  leach --reach  leach --each   leach --beach  teach --beach  teach --each   teach --reach  beach --each   beach --reach 
## [79] each  --reach  preach--reach  peach --perch  pouch --perch  pitch --perch  pooch --perch  poach --perch  patch --perch  pouch --patch  pitch --patch  patch --pooch  patch --poach  pouch --poach 
## [92] pitch --poach  pooch --poach  pitch --pooch  pouch --pooch  pouch --pitch
# starting activation values (time = 0)
start_run <- data.frame(
  node = c("beach"),
  activation = c(20))

# run the simulation
result <- spreadr(network = pnet, start_run = start_run, 
                  retention = 0.5, decay = 0, suppress = 0,
                  time = 2, include_t0 = TRUE)

# view the result 
result
##       node activation time
## 1    spike  0.0000000    0
## 2    speak  0.0000000    0
## 3    spoke  0.0000000    0
## 4    speck  0.0000000    0
## 5    spook  0.0000000    0
## 6     sped  0.0000000    0
## 7    speed  0.0000000    0
## 8     spud  0.0000000    0
## 9    stead  0.0000000    0
## 10    seed  0.0000000    0
## 11  speech  0.0000000    0
## 12    seek  0.0000000    0
## 13   sneak  0.0000000    0
## 14   sleek  0.0000000    0
## 15    peek  0.0000000    0
## 16   peach  0.0000000    0
## 17   peace  0.0000000    0
## 18    peep  0.0000000    0
## 19     pea  0.0000000    0
## 20    peat  0.0000000    0
## 21    peel  0.0000000    0
## 22    peas  0.0000000    0
## 23   teach  0.0000000    0
## 24   leach  0.0000000    0
## 25   beach 20.0000000    0
## 26    each  0.0000000    0
## 27   pouch  0.0000000    0
## 28   pitch  0.0000000    0
## 29   patch  0.0000000    0
## 30   pooch  0.0000000    0
## 31   poach  0.0000000    0
## 32  preach  0.0000000    0
## 33   reach  0.0000000    0
## 34   perch  0.0000000    0
## 35   spike  0.0000000    1
## 36   speak  0.0000000    1
## 37   spoke  0.0000000    1
## 38   speck  0.0000000    1
## 39   spook  0.0000000    1
## 40    sped  0.0000000    1
## 41   speed  0.0000000    1
## 42    spud  0.0000000    1
## 43   stead  0.0000000    1
## 44    seed  0.0000000    1
## 45  speech  0.0000000    1
## 46    seek  0.0000000    1
## 47   sneak  0.0000000    1
## 48   sleek  0.0000000    1
## 49    peek  0.0000000    1
## 50   peach  2.0000000    1
## 51   peace  0.0000000    1
## 52    peep  0.0000000    1
## 53     pea  0.0000000    1
## 54    peat  0.0000000    1
## 55    peel  0.0000000    1
## 56    peas  0.0000000    1
## 57   teach  2.0000000    1
## 58   leach  2.0000000    1
## 59   beach 10.0000000    1
## 60    each  2.0000000    1
## 61   pouch  0.0000000    1
## 62   pitch  0.0000000    1
## 63   patch  0.0000000    1
## 64   pooch  0.0000000    1
## 65   poach  0.0000000    1
## 66  preach  0.0000000    1
## 67   reach  2.0000000    1
## 68   perch  0.0000000    1
## 69   spike  0.0000000    2
## 70   speak  0.0000000    2
## 71   spoke  0.0000000    2
## 72   speck  0.0000000    2
## 73   spook  0.0000000    2
## 74    sped  0.0000000    2
## 75   speed  0.0000000    2
## 76    spud  0.0000000    2
## 77   stead  0.0000000    2
## 78    seed  0.0000000    2
## 79  speech  0.0500000    2
## 80    seek  0.0000000    2
## 81   sneak  0.0000000    2
## 82   sleek  0.0000000    2
## 83    peek  0.0500000    2
## 84   peach  2.7666667    2
## 85   peace  0.0500000    2
## 86    peep  0.0500000    2
## 87     pea  0.0500000    2
## 88    peat  0.0500000    2
## 89    peel  0.0500000    2
## 90    peas  0.0500000    2
## 91   teach  2.6166667    2
## 92   leach  2.6166667    2
## 93   beach  5.8166667    2
## 94    each  2.6166667    2
## 95   pouch  0.0500000    2
## 96   pitch  0.0500000    2
## 97   patch  0.0500000    2
## 98   pooch  0.0500000    2
## 99   poach  0.0500000    2
## 100 preach  0.2166667    2
## 101  reach  2.6500000    2
## 102  perch  0.0500000    2

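Where do these numbers come from? It helps to trace the first time step by hand. At time = 0, beach holds 20 units of activation. With retention = 0.5, beach keeps half of its activation (10 units) and spreads the other half equally among its five neighbors (peach, teach, leach, each, reach), so each neighbor receives 2 units at time = 1. The sketch below reproduces this arithmetic in base R. It is a simplified illustration for this specific case (no decay, no suppression, unweighted edges), not the package’s internal implementation; see Siew (2019) for the full algorithm.

# hand-computing the first update step for 'beach' (time = 0 to time = 1)
retention <- 0.5
activation_beach <- 20

nbrs <- neighbors(pnet, v = 'beach')$name  # peach, teach, leach, each, reach
length(nbrs)                               # 5

# activation retained by beach at time = 1
activation_beach * retention                        # 10, matching beach's value above

# activation passed to each neighbor at time = 1
activation_beach * (1 - retention) / length(nbrs)   # 2, matching each neighbor's value above
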
It is possible to change various parameters of the simulation to mimic the situation you are interested in. For instance, you could assign activation to more than one node in the network, use different starting activation values, different network structures, or different values for the retention, decay, and suppress parameters. You can also let the simulation run for more time steps. There are likewise multiple ways to analyze the end result: you may focus on the final activation levels of a few target nodes, or look at how activation is distributed across the entire network or among a subset of nodes. At the end of the day, what you decide to do should align with your research questions, so that the simulations help you test a specific idea or hypothesis about your network.
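As a concrete illustration, here is a minimal sketch of one such variation on pnet. The seeded nodes (“beach” and “speed”), the starting activations, and the decay value are arbitrary choices for illustration; the output is summarized by computing the total activation in the network at each time step.

# seed two nodes with different starting activation values
start_run_two <- data.frame(
  node = c('beach', 'speed'),
  activation = c(20, 10))

result_two <- spreadr(network = pnet, start_run = start_run_two,
                      retention = 0.5, decay = 0.1, suppress = 0,
                      time = 5, include_t0 = TRUE)

# with decay = 0.1, total network activation should shrink by roughly 10% per step
library(dplyr)
result_two |>
  group_by(time) |>
  summarise(total_activation = sum(activation))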

10.2 Case study: False memories

Vitevitch et al. (2012) (Experiment 1) conducted a false memory task comparing false alarm rates for words with high and low clustering coefficients (C). In this task, the phonological neighbors of a target word were presented to participants, but the target word itself was never presented. Participants were then asked to recall as many words as they could. The authors found that participants falsely recalled more low C words than high C words, suggesting that the sparser connectivity structure around a low C word allows more activation to spread to the target word, increasing false alarm rates.

In this simulation, we explore whether their empirical result can be replicated computationally using spreadr. The file chapter10-networks.RData contains two phonological networks, each built around one of the two target words, “seethe” and “wrist”. Although both words have the same degree of 16, “seethe” has a higher local C (0.49) than “wrist” (0.16).

In the simulation below, equal amounts of activation are assigned to all neighbors of the target word, but not to the target node itself. Activation is allowed to spread for a fixed number of time steps, and the activation values of the target nodes are retrieved at the final time step. A higher final activation of the target node is taken as an indicator of more false alarms in memory retrieval.

Question: Based on the outputs of the simulation below, which word has the higher final activation value? Does this result align with the empirical result reported by Vitevitch et al. (2012)?

load('data/chapter10-networks.RData')

library(tidyverse)

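Before running the simulations, we can double-check the structural properties reported above directly with igraph:

# both target words should have degree 16 but different local clustering coefficients
degree(lowC, v = 'rIst;wrist')                            # 16
transitivity(lowC, type = 'local', vids = 'rIst;wrist')   # ~0.16

degree(highC, v = 'siD;seethe')                           # 16
transitivity(highC, type = 'local', vids = 'siD;seethe')  # ~0.49
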
# low C network 
start_run_low <- data.frame(
  node = neighbors(lowC, v = 'rIst;wrist')$name,
  activation = rep(10, 16))

result_low <- spreadr(network = lowC, start_run = start_run_low, 
                  retention = 0.5, decay = 0, suppress = 0,
                  time = 5, include_t0 = TRUE)

result_low |> filter(node == 'rIst;wrist', time == 5) 
##         node activation time
## 1 rIst;wrist   8.516831    5
# high C network 
start_run_high <- data.frame(
  node = neighbors(highC, v = 'siD;seethe')$name,
  activation = rep(10, 16))

result_high <- spreadr(network = highC, start_run = start_run_high, 
                  retention = 0.5, decay = 0, suppress = 0,
                  time = 5, include_t0 = TRUE)

result_high |> filter(node == 'siD;seethe', time == 5) 
##         node activation time
## 1 siD;seethe   3.149976    5
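To compare the two conditions more directly, we can also track the target nodes’ activation across all time steps. The sketch below combines the two result data frames and plots the activation trajectories with ggplot2 (loaded earlier as part of the tidyverse); the condition labels are added here for illustration.

# combine both target nodes' activation trajectories
targets <- bind_rows(
  result_low |> filter(node == 'rIst;wrist') |> mutate(condition = 'low C (wrist)'),
  result_high |> filter(node == 'siD;seethe') |> mutate(condition = 'high C (seethe)'))

ggplot(targets, aes(x = time, y = activation, colour = condition)) +
  geom_line() +
  geom_point() +
  labs(x = 'Time step', y = 'Activation of target node')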

10.3 Exercise: Design your experiment!

For this exercise, try to design a simulation study using spreading activation on any network of your choosing. Some questions to help you with this (a bare-bones code template follows the list):

  • What would I learn from doing this simulation?
  • What does the notion of “spreading activation” mean for the specific network that I’ve chosen?
  • How should the simulation be set up? What are the starting values and parameters? And why were these specific values chosen?
  • How should the outputs be analyzed? Why?
  • What are my expected results? Did the actual results align with my initial expectations?
  • What did I learn from doing this simulation?
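As a starting point, here is a bare-bones template you might adapt; every value in it is a placeholder to be replaced with choices motivated by your own research question.

# skeleton for a spreading activation simulation study (all values are placeholders)
my_network <- pnet                   # swap in your own igraph object

my_start_run <- data.frame(
  node = c('beach'),                 # which node(s) receive activation, and why?
  activation = c(20))                # how much activation, and why?

my_result <- spreadr(network = my_network, start_run = my_start_run,
                     retention = 0.5, decay = 0, suppress = 0,  # justify each value
                     time = 10, include_t0 = TRUE)

# analyze the output in a way that answers your research question, e.g.:
my_result |> filter(time == 10)      # final activation values of all nodes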

10.4 References

Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407–428.

Dell, G. S. (1986). A spreading-activation theory of retrieval in sentence production. Psychological Review, 93(3), 283–321.

Siew, C. S. Q. (2019). spreadr: An R package to simulate spreading activation in a network. Behavior Research Methods, 51(2), 910–929. https://doi.org/10.3758/s13428-018-1186-5

Vitevitch, M. S., Ercal, G., & Adagarla, B. (2011). Simulating retrieval from a highly clustered network: Implications for spoken word recognition. Frontiers in Psychology, 2, 369.

Vitevitch, M. S., Chan, K. Y., & Roodenrys, S. (2012). Complex network structure influences processing in long-term and short-term memory. Journal of Memory and Language, 67(1), 30–44. https://doi.org/10.1016/j.jml.2012.02.008

10.5 Future topics under this chapter

  • random walks
  • SIR models