---
title: "Gain control explains Weber's law"
author: |
| Ansgar D. Endress
| City, University of London
bibliography:
- /Users/endress/ansgar.bib
- /Users/endress/ansgar.own.bib
csl: /Users/endress/csl_files/apa.csl
output:
html_notebook:
theme: spacelab
number_sections: yes
toc: yes
toc_float: yes
pdf_document:
toc: false
number_sections: true
keep_tex: true
citation_package: natbib
html_document:
theme: spacelab
number_sections: yes
df_print: paged
toc: yes
toc_float: yes
keywords: Keywords
abstract: 'Weber''s law states that the difficulty of discriminating two sensory stimuli depends on their ratio rather than their absolute difference. It applies across senses and species, but its origins are still unclear. Here, I propose that Weber''s law is a natural consequence of an equally ubiquitous mechanism: multiplicative gain control. This model is analogous to the display of a calculator with a limited number of digits. In a two-digit display, we can discriminate 10 from 11, 10*e*1 from 11*e*1 and 10*e*2 from 11*e*2, but more fine-grained discriminations are impossible. Hence, the *ratio* of the viable discriminations is constant. Multiplicative gain control thus provides a simple and natural explanation of Weber''s law and of other observations that have led to strong conclusions about the nature of internal representations of quantities.'
---
```{r setup, echo = FALSE, include=FALSE}
rm (list=ls())
#load("~/Experiments/TP_model/tp_model.RData")
#options (digits = 3)
knitr::opts_chunk$set(
# Run the chunk
eval = TRUE,
# Don't include source code
echo = FALSE,
# Print warnings to console rather than the output file
warning = FALSE,
# Stop on errors
error = FALSE,
# Print message to console rather than the output file
message = FALSE,
# Include chunk output into output
include = TRUE,
# Don't reformat R code
tidy = FALSE,
# Center images
# Breaks showing figures side by side, so switch this to default
fig.align = 'center',
# Show figures where they are produced
fig.keep = 'asis',
# Prefix for references like \ref{fig:chunk_name}
fig.lp = 'fig',
# For double figures, and doesn't hurt for single figures
fig.show = 'hold',
# Default image width
out.width = '90%')
# other knitr options are here:
# https://yihui.name/knitr/options/
```
```{r load-libraries, echo = FALSE, include = FALSE, message = FALSE, warning = FALSE}
# Read in a random collection of custom functions
if (Sys.info()[["user"]] %in% c("ansgar", "endress")){
source ("/Users/endress/R.ansgar/ansgarlib/R/tt.R")
source ("/Users/endress/R.ansgar/ansgarlib/R/null.R")
#source ("helper_functions.R")
} else {
# Note that these will probably not be the latest versions
source("http://endress.org/progs/tt.R")
source("http://endress.org/progs/null.R")
}
library ("knitr")
# dplyr and ggplot2 are needed by the simulation and plotting chunks below
library ("dplyr")
library ("ggplot2")
library (latex2exp)
```
```{r define-functions, echo = FALSE, include = FALSE, message = FALSE, warning = FALSE}
convert_to_range1 <- function (x0,
precision = 2,
representation_width = 1,
noise_sd = .1,
base = 10,
order_of_magnitude = NULL,
offset = NULL) {
if (is.null (order_of_magnitude))
order_of_magnitude <- base^floor (log (x0, base = base))
# scale to range
x <- x0 / order_of_magnitude
# center in range
if (is.null (offset))
offset <- x - (base / 2)
x <- x - offset
x <- signif (x, precision)
x <- x + rnorm (length (x), 0, noise_sd)
x <- pmin (x,
representation_width * base)
# symmetrically extended below zero
x <- pmax (x,
- (representation_width-1) * base)
return (list (x = x,
order_of_magnitude = order_of_magnitude,
offset = offset))
}
run_simulation <- function (x0,
n_sim = 1000, ratios = seq (.5,2,length=201),
precision = 2,
representation_width = 1,
...){
res.df <- data.frame ()
for (i in 1:n_sim){
for (noise_sd in seq(0,1,.2)) {
x1 <- convert_to_range1(x0, noise_sd = noise_sd,
precision = precision,
representation_width = representation_width,
...)
tmp.df <-
ratios %>%
data.frame %>%
setNames("Ratio") %>%
arrange(Ratio) %>%
mutate (noise_sd = noise_sd,
x0 = x0,
x = x1$x,
y0 = x0 * Ratio) %>%
mutate (y =
convert_to_range1(y0,
noise_sd = noise_sd,
order_of_magnitude = x1$order_of_magnitude,
offset = x1$offset,
precision = precision,
representation_width = representation_width,
...)$x) %>%
mutate (greater = ifelse (y > x,
1,
ifelse (y < x,
-1,
0)))
res.df <- rbind (res.df,
tmp.df)
}
}
return (res.df)
}
display_stuff <- function (dat = .){
dat %>%
mutate (noise_sd = factor (noise_sd)) %>%
group_by (noise_sd, Ratio) %>%
# Compute the SE first: summarize() evaluates its arguments sequentially,
# so `greater` must not be overwritten by its mean before sd() is taken
summarize (greater.se = sd (greater) / sqrt (n() - 1),
greater = mean (greater)) %>%
ggplot (aes(x = Ratio, y = greater,
color = noise_sd,
linetype=noise_sd)) +
theme_light() +
theme(#text = element_text(size=20),
plot.title = element_text(size = 18, hjust = .5),
axis.title = element_text(size=16),
axis.text.x = element_text(size=14, angle = 0),
axis.text.y = element_text(size=14),
legend.title = element_text(size=14),
legend.text = element_text(size=12)) +
#geom_errorbar(aes(ymin=greater-greater.se, ymax=greater+greater.se)) +
#geom_ribbon(aes(ymin=greater-greater.se, ymax=greater+greater.se,
# colour = factor (noise_sd))) +
geom_line() +
#scale_x_continuous(trans='log2') +
scale_colour_discrete (name = "SD") +
scale_linetype_discrete (name = "SD") +
theme(legend.justification=c(0,1),
legend.position=c(0.05,.95)) +
labs (y = TeX("$P(x_2 > x_1)$"))
}
```
```{r run-simulations, echo = FALSE}
res.df5 <- run_simulation(5, 500, precision = 2)
res.df500 <- run_simulation(500, 500, precision = 2)
```
Weber's law is a fundamental principle of perception: The difficulty of discriminating two sensory stimuli depends on their ratio rather than their absolute difference. For example, it is easier to discriminate 15 from 10 (ratio 1.5) than 600 from 500 (ratio 1.2) even though the absolute difference is much larger in the second case. Weber's law applies across senses and species [@Gleitman2007a], with important individual differences [@Halberda2012] that can even predict mathematics performance in children [@Halberda2008;@Lourenco2012].
However, the reasons for Weber's law are still unclear. While a number of authors explained it based on assumptions about the format and distribution of the mental representations of quantities [@Whalen1999;@Piazza2004;@Pica2004;@Feigenson2004;@Gordon2004;@Dehaene2008;@Izard2008;@Revkin2008;@Ditz2015;@Nieder2017;@Pardo-Vazquez2019], these representational assumptions have not always been independently motivated, and a natural explanation of Weber's law is still lacking.
Here, I propose that Weber's law is a natural consequence of a mechanism that is ubiquitous across brain areas and species: multiplicative gain control [@Levi1969;@Abbott1997;@Salinas2000;@Priebe2002;@Willmore2014].
As an analogy, consider the display of a pocket calculator with just two digits. In this display, we can discriminate 10 from 11, $10 e1$ from $11 e1$ (i.e., 100 from 110) and $10 e2$ from $11 e2$, but not, say, 100 from 101 because only the first two digits are represented. In other words, the ratio of the possible discriminations is constant, due to the presence of gain control coupled with a limited precision of the representations.
More generally, for a number of magnitude $10^{1+a}$, there is uncertainty about the last $a$ digits: the error can take any value between $0$ and $10^a - 1$, with an average of about $5 \times 10^{a-1}$. In this simple model, the uncertainty about a quantity is thus proportional to the quantity, with a ratio of $\frac{5 \times 10^{a-1}}{10^{1+a}} = 5 \%$. Given the ubiquity of multiplicative gain control in the brain [@Salinas2000;@Priebe2002], it provides a natural account of Weber's law.
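The calculator analogy can be sketched directly in R, using `signif()` as a stand-in for the limited-precision display (the function name `two_digits` is purely illustrative):

```{r two-digit-display, echo = TRUE}
# A hypothetical two-digit "display": keep only the first two significant digits
two_digits <- function (x) signif (x, 2)

two_digits (10) != two_digits (11)    # TRUE: 10 vs. 11 are discriminable
two_digits (100) != two_digits (110)  # TRUE: same 1.1 ratio, still discriminable
two_digits (100) != two_digits (101)  # FALSE: the difference is lost in rounding
```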
Gain control also provides a natural explanation of the observation that quantities seem to be represented logarithmically [@Whalen1999;@Piazza2004;@Dehaene2008;@Izard2008]. For example, the adjacent (white or black) keys on a (well tempered) piano are perceived as having the same distance, but they are really separated by the same frequency *ratio* rather than by the same frequency difference. The frequency difference between the middle C and the next semitone is 15.55 Hz, while, one octave higher, the frequency difference between C and the next semitone is 31.1 Hz; in both cases, however, the *relative* difference is about 6\%.
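The arithmetic behind the piano example is easy to verify (frequencies assume equal temperament with A4 = 440 Hz):

```{r semitone-ratios, echo = TRUE}
# Adjacent semitones are separated by a constant frequency *ratio* of 2^(1/12)
middle_c <- 261.63                     # frequency of C4 in Hz (equal temperament)
semitone_ratio <- 2^(1/12)

middle_c * (semitone_ratio - 1)        # ~15.56 Hz: C4 to the next semitone
(2 * middle_c) * (semitone_ratio - 1)  # ~31.11 Hz: one octave higher
semitone_ratio - 1                     # relative difference, ~5.9% in both cases
```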
The fact that constant relative differences lead to the perception of equally spaced stimuli follows naturally from multiplicative gain control. If we try to discriminate two pairs of *physical* quantities $x_1$ and $x_2$ (e.g., the frequencies of the middle C and of the next semitone) and $z_1$ and $z_2$ (e.g., the frequencies of these tones shifted by one octave), the *internal representations* of these quantities will be scaled by a factor that is roughly proportional to these quantities, that is, $\xi_1 \approx \frac{x_1}{x_1} = 1$; $\xi_2 \approx \frac{x_2}{x_1}$; $\zeta_1 \approx \frac{z_1}{z_1} = 1$; $\zeta_2 \approx \frac{z_2}{z_1}$. The difference between the internal representations is thus determined simply by the ratio of the physical quantities. Semitones (or, more generally, quantities with constant ratios) are therefore perceived as equidistant, which is just to say that the quantities appear to be represented logarithmically.
In line with this view, Cicchini et al. [@Cicchini2014] showed that, in an estimation task (where observers had to estimate rather than compare quantities), quantities on previous trials affected estimates on later trials, and suggested that observers might use information from previous trials to adjust the "gain" for the representations.
Gain control also explains why noise in the internal representations of quantities appears to be log-normally distributed [@Izard2008]: As the internal representations are scaled through gain control, normally distributed noise in these internal representations will appear to be log-normally distributed if the internal representations are transformed back to the original physical quantities.
A simple illustration of this model is given in Figure \ref{fig:number_gain_control}. In this model, I ask an observer to compare two numbers. The first number establishes the gain; I model this by first scaling the number to fit between 0 and 10, and then centering it on 5; the same scaling and centering is then applied to the second number. The limited precision is implemented by rounding the numbers to 2 significant digits, and by restricting the possible range of numbers to between 0 and 10. However, this step is not necessary, as the presence of noise (see below) makes the last few digits uninformative in any case; in fact, the results are largely identical when the representations are not rounded.
I then add Gaussian noise with a mean of zero to both numbers. As mentioned above, this assumption is equivalent to previous assumptions that the noise in a representation is proportional to its magnitude [@Izard2008;@Whalen1999]. Finally, I ask whether the internal representation is greater for the first or the second number.
I ran 500 simulations for every noise-level/ratio combination. Of course, biological systems do not necessarily operate in base 10, but the choice of base only affects the quantitative predictions and needs to be determined empirically.
The results are shown in Figure \ref{fig:number_gain_control}, for when the model needs to compare numbers to 5 (a) or to 500 (b); as mentioned above, not rounding the internal representations would yield identical results. Performance is better for more discriminable ratios, and follows the expected psychometric function [@Halberda2008;@Halberda2012], whose slope reflects the strength of the internal noise. Critically, the results are virtually identical irrespective of whether the reference number is 5 or 500; performance is driven exclusively by the ratio of the numbers.
```{r plot-results, fig.cap='\\label{fig:number_gain_control} Simulated discrimination performance as a function of the ratio of the two quantities and of the noise level, for a reference quantity of 5 (a) or 500 (b).'}
res.df5.plot <- res.df5 %>%
display_stuff +
ggtitle (TeX("$x_1 = 5$"))
res.df500.plot <- res.df500 %>%
display_stuff +
ggtitle (TeX("$x_1 = 500$"))
#pdf ("weber_model.pdf")
cowplot::plot_grid(res.df5.plot,
res.df500.plot,
nrow=1,
labels = "auto")
#dev.off()
```
There is a simple mathematical reason why gain control necessarily implies a sigmoid response profile. In a comparison between two stimuli $x_1$ and $x_2$ (with $x_2 > x_1$), the probability of a correct decision is simply the probability that the *internal* representation of $x_2$ ($\xi_2$) is greater than the internal representation of $x_1$ ($\xi_1$):
$$
\text{accuracy} = P\left(\xi_2 - \xi_1 > 0\right) = P\left(\frac{x_2 - x_1}{K} + \epsilon_{\sqrt{2} \sigma} > 0\right),
$$
where $K$ is the gain control scaling factor and $\epsilon$ is Gaussian noise with a mean of zero and a standard deviation of $\sqrt{2} \sigma$. (The factor $\sqrt{2}$ is due to the fact that we need to add the variances of $\xi_1$ and $\xi_2$.)
As mentioned above, the scaling factor $K$ will generally be proportional to $x_1$, so that $\frac{x_2 - x_1}{K}$ becomes $\frac{x_2 - x_1}{\alpha x_1} = \frac{1}{\alpha} (R-1)$, where $\alpha$ is the proportionality factor and $R = \frac{x_2}{x_1}$ is the ratio of the quantities. The accuracy can thus be expressed in terms of this ratio and the noise level:
$$
\begin{aligned}
\text{accuracy} & = P\left(\frac{1}{\alpha} (R-1) + \epsilon_{\sqrt{2} \sigma} > 0\right) = P\left(\epsilon_{\sqrt{2} \sigma} > - \frac{1}{\alpha} (R-1) \right) \\
& = P\left(\epsilon_{\sqrt{2} \alpha \sigma} < R-1 \right)
\end{aligned}
$$
This, however, is just a cumulative Gaussian with a standard deviation of $\sqrt{2} \alpha \sigma$ (i.e., a scaled error function), and thus a sigmoid function of the ratio $R$.
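The resulting response profile can be written down in closed form. The sketch below uses illustrative values for the proportionality factor $\alpha$ and the noise level $\sigma$, and rescales the cumulative Gaussian to the $[-1, 1]$ coding used in the simulations:

```{r analytic-psychometric, echo = TRUE}
# Analytic response profile implied by the derivation above;
# alpha and sigma are free parameters, chosen here only for illustration
predicted_response <- function (R, alpha = .2, sigma = 1) {
  # P(eps < R - 1) with eps ~ N(0, sqrt(2) * alpha * sigma),
  # rescaled from [0, 1] to [-1, 1]
  2 * (pnorm (R - 1, mean = 0, sd = sqrt (2) * alpha * sigma) - .5)
}

predicted_response (1)   # 0: equal quantities are indistinguishable
predicted_response (2)   # close to 1
predicted_response (.5)  # close to -1
```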
Multiplicative gain control thus provides a natural explanation of Weber's law, the apparent logarithmic representation of quantities, the apparently log-normal distribution of noise in the representation of quantities and the sigmoid dependency between the ratio of two quantities and discrimination performance. The generality of Weber's law might thus reflect the ubiquity of multiplicative gain control in the brain.
# Unused
```{r plot-errf}
errf_ratio <- function (x, noise_sd, alpha = 1/5){
2 * (pnorm (x, 0, sqrt (2) * alpha * noise_sd) - .5)
}
curve (errf_ratio(x-1, noise_sd =.2), .5, 2)
```