As a digital marketing consultant, I stumbled upon machine learning a couple of years ago while searching for an automatic way to optimize advertising budget allocation between different marketing channels. Machine learning appeared to be a promising way to achieve that goal, so I enrolled in some online courses, bought a book on coding in Python, and over time managed to master some basic algorithms. Then, back in early 2019, I found a job description on LinkedIn in which a company was looking for a digital marketing manager with experience in neuromarketing. I started to tinker with this odd-sounding, relatively young field and was so fascinated that I even completed a few neuromarketing-related online certificate programs, including certifications in neuroscience and neuroeconomics.
If you have already read one or two papers on neuroscience and machine learning, it is highly likely that you have encountered the basic similarities between artificial neural networks and biological neural networks. Of course, this is not very surprising at first. After all, artificial neural networks originate from neural networks in the human brain: each artificial neuron is connected with several others and can transmit information along these connections, just like neurons in the human brain.
But are you aware that, beyond those obvious similarities, both disciplines also use the same mathematical methods, namely expected utility (EU) and expected reward (ER), to model decision-making processes?
To get an idea about these methods, let us take a quick look at their mathematical notations.
Expected utility in the case of discrete values
In the equation below, xi represents a possible outcome of the decision x and pi its probability. EU is then given by the expected value of the utility of that decision:

E[u(x)] = p1 · u(x1) + … + pn · u(xn)
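The sum above is easy to compute directly. The following is a minimal numeric sketch; the square-root utility function and the outcomes and probabilities are illustrative assumptions, not values from the text:

```python
import math

def expected_utility(outcomes, probs, u):
    """E[u(x)] = sum_i p_i * u(x_i) over a discrete set of outcomes."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

# A classic illustration of risk aversion with u(x) = sqrt(x):
# a sure 50 versus a 50/50 gamble on 0 or 100. Both have the same
# expected value (50), but not the same expected utility.
sure_thing = expected_utility([50], [1.0], math.sqrt)       # sqrt(50) ≈ 7.07
gamble = expected_utility([0, 100], [0.5, 0.5], math.sqrt)  # 0.5 * 10 = 5.0
print(sure_thing > gamble)  # True: the sure thing wins in utility terms
```

This is the sense in which a decision maker "maximizes expected utility": the choice is ranked by E[u(x)], not by the expected monetary value alone.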
In terms of the expected reward, "a" denotes a selected action, At the action taken at time step t, and Rt the corresponding reward. The value q(a) of an action is then:

q(a) = E[Rt | At = a]
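In practice, q(a) is unknown and is estimated from experience, most simply as the running average of the rewards observed after taking action a. A minimal sketch, with made-up reward values for illustration:

```python
def sample_average(rewards):
    """Estimate q(a) = E[Rt | At = a] as the mean of observed rewards for a."""
    return sum(rewards) / len(rewards)

# Suppose action a was taken four times and rewarded three times:
rewards_for_a = [1.0, 0.0, 1.0, 1.0]
print(sample_average(rewards_for_a))  # 0.75
```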
In machine learning, ER has been used since the early 1960s (yes, machine learning is already that old!) to model and optimize the decisions of simple algorithms such as multi-armed bandits. The notion of EU comes from mathematician John von Neumann and economist Oskar Morgenstern. In 1947, they proved that if a decision maker chooses from a set of risky (probabilistic) options, he behaves as if he were maximizing the expected value of an intrinsic mathematical function.
However, until recently, EU was just a theoretical approach, a framework under which it was possible to explain certain parts of decision making.
But now, empirical evidence reveals that E[u(x)] not only accurately describes decision making at the neurobiological level, but also that q(a) is stored, in the form of action potentials per second, in the striatum and the medial prefrontal cortex, particular areas of the human brain, even if the decision maker is not required to make any choices (Levy, I. et al., 2011).
The related experiment was presented by Dr. Paul W. Glimcher at Nobel Conference 47 in 2011. During his presentation, he described a groundbreaking experiment he and his colleagues had conducted to examine the brain activity of monkeys during an experimental game called "work or shirk". The mathematics behind this game is game theory, founded by John von Neumann and Oskar Morgenstern in 1944. John Nash later contributed a concept in non-cooperative games now known as the Nash Equilibrium, for which he received the Nobel Prize in Economics in 1994.
Non-cooperative games are those where the players compete against each other. If each player executes a strategy – an action plan – based on what has happened so far in the game, then for some non-cooperative games there exists a state in which no player has anything to gain by changing only his own strategy. Such a state is called a Nash Equilibrium.
Glimcher’s “work or shirk” game setup was as follows. The participating monkeys played a simple game in which they received different amounts of orange juice based on their decision either to play the equilibrium strategy (shirk) or to test whether they would receive more juice by playing another strategy (work). Glimcher’s team measured the monkeys’ neural activity during the games while varying the payoffs the monkeys received when they occasionally tried the non-equilibrium strategy. In the brain-activity data, they discovered that the expected values of the gains from the monkeys’ decisions correlated with the neural activity in the related brain regions, encoded in the form of action potentials per second, or as they called it, “spikes”.
This is exactly how one of the simplest machine learning algorithms chooses its actions. Based on what it has learned up to time t, it takes the option/action with the best average return (deploy), and from time to time it chooses options that may reveal themselves to have a higher return (explore). In machine learning, this is called the epsilon-greedy algorithm. It is one of the key algorithms behind decision science, and the first one I encountered when I was searching for an automatic way to optimize advertising budget allocation between different marketing channels.
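The deploy/explore behavior described above can be sketched in a few lines. The three "channels" and their reward probabilities below are invented for illustration (think of them as the unknown conversion rates of three advertising channels); epsilon is the fraction of the time the algorithm explores:

```python
import random

def epsilon_greedy(true_probs, steps=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: exploit the best-estimated arm, explore occasionally."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms    # how often each arm was pulled
    values = [0.0] * n_arms  # running estimate of q(a) for each arm
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n_arms)                      # explore
        else:
            a = max(range(n_arms), key=values.__getitem__) # deploy
        reward = 1.0 if rng.random() < true_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]      # incremental mean
    return values, counts

values, counts = epsilon_greedy([0.02, 0.05, 0.11])
best = max(range(3), key=values.__getitem__)
print(best)  # index of the channel the algorithm estimates to pay best
```

The incremental-mean update is exactly the sample-average estimate of q(a) from earlier, maintained one reward at a time, so the algorithm never needs to store its full reward history.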
Levy, I. et al. (2011), “Choice from Non-Choice: Predicting Consumer Preferences from Blood Oxygenation Level-Dependent Signals Obtained during Passive Viewing”, Journal of Neuroscience, https://www.jneurosci.org/content/jneuro/31/1/118.full.pdf