In biology, when the presynaptic neuron releases a neurotransmitter (a positive amount of it, obviously), the neurotransmitter reaches the postsynaptic receptors in the next cell's dendrites, causing an excitatory (depolarizing) or inhibitory (hyperpolarizing) effect depending on the kind of receptor. If the total depolarization across all dendrites sufficiently exceeds the hyperpolarization, the neuron triggers an action potential (or a similar signal), continuing the chain.
In the artificial neural network analogy, when the activation function of the previous layer produces an output (say, a positive one), this value is multiplied by the weights of the next layer's neurons. If the weight is positive, the effect is excitatory; if the weight is negative, the effect is inhibitory.
Thus, the two models are functionally equivalent in this respect (the same excitatory/inhibitory behavior is covered): the kind of postsynaptic receptor in biology corresponds to the sign of the input weight in the artificial neuron.
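To make the analogy concrete, here is a minimal NumPy sketch of a single artificial neuron; the specific input, weight, and bias values are made up for illustration. Positive weights play the excitatory role, negative weights the inhibitory one, and the weighted sum stands in for the summed membrane potential:

```python
import numpy as np

# Activations arriving from the previous layer (all non-negative,
# like a released neurotransmitter amount, e.g. after a ReLU).
inputs = np.array([0.8, 0.3, 0.5])

# Mixed-sign weights: positive ~ excitatory synapse,
# negative ~ inhibitory synapse.
weights = np.array([1.2, -0.9, 0.4])
bias = -0.2

# Net "potential": excitation and inhibition are summed together.
potential = np.dot(inputs, weights) + bias

# Fire only if excitation sufficiently exceeds inhibition,
# loosely analogous to crossing the action-potential threshold.
output = max(0.0, potential)  # ReLU as the firing non-linearity
print(potential, output)      # 0.69 0.69
```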
manpreet (Best Answer, 2 years ago):
In the brain, some synapses are stimulating and some inhibiting. ReLU erases that property, keeping only the stimulating part, since in the brain inhibition doesn't mean a 0 output but, more precisely, a negative input.
In the brain, positive and negative potentials are summed up, and if the sum passes the threshold, the neuron fires.
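As a quick illustration of the difference (the potential values here are chosen arbitrarily), the sketch below compares ReLU, which collapses all inhibition to a 0 output, with tanh, which lets inhibition survive as a negative output that the next layer actually receives:

```python
import numpy as np

potentials = np.array([-1.5, -0.2, 0.0, 0.7, 2.0])

relu = np.maximum(0.0, potentials)  # inhibition collapses to 0
tanh = np.tanh(potentials)          # inhibition survives as negative output

print(relu)  # [0.  0.  0.  0.7 2. ]
print(tanh)  # approx. [-0.905 -0.197  0.     0.604  0.964]
```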
There are two main non-linearities in the biological unit that come to mind:
So, is there any idea how to implement negative inputs in an artificial neural network?
I gave examples of non-linearities in the biological neuron because the most obvious positive/negative unit is just a linear unit. But since a linear unit implements no non-linearity, we might consider implementing the non-linearity somewhere else in the artificial neuron.
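One illustrative possibility, not from the original post (the name `signed_threshold` and the parameter `theta` are made up here), is to keep the unit linear in sign but non-linear in magnitude. This amounts to a soft-threshold (shrinkage) operator: small potentials of either sign are silenced, while larger ones pass through with their excitatory/inhibitory sign intact:

```python
import numpy as np

def signed_threshold(x, theta=0.5):
    """Hypothetical unit: keeps the sign (so inhibition stays negative)
    but applies the non-linearity to the magnitude only, i.e. a
    soft-threshold / shrinkage operator."""
    return np.sign(x) * np.maximum(0.0, np.abs(x) - theta)

x = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(signed_threshold(x))  # [-1.5 -0.   0.   0.   1.5]
```

The design choice here is that the non-linearity (the dead zone around 0) no longer coincides with discarding negative values, so downstream neurons still receive genuinely negative inputs, closer to biological inhibition.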