AI neural networks draw inspiration from neuroscience. In the brain, neurons are interconnected cells that form networks. These neurons receive and send electrical signals. When the electrical input exceeds a certain threshold, the neuron activates and transmits its signal forward.
An Artificial Neural Network (ANN) is a mathematical model inspired by biological neural networks. These networks implement mathematical functions that map inputs to outputs, determined by their structure and parameters. The network's parameters are learned by training on data.
In AI implementation, each biological neuron's counterpart is a unit connected to other units. For instance, as discussed in the last lecture, an AI might determine rainfall probability using two inputs, x₁ and x₂. We proposed this hypothesis function: h(x₁, x₂) = w₀ + w₁x₁ + w₂x₂, where w₁ and w₂ are weights that modify the inputs, and w₀ is a constant, or bias, that adjusts the entire expression.
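As a minimal sketch, the hypothesis function above is just a weighted sum of the inputs plus a bias. The weight and input values below are assumptions chosen for illustration; in practice the weights are learned from data.

```python
# Hypothetical parameter values for illustration; real values are learned during training.
w0, w1, w2 = 0.5, 0.2, 0.3  # bias and weights (assumed)

def h(x1, x2):
    """Weighted-sum hypothesis: h(x1, x2) = w0 + w1*x1 + w2*x2."""
    return w0 + w1 * x1 + w2 * x2

# With inputs x1 = 1 and x2 = 2:
print(h(1, 2))  # 0.5 + 0.2*1 + 0.3*2 = 1.3
```

Each unit in the network computes exactly this kind of weighted sum over its incoming connections.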
Activation functions are mathematical functions applied to a neuron's output before it is passed to the next layer. They introduce non-linearities, enabling the network to learn complex patterns and approximate any function.
Activation functions are broadly classified into linear and non-linear functions.
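Three widely used activation functions can be sketched as follows. The step function mirrors the biological threshold behavior described earlier; the sigmoid and ReLU are standard non-linear choices.

```python
import math

def step(x, threshold=0.0):
    # Step function: outputs 1 once the input exceeds the threshold, else 0,
    # mimicking a biological neuron firing past its activation threshold.
    return 1 if x > threshold else 0

def sigmoid(x):
    # Logistic sigmoid: squashes any real input into the range (0, 1),
    # so the output can be read as a probability (e.g., of rainfall).
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: passes positive inputs through unchanged
    # and zeroes out negative ones.
    return max(0.0, x)

print(step(1.2))      # 1
print(sigmoid(0.0))   # 0.5
print(relu(-3.0))     # 0.0
```

Applied to the hypothesis above, one would compute, for example, sigmoid(h(x₁, x₂)) to turn the raw weighted sum into a value between 0 and 1.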