10.4.2 Activation Functions

From Computer Science Knowledge Base
Revision as of 19:55, 8 July 2025 by Mr. Goldstein

After a perceptron computes the weighted sum of its inputs (plus its bias), it needs to decide whether to "fire" or "activate" and send a signal to the next layer. This is where activation functions come in!

Think of it like a light switch. If the total "strength" of the incoming signals is strong enough, the switch turns on, and the perceptron sends a signal forward. If it's not strong enough, the switch stays off.

Activation functions are mathematical rules that help the perceptron make this "on or off" or "how much signal to send" decision. They introduce non-linearity, which lets the network learn more complex patterns than simple weighted sums alone. Without them, stacking many layers wouldn't help: a chain of linear steps (multiplications and additions) collapses into one single linear step, so even a deep network could only draw straight lines through the data. With them, it can learn curves and complex shapes.
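Here is a small sketch in Python of three common activation functions. The "light switch" from above is the step function; sigmoid and ReLU are the "how much signal to send" versions used in most modern networks. (The function names here are our own choices for illustration.)

```python
import math

def step(x):
    # The "light switch": fire (1) if the combined signal is
    # strong enough, otherwise stay off (0).
    return 1 if x >= 0 else 0

def sigmoid(x):
    # A smooth switch: output slides between 0 and 1, so the
    # perceptron can send a partial signal instead of all-or-nothing.
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: pass positive signals through
    # unchanged, block negative ones entirely.
    return max(0.0, x)

print(step(2.5))              # strong signal -> switch on
print(step(-1.0))             # weak signal -> switch off
print(round(sigmoid(0.0), 2)) # exactly in the middle
print(relu(-3.0))             # negative signal blocked
print(relu(3.0))              # positive signal passed through
```

Each of these bends the output in a way that no amount of multiplying and adding could, which is exactly the non-linearity the network needs.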
