Neural Networks
1. Overview
1.1. Definition:
Neural Networks are computational models inspired by the human brain's interconnected neuron structure, designed to recognize patterns and relationships in data.
1.2. Components:
- Neurons (Nodes): Basic processing units analogous to neurons in the brain.
- Layers:
  - Input Layer: Receives the input data; each neuron represents one feature of the input.
  - Hidden Layers: Intermediate layers that transform inputs through weights and activation functions.
  - Output Layer: Produces the model's final prediction or output.
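The layer structure above can be sketched as a minimal forward pass. This is an illustrative example, not a reference implementation: the layer sizes (3 input features, 4 hidden units, 2 outputs) and random weight initialization are assumptions chosen for demonstration.

```python
import numpy as np

# Hypothetical network: 3 input features -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # input -> hidden parameters
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)  # hidden -> output parameters

def relu(z):
    # ReLU activation: introduces non-linearity by zeroing negative values.
    return np.maximum(0.0, z)

def forward(x):
    hidden = relu(x @ W1 + b1)  # hidden layer: weighted sum, bias, activation
    return hidden @ W2 + b2     # output layer: raw scores for each output neuron

x = np.array([0.5, -1.2, 3.0])  # one example with 3 input features
print(forward(x).shape)         # (2,)
```

Each `@` is a matrix multiplication of the layer's inputs by its weights; stacking such layers with non-linear activations between them is what lets the network model non-linear relationships.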
1.3. Key Concepts:
- Weights and Biases: Parameters that are learned and adjusted during training to minimize error.
- Activation Functions: Introduce non-linearity into the model (e.g., ReLU, Sigmoid, Tanh).
- Backpropagation: Training algorithm that computes the gradient of the loss with respect to each weight and bias; an optimizer (typically gradient descent) uses these gradients to update the parameters.
- Loss Function: A metric quantifying how far the model's predictions are from the target values (e.g., mean squared error, cross-entropy).
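The key concepts above can be tied together in a minimal sketch: a single linear neuron trained by gradient descent on a mean-squared-error loss. The data, learning rate, and iteration count are illustrative assumptions, and the gradients here are derived by hand rather than by a full backpropagation library.

```python
import numpy as np

# Toy data following y = 2x + 1; the network must learn w ~ 2 and b ~ 1.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b = 0.0, 0.0  # weight and bias, the learned parameters
lr = 0.05        # learning rate (assumed; controls the update step size)

for _ in range(2000):
    pred = w * X + b                 # forward pass
    err = pred - y                   # prediction error
    loss = np.mean(err ** 2)         # loss function: mean squared error
    grad_w = 2 * np.mean(err * X)    # gradient of loss w.r.t. the weight
    grad_b = 2 * np.mean(err)        # gradient of loss w.r.t. the bias
    w -= lr * grad_w                 # gradient-descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))      # approaches 2.0 and 1.0
```

In a multi-layer network, the same loop applies, except the gradients for every layer's weights and biases are obtained by backpropagation (the chain rule applied layer by layer) instead of by hand.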
1.4. Challenges:
- Computational Costs: High resource requirements for training, especially with large datasets.