
Mini-batch Gradient Descent
Mini-batch Gradient Descent is a method used in machine learning to train models efficiently. Instead of updating the model using the entire dataset at once (which can be slow), it splits the data into small groups called mini-batches. At each step, the model's parameters are updated using the average gradient of the loss computed over one mini-batch. This approach balances update speed with the stability of the learning process, making training faster and more scalable, especially on large datasets. It is widely used because it offers a good compromise between the precision of full-batch gradient descent and the speed of single-example stochastic gradient descent.
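
To make the loop concrete, here is a minimal sketch of mini-batch gradient descent applied to a least-squares linear model in NumPy. The synthetic dataset and the hyperparameters (batch_size, lr, n_epochs) are illustrative assumptions, not part of the method itself; the core pattern is shuffle, slice a mini-batch, compute its average gradient, and take a small step against it.

import numpy as np

# Sketch: mini-batch gradient descent for least-squares linear regression
# on synthetic data. Shapes and hyperparameters are illustrative choices.

rng = np.random.default_rng(0)

# Synthetic dataset: y = X @ true_w + noise
n_samples, n_features = 1000, 5
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

batch_size = 32   # mini-batch size
lr = 0.1          # learning rate
n_epochs = 20

w = np.zeros(n_features)  # parameters to learn

for epoch in range(n_epochs):
    # Shuffle once per epoch so each mini-batch is a fresh random subset.
    perm = rng.permutation(n_samples)
    for start in range(0, n_samples, batch_size):
        idx = perm[start:start + batch_size]
        X_b, y_b = X[idx], y[idx]

        # Average gradient of the mean squared error over the mini-batch:
        # grad = (2 / |B|) * X_b^T (X_b w - y_b)
        residual = X_b @ w - y_b
        grad = 2.0 * X_b.T @ residual / len(idx)

        # Parameter update: w <- w - lr * grad
        w -= lr * grad

    mse = np.mean((X @ w - y) ** 2)
    print(f"epoch {epoch + 1:2d}  full-data MSE = {mse:.5f}")

Because each update only touches batch_size examples, the cost per step stays constant as the dataset grows, while averaging over the batch keeps the gradient estimate less noisy than a single-example update.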