Stochastic gradient descent

Stochastic gradient descent (SGD) is a popular optimization algorithm in machine learning, particularly for training deep learning models.

The goal of SGD, like that of other optimization algorithms, is to find the parameters (e.g., the weights and biases of a neural network) that minimize the loss function, a measure of the model’s error on the training data, thereby improving the model’s performance.
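As a concrete illustration, here is a minimal sketch of such a loss function: mean squared error for a simple linear model, written in Python with NumPy. The synthetic data and parameter names are assumptions made for the example, not anything prescribed by SGD itself.

import numpy as np

# Hypothetical training data: 100 examples with 3 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

def mse_loss(w, b, X, y):
    # Mean squared error of a linear model y_hat = X @ w + b.
    y_hat = X @ w + b
    return np.mean((y_hat - y) ** 2)

# Randomly initialized parameters; SGD's job is to drive this number down.
w = rng.normal(size=3)
b = 0.0
print(mse_loss(w, b, X, y))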

The term ‘stochastic’ in SGD comes from the fact that the gradient computed on a single example is a ‘stochastic approximation’ of the true gradient: a noisy but unbiased estimate. This noise can help the algorithm jump out of shallow local minima of the loss function, which can be beneficial for finding better, and potentially global, minima.
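To make ‘unbiased’ concrete, the sketch below (same hypothetical linear-regression setup as above) checks that the average of the per-example gradients equals the full-batch gradient, so a randomly chosen example gives the right answer on average, just with noise.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)
w, b = rng.normal(size=3), 0.0

def grad_single(w, b, x_i, y_i):
    # Gradient of the squared error on one example, with respect to w.
    err = x_i @ w + b - y_i
    return 2 * err * x_i

# Full-batch gradient of the mean squared error with respect to w.
full_grad = 2 * X.T @ (X @ w + b - y) / len(y)

# Averaging the single-example gradients recovers the full-batch gradient,
# which is why one randomly chosen example is an unbiased (if noisy) estimate.
avg_grad = np.mean([grad_single(w, b, X[i], y[i]) for i in range(len(y))], axis=0)
print(np.allclose(full_grad, avg_grad))  # True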

Here’s a simplified explanation of how SGD works (a code sketch follows the list):

  1. Initialize the parameters (weights and biases in the case of neural networks) with random values.
  2. Randomly pick a single data point (or a mini-batch) from the dataset.
  3. Compute the gradient of the loss function with respect to the parameters for that data point. The gradient indicates the direction in which the loss is increasing most rapidly.
  4. Update the parameters by a small step in the opposite direction of the gradient. The size of this step is determined by the learning rate, a hyperparameter that controls how quickly the model learns.
  5. Repeat steps 2-4 until the algorithm converges to a minimum, which is when the loss can no longer be significantly reduced.
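Putting the five steps together, here is a minimal sketch of the loop for the same hypothetical linear-regression setup. The learning rate, number of epochs, and synthetic data are illustrative choices, not part of the algorithm’s definition.

import numpy as np

rng = np.random.default_rng(42)
# Synthetic data generated from known weights, so we can watch SGD recover them.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = rng.normal(size=3)   # Step 1: random initial parameters.
b = 0.0
lr = 0.01                # Learning rate: the step size used in step 4.

for epoch in range(50):
    for i in rng.permutation(len(y)):   # Step 2: visit examples in random order.
        err = X[i] @ w + b - y[i]
        grad_w = 2 * err * X[i]         # Step 3: gradient of the squared error.
        grad_b = 2 * err
        w -= lr * grad_w                # Step 4: small step against the gradient.
        b -= lr * grad_b
    # Step 5: in practice, stop once the loss no longer improves noticeably.

print(w, b)  # Should end up close to true_w and 0.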

SGD is computationally efficient, especially for large datasets, because it uses only a single data point (or a small subset) at each iteration. Moreover, the randomness in SGD (from the random selection of data points) can help prevent the algorithm from getting stuck in suboptimal local minima, improving its chances of reaching a good, possibly global, minimum.
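A single mini-batch update looks much the same, as the sketch below shows: the gradient is simply averaged over a small random subset rather than computed on one example (the batch size of 32 is an arbitrary illustrative choice).

import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))
y = rng.normal(size=200)
w, b, lr = rng.normal(size=3), 0.0, 0.01

idx = rng.choice(len(y), size=32, replace=False)   # Random mini-batch of 32 examples.
err = X[idx] @ w + b - y[idx]
w -= lr * (2 * X[idx].T @ err / len(idx))          # Gradient averaged over the batch.
b -= lr * (2 * err.mean())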

It’s important to note that SGD requires careful tuning of the learning rate and other hyperparameters. Furthermore, while SGD’s randomness can be an advantage, it can also cause the loss to fluctuate significantly, leading to a less stable convergence. There are variants of SGD, such as SGD with momentum, AdaGrad, RMSprop, and Adam, which address some of these issues and are often used in practice.
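As a brief illustration of one such variant, here is a sketch of SGD with momentum for the same hypothetical setup: a velocity term accumulates a running blend of recent gradients, which damps the fluctuations mentioned above. The momentum coefficient of 0.9 is a common illustrative default, not a universally correct setting.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = rng.normal(size=200)
w, b = rng.normal(size=3), 0.0
lr, momentum = 0.01, 0.9
v_w, v_b = np.zeros_like(w), 0.0   # Velocity terms that smooth the noisy updates.

for epoch in range(20):
    for i in rng.permutation(len(y)):
        err = X[i] @ w + b - y[i]
        v_w = momentum * v_w + 2 * err * X[i]   # Blend the new gradient into the running direction.
        v_b = momentum * v_b + 2 * err
        w -= lr * v_w                           # Step along the smoothed direction.
        b -= lr * v_b

In practice these variants come built in with deep learning frameworks (for example, PyTorch’s torch.optim.SGD accepts a momentum argument), so they rarely need to be written by hand.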

Just in

IBM to deploy next-generation quantum computer at RIKEN in Japan

IBM has announced an agreement with RIKEN, a Japanese national research laboratory, to deploy IBM's quantum computer architecture and quantum processor at the RIKEN Center for Computational Science in Kobe, Japan.

Jack Dorsey departs Bluesky board — TC

Dorsey appears to have deleted his Bluesky account at some point last year, though his departure was acknowledged at the time only by a smattering of social media posts, writes Anthony Ha in TC.

Corelight raises $150M

San Francisco-based network detection and response (NDR) company Corelight has raised $150 million in a Series E funding round.