tech:

Stochastic gradient descent

Stochastic gradient descent (SGD) is a popular optimization algorithm in machine learning, particularly for training deep learning models.

The goal of SGD, like other optimization algorithms, is to find parameters (e.g., the weights and biases of a neural network) that minimize the loss function, a measure of the model’s error on the training data, thereby improving the model’s performance.
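
One concrete way to write that objective (the dataset size N, the per-example loss ℓ, and the model f are notational assumptions introduced here for illustration, not something fixed by the text above):

```latex
L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(f(x_i; \theta),\, y_i\bigr)
```

where \theta collects the trainable parameters and (x_i, y_i) is the i-th training example.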

The term ‘stochastic’ in SGD comes from the fact that the gradient based on a single example is a ‘stochastic approximation’ of the true gradient. This means it’s a noisy but unbiased estimate. This noise can help the algorithm jump out of shallow local minima of the loss function, which can be beneficial for finding better and potentially global minima.
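
In the same illustrative notation, ‘unbiased’ means that a single-example gradient, with the index i drawn uniformly at random, matches the full-dataset gradient in expectation:

```latex
\mathbb{E}_{i \sim \mathrm{Uniform}\{1,\dots,N\}}
  \bigl[\nabla_\theta\, \ell\bigl(f(x_i; \theta),\, y_i\bigr)\bigr]
  = \nabla_\theta L(\theta)
```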

Here’s a simplified explanation of how SGD works (a small code sketch follows the list):

  1. Initialize the parameters (weights and biases in the case of neural networks) with random values.
  2. Randomly pick a single data point (or a mini-batch) from the dataset.
  3. Compute the gradient of the loss function with respect to the parameters for that data point. The gradient indicates the direction in which the loss is increasing most rapidly.
  4. Update the parameters by a small step in the opposite direction of the gradient. The size of this step is determined by the learning rate, a hyperparameter that controls how quickly the model learns.
  5. Repeat steps 2-4 until the algorithm converges to a minimum, which is when the loss can no longer be significantly reduced.
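
As a minimal sketch of those five steps, here is plain-NumPy SGD on a tiny synthetic linear-regression problem; the model, data, learning rate, and number of epochs are all illustrative choices rather than anything prescribed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus a little noise.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

# Step 1: initialize the parameters with (small) random values.
w = rng.normal(scale=0.1)
b = 0.0
learning_rate = 0.1

for epoch in range(50):
    for i in rng.permutation(len(X)):    # Step 2: visit single examples in random order.
        x_i, y_i = X[i, 0], y[i]
        error = (w * x_i + b) - y_i      # prediction minus target
        grad_w = error * x_i             # Step 3: gradient of the squared loss
        grad_b = error                   #         0.5 * error**2 w.r.t. w and b.
        w -= learning_rate * grad_w      # Step 4: small step against the gradient.
        b -= learning_rate * grad_b

# Step 5: after enough passes the loss stops improving; here w ≈ 3, b ≈ 1.
print(f"learned w = {w:.3f}, b = {b:.3f}")
```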

SGD is computationally efficient, especially for large datasets, because each iteration uses only a single data point (or a small subset) rather than the full dataset. Moreover, the randomness from sampling data points can help prevent the algorithm from getting stuck in suboptimal local minima and nudge it toward better ones.
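
A mini-batch version changes only the sampling step: the gradient is averaged over a small random subset instead of being computed from one point. A rough, self-contained sketch on the same kind of toy data (the batch size of 16 and the step count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0
learning_rate, batch_size = 0.1, 16

for step in range(2000):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # random mini-batch
    x_b, y_b = X[idx, 0], y[idx]
    error = (w * x_b + b) - y_b
    w -= learning_rate * np.mean(error * x_b)  # gradients averaged over the batch
    b -= learning_rate * np.mean(error)

print(f"learned w = {w:.3f}, b = {b:.3f}")
```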

It’s important to note that SGD requires careful tuning of the learning rate and other hyperparameters. Furthermore, while SGD’s randomness can be an advantage, it can also cause the loss to fluctuate significantly, leading to a less stable convergence. There are variants of SGD, such as SGD with momentum, AdaGrad, RMSprop, and Adam, which address some of these issues and are often used in practice.
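
As one illustration of the simplest of those variants, classical momentum keeps a decaying running sum of past gradients (a ‘velocity’) and steps along that instead of the raw gradient, which smooths out the fluctuations mentioned above. A minimal sketch on the same toy problem, with 0.9 as a common but arbitrary momentum coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0
v_w, v_b = 0.0, 0.0                  # velocity terms, one per parameter
learning_rate, momentum = 0.02, 0.9

for epoch in range(30):
    for i in rng.permutation(len(X)):
        error = (w * X[i, 0] + b) - y[i]
        grad_w, grad_b = error * X[i, 0], error
        v_w = momentum * v_w + grad_w    # decaying accumulation of past gradients
        v_b = momentum * v_b + grad_b
        w -= learning_rate * v_w         # step along the velocity, not the raw gradient
        b -= learning_rate * v_b

print(f"learned w = {w:.3f}, b = {b:.3f}")
```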



Just in

Tembo raises $14M

Cincinnati, Ohio-based Tembo, a Postgres managed service provider, has raised $14 million in a Series A funding round.

Raspberry Pi is now a public company — TC

Raspberry Pi priced its IPO on the London Stock Exchange on Tuesday morning at £2.80 per share, valuing it at £542 million, or $690 million at today’s exchange rate, writes Romain Dillet. 

AlphaSense raises $650M

AlphaSense, a market intelligence and search platform, has raised $650 million in funding, co-led by Viking Global Investors and BDT & MSD Partners.

Elon Musk’s xAI raises $6B to take on OpenAI — VentureBeat

Confirming reports from April, the Series B investment comes with participation from multiple well-known venture capital firms and investors, including Valor Equity Partners, Vy Capital, Andreessen Horowitz (A16z), Sequoia Capital, Fidelity Management & Research Company, and Prince Alwaleed Bin Talal and Kingdom Holding, writes Shubham Sharma.

Capgemini partners with DARPA to explore quantum computing for carbon capture

Capgemini Government Solutions has launched a new initiative with the Defense Advanced Research Projects Agency (DARPA) to investigate quantum computing’s potential in carbon capture.