Fast Gradient Sign Method (FGSM)

The Fast Gradient Sign Method (FGSM) is a technique used in machine learning, particularly in the study of adversarial attacks on neural networks. It creates small, intentional perturbations to input data, such as images, by nudging each input feature in the direction that most increases the model's loss. Concretely, FGSM computes the gradient of the loss with respect to the input, takes its sign, scales it by a small factor epsilon, and adds the result to the original input: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The resulting adversarial example looks essentially unchanged to a human but misleads the model, highlighting vulnerabilities in machine learning systems and motivating work on making them more robust to such attacks.
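
The sketch below illustrates the idea in PyTorch. It is a minimal example under stated assumptions rather than a reference implementation: the classifier `model`, the input batch `x`, the labels `y`, and the assumption that pixel values lie in [0, 1] are all placeholders supplied by the caller, and `epsilon` is the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Generate FGSM adversarial examples for a batch of inputs.

    model:   a classifier returning logits (assumed, not from the original text)
    x:       input batch, e.g. images with values in [0, 1] (assumption)
    y:       true class labels for x
    epsilon: perturbation magnitude (step size of the attack)
    """
    # Track gradients with respect to the input, not the model weights
    x = x.clone().detach().requires_grad_(True)

    # Forward pass and loss on the true labels
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Step each input feature by epsilon in the direction of the
    # sign of the loss gradient: x_adv = x + epsilon * sign(grad)
    x_adv = x + epsilon * x.grad.sign()

    # Keep the perturbed inputs in the valid pixel range (assumes [0, 1])
    return x_adv.clamp(0, 1).detach()
```

In practice, a small epsilon keeps the perturbation imperceptible to humans, while a larger epsilon fools the model more reliably at the cost of visible changes to the input.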