Defending against adversarial attacks is critical for modern AI applications. Here I describe how we can use response geometry to predict a neuron’s susceptibility to attacks.
Scientists have long summarized neurons in terms of the relationship between their inputs and outputs. Here I describe a technique that allows us to go beyond previous approaches to succinctly describe important and complex neural behavior.