Research in our lab focuses on understanding the mechanisms that support and facilitate learning, both in AI and in the brain.

Lines of research

One line of active research delves into deep learning to overcome the limitations it faces today. Our approach applies knowledge and techniques at the intersection of machine learning and computational neuroscience: we seek to adapt computationally efficient mechanisms of the brain into AI architectures, aiming for more powerful and explainable learning algorithms. More specifically, we test the capabilities of the learning algorithm after introducing changes to the neural network, mainly by strengthening its neurobiological substrate. With this approach, we expect to improve performance in terms of flexibility, agility, and robustness.

Another line of active research focuses on neuroscience. We are highly interested in identifying physiological mechanisms of brain function and dysfunction, specifically as they relate to learning and cognition. To accomplish this, we simulate reinforcement learning models and biological neural networks. One of these studies focuses on cognitive dynamics, where we combine experimental procedures (psychophysics and neuroimaging) with computational approaches (machine learning and neural circuit modeling) to study the dynamics, development, and decline of human cognition across the life span.

News

Meet the Team

Principal Investigator

Salva Ardid, PhD

GenT Distinguished Research Group Leader

Researchers

Alumni

David Carrasco Peris

Former Member of the Lab

Ester Dionís Martí

Former Member of the Lab

Santiago Galella

Former Member of the Lab

Featured Projects

An ML-based direction estimate for ANTARES single-line events

We are currently developing a deep neural network to predict neutrino trajectories in single-line events of the ANTARES telescope.

Cognition across the life span

This line of research uses a rule-based decision task and combines experimental procedures (psychophysics and neuroimaging) with computational approaches (reinforcement learning and neural circuit modeling) to characterize the dynamics, rise, and decline of goal-directed behavior in humans…

Meta-Reinforcement Learning

Meta-reinforcement learning refers to embedding meta-learning (i.e., higher-order learning) mechanisms in reinforcement learning (RL) models. I have used this approach to significantly improve a repertoire of RL models…
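To illustrate the idea of embedding a higher-order learning mechanism in an RL model, here is a minimal, self-contained sketch (not one of the lab's actual models): a Q-learning agent on a two-armed bandit whose learning rate is itself adapted online by a Pearce-Hall-style rule that tracks the magnitude of recent reward prediction errors. All class names and parameter values are illustrative assumptions.

```python
import random

class MetaRLBandit:
    """Q-learning on a two-armed bandit with a meta-learned learning rate.

    The first-order learning rate (alpha) is adapted by a higher-order
    rule: it drifts toward the magnitude of the reward prediction error
    (a Pearce-Hall-style associability signal), so the agent learns fast
    when outcomes are surprising and stabilizes as they become predictable.
    """

    def __init__(self, n_arms=2, alpha=0.5, eta=0.1, epsilon=0.1):
        self.q = [0.0] * n_arms   # action values
        self.alpha = alpha        # first-order learning rate (adapted online)
        self.eta = eta            # meta learning rate that adapts alpha
        self.epsilon = epsilon    # exploration probability

    def act(self, rng):
        # Epsilon-greedy action selection
        if rng.random() < self.epsilon:
            return rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        delta = reward - self.q[action]       # reward prediction error
        self.q[action] += self.alpha * delta  # first-order RL update
        # Meta-update: move alpha toward |delta|, keeping it in (0, 1)
        self.alpha += self.eta * (min(abs(delta), 1.0) - self.alpha)

rng = random.Random(0)
agent = MetaRLBandit()
for t in range(500):
    # Arm 1 pays off with probability 0.8; arm 0 with probability 0.2
    a = agent.act(rng)
    r = 1.0 if rng.random() < (0.8 if a == 1 else 0.2) else 0.0
    agent.update(a, r)
```

After training, the agent's value estimate for the richer arm should dominate, while alpha settles at a level reflecting the residual unpredictability of the rewards. Richer meta-RL variants apply the same principle to other hyperparameters (exploration, discounting) or learn the update rule itself.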

Selective Attention

Cortical neurons in sensory areas show patterns of irregular firing that are nonetheless synchronized to, e.g., gamma oscillations. This regime of noisy oscillations is further reinforced by selective attention, which raises questions such as: why do regular and irregular components appear simultaneously? Are oscillations merely reducing noise? Our results show that this regime has a more interesting functionality…

Unbiased Competition

Unlike Biased Competition, Unbiased Competition is resolved internally, i.e., in the absence of external biases. In Ardid et al. PNAS (2019), I identified how dissimilarities in the physiology of competing neural populations determine the direction of the resulting bias according to the characteristics of (unbiased) external inputs. There are three general ways in which Unbiased Competition may occur…

Contact

  • sardid at upv dot es
  • Universitat Politècnica de València
    Edifici D, Despatx D-210
    Paranimf 1,
    46730 Gandia (València),
    Spain