14:30
Eye
Chair: Maartje Leemans
14:30
15 mins
|
Biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses
Jaap de Ruyter van Steveninck, Maureen van der Grinten, Antonio Lozano, Laura Pijnacker, Bodo Rückauer, Pieter Roelfsema, Marcel van Gerven, Richard van Wezel, Umut Güçlü, Yağmur Güçlütürk
Abstract: Blindness affects millions of people around the world, and is expected to become increasingly prevalent in the years to come. For some blind individuals, a promising solution for restoring vision is the cortical visual prosthesis, which converts camera input into electrical stimulation of the cortex to bypass part of the impaired visual system. Due to the constrained number of electrodes that can be implanted, the artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') is of limited resolution, and a great portion of the field's research attention is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models, making use of simulated prosthetic vision (SPV) pipelines. Although the SPV literature has provided us with some fundamental insights, an important drawback that researchers and clinicians may encounter is the lack of realism in the simulation of cortical prosthetic vision, which limits the validity for real-life applications. In this study, we developed a PyTorch-based, fast and fully differentiable phosphene simulator. Our simulator transforms specific electrode stimulation patterns into biologically plausible representations of the artificial visual percepts that the prosthesis wearer is expected to see. The simulator integrates a wide range of both classical and recent clinical results with neurophysiological evidence in humans and non-human primates. The implemented pipeline includes a model of the retinotopic organisation and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation strength, duration, and frequency on phosphene size and brightness, as well as the temporal characteristics of phosphenes, are incorporated in the simulator.
Our results demonstrate the suitability of the simulator both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioural experiments. The modular approach of our work makes it ideal for integrating new insights on artificial vision as well as for hypothesis testing. In summary, we present an open-source, fully differentiable, biologically plausible phosphene simulator as an ideal tool for computational, clinical and behavioural neuroscientists working on visual neuroprosthetics.
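As an illustration of the core idea only (not the authors' PyTorch implementation, which models these effects quantitatively from clinical data), a minimal sketch of differentiable phosphene rendering might map stimulation amplitudes to Gaussian light spots whose brightness and size grow with amplitude. All names and constants below are hypothetical:

```python
import numpy as np

def phosphene_frame(amps, centers, base_sigma=2.0, size=64):
    """Render one frame of simulated phosphene vision.

    amps    : (n,) stimulation amplitudes in [0, 1] (hypothetical units)
    centers : (n, 2) phosphene centers in pixel coordinates
    Brightness scales with amplitude; size grows weakly with amplitude,
    a simplified stand-in for the stimulation-strength effects that a
    biologically grounded simulator models quantitatively.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    frame = np.zeros((size, size))
    for a, (cx, cy) in zip(amps, centers):
        sigma = base_sigma * (1.0 + 0.5 * a)  # spot size grows with amplitude
        frame += a * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return np.clip(frame, 0.0, 1.0)

# two phosphenes of different brightness
frame = phosphene_frame(np.array([1.0, 0.4]), np.array([[16, 16], [48, 48]]))
```

Because every operation is smooth, an autodiff framework such as PyTorch can propagate gradients from a task loss back through the rendered percept to the stimulation parameters, which is what enables end-to-end optimization.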
|
14:45
15 mins
|
The impact of absence seizures on visual attention and eye movements
Valentina Barone, Maria Carla Piastra, Hans van Dijk, Mariette Debeij-van Hall, Michel van Putten
Abstract: Absence seizures account for 10% to 17% of all cases of childhood epilepsy. Literature and clinical evidence show that absence seizures affect visual attention and eye movements variably. Possibly, the dissimilarity of symptoms across these patients is reflected in differences in neurophysiological parameters and brain network activation.
Ten pediatric patients (7-18 years old, 5 females) with generalized absences, recruited at Kempenhaeghe, performed a 40-minute computerized choice reaction time task while EEG and eye tracking were synchronously recorded (1-14 absences per patient). To study in detail the variability of visual attention and eye movements during absences, we investigated differences in markers of attention (i.e. reaction time, number of errors), EEG features (i.e. power spectral density, dominant frequency, amplitude), and the neural sources and brain networks involved in the generation and propagation of seizures (i.e. dipole fit density maps, phase connectivity and graph analysis).
Based on the diverse patterns of eye movements observed during seizures, subjects were divided into two groups: five patients had preserved eye movements (the preserved group) and five patients showed unpreserved eye movement patterns (the unpreserved group). We further tested the variability between the groups with EEG features, source reconstruction and graph theory. In the unpreserved group, EEG amplitude was higher in the posterior channels (p < .05, mean difference: 193 µV), while peak frequency was 0.3 Hz slower (p < .05). Source reconstruction using dipole fitting indicated an overall higher involvement of fronto-central areas in the unpreserved group (35% vs 21% of the total number of dipoles were positioned in fronto-central areas). Lastly, graph analysis revealed different connection densities for specific channels: C4 and Fz for the unpreserved group, and Cz for the preserved group.
We show that the degree of impairment of visual attention and eye movements varies among patients with absences, and that this variability is reflected in EEG parameters and network activation. In particular, fronto-central areas related to the production and maintenance of visual attention seem to be activated and connected differently in the two groups. A fast assessment of visuospatial attention can be usefully employed in clinical practice to deliver advice tailored to the individual patient and for the characterization and prognostication of patients with absences.
|
15:00
15 mins
|
A virtual reality simulation for testing gaze-assisted phosphene vision
Mo Nipshagen, Jaap de Ruyter van Steveninck, Yagmur Güçlütürk, Umut Güçlü
Abstract: Building prosthetics for visually impaired people is a challenging task.
One of the most promising approaches is to use implanted electrodes to stimulate the visual cortex, eliciting localized spots of light, called 'phosphenes', in the visual field.
Few prosthetic devices exist, and a common problem is that they do not take eye movement into account.
Humans perform saccadic eye movement to scan their surroundings and to focus on particular points in their field of vision.
Not accounting for eye movement leads to a mislocalization of the phosphenes in space, which can be disorienting and nauseating.
In this project, we implemented a state-of-the-art phosphene simulation following biological properties in a virtual reality environment, and let participants experience the current state of prosthetic devices, which do not compensate for eye movement.
We contrasted that with an implementation utilizing eye-tracking for gaze-assisted viewing and enabling controlled saccades.
In our experiment, the gaze-assisted condition was rated more comfortable and less tiring, and participants performed comparably to a baseline condition that was in line with common VR experiences.
The results suggest that gaze-assisted prosthetic devices have the potential to increase comfort and reduce strain while giving the patient more control, and should therefore be considered when building a prosthetic vision device.
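A toy one-dimensional model (an illustrative assumption, not the study's VR pipeline) shows why uncompensated eye movement mislocalizes phosphenes: percepts are anchored to retinal coordinates, so without gaze-contingent sampling every saccade displaces them in the world:

```python
def percept_location(target, gaze, compensate):
    """Apparent world location of an encoded target, in degrees (toy model).

    Phosphenes are fixed in retinal coordinates. A head-mounted camera
    without eye tracking encodes the target at its camera position, so
    after a saccade the percept is displaced by the gaze offset; with
    gaze compensation the sampling window follows the eye and the
    percept stays world-stable.
    """
    retinal = target - (gaze if compensate else 0.0)
    return retinal + gaze  # phosphene re-projected through the current gaze

target = 3.0  # object at 3 degrees in the world
for gaze in (0.0, 5.0, -5.0):
    assert percept_location(target, gaze, compensate=True) == target
# uncompensated: a 5-degree saccade shifts the percept by 5 degrees
assert percept_location(target, 5.0, compensate=False) == target + 5.0
```

The gaze-assisted condition in the study corresponds to the compensated case: the encoded content follows the eye, so the percept does not swim with each saccade.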
|
15:15
15 mins
|
A wireless power and data system for a single-cell resolution artificial retina
Yihan Ouyang, Dante Gabriel Muratore
Abstract: Retinal prostheses have recently been proposed as a promising method to restore partial visual sensation in patients with degenerative diseases by stimulating the remaining healthy neurons in the retina [1-2]. However, current devices do not provide sufficient results for patients to become fully independent. A fundamental problem is that current stimulation strategies fail to respect the different retinal cell types that encode different scene information. Instead, they activate cells indiscriminately, sending a scrambled message to the brain. An artificial retina proposed in [3] is under development and aims at restoring vision with more precise control of the natural neural codes in the retina. This work focuses on an implantable chip that is clinically viable, fully wireless and has bi-directional capabilities when interfacing with neurons. It operates in three modes: cell calibration (to record spontaneous activity and learn which cells and cell types are available to the device), dictionary calibration (to learn individual cell responses to the stimulation parameters), and runtime (to stimulate available cells optimally based on a cell-type-specific dictionary). The envisioned chip consists of three major building blocks: 1) recording channels that can capture spiking activity over a massively parallel microelectrode array; 2) stimulation channels that can activate neurons with single-cell resolution; 3) wireless power and data circuits.
A proof-of-concept chip for the wireless power and data system has been fabricated in a 0.18-µm BCD CMOS process. The full-custom chip occupies an area of 1.15 mm². Compared to other state-of-the-art designs [1-2], this work features a high-speed data uplink capable of 20 Mbps for closed-loop neuromodulation based on the dictionary approach described in [3]. For the power downlink, a high-frequency carrier signal (40.68 MHz) in the ISM band is chosen to minimize the area consumed by the passive components. Also, this work uses a 200 kbps ASK-PPM downlink modulation that provides a self-synchronized clock and data, solving the inaccurate synchronization problem of conventional approaches. In addition, overshoot/undershoot reduction approaches are proposed to enhance the load transient response of the capacitor-less voltage regulator. The experimental results of the wireless chip will be presented during the conference. A fully integrated design of the wireless retina implant system will be explored in future work.
1. M. Monge et al., "A Fully Intraocular High-Density Self-Calibrating Epiretinal Prosthesis," in IEEE Transactions on Biomedical Circuits and Systems, vol. 7, no. 6, pp. 747-760, Dec. 2013.
2. A. Akinin et al., "An Optically Addressed Nanowire-Based Retinal Prosthesis With Wireless Stimulation Waveform Control and Charge Telemetering," in IEEE Journal of Solid-State Circuits, vol. 56, no. 11, pp. 3263-3273, Nov. 2021.
3. Muratore, D.G., & Chichilnisky, E.J. (2020). Artificial Retina: A Future Cellular-Resolution Brain-Machine Interface.
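A toy bit-level sketch of the ASK-PPM idea (slot length and pulse positions here are hypothetical, not the chip's actual parameters): the position of a carrier burst inside each fixed-length symbol slot encodes the bit, so a pulse arrives in every slot and the receiver can derive the clock from the same waveform that carries the data:

```python
# Toy ASK-PPM symbol encoder/decoder. Each symbol is a fixed-length slot;
# the carrier is gated on (ASK) for a short burst whose position inside
# the slot encodes the bit (PPM). Because every slot contains a burst,
# clock recovery and data recovery use the same waveform -- the
# self-synchronization property of the downlink.

SLOT = 10           # samples per symbol (hypothetical)
EARLY, LATE = 2, 6  # burst start positions for bit 0 / bit 1

def ppm_encode(bits):
    wave = []
    for b in bits:
        slot = [0] * SLOT
        start = LATE if b else EARLY
        slot[start:start + 2] = [1, 1]  # gate the carrier on
        wave.extend(slot)
    return wave

def ppm_decode(wave):
    # slot boundaries double as the recovered clock
    return [1 if wave[i + LATE] else 0 for i in range(0, len(wave), SLOT)]

msg = [1, 0, 1, 1, 0]
assert ppm_decode(ppm_encode(msg)) == msg
```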
|
15:30
15 mins
|
Optimization of neuroprosthetic vision via end-to-end deep reinforcement learning
Burcu Küçükoğlu, Bodo Rueckauer, Nasir Ahmad, Jaap de Ruyter van Steveninck, Umut Güçlü, Marcel van Gerven
Abstract: Visual neuroprostheses are a promising approach to restore basic sight in visually impaired people. A major challenge is to condense the sensory information contained in a complex environment into meaningful stimulation patterns at low spatial and temporal resolution. Previous approaches considered task-agnostic feature extractors such as edge detectors or semantic segmentation, which are likely suboptimal for specific tasks in complex dynamic environments. As an alternative approach, we propose to optimize stimulation patterns by end-to-end training of a feature extractor using deep reinforcement learning agents in virtual environments. We present a task-oriented evaluation framework to compare different stimulus generation mechanisms, such as static edge-based and adaptive end-to-end approaches like the one introduced here. Our experiments in Atari games show that stimulation patterns obtained via task-dependent, end-to-end optimized reinforcement learning result in equivalent or improved performance compared to fixed feature extractors on high difficulty levels. These findings underline the relevance of adaptive reinforcement learning for neuroprosthetic vision in complex environments.
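A minimal structural sketch of such an end-to-end pipeline (illustrative assumptions only; the paper's networks, environments and training procedure are not reproduced here): a learned encoder maps the observation to electrode stimulation amplitudes, a phosphene stage renders the percept, and a policy acts on it, so reward gradients can in principle reach the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# observation -> encoder -> stimulation pattern -> percept -> policy -> action
# (toy linear stages; in practice these are deep networks trained jointly)
W_enc = rng.normal(scale=0.1, size=(16, 64))  # encoder: image -> electrode amps
W_pol = rng.normal(scale=0.1, size=(4, 16))   # policy: percept -> action logits

def forward(obs):
    amps = 1.0 / (1.0 + np.exp(-W_enc @ obs))  # stimulation amplitudes in (0, 1)
    percept = amps                             # stand-in for the phosphene render
    logits = W_pol @ percept
    p = np.exp(logits - logits.max())
    return amps, p / p.sum()                   # softmax action distribution

obs = rng.normal(size=64)
amps, action_probs = forward(obs)
action = rng.choice(4, p=action_probs)
```

The key design point is that the stimulation encoder sits inside the agent's computation graph, so a policy-gradient update shapes the stimulation patterns for the task rather than relying on a fixed, task-agnostic feature extractor.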
|
15:45
15 mins
|
Bi-directional neural interface for a single-cell resolution artificial retina
Bakr Abdelgaliel, Dante Muratore
Abstract: A retinal prosthesis was recently proposed as a promising method to restore partial vision in patients with degenerative diseases by stimulating the remaining healthy neurons in the retina [1]. Commercial epiretinal implants are already available and have been implanted in patients. Unfortunately, they currently provide only low-resolution vision for a variety of reasons, including low channel counts, large electrode sizes, and failure to account for ganglion cell heterogeneity. In order to improve upon this, many more channels - on the order of 10^4 - are required, so that a large region of the retinal ganglion cell (RGC) layer can be stimulated [2]. An artificial retina proposed in [2] is under development and aims to restore vision with more precise control of the natural neural codes in the retina. This project focuses on an implantable chip that is clinically feasible and has bi-directional capabilities when interfacing with neurons. The envisioned implanted system-on-chip consists of three main building blocks: 1) stimulation channels that can activate neurons with single-cell resolution; 2) recording channels that can capture spiking activity over a massively parallel microelectrode array; 3) wireless power and data circuits. However, the specifications for a neural interface that attempts to approach the capability of the retinal neural circuitry pose major challenges in terms of area and power consumption for an implanted device. To overcome these challenges, this work focuses on 1) developing an optimized algorithm that generates a safe stimulation waveform, reducing residual artifacts at the lowest computational cost without compromising its cell activation capability; and 2) designing an Application-Specific Integrated Circuit (ASIC) to record neuron activity and generate specific spiking patterns at single-cell resolution based on the electrode impedance.
The first step of this work is to explore novel optimization schemes for finding waveform shaping parameters that minimize computational complexity and memory requirements, refactored for on-chip implementation. In vitro validation of the computational results is required; to facilitate in vitro studies with novel waveform shapes, a flexible hardware platform was developed. Furthermore, the generated waveforms were tested on a neuron model to verify their efficacy in activating the neuron. The results of the hardware platform and waveform testing will be presented during the conference. A fully integrated design of the bi-directional neural interface of the retina implant system will be explored in future work.
|
|