EEG Data Analysis - Connectivity analysis made easy
12/03/2023 at 05:03:01
Author: Jackson Cionek
EEG Data Analysis - BrainVision Analyzer
Noise and artifacts are an unavoidable aspect of EEG recordings. As such, handling them is an invaluable skill for EEG researchers to make the best of their data. In this article we present an overview of common artifacts together with tools and strategies for managing various types of artifacts in BrainVision Analyzer 2.
Because EEG signals typically have low amplitudes of only tens of microvolts, they are easily obscured by artifacts, which reduces the signal-to-noise ratio. For example, activity from head muscles can overlap with oscillations from the brain, or movement of the cap can create distortions that alter the amplitude of an ERP. Technically speaking, artifacts add uncontrolled variability to the data, which confounds experimental observations. As such, even small artifacts can reduce the statistical power of a study and alter its results if they occur frequently.
Best efforts should always be made to prevent artifacts from entering EEG recordings. Nonetheless, a certain level of artifact intrusion remains unavoidable, especially in the growing field of mobile EEG and out-of-lab recordings. But don’t worry: despite the unavoidability of artifacts, it is possible to obtain good quality signals with the right tools and artifact handling strategies.
The blink artifact is generated by the potential difference between the eye’s positively charged cornea and negatively charged retina: as the eyelid slides and the eyeball rotates during a blink, the polarity is inverted and a positive current towards the scalp is created. The artifact is most prominent at frontal channels close to the eyes, reaching over a hundred microvolts in amplitude. In the frequency domain, it contains frequencies mostly in the EEG delta and theta bands.
Similar to blinks, lateral eye movements such as saccades generate a current away from the eyeballs, but this time towards the sides of the head, producing a box-shaped deflection with opposite polarity on each side. They are most prominent over channels close to the temples, but also affect channels outside these regions. In the frequency spectrum, the box shape created by eye movements peaks in the delta and theta bands, but has effects up to 20 Hz.
Most notably, muscle activation during teeth clenching generates large noise that extends over the whole scalp. However, smaller artifacts are produced by other muscle groups of the head, such as the jaw or forehead.
Shoulder and neck tension lead to a persistent artifact that reaches the lower electrodes, including the mastoid region. In this case, if mastoid channels are used for re-referencing, the muscular artifact is introduced into all other channels. In frequency space, muscular artifacts are most prominent above 20 Hz and extend up to 300 Hz.
Pulsation of the head arteries generated by the heartbeat can lead to a slight rhythmical movement of the electrodes. In simultaneous EEG-fMRI recordings, the participant’s supine position and the scanner environment magnify the artifact. However, pulse artifacts also occur sporadically under normal laboratory conditions, in participants with hypertension, or in connection with physical activity.
Even mild sweating leads to changes in the conductivity of the skin. This produces a drifting voltage on the scalp that appears as slow drifts in the recording, which can vary widely in frequency and magnitude. This artifact occurs in warm environments, during physical activity, or under stress. The fluctuation induced by these drifts affects the timing and amplitude of time-domain signals such as ERPs. In frequency space, this artifact contains power mostly at slow frequencies.
Movement of the body leads to slight displacement of the EEG cap over the scalp, especially when the cap is loose. This alters electrode impedance levels in the process, leading to artifacts. Gross movements produce large shifting voltages that can even saturate amplifiers momentarily. Slight body sway leads to gradual drift in the channels. Complex movements in certain tasks produce equally complex movements of the cap involving pulling, sliding and shaking, which affect all channels.
Line noise / Electromagnetic interference
As alternating current flows through the room’s electrical wiring, it generates electromagnetic noise called “line noise” that is picked up by the EEG cables at the frequency of the local power grid: 50 or 60 Hz, depending on the country. Modern amplifiers do a great job of reducing this noise; however, it can still enter the recording.
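As a simple illustration outside Analyzer, line noise at a known frequency can be attenuated with a notch filter. Below is a minimal SciPy sketch, assuming a 50 Hz grid, a 250 Hz sampling rate and hypothetical test data:

```python
# Minimal sketch (not the Analyzer implementation): attenuating 50 Hz line
# noise with an IIR notch filter, assuming data sampled at 250 Hz.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0          # sampling rate in Hz (assumed)
line_freq = 50.0    # local grid frequency (50 or 60 Hz depending on the country)
quality = 30.0      # notch quality factor: higher means a narrower notch

# eeg: 2D array of shape (n_channels, n_samples) -- hypothetical test data
eeg = np.random.randn(32, 10 * int(fs))

b, a = iirnotch(line_freq, quality, fs)
# Zero-phase filtering avoids introducing a phase shift into the EEG
eeg_clean = filtfilt(b, a, eeg, axis=-1)
```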
When the contact between the scalp and the electrode is disrupted, the signal becomes unstable. This can happen because of a loose-fitting cap, body movement or hair pushing the cap away. Loose contact more often leads to slow drifts in the signal. However, the electrochemical instabilities produced by the loose contact can lead to sudden conductance changes which manifest as an “electrode pop” in the data. This artifact can also affect ground and reference electrodes, which would consequently affect all other channels.
Movement of the EEG cables alters their conductive properties momentarily. This produces transient signal alterations with varying shapes, which depend on the type of cable movement. Most notably, cable swinging introduces oscillations at the frequency of the swing, which may overlap with EEG frequencies of interest. Modern EEG devices, such as the actiCAP active electrode system, incorporate elements like amplification at the electrode, which reduces cable movement artifacts.
In simultaneous EEG-fMRI, the scanner environment induces diverse artifacts. Most notably, the magnetic gradient switching induces large currents in the EEG leads, which produce large artifactual voltages. Vibrations of the scanner further produce motion artifacts that are enhanced by the scanner’s magnetic field. Artifact handling in EEG-fMRI requires special techniques that cope with the particular nature of these artifacts.
TMS pulses generate large spikes that may saturate the EEG amplifier. Effects on the hardware furthermore lead to a decay artifact lasting a few milliseconds. If the amplifier saturates, the EEG data is lost for a brief period, so the spike and decay are most often replaced by interpolated data.
Manual Inspection allows the user to scroll through the data and mark artifacts. This mode offers great flexibility, but the artifact demarcation is subjective and can be very time-consuming. Manual inspection is most useful for marking conspicuous artifacts, or for short data sets.
Automatic Inspection lets the user define a set of objective criteria to search for artifacts automatically (an illustrative sketch of these checks follows the list):
Gradient: detects steep changes in voltage within one millisecond (e.g. electrode pops).
Amplitude: looks for absolute voltages that surpass a threshold relative to zero (e.g. ocular artifacts, large muscular artifacts).
Max-Min: searches for relative changes in amplitude beyond a defined range. Useful to find artifacts within data that already contains an offset (e.g. within drifts, DC offset).
Low Activity: searches for flat lines by finding stretches of data with unnaturally little variation.
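To make these criteria concrete, here is a minimal sketch of how such checks could be expressed for a single channel. This is an illustration only, not Analyzer’s implementation; the thresholds and window length are assumptions:

```python
# Illustrative sketch of the four automatic criteria for one channel
# (values in microvolts); thresholds are assumptions, not Analyzer defaults.
import numpy as np

def find_artifacts(x, fs, max_gradient=50.0, max_abs=100.0,
                   max_minmax=150.0, min_activity=0.5, win_ms=100):
    """Return a boolean mask of samples flagged by any criterion."""
    win = max(1, int(fs * win_ms / 1000))
    bad = np.zeros(x.size, dtype=bool)

    # Gradient: voltage change between neighboring samples, scaled to 1 ms
    step = np.abs(np.diff(x, prepend=x[0])) * (fs / 1000.0)
    bad |= step > max_gradient

    # Amplitude: absolute voltage relative to zero
    bad |= np.abs(x) > max_abs

    # Max-Min and Low Activity: relative range within consecutive windows
    for start in range(0, x.size - win + 1, win):
        seg = x[start:start + win]
        rng = seg.max() - seg.min()
        if rng > max_minmax or rng < min_activity:
            bad[start:start + win] = True
    return bad
```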
Semi-Automatic Inspection lets the user verify the artifact detection achieved through the automatic criteria, adding or removing artifact markings from the selection. It thus combines the objectivity of automatic inspection with the flexibility of the manual mode.
Sometimes a channel has an irreparably noisy signal throughout the entire recording. In this unfortunate situation, one option is to reject the channel entirely. In Analyzer this can be done with the Edit Channels transformation, where the channel in question can be disabled. An alternative approach is to replace the channel with a signal interpolated from all other channels, which can be achieved through the Topographic Interpolation transformation. However, the interpolated signal is only an estimate and should be interpreted with caution, if at all.
EEG recordings include a mixture of signals from brain and non-brain sources. The ICA transformation in Analyzer seeks to statistically separate this mixture into its components. With the Inverse ICA transformation, the components that represent artifacts can be discarded, to reconstruct the EEG without them. These are most commonly ocular artifacts, but can also be heartbeat, localized muscular tension or other artifacts that have a single source.
For optimal component separation, at least 64 channels are recommended. Care must also be taken when selecting the components to discard, so that valid EEG is not removed. Analyzer offers a user-friendly visual display to facilitate these decisions. Even after components have been removed, Inverse ICA can be reprocessed to change the component selection.
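For readers who script their analyses, the same idea can be sketched with MNE-Python, an open-source toolbox whose ICA workflow is analogous to (but distinct from) Analyzer’s ICA and Inverse ICA transforms. The file name and parameters below are hypothetical:

```python
# Minimal sketch of ICA-based ocular artifact removal with MNE-Python
# (analogous to, but not identical with, Analyzer's ICA / Inverse ICA).
import mne
from mne.preprocessing import ICA

# Hypothetical BrainVision recording; assumes EOG channels are present
raw = mne.io.read_raw_brainvision("example.vhdr", preload=True)
raw.filter(l_freq=1.0, h_freq=None)  # high-pass filtering helps the decomposition

ica = ICA(n_components=20, random_state=42)
ica.fit(raw)

# Identify components correlated with the EOG channels
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices

# Reconstruct the data without the excluded components
raw_clean = ica.apply(raw.copy())
```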
Analyzer 2.2.0 reflects the first outcomes of our renewed commitment to excellence in customer service. In this article we intend to familiarize you with the most salient features coming with Analyzer 2.2.0, hoping they will have a positive impact on your research work.
A typical workflow in EEG analysis starts with reading the data. Together with our well-established and widely supported BrainVision data format, Analyzer 2 supports more than 25 other EEG data formats from different manufacturers.
To better support our community of EGI data users, we also provide a completely redesigned EGI reader in Analyzer 2.2.0, expanding its support for continuous and segmented EGI simple binary format (*.raw) with integer precision (format versions 2 and 3) and floating-point single precision (format versions 4 and 5). The new EGI reader also achieves a higher precision of EEG voltage values by supporting the additional channel gain and zero files that come along with the *.raw file.
Data preprocessing tools
After reading and inspecting the data, preprocessing steps usually follow, based on a variety of tools for data cleaning, extraction of the data portions of interest, etc.
In Analyzer 2.2.0 the IIR Filters transform has been extensively rewritten to improve calculations and performance. This module now makes use of a new algorithm integrated in a dedicated filter mathlib where all calculations are standardized. Together with the IIR Filters transform, the data preprocessing modules Edit Channels, Topographic Interpolation and Linear Derivation have undergone important enhancements in their user interface, as well as in the parameter validations in the GUI and template-based computations. Besides, the handling of unsupported conditions, channel properties and data units has been improved in the new version of these transforms.
Average and Grand Average
The event-related potential (ERP) methodology has proven to be a robust approach to elucidating the time course and neural basis of cognitive processes. Together with methods for data preprocessing, averaging across segments and across subjects are crucial analysis steps of the ERP methodology. These operations are possible with the transforms Average and Grand Average.
In Analyzer 2.2.0, the Average transform has been rewritten and ported to the .NET platform. Its user interface provides you with a new standard GUI control for segment selection, as well as a robust set of GUI validations to check the integrity of used parameters. Performance of numerical computations across segments is improved with the implementation of new algorithms. The Average – Grand Average workflow has been considerably optimized with the introduction of a new property to store the number of segments used for averaging in each channel. Also, the Operation Infos has been expanded to include detailed information about used parameters and channel specific information.
Grand Average consists of a rather complicated set of operations for selecting input history files, nodes and channels, which lead to the computation of valid output data. In Analyzer 2.2.0, Grand Average has been extensively rewritten to enhance the usability of the user interface and to improve vital aspects of this module. This includes the identification of the reference history file, the selection of valid nodes and channels, the logic of data aggregation when Individual Channel Mode option is on or off, the mathematical computations based on the number of segments/weights and the correct matching of input nodes, channels and units.
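As a plain illustration of the segment-weighted logic mentioned above, the sketch below shows how a grand average can weight each subject’s average by the number of segments that contributed to it. This is an assumption-level example, not Analyzer’s implementation:

```python
# Sketch of a segment-weighted grand average across subjects
# (illustrative only; not Analyzer's implementation).
import numpy as np

# subject_averages: list of arrays, each of shape (n_channels, n_samples)
# n_segments: number of segments that went into each subject's average
def grand_average(subject_averages, n_segments, weighted=True):
    data = np.stack(subject_averages)            # (n_subjects, n_channels, n_samples)
    if weighted:
        w = np.asarray(n_segments, dtype=float)
        w /= w.sum()
        return np.tensordot(w, data, axes=1)     # weighted mean across subjects
    return data.mean(axis=0)                     # unweighted grand average
```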
The new user interface is greatly improved by a standard GUI control for history file selection, which makes the selection process more flexible and effective. Both the parameters in the GUI and the data properties during calculations are subject to stricter validation, in order to guarantee the validity of the Grand Average output data.
You will find a new structure of the Operation Infos, which includes a detailed report of the analysis steps. It addresses relevant questions such as: What is my reference node? Which nodes are included and excluded? How many segments contributed to the grand average calculation in each channel? Why is a particular node excluded?
Alongside this, the Operation Infos includes warnings on minor issues affecting the input data.
Visualization tools in 2D and 3D
Continuing our Analyzer 2.2.0 tour, let’s pause to explore what is new regarding visualization tools.
Analyzer 2.2.0 shines with its new set of color maps that will considerably enhance the visual experience of data exploration and representation (see Figure 1). These features are of vital importance in revealing the underlying patterns in your neurophysiological data and in communicating your research results to the broader scientific community.
In addition, two new color maps, Paruly (similar to Parula in MATLAB®) and Plasma, are implemented in Analyzer 2.2.0. Both are of superior quality and overcome well-known issues in other color maps: they are more perceptually uniform, colorblind friendly, and their grayscale conversion is printer-friendly.
3D data visualization is also improved by replacing the heads (Adam, Anna, Liza and Baby) of the 3D Head View with four newly created heads that have a 4x denser mesh in the scalp area. This change entails a smoother distribution of the data on the head surface and a homogeneous visual experience of light and shading effects across the four heads.
The visualization of the frequency spectrum is also expanded to include spectral values in the negative frequency domain. This new feature makes the visualization of auto- and cross-correlation of spectral data for negative frequency lags possible.
Time-frequency data visualization is significantly enhanced by the new color maps. Besides, if connectivity data (e.g. Coherence) is computed in the time-frequency domain, its time-frequency representation in Analyzer 2.2.0 displays the corresponding connectivity graph for the selected time-frequency point (see section: Connectivity analysis made easy).
For your quick access to all these new visualization features, the user interface of the View Settings has been rearranged and further extended, including a preview of the selected color map.
Event-related synchronization/desynchronization
Decades of EEG research based on the ERP methodology have shown, alongside its various merits, that it captures only a narrow fragment of the dynamical repertoire of the underlying neural activity. As ERP analysis is predicated on the existence of time-locked and phase-locked activity, the pursuit of elucidating the non-stationary, non-phase-locked character of brain oscillatory processes has paved the way to investigating patterns of synchronization and desynchronization of neural activity beyond the ERPs.
In Analyzer 2.2.0, event-related synchronization and desynchronization analysis has been strongly reinforced by providing you with new versions of the modules ERS/ERD and Complex Demodulation, and by incorporating frequently requested features in the Wavelets transform.
Both Complex Demodulation and ERS/ERD have been ported to the .NET platform. Their user interfaces have been redesigned to improve usability, e.g. by making use of the standard GUI control for segment selection. Moreover, both transforms make use of the new standard filter mathlib to accomplish the underlying IIR filtering operations.
The new version of Complex Demodulation also provides you with a wider spectrum of output options, including Amplitude, Power and Phase.
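For readers unfamiliar with the method, complex demodulation shifts a band of interest down to 0 Hz and low-pass filters the result; amplitude, power and phase then follow directly from the resulting complex signal. The sketch below is an assumption-level illustration of that idea, not Analyzer’s implementation:

```python
# Sketch of complex demodulation at a target frequency (e.g. 10 Hz alpha);
# parameters are illustrative assumptions, not Analyzer defaults.
import numpy as np
from scipy.signal import butter, filtfilt

def complex_demodulation(x, fs, f0, lp_cutoff=2.0):
    """Return instantaneous amplitude, power and phase of x around f0."""
    t = np.arange(x.size) / fs
    # Shift the band around f0 down to 0 Hz
    shifted = x * np.exp(-2j * np.pi * f0 * t)
    # The low-pass cutoff defines the bandwidth around f0 (here +/- 2 Hz)
    b, a = butter(4, lp_cutoff / (fs / 2.0))
    demod = filtfilt(b, a, shifted.real) + 1j * filtfilt(b, a, shifted.imag)
    amplitude = 2.0 * np.abs(demod)   # factor 2 restores the single-sided amplitude
    power = amplitude ** 2
    phase = np.angle(demod)
    return amplitude, power, phase
```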
Likewise, several new options significantly increase the versatility of the ERS/ERD and Wavelets transforms in Analyzer 2.2.0.
Connectivity analysis made easy
One of the dominant hypotheses in modern neuroscience claims that the dynamics of cognitive processes is reflected in (and highly correlated with) the interplay of many neural oscillators which interact and evolve over time. Not surprisingly, we are witnessing an explosion of novel experimental paradigms and connectivity analysis methods which have been advanced to elucidate the complex organization of dynamic neural networks underlying cognitive operations on a timescale of milliseconds.
In line with these trends, Analyzer 2.2.0 introduces a major overhaul of the modules devoted to connectivity analysis.
The Covariance transform has been deprecated and will not be developed further. It has been superseded by the new .NET transform Correlation Measures, which includes several covariance and correlation methods. Likewise, the Cross-Correlation transform has been extensively rewritten in order to include new cross-covariance and cross-correlation methods.
The three connectivity transforms Coherence, Correlation Measures and Cross-Correlation have adopted new standard GUI controls for the automatic and manual selection of channel pairs (connectivity graphs or networks). These controls make the selection of predefined networks or the construction of custom network configurations (see Figure 2) very easy. Besides, the handling of unsupported conditions has been improved in the new version of these transforms.
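To illustrate what a single channel pair in such a network contributes, the sketch below estimates magnitude-squared coherence between two channels with SciPy. It is only an analogous example with assumed parameters, not Analyzer’s Coherence transform:

```python
# Sketch: magnitude-squared coherence between two EEG channels
# (illustrative SciPy example; not Analyzer's Coherence transform).
import numpy as np
from scipy.signal import coherence

fs = 250.0                            # sampling rate in Hz (assumed)
x = np.random.randn(int(60 * fs))     # hypothetical channel 1 (e.g. Fz)
y = np.random.randn(int(60 * fs))     # hypothetical channel 2 (e.g. Pz)

freqs, coh = coherence(x, y, fs=fs, nperseg=int(2 * fs))
alpha = (freqs >= 8) & (freqs <= 13)  # average coherence in the alpha band
print("Mean alpha-band coherence:", coh[alpha].mean())
```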
On-site support with Troubleshooting
Since the release of Analyzer 1 we have been committed to offering free, high-quality scientific support for BrainVision Analyzer to all our customers. Our experience over the past two decades has shown that a successful analysis workflow requires several ingredients, such as the quality and integrity of your data, careful selection of the correct Analyzer modules, and identification of optimal parameters for data processing. Hence, your choices can make the difference between a reliable data analysis pipeline and an erroneous approach to your research question.
The new Add In, Troubleshooting, included under the group Diagnostics in Analyzer 2.2.0, aims to help you in this difficult endeavor. You can think of it as an on-site support module that helps you scan and diagnose existing workspaces, history trees and history nodes. Troubleshooting contains a set of predefined tests to automatically detect data integrity problems, inconsistencies or wrong settings. In addition, it conveniently reports detected issues in a customizable format and proposes potential solutions. Test reports can be saved to an *.html file and shared with your colleagues or our scientific support team for further inspection.
Troubleshooting can be applied to diagnose complete workspaces, a given history file, or just one history node. In Analyzer 2.2.0, this Add In is released with an initial set of predefined tests.
Stay tuned for upcoming releases! The set of Troubleshooting tests will increase in Analyzer 2, to cover most of the possible issues and provide potential solutions.
Analyzer Solutions
The solution Wavelet Data Export has been commonly applied by our Analyzer 2 users to export the cumulative sum or average data within a given time-frequency range, as generated by the Wavelets transform. Given that the Wavelets transform has been enhanced in Analyzer 2.2.0 to include additional output options, the solution Wavelet Data Export was also updated to assure compatibility with the new Wavelets transform.
In Analyzer 2.2.0 the computations pertaining to the solution Phase Locking Factor are integrated within a workflow consisting of three transforms, namely Wavelets (using the option Wavelet Phase – Complex Values), followed by Average and subsequently Rectify. A demo pipeline illustrating this workflow is provided as a history template here. All functionalities existing in the solution Phase Locking Factor are included in this pipeline. Therefore, this solution is deprecated and hence excluded from the official solutions package of Analyzer 2.2.0 and later versions.
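Conceptually, the phase locking factor at a given time-frequency point is the magnitude of the average, across segments, of unit-length complex phase values. Below is a minimal NumPy sketch of that computation; it illustrates the concept only and is not the history template itself:

```python
# Sketch of the phase locking factor (PLF): magnitude of the mean of
# unit-length complex phase values across segments (illustrative only).
import numpy as np

# complex_tf: complex wavelet coefficients of shape (n_segments, n_freqs, n_times),
# assumed to contain no zero-valued coefficients
def phase_locking_factor(complex_tf):
    unit_phase = complex_tf / np.abs(complex_tf)   # keep the phase, discard the amplitude
    return np.abs(unit_phase.mean(axis=0))         # PLF in [0, 1] per time-frequency point
```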
Likewise, the solutions Complex Data Measures, Easi Export, ICA BackTransform, ICA Topographies, CBC Parameters, Slice Volume Align and Slice2Volume Triggers have become outdated and/or their functionalities are integrated in existing modules of Analyzer 2. Therefore, they are also excluded from the official solutions package of Analyzer 2.2.0 and later versions.
The solutions excluded in Analyzer 2.2.0 will not be developed further and will be removed from the Solutions download section. However, they will remain available upon request via our Scientific Support team.
Concluding remarks
What a journey! From the release of Analyzer 1, to the success of Analyzer 2, to the upcoming release of Analyzer 2.2.0!
We have taken this unique opportunity to frame our recent efforts for Analyzer 2.2.0 within the larger history of our company and our commitment to providing high-quality products. In this spirit, working on Analyzer 2.2.0 has been really thrilling for us. Most of the new features and enhancements announced in Analyzer 2.2.0 were inspired by your feedback and suggestions. We would like to express our gratitude to all of you who have provided constructive suggestions and requests for improvements over the years.
We are proud of the strength and the merits of our market-leading BrainVision Analyzer 2. Our hard work will pay off if Analyzer 2.2.0 makes you proud as well.
Welcome BrainVision Analyzer 2.2.0!