15 results for Individual Variability
in Aston University Research Archive
Abstract:
We evaluated inter-individual variability in the optimal current direction for biphasic transcranial magnetic stimulation (TMS) of the motor cortex. The motor threshold for the first dorsal interosseous was detected visually at eight coil orientations in 45° increments. Each participant (n = 13) completed two experimental sessions. One participant with low test–retest correlation (Pearson's r < 0.5) was excluded. In four subjects, visual detection of the motor threshold was compared with EMG detection; the motor thresholds were very similar and highly correlated (0.94–0.99). Consistent with previous studies, stimulation in the majority of participants was most effective when the first current pulse flowed in a postero-lateral direction in the brain. However, in four participants the optimal coil orientation deviated from this pattern. A principal component analysis using all eight orientations suggests that, in our sample, the optimal current direction was normally distributed around the postero-lateral orientation with a range of 63° (S.D. = 13.70°). Whenever the intensity of stimulation at the target site is calculated as a percentage of the motor threshold, it may be worthwhile, in order to minimize intensity and side-effects, to check whether rotating the coil 45° from the traditional postero-lateral orientation decreases the motor threshold.
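The abstract describes estimating each participant's optimal coil orientation from motor thresholds measured at eight orientations. As a simpler illustration than the PCA reported above, the hedged Python sketch below estimates an optimal current direction as an inverse-threshold-weighted circular mean; the threshold values and the weighting scheme are assumptions for illustration only, not the authors' analysis.

    import numpy as np

    # Hypothetical motor thresholds (% maximum stimulator output) measured at
    # eight coil orientations in 45-degree steps; lower threshold = more effective.
    orientations_deg = np.arange(0, 360, 45)
    thresholds = np.array([52, 48, 45, 47, 55, 60, 63, 58], dtype=float)

    # Weight each orientation by inverse threshold and take the circular mean,
    # a simple alternative to the PCA-based estimate described in the abstract.
    weights = 1.0 / thresholds
    angles_rad = np.deg2rad(orientations_deg)
    mean_vector = np.sum(weights * np.exp(1j * angles_rad)) / np.sum(weights)
    optimal_deg = np.rad2deg(np.angle(mean_vector)) % 360

    print(f"Estimated optimal current direction: {optimal_deg:.1f} degrees")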
Abstract:
Fluorescence spectroscopy has recently become more common in clinical medicine. However, many unresolved issues remain concerning the methodology and implementation of instruments based on this technology. In this study, we aimed to assess the individual variability of fluorescence parameters of endogenous markers (NADH, FAD, etc.) measured by fluorescence spectroscopy (FS) in situ, and to analyse the factors that lead to a significant scatter of results. Most of the studied fluorophores showed a scatter of values acceptable for diagnostic purposes (mostly up to 30%). Here we provide evidence that the level of blood volume in tissue affects the FS data, with a significant inverse correlation. For most of the studied fluorophores and for the redox ratio, the distributions of the fluorescence intensity and of the fluorescent contrast coefficient follow a normal distribution. The effects of various physiological factors (different skin melanin content) and technical factors (characteristics of the optical filters) on the measurement results were also studied. The variability of FS measurement results should be considered when interpreting diagnostic parameters, as well as when developing new algorithms for data processing and new FS devices.
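As an illustration of the kind of scatter assessment described above, the following Python sketch computes a per-fluorophore coefficient of variation (the abstract treats roughly 30% as an acceptable scatter) and the correlation between fluorescence intensity and a blood-volume index. All values and variable names, and the use of the coefficient of variation as the scatter measure, are assumptions for illustration, not the authors' processing pipeline.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical in-situ fluorescence intensities (arbitrary units) for two
    # endogenous fluorophores across 30 subjects, plus a relative blood-volume index.
    nadh = rng.normal(100, 20, 30)
    fad = rng.normal(80, 18, 30)
    blood_volume = rng.uniform(0.02, 0.10, 30)

    # Scatter of values, expressed as a coefficient of variation (%);
    # the abstract treats roughly <= 30% as acceptable for diagnostic use.
    for name, values in {"NADH": nadh, "FAD": fad}.items():
        cv = 100 * values.std(ddof=1) / values.mean()
        print(f"{name}: CV = {cv:.1f}%")

    # Correlation between blood volume and fluorescence intensity; the abstract
    # reports a significant inverse correlation for the real FS data.
    r = np.corrcoef(blood_volume, nadh)[0, 1]
    print(f"Blood volume vs NADH intensity: r = {r:.2f}")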
Abstract:
Background: Over the last decade, the use of ECG recordings in biometric recognition studies has increased. The characteristics of the ECG make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, despite the large number of approaches in the literature, there is no agreement on the most appropriate methodology. This study aimed to provide a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework for appreciating previous studies and, hopefully, guiding future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. Our selection included published research in peer-reviewed journals, book chapters and conference proceedings. The search was restricted to English-language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502 and their ages from 16 to 86; male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does the verification rate. Many studies use publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The identification rate was 94.95% and the equal error rate 0.92%. Conclusions: Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling, as well as in wearable electronics applications. However, some barriers still limit their growth. Further analysis should address the use of single-lead recordings and the study of features that do not depend on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed that combine fiducial and non-fiducial features in order to capture the best of both approaches. ECG recognition in pathological subjects also warrants additional investigation.
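The overall accuracy figure above is described as a weighted average across studies. A minimal sketch of such a summary, assuming the weights are the number of subjects per study; the per-study values below are hypothetical:

    # Hypothetical per-study results: (identification rate %, number of subjects).
    studies = [(98.0, 20), (93.5, 100), (96.2, 50), (91.0, 502)]

    def weighted_average(pairs):
        """Average the first element of each pair, weighted by the second."""
        total_weight = sum(w for _, w in pairs)
        return sum(value * w for value, w in pairs) / total_weight

    print(f"Weighted identification rate: {weighted_average(studies):.2f}%")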
Abstract:
We contend that powerful group studies can be conducted using magnetoencephalography (MEG), providing useful insights into the approximate distribution of the neural activity detected with MEG without requiring magnetic resonance imaging (MRI) for each participant. Instead, a participant's MRI is approximated with one chosen as the best match, on the basis of the scalp surface, from a database of available MRIs. Because large inter-individual variability in sulcal and gyral patterns is an inherent source of blurring in studies of grouped functional activity, the additional error introduced by this approximation procedure has little effect on the group results, and the substituted MRI offers a sufficiently close approximation to yield a good indication of the true distribution of the grouped neural activity. T1-weighted MRIs of 28 adults were acquired in a variety of MR systems. An artificial functional image was prepared for each person in which eight 5 × 5 × 5 mm regions of brain activation were simulated. Spatial normalisation was applied to each image using transformations calculated with SPM99 using (1) the participant's actual MRI and (2) the best-matched MRI substituted from those of the other 27 participants. The distribution of distances between the locations of points using real and substituted MRIs had a modal value of 6 mm, with 90% of cases falling below 12.5 mm. The effects of this approach on real grouped SAM source imaging of MEG data in a verbal fluency task are also shown. The distribution of MEG activity in the estimated average response is very similar to that produced when using the real MRIs. © 2003 Wiley-Liss, Inc.
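A hedged sketch of the distance summary described above: given the coordinates of each simulated activation after normalisation with the participant's own MRI and with the best-matched substitute, the displacement introduced by the substitution can be summarised as follows. The coordinates and error magnitudes below are synthetic illustrations, not the study's data.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical normalised coordinates (mm) of 8 simulated activations x 28
    # participants, once using each participant's own MRI and once using the
    # best-matched substitute MRI.
    own_mri = rng.normal(0, 40, size=(28, 8, 3))
    substitute_mri = own_mri + rng.normal(0, 4, size=(28, 8, 3))

    # Euclidean displacement introduced by the substitution, per activation site.
    distances = np.linalg.norm(own_mri - substitute_mri, axis=-1).ravel()

    print(f"Median displacement: {np.median(distances):.1f} mm")
    print(f"90th percentile:     {np.percentile(distances, 90):.1f} mm")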
Abstract:
Neuroimaging studies have consistently shown that working memory (WM) tasks engage a distributed neural network that primarily includes the dorsolateral prefrontal cortex, the parietal cortex, and the anterior cingulate cortex. The current challenge is to provide a mechanistic account of the changes observed in regional activity. To achieve this, we characterized neuroplastic responses in effective connectivity between these regions at increasing WM loads using dynamic causal modeling of functional magnetic resonance imaging data obtained from healthy individuals during a verbal n-back task. Our data demonstrate that increasing memory load was associated with (a) right-hemisphere dominance, (b) increasing forward (i.e., posterior to anterior) effective connectivity within the WM network, and (c) reduction in individual variability in WM network architecture resulting in the right-hemisphere forward model reaching an exceedance probability of 99% in the most demanding condition. Our results provide direct empirical support that task difficulty, in our case WM load, is a significant moderator of short-term plasticity, complementing existing theories of task-related reduction in variability in neural networks. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.
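For readers unfamiliar with dynamic causal modelling, the sketch below illustrates, only schematically, the standard bilinear neural state equation on which DCM is based, dx/dt = (A + u_mod·B)·x + C·u_drive, where B captures how a modulatory input such as WM load changes effective connectivity. The three-region network and all parameter values are toy assumptions, not the estimates reported in the study.

    import numpy as np

    # Toy three-region network (e.g., DLPFC, parietal, ACC); values are illustrative.
    A = np.array([[-1.0, 0.2, 0.0],    # intrinsic (fixed) connections
                  [ 0.4, -1.0, 0.1],
                  [ 0.0, 0.3, -1.0]])
    B = np.array([[ 0.0, 0.0, 0.0],    # modulation of connections by WM load
                  [ 0.3, 0.0, 0.0],
                  [ 0.0, 0.2, 0.0]])
    C = np.array([0.5, 0.0, 0.0])      # direct driving input to region 1

    def simulate(u_load, u_drive, dt=0.01, steps=2000):
        """Euler integration of the bilinear DCM state equation."""
        x = np.zeros(3)
        for _ in range(steps):
            dx = (A + u_load * B) @ x + C * u_drive
            x = x + dt * dx
        return x

    print("Steady state, low load :", simulate(u_load=0.0, u_drive=1.0).round(3))
    print("Steady state, high load:", simulate(u_load=1.0, u_drive=1.0).round(3))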
Abstract:
Aims - To build a population pharmacokinetic model that describes the apparent clearance of tacrolimus and the demographic, clinical and genetically controlled factors that could lead to inter-patient pharmacokinetic variability in children following liver transplantation. Methods - The present study retrospectively examined tacrolimus whole-blood pre-dose concentrations (n = 628) of 43 children during their first year post-liver transplantation. Population pharmacokinetic analysis was performed using the non-linear mixed-effects modelling program (NONMEM) to determine the population mean parameter estimate of clearance and the influential covariates. Results - The final model identified time post-transplantation and the CYP3A5*1 allele as influential covariates on tacrolimus apparent clearance, according to the following equation: TVCL = 12.9 × (Weight/13.2)^0.35 × EXP(-0.0058 × TPT) × EXP(0.428 × CYP3A5), where TVCL is the typical value for apparent clearance, TPT is time post-transplantation in days, and CYP3A5 is 1 where the *1 allele is present and 0 otherwise. The population estimate and inter-individual variability (%CV) of tacrolimus apparent clearance were 0.977 l h⁻¹ kg⁻¹ (95% CI 0.958, 0.996) and 40.0%, respectively, while the residual variability between the observed and predicted concentrations was 35.4%. Conclusion - Tacrolimus apparent clearance was influenced by time post-transplantation and CYP3A5 genotype. The results of this study, once confirmed by a large-scale prospective study, can be used in conjunction with therapeutic drug monitoring to recommend tacrolimus dose adjustments that take into account not only body weight but also genetic and time-related changes in tacrolimus clearance.
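The final covariate model quoted above can be written directly as a function. The sketch below implements the reported equation; the output units are assumed to be l h⁻¹ (a typical value scaled to a 13.2 kg child), and the example covariate values are illustrative only.

    import math

    def typical_apparent_clearance(weight_kg, days_post_transplant, cyp3a5_star1_present):
        """Typical tacrolimus apparent clearance (TVCL) from the final covariate model:
        TVCL = 12.9 * (Weight/13.2)^0.35 * exp(-0.0058*TPT) * exp(0.428*CYP3A5)."""
        cyp3a5 = 1 if cyp3a5_star1_present else 0
        return (12.9
                * (weight_kg / 13.2) ** 0.35
                * math.exp(-0.0058 * days_post_transplant)
                * math.exp(0.428 * cyp3a5))

    # Example (hypothetical): a 20 kg child, 90 days post-transplant, CYP3A5*1 carrier.
    print(f"TVCL = {typical_apparent_clearance(20, 90, True):.1f} l/h")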
Abstract:
One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal measured on the scalp surface does not directly indicate the location of the active neuronal assemblies. This reflects the ambiguity of the underlying static electromagnetic inverse problem, due in part to the relatively limited number of independent measurements available: a given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths given the geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. Head models vary from computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions about the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of the sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first factor to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that the relationship between electrode number and accuracy is not linear, reaching a plateau at about 128 electrodes, provided the spatial distribution is uniform. The second factor relates to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
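A minimal numerical illustration of the forward/inverse asymmetry described above, assuming a linear forward model V = L·J with a toy lead-field matrix L: the forward computation is unique, while the inverse estimate shown (a regularised minimum-norm solution) is only one of infinitely many source configurations compatible with the same scalp potentials. This is an illustration, not the ESI/MSI procedures referenced above.

    import numpy as np

    rng = np.random.default_rng(2)

    n_electrodes, n_sources = 64, 5000              # far fewer measurements than unknowns
    L = rng.normal(size=(n_electrodes, n_sources))  # toy lead-field matrix (head model)
    j_true = np.zeros(n_sources)
    j_true[rng.choice(n_sources, 3, replace=False)] = 1.0   # three active sources

    # Forward problem (unique): scalp potentials from known sources.
    v = L @ j_true

    # Inverse problem (ill-posed): a regularised minimum-norm estimate is only one
    # of infinitely many source configurations that explain the same potentials.
    lam = 1e-2
    j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), v)

    print("Residual of the fit:", np.linalg.norm(L @ j_hat - v).round(4))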
Abstract:
The aims of this thesis were to investigate the neuropsychological, neurophysiological, and cognitive contributors to mobility changes with increasing age. In a series of studies with adults aged 45-88 years, unsafe pedestrian behaviour and falls were investigated in relation to i) cognitive functions (including response time variability, executive function, and visual attention tests), ii) mobility assessments (including gait and balance, using motion capture cameras), iii) motor initiation and pedestrian road-crossing behaviour (using a simulated pedestrian road scene), iv) neuronal and functional brain changes (using a computer-based crossing task with magnetoencephalography), and v) quality of life questionnaires (including fear of falling and restricted range of travel). Older adults are more likely to be fatally injured at the far-side of the road than at the near-side; however, the underlying mobility and cognitive processes related to lane-specific (i.e. near-side or far-side) pedestrian crossing errors in older adults are currently unknown. The first study explored cognitive, motor initiation, and mobility predictors of unsafe pedestrian crossing behaviours. The purpose of the first study (Chapter 2) was to determine whether collisions at the near-side and far-side would be differentially predicted by mobility indices (such as walking speed and postural sway), motor initiation, and cognitive function (including spatial planning, visual attention, and within-participant variability) with increasing age. The results suggest that near-side unsafe pedestrian crossing errors are related to processing speed, whereas far-side errors are related to spatial planning difficulties. Both near-side and far-side crossing errors were related to walking speed and motor initiation measures (specifically motor initiation variability). The salient mobility predictors of unsafe pedestrian crossings identified in this study were examined in Chapter 3 in conjunction with the presence of a history of falls. The purpose of that study was to determine the extent to which walking speed (indicated as a salient predictor of unsafe crossings and start-up delay in Chapter 2) and previous falls can be predicted and explained by age-related changes in mobility and cognitive function (specifically within-participant variability and spatial ability). 53.2% of the variance in walking speed was predicted by self-rated mobility score, sit-to-stand time, motor initiation, and within-participant variability. Although a significant model was not found to predict variance in fall history, postural sway and attentional set-shifting ability were found to be strongly related to the occurrence of falls within the last year. Next, in Chapter 4, unsafe pedestrian crossing behaviour and its pedestrian predictors (both mobility and cognitive measures) from Chapter 2 were explored in relation to increasing hemispheric laterality of attentional functions and inter-hemispheric oscillatory beta power changes associated with increasing age. Elevated beta (15-35 Hz) power in the motor cortex prior to movement, and reduced beta power post-movement, have been linked to age-related changes in mobility. In addition, increased recruitment of both hemispheres has been shown to occur in older adults and to help them perform similarly to younger adults in cognitive tasks (Cabeza, Anderson, Locantore, & McIntosh, 2002).
It has been hypothesised that changes in hemispheric neural beta power may explain the presence of more pedestrian errors at the far-side of the road in older adults. The purpose of this study was to determine whether changes in age-related cortical oscillatory beta power and hemispheric laterality are linked to unsafe pedestrian behaviour in older adults. Results indicated that pedestrian errors at the near-side are linked to hemispheric bilateralisation and neural overcompensation post-movement, whereas far-side unsafe errors are linked to the failure to employ neural compensation methods (hemispheric bilateralisation). Finally, in Chapter 5, fear of falling, life space mobility, and quality of life in old age were examined to determine their relationships with cognition, mobility (including fall history and pedestrian behaviour), and motor initiation. In addition to death and injury, mobility decline (such as the pedestrian errors in Chapter 2 and the falls in Chapter 3) and cognitive decline can negatively affect quality of life and result in activity avoidance. Further, the number of falls in Chapter 3 was not significantly linked to mobility and cognition alone, and may be further explained by a fear of falling. The objective of this study (Study 2, Chapter 3) was to determine the role of mobility and cognition in fear of falling and life space mobility, and the impact on quality of life measures. Results indicated that missing safe pedestrian crossing gaps (potentially indicating crossing anxiety) and mobility decline were consistent predictors of fear of falling, reduced life space mobility, and quality of life variance. Social community (the total number of close family and friends) was also linked to life space mobility and quality of life. Lower cognitive function (particularly processing speed and reaction time) was found to predict variance in fear of falling and quality of life in old age. Overall, the findings indicated that mobility decline (particularly walking speed or walking difficulty), processing speed, and intra-individual variability in attention (including motor initiation variability) are salient predictors of participant safety (mainly pedestrian crossing errors) and wellbeing with increasing age. More research is required to produce a significant model to explain the number of falls.
Abstract:
We compared reading acquisition in English and Italian children up to late primary school, analysing RTs and errors as a function of various psycholinguistic variables and of changes due to experience. Our results show that reading becomes progressively more reliant on larger processing units with age, but that this is modulated by the consistency of the language. In English, an inconsistent orthography, reliance on larger units occurs earlier and is demonstrated by faster RTs, a stronger effect of lexical variables and a lack of length effect (by fifth grade). However, not all English children are able to master this mode of processing, yielding larger inter-individual variability. In Italian, a consistent orthography, reliance on larger units occurs later and is less pronounced. This is demonstrated by larger length effects, which remain significant even in older children, and by larger effects of a global factor (related to speed of orthographic decoding) explaining changes in performance across ages. Our results show the importance of considering not only overall performance, but also inter-individual variability and variability between conditions, when interpreting cross-linguistic differences.
Abstract:
The auditory evoked N1m-P2m response complex presents a challenging case for MEG source modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation, and whole-head beamformer analysis compared with a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed as well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in the localisation of auditory evoked responses.
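For orientation, the sketch below shows a generic LCMV beamformer weight computation for a single candidate source location, w = inv(L' C^-1 L) L' C^-1, where C is the (regularised) data covariance and L the forward field. It is not the specific event-related beamformer implementation evaluated in the study, and all data are synthetic.

    import numpy as np

    rng = np.random.default_rng(3)

    n_sensors, n_samples = 248, 2000
    data = rng.normal(size=(n_sensors, n_samples))   # toy MEG recording
    lead_field = rng.normal(size=(n_sensors, 1))     # forward field of one candidate source

    # Regularised data covariance.
    C = np.cov(data)
    C += 0.05 * np.trace(C) / n_sensors * np.eye(n_sensors)

    # LCMV beamformer weights: w = inv(L' C^-1 L) L' C^-1 applied to the data.
    Cinv_L = np.linalg.solve(C, lead_field)
    w = Cinv_L @ np.linalg.inv(lead_field.T @ Cinv_L)

    # Source time course ("virtual electrode") at this location.
    source_ts = (w.T @ data).ravel()
    print("Source power estimate:", source_ts.var().round(4))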
Abstract:
Transcranial direct current stimulation (tDCS) is a method of non-invasive brain stimulation widely used to modulate cognitive functions. Recent studies, however, suggest that its effects are unreliable, small and often non-significant, at least when stimulation is applied in a single session to healthy individuals. We examined the effects of frontal and temporal lobe anodal tDCS on naming and reading tasks and considered possible interactions with linguistic activation and selection mechanisms, as well as possible interactions with item difficulty and individual variability across participants. Across four separate experiments (N: Exp 1A = 18; 1B = 20; 1C = 18; 2 = 17), we failed to find any difference between real and sham stimulation. Moreover, we found no evidence of significant effects limited to particular conditions (i.e., those requiring suppression of semantic interference), to a subset of participants, or to longer RTs. Our findings sound a cautionary note on using tDCS as a means to modulate cognitive performance. Consistent effects of tDCS may be difficult to demonstrate in healthy participants in reading and naming tasks, and may be limited to cases of pathological neurophysiology and/or to the use of learning paradigms.
Abstract:
A fundamental tenet of Leader–Member Exchange (LMX) theory is that leaders develop different-quality relationships with their employees; however, little research has investigated the impact of LMX differentiation on employee reactions. The current research investigates whether perceptions of LMX variability (the extent to which LMX relationships are perceived to vary within a team) affect employee job satisfaction and wellbeing beyond the effects of personal LMX quality. Because LMX variability runs counter to principles of equality and consistency, which are important for maintaining social harmony in groups, it is hypothesized that perceptions of LMX variability will have a negative effect on employee reactions via a negative impact on perceived team relations. Two samples of employed individuals were used to investigate the hypothesized relationships. In both samples, an individual's perception of LMX variability in their team was negatively related to employee job satisfaction and wellbeing (above the effects of LMX quality), and this relationship was mediated by reports of relational team conflict.
Abstract:
Increased awareness of the crucial role of leadership as a competitive advantage for organisations (McCall, 1998; Petrick, Scherer, Brodzinski, Quinn, & Ainina, 1999) has led to billions spent on leadership development programmes and training (Avolio & Hannah, 2008). However, research reports confusing and contradictory evidence regarding return on investment and developmental outcomes, and considerable variance has been observed across studies (Avolio, Reichard, Hannah, Walumbwa, & Chan, 2009). The purpose of this thesis is to understand the mechanisms underlying this variability in leadership development. Among the many factors at play in the process, such as programme design and delivery, organisational support, and perceptions of relevance (Mabey, 2002; Day, Harrison, & Halpin, 2009), individual differences and characteristics stand out. One way in which individuals differ is in their Developmental Readiness (DR), a concept recently introduced in the literature that may well explain this variance and which has been proposed to accelerate development (Avolio & Hannah, 2008, 2009). Building on previous work, this thesis introduces and conceptualises DR somewhat differently. Here, DR is construed as comprising self-awareness, self-regulation, and self-motivation, proposed by Day (2000) to be the backbone of leadership development. DR is suggested to moderate the developmental process. Furthermore, personality dispositions and individual values are proposed to be precursors of DR. The empirical research conducted uses a pre-test post-test quasi-experimental design. Before the main study, both a measure of Developmental Readiness and a competency-profiling measure were tested in two pilot studies. The results provide no evidence of a direct effect of leadership development programmes on development, but do support an interactive effect between DR and leadership development programmes. The personality dispositions Agreeableness, Conscientiousness, and Openness to Experience, and the value orientations Conservation, Open Orientation, and Closed Orientation, are found to significantly predict DR. Finally, the theoretical and practical implications of the findings are discussed.
Abstract:
Mesenchymal stem cells (MSCs) stimulate angiogenesis within a wound environment, and this effect is mediated through paracrine interactions with the endothelial cells present. Here we report that human MSC-conditioned medium (n=3 donors) significantly increased EaHy-926 endothelial cell adhesion and cell migration, but that this stimulatory effect was markedly donor-dependent. MALDI-TOF/TOF mass spectrometry demonstrated that whilst collagen type I and fibronectin were secreted by all of the MSC cultures, the small leucine-rich proteoglycan decorin was secreted only by the MSC culture that was least effective on EaHy-926 cells. These individual extracellular matrix components were then tested as culture substrata. EaHy-926 cell adherence was greatest on fibronectin-coated surfaces and least on decorin-coated surfaces. Scratch wound assays were used to examine cell migration. EaHy-926 cell scratch wound closure was quickest on substrates of fibronectin and slowest on decorin. However, EaHy-926 cell migration was stimulated by the addition of MSC-conditioned medium irrespective of the type of culture substrate. These data suggest that whilst the MSC secretome may generally be considered angiogenic, its composition is variable and this variation probably contributes to donor-to-donor differences in activity. Hence, screening and optimizing MSC secretomes will improve the clinical effectiveness of pro-angiogenic MSC-based therapies.
Abstract:
One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding in advance what system to develop, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in this manner, it is possible to better fit the resultant system to its intended environment, for instance to allow the system interaction mode to be selected after the customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005). Until recently it sufficed to postpone committing to a specific system instance until just before system runtime. In recent years, however, the use of and expectations on software systems in human society have undergone significant changes. Today's software systems need to be always available, highly interactive, and able to continuously adapt to varying environmental conditions, user characteristics and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). Therefore, variability now needs to be present also at system runtime, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
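A minimal sketch of a variation point bound at runtime, in the spirit of the PDA-vs-laptop interaction-mode example above. The class and function names are hypothetical, and a real SPL would normally manage such variability through an explicit feature/variability model rather than a plain lookup table.

    from dataclasses import dataclass

    # Variation point: the interaction mode is bound at runtime, not at design time.
    @dataclass
    class InteractionMode:
        name: str

        def render(self, text: str) -> str:
            return f"[{self.name}] {text}"

    VARIANTS = {
        "pda": InteractionMode("compact touch UI"),
        "laptop": InteractionMode("full desktop UI"),
    }

    def bind_variation_point(detected_hardware: str) -> InteractionMode:
        """Resolve the interaction-mode variation point from the runtime environment."""
        return VARIANTS.get(detected_hardware, VARIANTS["laptop"])

    # An adaptive system can re-bind the variant whenever the environment changes.
    print(bind_variation_point("pda").render("Battery low"))
    print(bind_variation_point("laptop").render("Battery low"))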