892 results for: differential imaging, principal component analysis, exoplanets, SPHERE, IFS
Abstract:
Deep Brain Stimulation (DBS) has been successfully used throughout the world for the treatment of Parkinson's disease symptoms. To control abnormal spontaneous electrical activity in target brain areas, DBS utilizes a continuous stimulation signal. This continuous power draw means that the implanted battery power source needs to be replaced every 18–24 months. To prolong the life span of the battery, a technique is discussed here to accurately recognize and predict the onset of Parkinson's disease tremors in human subjects and thus enable an on-demand stimulator. The approach uses a radial basis function neural network (RBFNN) based on particle swarm optimization (PSO) and principal component analysis (PCA), with Local Field Potential (LFP) data recorded via the stimulation electrodes, to predict activity related to tremor onset. To test this approach, LFPs from the subthalamic nucleus (STN), obtained through deep brain electrodes implanted in a Parkinson's patient, are used to train the network. To validate the network's performance, electromyographic (EMG) signals from the patient's forearm are recorded in parallel with the LFPs to accurately determine occurrences of tremor, and these are compared to the network's output. Detection accuracies of up to 89% have been achieved. Performance comparisons between a conventional RBFNN and the PSO-based RBFNN show a marginal decrease in performance but a notable reduction in computational overhead.
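The PCA-plus-RBFNN pipeline described above can be sketched in a few lines. Everything below is an illustrative stand-in: the random "LFP features", the number of components, and the fixed centres and width are assumptions (the paper tunes the centres with PSO and fits output weights against EMG-derived tremor labels, neither of which is reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed LFP features (n_windows x n_features);
# a real pipeline would use spectral features of the recorded LFP.
X = rng.normal(size=(200, 16))

# --- PCA: project onto the top-k principal components ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = Xc @ Vt[:k].T          # reduced features fed to the RBFNN

# --- Minimal RBF layer: Gaussian activations around fixed centres ---
centres = scores[:5]            # hypothetical centres (PSO would tune these)
width = 1.0
act = np.exp(-np.sum((scores[:, None, :] - centres[None, :, :]) ** 2, axis=2)
             / (2 * width ** 2))   # (n_windows x n_centres) hidden activations

# Output weights would be fit by least squares against tremor labels;
# here we only verify the shapes of the pipeline stages.
print(scores.shape, act.shape)
```

A linear read-out trained on `act` against binary tremor labels would complete the classifier; detection accuracy then depends on the choice of centres, which is exactly what PSO optimizes in the paper.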
Abstract:
A new state estimator algorithm is presented, based on a neurofuzzy network and the Kalman filter. The major contribution of the paper is the recognition of a bias problem in the parameter estimation of the state-space model and the introduction of a simple, effective prefiltering method to achieve unbiased parameter estimates in the state-space model, which is then used for state estimation via the Kalman filtering algorithm. Fundamental to this method is a simple prefiltering procedure using a nonlinear principal component analysis method based on the neurofuzzy basis set. This prefiltering can be performed without prior knowledge of the system structure. Numerical examples demonstrate the effectiveness of the new approach.
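For reference, the Kalman filtering stage that the prefiltered state-space model feeds into is the textbook predict/update recursion, sketched below on a trivial scalar tracking problem. The neurofuzzy prefilter itself is the paper's contribution and is not reproduced; the matrices and measurement values are illustrative.

```python
import numpy as np

# One predict/update step of the standard linear Kalman filter.
def kalman_step(x, P, z, A, H, Q, R):
    x_pred = A @ x                        # predicted state
    P_pred = A @ P @ A.T + Q              # predicted covariance
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a constant scalar state (~5.0) from noisy measurements.
A = H = np.eye(1)
Q, R = 1e-6 * np.eye(1), 0.5 * np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in [4.8, 5.3, 4.9, 5.1, 5.0]:
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
print(round(float(x[0]), 2))
```

The point of the paper is that if `A` and `H` come from biased parameter estimates, this recursion converges to the wrong state; the prefiltering step exists to remove that bias before the filter is run.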
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object, and the structure of a deformable model; an assessment of its performance is examined in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
Abstract:
Changes in climate variability and, in particular, changes in extreme climate events are likely to be of far more significance for environmentally vulnerable regions than changes in the mean state. It is generally accepted that sea-surface temperatures (SSTs) play an important role in modulating rainfall variability. Consequently, SSTs can be prescribed in global and regional climate modelling in order to study the physical mechanisms behind rainfall and its extremes. Using a satellite-based daily rainfall historical data set, this paper describes the main patterns of rainfall variability over southern Africa, identifies the dates when extreme rainfall occurs within these patterns, and shows the effect of resolution in trying to identify the location and intensity of SST anomalies associated with these extremes in the Atlantic and southwest Indian Ocean. Derived from a Principal Component Analysis (PCA), the results also suggest that, for the spatial pattern accounting for the highest amount of variability, extremes extracted at a higher spatial resolution do give a clearer indication regarding the location and intensity of anomalous SST regions. As the amount of variability explained by each spatial pattern defined by the PCA decreases, it would appear that extremes extracted at a lower resolution give a clearer indication of anomalous SST regions.
Abstract:
The coarse spacing of automatic rain gauges complicates near-real-time spatial analyses of precipitation. We test the possibility of improving such analyses by considering, in addition to the in situ measurements, the spatial covariance structure inferred from past observations with a denser network. To this end, a statistical reconstruction technique, reduced space optimal interpolation (RSOI), is applied over Switzerland, a region of complex topography. RSOI consists of two main parts. First, principal component analysis (PCA) is applied to obtain a reduced space representation of gridded high-resolution precipitation fields available for a multiyear calibration period in the past. Second, sparse real-time rain gauge observations are used to estimate the principal component scores and to reconstruct the precipitation field. In this way, climatological information at higher resolution than the near-real-time measurements is incorporated into the spatial analysis. PCA is found to efficiently reduce the dimensionality of the calibration fields, and RSOI is successful despite the difficulties associated with the statistical distribution of daily precipitation (skewness, dry days). Examples and a systematic evaluation show substantial added value over a simple interpolation technique that uses near-real-time observations only. The benefit is particularly strong for larger-scale precipitation and prominent topographic effects. Small-scale precipitation features are reconstructed at a skill comparable to that of the simple technique. Stratifying the reconstruction by weather type yields little added skill. Apart from application in near real time, RSOI may also be valuable for enhancing instrumental precipitation analyses for the historic past when direct observations were sparse.
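The two RSOI steps lend themselves to a compact sketch. Everything below is synthetic, and the score estimation is plain least squares on the leading EOF patterns; the paper's optimal interpolation additionally weights the fit by eigenvalue and observation-error statistics, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration archive: n_days of gridded precipitation fields (flattened grid).
n_days, n_grid = 500, 400
fields = rng.gamma(2.0, 1.0, size=(n_days, n_grid))

# Step 1: PCA of the calibration fields -> leading spatial patterns (EOFs).
mean = fields.mean(axis=0)
anoms = fields - mean
U, S, Vt = np.linalg.svd(anoms, full_matrices=False)
k = 10
eofs = Vt[:k]                      # (k x n_grid) spatial patterns

# Step 2: pretend one archived day is the "new" day, observed only at a few
# gauge locations; estimate its PC scores from those sparse values.
truth = fields[0]
gauges = rng.choice(n_grid, size=40, replace=False)
obs = truth[gauges]

A = eofs[:, gauges].T              # (n_obs x k) EOFs sampled at the gauges
scores, *_ = np.linalg.lstsq(A, obs - mean[gauges], rcond=None)
recon = mean + scores @ eofs       # full-grid reconstruction
print(recon.shape)
```

The key idea is visible in the last two lines: the dense climatological structure (mean field and EOFs) is learned once from the archive, and each near-real-time analysis only has to solve a small k-dimensional fit to the available gauges.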
Abstract:
This report presents the canonical Hamiltonian formulation of relative satellite motion. The unperturbed Hamiltonian model is shown to be equivalent to the well-known Hill-Clohessy-Wiltshire (HCW) linear formulation. The influence of perturbations from the nonlinear gravitational potential and the oblateness of the Earth (the J2 perturbation) is also modelled within the Hamiltonian formulation. The modelling incorporates eccentricity of the reference orbit. The corresponding Hamiltonian vector fields are computed and implemented in Simulink. A numerical method is presented aimed at locating periodic or quasi-periodic relative satellite motion. The numerical method outlined in this paper is applied to the Hamiltonian system. Although the orbits considered here are weakly unstable at best, in the case of eccentricity only, the method finds exact periodic orbits. When other perturbations such as nonlinear gravitational terms are added, drift is significantly reduced, and in the case of the J2 perturbation, with and without the nonlinear gravitational potential term, bounded quasi-periodic solutions are found. Advantages of using Newton's method to search for periodic or quasi-periodic relative satellite motion include simplicity of implementation, repeatability of solutions due to its non-random nature, and fast convergence. Given that the use of bounded or drifting trajectories as control references carries practical difficulties over long-term missions, Principal Component Analysis (PCA) is applied to the quasi-periodic or slowly drifting trajectories to help provide a closed reference trajectory for the implementation of closed-loop control. In order to evaluate the effect of the quality of the model used to generate the periodic reference trajectory, a study involving closed-loop control of a simulated master/follower formation was performed.
The results of the closed-loop control study indicate that the quality of the model employed for generating the reference trajectory used for control purposes has an important influence on the resulting amount of fuel required to track the reference trajectory. The model used to generate LQR controller gains also has an effect on the efficiency of the controller.
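The HCW model that the unperturbed Hamiltonian reduces to has a well-known closed-form solution, and the drift-free initial condition vy0 = -2*n*x0 yields exactly the kind of closed periodic reference trajectory discussed above. A minimal sketch (the mean-motion value is an illustrative LEO-like number, not taken from the report):

```python
import math

# Closed-form solution of the Hill-Clohessy-Wiltshire equations. n is the
# reference orbit's mean motion; the state (x, y, z, vx, vy, vz) lives in
# the rotating local-vertical/local-horizontal frame.
def hcw_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 \
        + (1 / n) * (4 * s - 3 * n * t) * vy0
    z = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return x, y, z, vx, vy, vz

# Drift-free (periodic) condition: vy0 = -2*n*x0 cancels the secular terms,
# so the relative orbit closes after one reference period T = 2*pi/n.
n = 0.00113                  # rad/s, illustrative LEO-like mean motion
x0 = 100.0                   # m, initial radial offset
vy0 = -2 * n * x0
T = 2 * math.pi / n
state_T = hcw_state(T, n, x0, 0.0, 0.0, 0.0, vy0, 0.0)
print(round(state_T[0], 6))  # x returns to its initial value after one period
```

With any other choice of vy0 the along-track coordinate y drifts secularly, which is the bounded-versus-drifting distinction the report's Newton search and PCA post-processing are designed to address.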
Abstract:
Analysis of observed ozone profiles in Northern Hemisphere low and middle latitudes reveals the seasonal persistence of ozone anomalies in both the lower and upper stratosphere. Principal component analysis is used to detect that above 16 hPa the persistence is strongest in the latitude band 15–45°N, while below 16 hPa the strongest persistence is found over 45–60°N. In both cases, ozone anomalies persist through the entire year from November to October. The persistence of ozone anomalies in the lower stratosphere is presumably related to the wintertime ozone buildup with subsequent photochemical relaxation through summer, as previously found for total ozone. The persistence in the upper stratosphere is more surprising, given the short lifetime of Ox at these altitudes. It is hypothesized that this “seasonal memory” in the upper stratospheric ozone anomalies arises from the seasonal persistence of transport-induced wintertime NOy anomalies, which then perturb the ozone chemistry throughout the rest of the year. This hypothesis is confirmed by analysis of observations of NO2, NOx, and various long-lived trace gases in the upper stratosphere, which are found to exhibit the same seasonal persistence. Previous studies have attributed much of the year-to-year variability in wintertime extratropical upper stratospheric ozone to the Quasi-Biennial Oscillation (QBO) through transport-induced NOy (and hence NO2) anomalies but have not identified any statistical connection between the QBO and summertime ozone variability. Our results imply that through this “seasonal memory,” the QBO has an asynchronous effect on ozone in the low to midlatitude upper stratosphere during summer and early autumn.
Abstract:
Temperature and precipitation are major forcing factors influencing grapevine phenology and yield, as well as wine quality. Bioclimatic indices describing the suitability of a particular region for wine production are a commonly used tool for viticultural zoning. For this research these indices were computed for Europe by using the E-OBS gridded daily temperature and precipitation data set for the period from 1950 to 2009. Results showed strong regional contrasts based on the different index patterns and reproduced the wide diversity of local conditions that largely explain the quality and diversity of grapevines being grown across Europe. Owing to the strong inter-annual variability in the indices, a trend analysis and a principal component analysis were applied together with an assessment of their mean patterns. Significant trends were identified in the Winkler and Huglin indices, particularly for southwestern Europe. Four statistically significant orthogonal modes of variability were isolated for the Huglin index (HI), jointly representing 82% of the total variance in Europe. The leading mode was largely dominant (48% of variance) and mainly reflected the observed historical long-term changes. The other three modes corresponded to regional dipoles within Europe. Despite the relevance of local and regional climatic characteristics to grapevines, it was demonstrated via canonical correlation analysis that the observed inter-annual variability of the HI was strongly controlled by the large-scale atmospheric circulation during the growing season (April to September).
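The Huglin index mentioned above has a simple daily form: heat above a 10 °C base, averaged between the daily mean and maximum temperature, summed over April-September and weighted by a day-length coefficient K (roughly 1.00-1.06, increasing with latitude). A sketch with made-up inputs; the value of K and the flat temperature series are illustrative, not data from the study:

```python
# Huglin heliothermal index (HI) from daily mean and maximum temperatures.
# Negative daily contributions are floored at zero.
def huglin_index(t_mean, t_max, K=1.04):
    return K * sum(max(((tm - 10.0) + (tx - 10.0)) / 2.0, 0.0)
                   for tm, tx in zip(t_mean, t_max))

# 183 growing-season days at a constant 18 °C mean / 24 °C max:
hi = huglin_index([18.0] * 183, [24.0] * 183)
print(round(hi, 2))
```

Applied per grid cell and per year over the E-OBS daily fields, a function like this produces the HI time series whose trends and modes of variability the study analyses.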
Abstract:
The link between the Pacific/North American pattern (PNA) and the North Atlantic Oscillation (NAO) is investigated in reanalysis data (NCEP, ERA40) and multi-century CGCM runs for present-day climate using three versions of the ECHAM model. PNA and NAO patterns and indices are determined via rotated principal component analysis on monthly mean 500 hPa geopotential height fields using the varimax criterion. On average, the multi-century CGCM simulations show a significant anti-correlation between PNA and NAO. Further, multi-decadal periods with significantly enhanced (high anti-correlation, active phase) or weakened (low correlations, inactive phase) coupling are found in all CGCMs. In the simulated active phases, the storm track activity near Newfoundland has a stronger link with the PNA variability than during the inactive phases. On average, the reanalysis datasets show no significant anti-correlation between the PNA and NAO indices, but during the sub-period 1973–1994 a significant anti-correlation is detected, suggesting that the present climate could correspond to an inactive period as detected in the CGCMs. An analysis of possible physical mechanisms suggests that the link between the patterns is established by the baroclinic waves forming the North Atlantic storm track. The geopotential height anomalies associated with negative PNA phases induce an increased advection of warm and moist air from the Gulf of Mexico and cold air from Canada. Both types of advection contribute to increased baroclinicity over eastern North America and also to an increased low-level latent heat content of the warm air masses. Thus, growth conditions for eddies at the entrance of the North Atlantic storm track are enhanced. Considering the average temporal development during winter for the CGCM, results show an enhanced Newfoundland storm track maximum in the early winter for negative PNA, followed by a downstream enhancement of the Atlantic storm track in the subsequent months.
In active (inactive) phases, this seasonal development is enhanced (suppressed). As the storm track over the central and eastern Atlantic is closely related to NAO variability, this development can be explained by the shift of the NAO index to more positive values.
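The varimax rotation applied to the PCA loadings above can be sketched with the standard iterative SVD algorithm. The loading matrix here is random and purely illustrative; in the study the rows would be grid points of the 500 hPa height field and the columns the leading principal components.

```python
import numpy as np

# Varimax rotation of a loading matrix: find the orthogonal rotation R that
# maximises the variance of the squared loadings (standard SVD iteration).
def varimax(loadings, tol=1e-8, max_iter=500):
    L = loadings.copy()
    n, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the varimax criterion at the current rotation
        G = L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n)
        U, S, Vt = np.linalg.svd(G)
        R = U @ Vt
        var_new = S.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return L @ R

rng = np.random.default_rng(2)
L = rng.normal(size=(20, 3))
Lr = varimax(L)
# The rotation is orthogonal, so the total squared loading is preserved:
print(np.isclose((L ** 2).sum(), (Lr ** 2).sum()))
```

Because the rotation only redistributes variance among the retained components, the rotated patterns (here, PNA- and NAO-like structures) are easier to interpret regionally without changing the subspace the PCA identified.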
Abstract:
The interpretation of Neotropical fossil phytolith assemblages for palaeoenvironmental and archaeological reconstructions relies on the development of appropriate modern analogues. We analyzed modern phytolith assemblages from the soils of ten distinctive tropical vegetation communities in eastern lowland Bolivia, ranging from terra firme humid evergreen forest to seasonally-inundated savannah. Results show that broad ecosystems – evergreen tropical forest, semi-deciduous dry tropical forest, and savannah – can be clearly differentiated by examination of their phytolith spectra and the application of Principal Component Analysis (PCA). Differences in phytolith assemblages between particular vegetation communities within each of these ecosystems are more subtle, but can still be identified. Comparison of phytolith assemblages with pollen rain data and stable carbon isotope analyses from the same vegetation plots show that these proxies are not only complementary, but significantly improve taxonomic and ecosystem resolution, and therefore our ability to interpret palaeoenvironmental and archaeological records. Our data underline the utility of phytolith analyses for reconstructing Amazon Holocene vegetation histories and pre-Columbian land use, particularly the high spatial resolution possible with terrestrial soil-based phytolith studies.
Abstract:
A recently developed capillary electrophoresis (CE)-negative-ionisation mass spectrometry (MS) method was used to profile anionic metabolites in a microbial-host co-metabolism study. Urine samples from rats receiving antibiotics (penicillin G and streptomycin sulfate) for 0, 4, or 8 days were analysed. A quality control sample was measured repeatedly to monitor the performance of the applied CE-MS method. After peak alignment, relative standard deviations (RSDs) for the migration time of five representative compounds were below 0.4%, whereas RSDs for peak area were 7.9–13.5%. Using univariate and principal component analysis of the obtained urinary metabolic profiles, groups of rats receiving different antibiotic treatment could be distinguished based on 17 discriminatory compounds, of which 15 were downregulated and 2 were upregulated upon treatment. Eleven compounds remained down- or upregulated after discontinuation of antibiotic administration, whereas a recovery effect was observed for the others. Based on accurate mass, nine compounds were putatively identified; these included the microbial-mammalian co-metabolites hippuric acid and indoxyl sulfate. Some discriminatory compounds were also observed by other analytical techniques, but CE-MS uniquely revealed ten metabolites modulated by antibiotic exposure, including aconitic acid and an oxocholic acid. This clearly demonstrates the added value of CE-MS for nontargeted profiling of small anionic metabolites in biological samples.
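The quoted repeatability figures are relative standard deviations, i.e. 100 * sd / mean, computed over the repeated quality-control injections. A quick worked example with hypothetical peak areas:

```python
import statistics

# Relative standard deviation (RSD, %) of repeated QC measurements.
areas = [10400.0, 9800.0, 10100.0, 10250.0, 9950.0]  # hypothetical peak areas
rsd = 100 * statistics.stdev(areas) / statistics.mean(areas)
print(round(rsd, 2))
```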
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stake and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments, females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories like PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time providing a robust framework compatible with present and even future, more complex descriptions of human attitudes towards risk.
Abstract:
Sainfoin is a non-bloating temperate forage legume with a moderate-to-high condensed tannin (CT) content. This study investigated whether the diversity of sainfoin accessions in terms of CT structures and contents could be related to rumen in vitro gas and methane (CH4) production and fermentation characteristics. The aim was to identify promising accessions for future investigations. Accessions differed (P < 0.0001) in terms of total gas and CH4 production. Fermentation kinetics for CH4 production (i.e. parameters describing the shape of the gas production curve and the half-time of gas production) were influenced by accession (P ≤ 0.038), but not by polyethylene glycol (PEG). Accession, PEG and time affected (P < 0.001) CH4 production, but the accession-PEG interaction showed only a tendency (P = 0.08). The increase in CH4 due to PEG addition was not related to CT content. Further analysis of the relationships among multiple traits (nutritional composition, CT structure and CH4 production) using principal component analysis (PCA) based on optimally weighted variables revealed differences among accessions. The first two principal component axes, PC1 (57.6%) and PC2 (18.4%), explained 76.0% of the total variation among accessions. Loadings of the biplots derived from both PCAs made it possible to establish a relationship between the ratio of prodelphinidin:procyanidin (PD:PC) tannins and CH4 production in some accessions. The PD:PC ratio seems to be an important source of variation that is negatively related to CH4 production. These results suggest that sainfoin accessions collected from across the world exhibit substantial variation in terms of their effects on rumen in vitro CH4 production, revealing some promising accessions for future investigations.
Abstract:
Hydrophilic interaction chromatography–mass spectrometry (HILIC–MS) was used for anionic metabolic profiling of urine from antibiotic-treated rats to study microbial–host co-metabolism. Rats were treated with the antibiotics penicillin G and streptomycin sulfate for four or eight days and compared to a control group. Urine samples were collected at day zero, four and eight, and analyzed by HILIC–MS. Multivariate data analysis was applied to the urinary metabolic profiles to identify biochemical variation between the treatment groups. Principal component analysis found a clear distinction between those animals receiving antibiotics and the control animals, with twenty-nine discriminatory compounds of which twenty were down-regulated and nine up-regulated upon treatment. In the treatment group receiving antibiotics for four days, a recovery effect was observed for seven compounds after cessation of antibiotic administration. Thirteen discriminatory compounds could be putatively identified based on their accurate mass, including aconitic acid, benzenediol sulfate, ferulic acid sulfate, hippuric acid, indoxyl sulfate, penicillin G, phenol and vanillin 4-sulfate. The rat urine samples had previously been analyzed by capillary electrophoresis (CE) with MS detection and proton nuclear magnetic resonance (1H NMR) spectroscopy. Using CE–MS and 1H NMR spectroscopy seventeen and twenty-five discriminatory compounds were found, respectively. Both hippuric acid and indoxyl sulfate were detected across all three platforms. Additionally, eight compounds were observed with both HILIC–MS and CE–MS. Overall, HILIC–MS appears to be highly complementary to CE–MS and 1H NMR spectroscopy, identifying additional compounds that discriminate the urine samples from antibiotic-treated and control rats.
Abstract:
Animal models are invaluable tools which allow us to investigate the microbiome-host dialogue. However, experimental design introduces biases into the data that we collect, potentially also leading to biased conclusions. With obesity at pandemic levels, animal models of this disease have been developed; we investigated the role of experimental design in one such rodent model. We used 454 pyrosequencing to profile the faecal bacteria of obese (n = 6) and lean (homozygous n = 6; heterozygous n = 6) Zucker rats over a 10-week period, maintained in mixed-genotype cages, to further understand the relationships between the composition of the intestinal bacteria and age, obesity progression, genetic background and cage environment. Phylogenetic and taxon-based univariate and multivariate analyses (non-metric multidimensional scaling, principal component analysis) showed that age was the most significant source of variation in the composition of the faecal microbiota. Second to this, cage environment was found to clearly impact the composition of the faecal microbiota, with samples from animals within the same cage showing high community-structure concordance, but large differences seen between cages. Importantly, the genetically induced obese phenotype was not found to impact the faecal bacterial profiles. These findings demonstrate that age and the local cage environment were driving the composition of the faecal bacteria and were more deterministically important than the host genotype. These findings have major implications for understanding the significance of functional metagenomic data in experimental studies and beg the question: what is being measured in animal experiments in which different strains are housed separately, nature or nurture?
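Ordinations such as the NMDS mentioned above are typically run on a pairwise community dissimilarity matrix; Bray-Curtis is the common choice for taxon counts (the abstract does not state the exact distance used, so that choice, and the counts below, are illustrative assumptions).

```python
# Bray-Curtis dissimilarity between two samples' taxon counts:
# sum of absolute differences divided by the total count of both samples.
def bray_curtis(a, b):
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

sample_a = [10, 0, 5, 3]   # hypothetical taxon counts, one faecal sample
sample_b = [8, 2, 0, 3]    # hypothetical taxon counts, another sample
print(round(bray_curtis(sample_a, sample_b), 3))
```

Computing this for every pair of samples gives the symmetric dissimilarity matrix on which NMDS searches for a low-dimensional configuration; clustering of same-cage samples in that configuration is what the study reports as the cage effect.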