927 results for electrochemical noise analysis
Abstract:
Microsecond-long Molecular Dynamics (MD) trajectories of biomolecular processes are now possible due to advances in computer technology. Soon, trajectories long enough to probe dynamics over many milliseconds will become available. Since these timescales match the physiological timescales over which many small proteins fold, all-atom MD simulations of protein folding are now becoming popular. To distill features of such large folding trajectories, we must develop methods that can both compress trajectory data to enable visualization and lend themselves to further analysis, such as the identification of collective coordinates and the reduction of the dynamics. Conventionally, clustering has been the most popular MD trajectory analysis technique, followed by principal component analysis (PCA). Simple clustering as used in MD trajectory analysis suffers from several serious drawbacks: (i) it is not data driven, (ii) it is unstable to noise and to changes in cutoff parameters, and (iii) because it does not take into account interrelationships amongst data points, the separation of data into clusters can often be artificial. Usually, partitions generated by clustering techniques are validated visually, but such validation is not possible for MD trajectories of protein folding, as the underlying structural transitions are not well understood. Rigorous cluster validation techniques may be adapted, but it is more crucial to reduce the dimensions in which MD trajectories reside while still preserving their salient features. PCA has often been used for dimension reduction; while it is computationally inexpensive, it is a linear method and does not achieve good data compression. In this thesis, I propose a different method, a nonmetric multidimensional scaling (nMDS) technique, which achieves superior data compression by virtue of being nonlinear, and which also provides clear insight into the structural processes underlying MD trajectories.
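As background to the comparison above: methods such as nMDS and clustering operate on a matrix of pairwise dissimilarities between trajectory frames. A minimal sketch of building such a matrix from toy coordinate frames, using plain RMSD and assuming the frames are already superimposed (the frames below are hypothetical, not from the villin trajectories):

```python
import math

def rmsd(frame_a, frame_b):
    """Root-mean-square deviation between two frames of (x, y, z) atom
    coordinates. Assumes the frames are already aligned (no fitting step)."""
    assert len(frame_a) == len(frame_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(frame_a, frame_b))
    return math.sqrt(sq / len(frame_a))

def distance_matrix(frames):
    """Symmetric pairwise RMSD matrix: the raw input to nMDS or clustering."""
    n = len(frames)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = rmsd(frames[i], frames[j])
    return d

# Three toy two-atom "frames"
frames = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
    [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)],
]
D = distance_matrix(frames)
```

nMDS then uses only the rank order of the entries of D, which is what makes it robust to monotone distortions of the distance scale, in contrast to metric methods such as PCA.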
I illustrate the capabilities of nMDS by analyzing three complete villin headpiece folding trajectories and six norleucine (NLE) mutant folding trajectories simulated by Freddolino and Schulten [1]. Using these trajectories, I compare nMDS with PCA and clustering to demonstrate the superiority of nMDS. The three villin headpiece trajectories showed great structural heterogeneity. Apart from a few trivial features, such as the early formation of secondary structure, no commonalities between trajectories were found. No units of residues or atoms were found moving in concert across the trajectories. A flipping transition, corresponding to the flipping of helix 1 relative to the plane formed by helices 2 and 3, was observed towards the end of the folding process in all trajectories, when nearly all native contacts had been formed. However, the transition occurred through a different series of steps in each trajectory, indicating that it may not be a common transition in villin folding. All trajectories showed competition between local structure formation/hydrophobic collapse and global structure formation. Our analysis of the NLE trajectories confirms the notion that a tight hydrophobic core inhibits correct 3-D rearrangement. Only one of the six NLE trajectories folded, and it showed no flipping transition. All the other trajectories became trapped in hydrophobically collapsed states. The NLE residues were found to be buried more deeply in the core than the corresponding lysines in the villin headpiece, making the core tighter and harder to undo for 3-D rearrangement. Our results suggest that the NLE mutant may not be the fast folder that experiments suggest. The tightness of the hydrophobic core may be a very important factor in the folding of larger proteins. It is likely that chaperones like GroEL act to undo the tight hydrophobic core of proteins after most secondary structure elements have been formed, so that global rearrangement is easier.
I conclude by presenting facts about chaperone-protein complexes and propose further directions for the study of protein folding.
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to the existence of an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. 
This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all DAC output bit widths. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels for all frequencies tested. This means that there is no benefit to using an array of DDSs for mitigating in-band quantizer nonlinearities. As a result, alternate methods of harmonic spur management must be employed.
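The core array benefit described in this abstract is that uncorrelated noise adds in power while the coherent signal adds in amplitude, so the SNR of an N-channel sum improves roughly N-fold. This can be checked with a toy simulation (hypothetical parameters, not the thesis's 14-channel testbed):

```python
import math
import random

random.seed(1)
N_CH, N_SAMP, SIGMA = 8, 20000, 1.0

# Common clean signal: a unit sine (power 0.5)
signal = [math.sin(2 * math.pi * 0.01 * t) for t in range(N_SAMP)]

# Each channel: the same signal plus independent Gaussian noise.
channels = [[s + random.gauss(0.0, SIGMA) for s in signal] for _ in range(N_CH)]
combined = [sum(ch[t] for ch in channels) for t in range(N_SAMP)]

def noise_power(x, clean, gain):
    """Residual power after removing the (scaled) clean signal."""
    return sum((xi - gain * si) ** 2 for xi, si in zip(x, clean)) / len(x)

sig_pow = sum(s * s for s in signal) / N_SAMP

snr_single = sig_pow / noise_power(channels[0], signal, 1.0)
# Coherent signal power grows as N^2; uncorrelated noise power grows as N.
snr_array = (N_CH ** 2 * sig_pow) / noise_power(combined, signal, N_CH)

gain = snr_array / snr_single   # approaches N_CH for uncorrelated noise
```

With fully correlated noise, the combined noise power would instead scale as N², cancelling the benefit entirely, which is why the channel-correlation measurements in the thesis are the crucial step.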
Abstract:
We present the first image of the Madeira upper crustal structure, obtained using ambient seismic noise tomography. Sixteen months of ambient noise, recorded by a dense network of 26 seismometers deployed across Madeira, allowed the reconstruction of Rayleigh-wave Green's functions between receivers. Dispersion analysis was performed in the short-period band from 1.0 to 4.0 s. Group velocity measurements were regionalized to obtain 2D tomographic images with a lateral resolution of 2.0 km in central Madeira. The dispersion curves extracted from each cell of the 2D group velocity maps were then inverted as a function of depth to obtain a 3D shear-wave velocity model of the upper crust, from the surface to a depth of 2.0 km. The obtained 3D velocity model reveals features throughout the island that correlate well with surface geology and island evolution.
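The Green's-function reconstruction step rests on the fact that cross-correlating long records of diffuse noise at two receivers peaks at the inter-station travel time. A heavily simplified sketch of that idea (one noise source, one propagation delay, hypothetical values, none of it Madeira data):

```python
import random

random.seed(0)
N, DELAY = 4000, 7   # record length and true inter-receiver delay, in samples

src = [random.gauss(0.0, 1.0) for _ in range(N)]
rec_a = src                                   # near receiver
rec_b = [0.0] * DELAY + src[:N - DELAY]       # far receiver: delayed wavefield

def xcorr(a, b, max_lag):
    """c[lag] = sum_t a[t] * b[t - lag] for lag = 0..max_lag."""
    return [sum(a[t] * b[t - lag] for t in range(max_lag, len(a)))
            for lag in range(max_lag + 1)]

c = xcorr(rec_b, rec_a, 20)
travel_time = max(range(len(c)), key=c.__getitem__)   # peak lag = delay
```

In real ambient-noise tomography, correlations from many source directions are stacked over long periods (16 months in this study) so that the diffuse wavefield approximates the inter-station Green's function; the dispersion of the resulting Rayleigh waves then feeds the tomographic inversion.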
Abstract:
Magnetically induced forces on the inertial masses on board LISA Pathfinder are expected to be one of the dominant contributions to the mission noise budget, accounting for up to 40%. The origin of this disturbance is the coupling of the residual magnetization and susceptibility of the test masses with the environmental magnetic field. In order to fully understand this important part of the noise model, a set of coils and magnetometers is integrated as part of the diagnostics subsystem. During operations, a sequence of magnetic excitations will be applied to precisely determine the coupling of the magnetic environment to the test mass displacement using the on-board magnetometers. Since no direct measurement of the magnetic field at the test mass position will be available, an extrapolation of the magnetic measurements to the test mass position will be carried out as part of the data analysis activities. In this paper we show the first results of the magnetic experiments during an end-to-end LISA Pathfinder simulation, and we describe the methods under development to map the magnetic field on board.
Abstract:
Thermal Diagnostics experiments to be carried out on board LISA Pathfinder (LPF) will yield a detailed characterisation of how temperature fluctuations affect the performance of the LTP (LISA Technology Package) instrument, crucial information for future space-based gravitational wave detectors such as the proposed eLISA. Among them, the study of temperature gradient fluctuations around the test masses of the Inertial Sensors will also provide information on the contribution of Brownian noise, which is expected to limit the LTP sensitivity at frequencies close to 1 mHz during some LTP experiments. In this paper we report on how this kind of Thermal Diagnostics experiment was simulated in the last LPF Simulation Campaign (November 2013), which involved the whole LPF Data Analysis team and used an end-to-end simulator of the entire spacecraft. This simulation campaign was conducted in the framework of the preparation for LPF operations.
Abstract:
Living organisms are open dissipative thermodynamic systems that rely on mechano-thermo-electrochemical interactions to survive. Plant physiological processes allow plants to survive by converting solar radiation into chemical energy and storing that energy in a form that can be used. Mammals catabolize food to obtain the energy used to fuel, build, and repair cellular components. The exergy balance is a combined statement of the first and second laws of thermodynamics, and it provides insight into the performance of systems. In this paper, exergy balance equations for both mammals and green plants are presented and analyzed.
Abstract:
Analysis methods are presented for electrochemical etching baths consisting of various concentrations of hydrofluoric acid (HF) and an additional organic surface-wetting agent. These electrolytes are used for the formation of meso- and macroporous silicon. Monitoring the etching bath composition requires at least one method each for determining the HF concentration and the organic content of the bath. A precondition, however, is that the analysis equipment withstands the aggressive HF. Titration and a fluoride ion-selective electrode are used to determine the HF concentration, and a cuvette test method is used to analyze the organic content. The most suitable analysis method is identified depending on the components in the electrolyte, with a focus on resistance to the aggressive HF.
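For the fluoride ion-selective electrode, concentration is read off the measured potential through the Nernstian calibration line E = E0 - S*log10(c). A minimal sketch of that readout with hypothetical calibration constants (not values from the paper):

```python
import math

E0 = 150.0   # mV, hypothetical standard potential of the electrode cell
S = 59.2     # mV/decade, ideal Nernst slope for a monovalent ion at 25 degC

def potential_mV(conc_mol_per_L):
    """Electrode potential for a given fluoride concentration
    (activity approximated by concentration here)."""
    return E0 - S * math.log10(conc_mol_per_L)

def concentration(e_mV):
    """Invert the calibration line to recover the fluoride concentration."""
    return 10 ** ((E0 - e_mV) / S)

e = potential_mV(1e-3)   # potential at 1 mM fluoride
c = concentration(e)     # round-trip back to concentration
```

In practice E0 and S are fitted from standard solutions, and the fit must be rechecked regularly because the aggressive HF matrix can drift the electrode response.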
Abstract:
Voice acoustic analysis is becoming increasingly useful in the diagnosis of voice disorders and laryngological pathologies. The ease of recording a voice signal is an advantage over other, invasive techniques. This paper presents a statistical analysis of a set of voice parameters, such as jitter, shimmer, and HNR, over four groups of subjects with dysphonia, functional dysphonia, hyperfunctional dysphonia, and psychogenic dysphonia, and a control group. No statistically significant differences were found among the pathologic groups, but clear tendencies can be seen between the pathologic groups and the control group. These tendencies indicate that these parameters, particularly jitter and shimmer measured over different tones and vowels, are good features for use in an intelligent diagnosis system.
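The perturbation measures named above have simple definitions: local jitter is the mean absolute difference of consecutive pitch periods relative to the mean period, and local shimmer is the analogue computed on cycle peak amplitudes. A sketch on hypothetical cycle data (a real analysis would first extract periods and amplitudes from the recorded signal):

```python
def local_perturbation(values):
    """Mean absolute consecutive difference divided by the mean value.
    Applied to pitch periods this is local jitter; to peak amplitudes,
    local shimmer."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods_ms = [7.94, 8.06, 7.98, 8.10, 7.96, 8.04]   # hypothetical cycle lengths
amps = [0.81, 0.78, 0.82, 0.77, 0.80, 0.79]         # hypothetical peak amplitudes

jitter = local_perturbation(periods_ms)    # roughly 1% here
shimmer = local_perturbation(amps)
```

Pathological voices tend to show elevated jitter and shimmer and reduced HNR, which is what makes these parameters candidate features for an automated screening system.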
Abstract:
Breast cancer is one of the most prevalent forms of cancer in women. Despite all recent advances in early diagnosis and therapy, mortality rates are not decreasing. This is a consequence of the lack of validated serum biomarkers enabling early diagnosis, which in turn stems from the limited understanding of the natural history of the disease. In this context, miRNAs have been attracting special interest throughout the scientific community as promising biomarkers for the early diagnosis of cancer. In breast cancer, several miRNAs and their expression levels differ significantly between normal tissue and neoplastic tissue, as well as between different molecular subtypes of breast cancer, and are also associated with prognosis. Thus, this thesis presents a meta-analysis carried out to identify a reliable miRNA biomarker for the early detection of breast cancer. miRNA-155 was identified as the most promising candidate, and an electrochemical biosensor was developed for its detection in serum samples. The biosensor was assembled in three bottom-up stages: (1) the thiol-terminated complementary miRNA sequence (anti-miRNA-155) was immobilized on a commercial gold screen-printed electrode (Au-SPE), followed by (2) blocking of non-specific binding with mercaptosuccinic acid and (3) miRNA hybridization. The biosensor was able to detect miRNA concentrations in the 10⁻¹⁸ mol/L (aM) range, displaying a linear response from 10 aM to 1 nM. The device showed a limit of detection of 5.7 aM in human serum samples and good selectivity against other biomolecules in serum, such as the cancer antigen CA-15.3 and bovine serum albumin (BSA). Overall, this simple and sensitive strategy is a promising approach for the quantitative and/or simultaneous analysis of multiple miRNAs in physiological fluids, aiming at further biomedical research devoted to biomarker monitoring and point-of-care diagnosis.
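Figures of merit like the reported linear range and detection limit come from a calibration of sensor signal against log-concentration; one common convention estimates the detection limit as the concentration whose signal exceeds the blank mean by three blank standard deviations. A sketch with made-up calibration points (not the paper's data or its exact LOD procedure):

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: signal vs log10(concentration / (mol/L))
log_c = [-17.0, -15.0, -13.0, -11.0, -9.0]
signal = [2.1, 6.0, 10.1, 13.9, 18.0]          # arbitrary sensor units

slope, intercept = fit_line(log_c, signal)

blank_mean, blank_sd = 0.9, 0.15               # hypothetical blank replicates
lod_signal = blank_mean + 3 * blank_sd         # 3-sigma criterion
lod = 10 ** ((lod_signal - intercept) / slope)  # mol/L
```

With these invented numbers the estimated limit lands in the attomolar range, illustrating why a log-linear calibration over eight decades is what supports claims like a 5.7 aM detection limit.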
Abstract:
Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects, including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers and present the prospect of digital systems operating at the information-theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions, room-temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as charge macroscopic quantum tunnelling may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from physical system geometry; the mathematical model that should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems. (DXN004909)
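The claim that thermal-noise immunity scales with junction capacitance follows from the single-electron charging energy Ec = e²/2C, which must comfortably exceed kT for the Coulomb blockade to survive. A quick numeric check for hypothetical junction sizes:

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_BOLTZ = 1.380649e-23       # Boltzmann constant, J/K

def charging_temperature(capacitance_farads):
    """Temperature at which kT equals the charging energy e^2 / (2C)."""
    e_c = E_CHARGE ** 2 / (2 * capacitance_farads)
    return e_c / K_BOLTZ

t_1aF = charging_temperature(1e-18)   # attofarad junction: Ec/k ~ 930 K
t_1fF = charging_temperature(1e-15)   # femtofarad junction: Ec/k ~ 0.93 K
```

Since reliable operation requires Ec to exceed kT by a large factor, femtofarad junctions demand cryogenic temperatures while attofarad-scale junctions leave margin for room-temperature operation, consistent with the scaling argument in the abstract.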
Abstract:
In this contribution, a system identification procedure for a two-input Wiener model suitable for analyzing the disturbance behavior of integrated nonlinear circuits is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. To characterize the linear blocks, a correlation analysis using a white noise input is adopted in combination with a model reduction scheme. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input; its solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal that conductively couples into the port under consideration. © 2011 Author(s).
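The correlation step for the linear blocks can be illustrated in a single-input setting: for a Wiener system driven by white Gaussian noise, the input-output cross-correlation is proportional to the impulse response of the linear block (Bussgang's theorem), even though the static nonlinearity is unknown. A toy sketch with a hypothetical FIR block and cubic nonlinearity (not the paper's circuit model):

```python
import random

random.seed(3)
h_true = [1.0, 0.5, -0.3, 0.1]   # hypothetical linear dynamic block (FIR)

def g(x):
    """Hypothetical static nonlinear block."""
    return x + 0.1 * x ** 3

N = 50000
u = [random.gauss(0.0, 1.0) for _ in range(N)]   # white Gaussian excitation

def filt(u, h, t):
    return sum(h[k] * u[t - k] for k in range(len(h)))

# Wiener model output: linear filtering, then the static nonlinearity.
offset = len(h_true)
y = [g(filt(u, h_true, t)) for t in range(offset, N)]

# Cross-correlation estimate of h, correct up to a Bussgang scale factor.
h_est = [sum(y[i] * u[i + offset - k] for i in range(len(y))) / len(y)
         for k in range(len(h_true))]

scale = h_est[0] / h_true[0]
h_norm = [c / scale for c in h_est]   # tracks h_true tap by tap
```

The unknown scale is absorbed later when the nonlinear block's coefficients are solved for, which mirrors the two-stage structure of the identification procedure described above.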
Abstract:
We investigate the directional distribution of heavy neutral atoms in the heliosphere using heavy neutral maps generated with the IBEX-Lo instrument over three years, from 2009 to 2011. The interstellar neutral (ISN) O&Ne gas flow was found in the first-year heavy neutral map at 601 eV, and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps in order to better understand the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to separate the statistically significant heavy neutral signal from the background, and they enable the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O&Ne gas flow. We call this emission the extended tail. It may be an imprint of secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
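The signal-to-noise filter among the three methods reduces, per pixel, to comparing the observed count with the expected background under Poisson statistics and keeping only significant excesses. A minimal sketch (hypothetical counts and background level, not IBEX data):

```python
import math

def snr(counts, background):
    """Excess over background in units of the expected Poisson fluctuation."""
    return (counts - background) / math.sqrt(background)

# Hypothetical 1-D strip of sky-map pixels with a flat expected background.
pixels = [8, 11, 25, 30, 10, 7, 12, 9]
BACKGROUND = 9.0
THRESHOLD = 3.0   # keep pixels more than 3 sigma above background

significant = [i for i, c in enumerate(pixels) if snr(c, BACKGROUND) > THRESHOLD]
```

The confidence-limit and cluster-analysis methods refine this pixelwise test by, respectively, setting statistical bounds per pixel and requiring that significant pixels form spatially coherent structures.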
Abstract:
Background: Vibroacoustic disease (VAD) is a systemic pathology characterized by the abnormal growth of extra-cellular matrices, namely collagen and elastin, in the absence of inflammatory processes; both are abundant in the basement membrane zone of the vocal folds. VAD can develop due to long-term exposure to infrasound and low-frequency noise (ILFN, <500 Hz). Mendes et al. (2006, 2008 and 2012) revealed that ILFN-exposed males and females presented an increased fundamental frequency (F0), decreased jitter %, and a reduced maximum phonation frequency range when compared with normative data. Temporal measures of maximum phonation time and S/Z ratio were generally reduced. Study Aims: Herein, the same voice acoustic parameters were studied in 48 males, 36 airline pilots and 12 cabin crewmembers (age range 25-60 years), and the effects and interaction of age and years of ILFN exposure on those parameters were investigated. ILFN-exposure time (i.e. years of professional activity) ranged from 3.5 to 36 years. Materials and Methods: Spoken and sung phonatory tasks were recorded with a DA-P1 Tascam DAT recorder and a C420III PP AKG head-worn microphone positioned 3 cm from the mouth. Acoustic analyses were performed using the KayPENTAX Computer Speech Lab and the Multi-Dimensional Voice Program. Results: Even though pilots and cabin crewmembers were exposed to occupational environments with distinct (ILFN-rich) acoustical frequency distributions and sound pressure levels, differences in the vocal acoustic parameters were not evident. Analyzing data from both professional groups (N = 48) revealed that F0 increased significantly with the number of years of professional activity. Conclusion: These results strongly suggest that the number of years of professional activity (i.e. total ILFN exposure time) had a significant effect on F0. Furthermore, they may reflect the histological changes specifically observed in the vocal folds of ILFN-exposed professionals.
Abstract:
Background: Financial abuse of elders is an under-acknowledged problem, and professionals' judgements contribute both to the prevalence of abuse and to the ability to prevent and intervene. In the absence of a definitive "gold standard" for the judgement, it is desirable to try to bring novice professionals' judgemental risk thresholds to the level of competent professionals as quickly and effectively as possible. This study aimed to test whether a training intervention was able to bring novices' risk thresholds for financial abuse in line with expert opinion. Methods: A signal detection analysis, within a randomised controlled trial of an educational intervention, was undertaken to examine the effect on the ability of novices to efficiently detect financial abuse. Novices (n = 154) and experts (n = 33) judged "certainty of risk" across 43 scenarios; whether a scenario constituted a case of financial abuse or not was a function of expert opinion. The novices were randomised to receive either an on-line educational intervention to improve financial abuse detection (n = 78) or no on-line educational intervention (control group, n = 76). Both groups examined 28 scenarios of abuse (11 "signal" scenarios of risk and 17 "noise" scenarios of no risk). After the intervention group had received the on-line training, both groups examined 15 further scenarios (5 "signal" and 10 "noise" scenarios). Results: Experts were more certain than the novices, pre-intervention (mean 70.61 vs. 58.04) and post-intervention (mean 70.84 vs. 63.04), and more consistent. The intervention group (mean 64.64) were more certain of abuse post-intervention than the control group (mean 61.41, p = 0.02). Signal detection analysis of sensitivity (d′) and bias (C) revealed that this was due to the intervention shifting the novices' tendency towards saying "at risk" (C = −0.34 post-intervention) and away from their pre-intervention level of bias (C = −0.12). Receiver operating characteristic curves revealed more efficient judgements in the intervention group. Conclusion: An educational intervention can improve judgements of financial abuse amongst novice professionals.
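The sensitivity and bias indices used in this analysis have closed forms under the equal-variance Gaussian model: d′ = z(H) − z(F) and C = −[z(H) + z(F)]/2, where z is the inverse normal CDF applied to the hit rate H and false-alarm rate F. A sketch using hypothetical rates (not the study's data):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse standard normal CDF

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance signal detection theory: sensitivity d' and criterion C.
    Negative C indicates a liberal bias toward responding 'at risk'."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = sdt_indices(0.80, 0.30)   # hypothetical hit and false-alarm rates
```

A training effect like the one reported, a shift of C toward more negative values with d′ roughly unchanged, corresponds to moving the response criterion rather than improving discrimination.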
Abstract:
In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms, specifically for the non-periodic STFR, which extend the method to tackle problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves with overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further describe some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
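A basic quantity behind AM-FM representations is the instantaneous frequency of a signal a(t)cos(θ(t)), which, away from the boundaries, can be read off as the phase increment of the analytic (Hilbert) signal. This is not the STFR algorithm itself, just a sketch of the representation it targets, computed for a slow linear chirp with a direct O(n²) DFT:

```python
import cmath
import math

N = 128
# Linear chirp: instantaneous frequency 0.05 + 0.0005*t cycles/sample
theta = [2 * math.pi * (0.05 * t + 0.00025 * t * t) for t in range(N)]
s = [math.cos(th) for th in theta]

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# Analytic signal: zero negative frequencies, double the positive ones.
X = dft(s)
H = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
z = idft([Xk * Hk for Xk, Hk in zip(X, H)])

# Instantaneous frequency (cycles/sample) from the phase increment.
inst_freq = [cmath.phase(z[t + 1] * z[t].conjugate()) / (2 * math.pi)
             for t in range(N - 1)]

mid = inst_freq[N // 2]   # close to 0.05 + 0.0005 * 64.5 mid-signal
```

For intrawave signals with strong frequency modulation, exactly the regime the new STFR algorithm targets, this simple Hilbert-based estimate becomes unreliable, which motivates the sparse-optimization formulation.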