976 results for Fast methods
Abstract:
The general strategy to perform anti-doping analyses of urine samples starts with the screening for a wide range of compounds. This step should be fast, generic and able to detect any sample that may contain a prohibited substance while avoiding false negatives and reducing false positive results. The experiments presented in this work were based on ultra-high-pressure liquid chromatography coupled to hybrid quadrupole time-of-flight mass spectrometry. Thanks to the high sensitivity of the method, urine samples could be diluted 2-fold prior to injection. One hundred and three forbidden substances from various classes (such as stimulants, diuretics, narcotics, anti-estrogens) were analysed on a C18 reversed-phase column in two gradients of 9 min (including two 3 min equilibration periods) for positive and negative electrospray ionisation and detected in the MS full scan mode. The automatic identification of analytes was based on retention time and mass accuracy, with an automated tool for peak picking. The method was validated according to the International Standard for Laboratories described in the World Anti-Doping Code and was selective enough to comply with the World Anti-Doping Agency recommendations. In addition, the matrix effect on MS response was measured on all investigated analytes spiked in urine samples. The limits of detection ranged from 1 to 500 ng/mL, allowing the identification of all tested compounds in urine. When a sample was reported positive during the screening, a fast additional pre-confirmatory step was performed to reduce the number of confirmatory analyses.
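As an illustration of the screening principle described above (flagging a target when an observed peak matches its expected retention time and accurate mass), the sketch below shows a minimal matching routine. It is not the authors' software; the `Target`/`Peak` structures, the `match_peaks` helper and the tolerance values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    rt: float   # expected retention time (min), assumed from a reference run
    mz: float   # theoretical m/z of the monitored ion

@dataclass
class Peak:
    rt: float   # observed retention time (min)
    mz: float   # observed m/z from the full-scan data

def match_peaks(peaks, targets, rt_tol=0.2, ppm_tol=10.0):
    """Flag targets whose expected RT and m/z match an observed peak.

    rt_tol (min) and ppm_tol (ppm) are illustrative tolerances, not the
    validated criteria of the published method.
    """
    hits = []
    for t in targets:
        for p in peaks:
            mass_error_ppm = abs(p.mz - t.mz) / t.mz * 1e6
            if abs(p.rt - t.rt) <= rt_tol and mass_error_ppm <= ppm_tol:
                hits.append((t.name, p.rt, p.mz, round(mass_error_ppm, 1)))
                break
    return hits

# Toy example with made-up values: one of two targets is detected.
targets = [Target("substance_A", 4.80, 329.0389), Target("substance_B", 2.10, 230.0552)]
peaks = [Peak(4.79, 329.0392), Peak(6.30, 412.1001)]
print(match_peaks(peaks, targets))
```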
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows the user to run all the methods of choice simultaneously without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open-source package distributed under the GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
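For orientation, one plausible way to drive the consensus computation from a script is to call a locally installed T-Coffee binary, as in the minimal sketch below. The `-mode mcoffee` invocation reflects our reading of the T-Coffee documentation and should be verified against the current manual; the file name is a placeholder.

```python
import subprocess
from pathlib import Path

def run_mcoffee(fasta_in: str) -> None:
    """Run T-Coffee in its M-Coffee consensus mode on a FASTA file.

    Assumes the `t_coffee` executable is on PATH and that `-mode mcoffee`
    selects the multi-method consensus mode (check the local documentation).
    Output files are written next to the input by T-Coffee itself.
    """
    if not Path(fasta_in).exists():
        raise FileNotFoundError(fasta_in)
    subprocess.run(["t_coffee", fasta_in, "-mode", "mcoffee"], check=True)

# Usage (placeholder file name):
# run_mcoffee("sequences.fasta")
```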
Abstract:
Comparative analyses of survival senescence using life tables have identified generalizations, including the observation that mammals senesce faster than similar-sized birds. These generalizations have been challenged because of limitations of life-table approaches and the growing appreciation that senescence is more than an increasing probability of death. Without using life tables, we examine senescence rates in annual individual fitness using 20 individual-based data sets of terrestrial vertebrates with contrasting life histories and body sizes. We find that senescence is widespread in the wild and equally likely to occur in survival and reproduction. Additionally, mammals senesce faster than birds because they have a faster life history for a given body size. By disentangling the effects of two major fitness components, our methods allow an assessment of the robustness of the prevalent life-table approach. Focusing on one aspect of life history - survival or recruitment - can provide reliable information on overall senescence.
Abstract:
BACKGROUND AND OBJECTIVE: Key factors of Fast Track (FT) programs are fluid restriction and epidural analgesia (EDA). We aimed to challenge the preconception that the combination of fluid restriction and EDA might induce hypotension and renal dysfunction. METHODS: A recent randomized trial (NCT00556790) showed reduced complications after colectomy in FT patients compared with standard care (SC). Patients with an effective EDA were compared with regard to hemodynamics and renal function. RESULTS: 61/76 FT patients and 59/75 patients in the SC group had an effective EDA. Both groups were comparable regarding demographics and surgery-related characteristics. FT patients received significantly less i.v. fluid intraoperatively (1900 mL [range 1100-4100] versus 2900 mL [1600-5900], P < 0.0001) and postoperatively (700 mL [400-1500] versus 2300 mL [1800-3800], P < 0.0001). Intraoperatively, 30 FT compared with 19 SC patients needed colloids or vasopressors, but this difference was not statistically significant (P = 0.066). Postoperative requirements were low in both groups (3 versus 5 patients; P = 0.487). Pre- and postoperative values for creatinine, hematocrit, sodium, and potassium were similar, and no patient in either group developed renal dysfunction. Only one of 82 patients having an EDA without a bladder catheter had urinary retention. Overall, FT patients had fewer postoperative complications (6 versus 20 patients; P = 0.002) and a shorter median hospital stay (5 d [2-30] versus 9 d [6-30]; P < 0.0001) compared with the SC group. CONCLUSIONS: Fluid restriction and EDA in FT programs are not associated with clinically relevant hemodynamic instability or renal dysfunction.
Abstract:
Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters themselves are the result of an accumulation of consequences of inappropriate actions and decisions, as well as of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Since each of these components may change, risk is dynamic and should be reviewed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, notably by improvements in access to information. They are not exhaustive and do not give information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 TB covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A global risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two different multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. It appears that vulnerability factors change depending on the type of hazard and that, unlike exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on hazards. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat and provides an assessment of the remaining ice volume as well as scenarios of its possible evolution.

These results were presented to different audiences, including to 160 governments. The results and data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to different scales and issues, offering good prospects for adaptation to other research areas.

The characterization of risk at the global level and the identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain limitations. However, it is clear that the level of development, and more specifically unsustainable development, shapes a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
Abstract:
The present work describes a fast gas chromatography/negative-ion chemical ionization tandem mass spectrometric assay (Fast GC/NICI-MS/MS) for the analysis of tetrahydrocannabinol (THC), 11-hydroxy-tetrahydrocannabinol (THC-OH) and 11-nor-9-carboxy-tetrahydrocannabinol (THC-COOH) in whole blood. The cannabinoids were extracted from 500 µL of whole blood by a simple liquid-liquid extraction (LLE) and then derivatized using trifluoroacetic anhydride (TFAA) and hexafluoro-2-propanol (HFIP) as fluorinated agents. Mass spectrometric detection of the analytes was performed in the selected reaction monitoring mode on a triple quadrupole instrument after negative-ion chemical ionization. The assay was found to be linear in the concentration range of 0.5-20 ng/mL for THC and THC-OH, and of 2.5-100 ng/mL for THC-COOH. Repeatability and intermediate precision were below 12% at all concentrations tested. Under standard chromatographic conditions, the run cycle time would have been 15 min. By using fast separation conditions, the analysis time was reduced to 5 min without compromising chromatographic resolution. Finally, a simple approach for estimating the measurement uncertainty is presented.
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is, but simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. In particular, the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable-hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a broad range of imaging applications, such as computer vision and graphics, scientific visualization, and solving problems in computational physics. Level set methods, and methods derived from or inspired by them, will remain at the front line of image processing in the future.
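A minimal sketch of the core idea, representing a curve implicitly as the zero level set of a higher-dimensional function and moving it by updating that function, is shown below. The grid resolution, speed function and time step are arbitrary illustrative choices made here, not taken from the thesis, and the naive explicit update omits reinitialization and upwinding.

```python
import numpy as np

# Grid and initial interface: a circle of radius 0.5 represented implicitly
# as the zero level set of a signed distance function phi.
n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5   # phi < 0 inside, phi > 0 outside

# Evolve the interface outward with constant speed F according to
# d(phi)/dt + F * |grad(phi)| = 0, using a simple explicit scheme.
F = 1.0
dx = x[1] - x[0]
dt = 0.4 * dx
for _ in range(50):
    gy, gx = np.gradient(phi, dx)          # gradients along rows (y) and columns (x)
    phi -= dt * F * np.sqrt(gx**2 + gy**2)

# The boundary is wherever phi changes sign; estimate its current radius.
area = (phi < 0).sum() * dx * dx
print(f"estimated radius after evolution: {np.sqrt(area / np.pi):.3f}")  # about 0.5 + 50*dt
```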
Abstract:
Water is vital to humans, and each of us needs at least 1.5 L of safe water a day to drink. Beginning as long ago as 1958, the World Health Organization (WHO) has published guidelines to help ensure water is safe to drink. Focused from the start on monitoring radionuclides in water, and continually cooperating with WHO, the International Organization for Standardization (ISO) has been publishing standards on radioactivity test methods since 1978. As reliable, comparable and "fit for purpose" results are an essential requirement for any public health decision based on radioactivity measurements, international standards of tested and validated radionuclide test methods are an important tool for the production of such measurements. This paper presents the ISO standards already published that could be used as normative references by testing laboratories in charge of radioactivity monitoring of drinking water, as well as those currently being drafted, and the prospects for standardized fast test methods in response to a nuclear accident.
Abstract:
The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze recent antidepressants, including fluoxetine, norfluoxetine, reboxetine, and paroxetine, from micro whole blood samples (i.e., 10 µL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 °C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL⁻¹ for fluoxetine and norfluoxetine and 20 to 500 ng mL⁻¹ for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL⁻¹ for all the analytes. The stability of DBS was also evaluated at -20 °C, 4 °C, 25 °C, and 40 °C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines a single extraction-derivatization step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.
Abstract:
The purpose of this research is to determine the practical profit that can be achieved by using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events. This capability is tested on the example of price prediction during intraday trading on the stock market. The experiments predict average 1-, 2-, 5- and 10-minute prices based on one day of data, using two different types of forecasting systems. These systems are based on recurrent neural networks and backpropagation neural networks. The precision of the predictions is assessed by the absolute error and by the error in market direction. The economic effectiveness is estimated with a dedicated trading system. In conclusion, the best neural network structures are tested with data from a 31-day interval. The best average percentages of profit per transaction (buying + selling) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes, respectively. The investigation may be of interest to investors who have access to a fast information channel with minute-by-minute data refreshment.
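A toy version of this forecasting setup, predicting the next minute's average price from a short window of past minute prices with a small backpropagation network, might look like the sketch below. The synthetic price series, window length and network size are placeholders; the thesis' actual data and architectures are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for one trading day of 1-minute average prices (random walk).
prices = 100 + np.cumsum(rng.normal(0, 0.05, size=390))

# Build (window of past prices) -> (next price) training pairs.
window = 10
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Absolute error and "direction error": how often the predicted move has the wrong sign.
abs_err = np.mean(np.abs(pred - y[split:]))
true_dir = np.sign(y[split:] - X[split:, -1])
pred_dir = np.sign(pred - X[split:, -1])
print(f"mean absolute error: {abs_err:.4f}, direction error rate: {np.mean(true_dir != pred_dir):.2%}")
```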
Abstract:
BACKGROUND: Oxidative stress and the specific impairment of perisomatic gamma-aminobutyric acid circuits are hallmarks of the schizophrenic brain and its animal models. Proper maturation of these fast-spiking inhibitory interneurons normally defines critical periods of experience-dependent cortical plasticity. METHODS: Here, we linked these processes by genetically inducing a redox dysregulation restricted to such parvalbumin-positive cells and examined the impact on critical period plasticity using the visual system as a model (3-6 mice/group). RESULTS: Oxidative stress was accompanied by a significant loss of perineuronal nets, which normally enwrap mature fast-spiking cells to limit adult plasticity. Accordingly, the neocortex remained plastic even beyond the peak of its natural critical period. These effects were not seen when redox dysregulation was targeted in excitatory principal cells. CONCLUSIONS: A cell-specific regulation of redox state thus balances plasticity and stability of cortical networks. Mistimed developmental trajectories of brain plasticity may underlie, in part, the pathophysiology of mental illness. Such prolonged developmental plasticity may, in turn, offer a therapeutic opportunity for cognitive interventions targeting brain plasticity in schizophrenia.
Abstract:
Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and by using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies derived from traditional formulas. It is possible to obtain a priori molecular orbitals and electron excitation properties reliably after configuration interaction of singly excited determinants, retaining interpretative possibilities even though the Hamiltonian is simplified. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms treated at the quantum mechanical level, show reliability, yielding a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm for the "dark state". In this very important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
Abstract:
We describe methods for the fast production of highly coherent-spin-squeezed many-body states in bosonic Josephson junctions. We start from the known mapping of the two-site Bose-Hubbard (BH) Hamiltonian to that of a single effective particle evolving according to a Schrödinger-like equation in Fock space. Since, for repulsive interactions, the effective potential in Fock space is nearly parabolic, we extend recently derived protocols for shortcuts to adiabatic evolution in harmonic potentials to the many-body BH Hamiltonian. A comparison with current experiments shows that our methods allow for an important reduction in the preparation times of highly squeezed spin states.
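For reference, the standard two-site Bose-Hubbard Hamiltonian alluded to above can be written as follows (conventions for the sign of the tunnelling term and the interaction normalization may differ slightly from those used in the paper):

```latex
H = -J\left(\hat a_1^{\dagger}\hat a_2 + \hat a_2^{\dagger}\hat a_1\right)
    + \frac{U}{2}\left[\hat n_1\left(\hat n_1 - 1\right) + \hat n_2\left(\hat n_2 - 1\right)\right]
```

Expanding an N-atom state over Fock states, $|\psi\rangle = \sum_{n} c_n\,|n, N-n\rangle$, turns the eigenvalue problem into a tridiagonal, Schrödinger-like equation for the amplitudes $c_n$; for repulsive interactions ($U > 0$) the resulting effective potential in $n$ is nearly parabolic, which is the property the shortcut-to-adiabaticity protocols exploit.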
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, the incorporation of prior knowledge is often done by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares types of learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
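The kind of fast cross-validation mentioned for regularized least-squares learners typically rests on the closed-form leave-one-out identity for linear smoothers, sketched below for kernel regularized least squares. This is the textbook shortcut, not necessarily the exact algorithm developed in the thesis; the RBF kernel, the regularization value and the synthetic data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-wise data sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def loo_residuals_rls(K, y, lam=1.0):
    """Leave-one-out residuals of kernel regularized least squares.

    Uses the identity y_i - f_{-i}(x_i) = (y_i - f(x_i)) / (1 - H_ii), where
    H = K (K + lam*I)^{-1} is the smoother ("hat") matrix, so all n
    leave-one-out fits follow from a single matrix inversion.
    """
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    fitted = H @ y
    return (y - fitted) / (1.0 - np.diag(H))

# Toy usage on synthetic 1-D regression data.
rng = np.random.default_rng(0)
Xd = rng.uniform(-3, 3, size=(60, 1))
yd = np.sin(Xd[:, 0]) + rng.normal(0, 0.1, size=60)
K = rbf_kernel(Xd, Xd)
print("LOO mean squared error:", np.mean(loo_residuals_rls(K, yd) ** 2))
```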
Abstract:
PURPOSE: We propose the use of a retrospectively gated cine fast spin echo (FSE) sequence for characterization of carotid artery dynamics. The aim of this study was to compare cine FSE measures of carotid dynamics with measures obtained on prospectively gated FSE images. METHODS: The common carotid arteries in 10 volunteers were imaged using two temporally resolved sequences: (i) cine FSE and (ii) prospectively gated FSE. Three raters manually traced the common carotid artery area for all cardiac phases on both sequences. Measured areas and systolic-diastolic area changes were calculated and compared. Inter- and intra-rater reliability were assessed for both sequences. RESULTS: No significant difference between cine FSE and prospectively gated FSE areas was observed (P = 0.36). Both sequences produced repeatable cross-sectional area measurements: inter-rater intraclass correlation coefficient (ICC) = 0.88 on cine FSE images and 0.87 on prospectively gated FSE images. The minimum detectable difference (MDD) in systolic-diastolic area was 4.9 mm² with cine FSE and 6.4 mm² with prospectively gated FSE. CONCLUSION: This cine FSE method produced repeatable dynamic carotid artery measurements with fewer artifacts and greater temporal efficiency compared with prospectively gated FSE. Magn Reson Med 74:1103-1109, 2015. © 2014 Wiley Periodicals, Inc.
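For context, a minimum detectable difference of the kind quoted above is conventionally derived from the measurement standard deviation and the reliability coefficient; the abstract does not state the exact formula used in the study, so the sketch below is only an illustration with made-up inputs.

```python
import math

def minimum_detectable_difference(sd: float, icc: float, z: float = 1.96) -> float:
    """MDD = z * sqrt(2) * SEM, with SEM = sd * sqrt(1 - ICC).

    sd  : standard deviation of the measured areas (mm^2), illustrative value
    icc : intraclass correlation coefficient of the measurement
    z   : coverage factor for the chosen confidence level (1.96 ~ 95%)
    """
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Illustrative inputs only (not the study's raw data).
print(f"{minimum_detectable_difference(sd=5.0, icc=0.88):.1f} mm^2")  # ~4.8 mm^2
```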