913 results for MS-based methods


Relevance:

40.00%

Publisher:

Abstract:

Identification and characterization of the problem. One of the most important problems associated with software construction is its correctness. In the search for guarantees of correct software behavior, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Owing to their nature, applying formal methods demands considerable experience and expertise, especially in mathematics and logic, which makes their application costly in practice. As a consequence, their use has been confined mainly to critical systems, i.e., systems whose malfunction can cause major damage, even though the benefits of these techniques are relevant to every kind of software. Transferring the benefits of formal methods to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. The availability of automated analysis tools is an element of great importance. Examples are several powerful analysis tools based on formal methods that target source code directly. For the vast majority of these tools, the gap between the notions developers are accustomed to and those required to apply these formal analysis tools remains too wide. Many tools use assertion languages beyond developers' usual knowledge and habits. Moreover, in many cases the output produced by the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the artifacts under analysis grow (scalability).
This limitation is widely known and is considered critical to the applicability of formal analysis methods in practice. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims at building formal analysis tools that contribute to the quality, in terms of functional correctness, of specifications, models, or code in the context of software development. More precisely, we seek, on the one hand, to identify specific settings in which certain automated analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can be taken to levels of scalability beyond those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools that can be used by developers familiar with the application context but not necessarily versed in the underlying methods or techniques. Materials and methods. The materials will be literature relevant to the area and computing equipment. Methods: those of discrete mathematics, logic, and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods. We expect the project to produce analysis tools whose usability is adequate for developers without specific training in the underlying formal methods. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability. A crucial factor for software quality is correctness.
Traditionally, formal approaches to software development concentrate on functional correctness, and tackle this problem by relying on well-defined notations founded on solid mathematical grounds. This makes formal methods better suited for analysis, thanks to their precise semantics, but they are usually more complex, and require familiarity and experience with the manipulation of mathematical definitions. Consequently, their acceptance by software engineers is rather limited, and formal methods applications have been confined to critical systems, even though the advantages that formal methods provide apply to any kind of software system. It is widely accepted that appropriate tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they are still far from being simple enough to be employed by software engineers without experience in formal methods. Another important obstacle to the adoption of tool support for formal methods is scalability: automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification or coding activities in software development processes in which to apply automated formal analysis techniques. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses compared with the general case.
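The kind of automated analysis the project builds on can be illustrated with a toy explicit-state bounded model check, which explores a small transition system up to a fixed depth and reports a counterexample trace when a safety property fails. SAT/SMT-based bounded model checkers perform the same search symbolically; the transition system and property below are invented for illustration.

```python
# Toy bounded model check: exhaustively explore a small transition system
# up to depth k and check a safety property. Real tools encode this search
# symbolically with a SAT or SMT solver instead of enumerating states.

def bmc(init, step, safe, k):
    """Return a trace violating `safe` within k steps, or None if safe."""
    frontier = [(s,) for s in init]
    for _ in range(k + 1):
        next_frontier = []
        for trace in frontier:
            state = trace[-1]
            if not safe(state):
                return trace                      # counterexample found
            next_frontier.extend(trace + (s2,) for s2 in step(state))
        frontier = next_frontier
    return None                                   # no violation up to depth k

# Hypothetical example: a counter that must stay below 4,
# but each step may add 1 or 2.
init = [0]
step = lambda n: [n + 1, n + 2]
trace = bmc(init, step, lambda n: n < 4, k=3)     # shortest violating trace
```

The returned trace plays the role of the solver output that, as the abstract notes, developers often need help interpreting.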


Magdeburg, University, Faculty of Computer Science, Dissertation, 2013


Captan and folpet are fungicides widely used in agriculture. They have similar chemical structures, except that folpet has an aromatic ring whereas captan does not. Their half-lives in blood are very short, given that they are readily broken down to tetrahydrophthalimide (THPI) and phthalimide (PI), respectively. Few authors have measured these biomarkers in plasma or urine, and analysis was conducted either by gas chromatography coupled to mass spectrometry or by liquid chromatography with UV detection. The objective of this study was thus to develop simple, sensitive and specific liquid chromatography-atmospheric pressure chemical ionization-tandem mass spectrometry (LC/APCI-MS/MS) methods to quantify both THPI and PI in human plasma and urine. Briefly, deuterated THPI was added as an internal standard and purification was performed by solid-phase extraction followed by LC/APCI-MS/MS analysis in negative ion mode for both compounds. Validation of the methods was conducted using spiked blank plasma and urine samples at concentrations ranging from 1 to 250 μg/L and 1 to 50 μg/L, respectively, along with samples from volunteers and workers exposed to captan or folpet. The methods showed good linearity (R² > 0.99), recovery (on average 90% for THPI and 75% for PI), intra- and inter-day precision (RSD < 15%), accuracy (< 20%), and stability. The limit of detection was 0.58 μg/L in urine and 1.47 μg/L in plasma for THPI, and 1.14 and 2.17 μg/L, respectively, for PI. The described methods proved to be accurate and suitable for determining the toxicokinetics of both metabolites in human plasma and urine.
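The validation figures reported above (linearity, limit of detection) follow standard calculations that can be sketched as follows. The calibration points are invented for illustration, and the LOD formula used here (an ICH-style 3.3·σ/slope estimate) is an assumption; the study may have used a different definition.

```python
import numpy as np

# Hypothetical calibration data for an LC-MS/MS assay: spiked concentration
# (ug/L) vs. analyte/internal-standard peak-area ratio. Values are invented
# for illustration only, not taken from the study.
conc = np.array([1, 5, 10, 50, 100, 250], dtype=float)
resp = np.array([0.021, 0.103, 0.198, 1.010, 1.990, 5.020])

# Ordinary least-squares calibration line and its coefficient of determination.
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
ss_res = np.sum((resp - pred) ** 2)
ss_tot = np.sum((resp - resp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                      # linearity check (R^2 > 0.99)

# One common LOD estimate (ICH-style): 3.3 * sd of residuals / slope.
sd_resid = np.sqrt(ss_res / (len(conc) - 2))
lod = 3.3 * sd_resid / slope                  # in ug/L
```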


The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies, minimizing the l2 norm or the l1 total-variation (TV) norm with prior knowledge of the edges of the object; one over-determined multiple-orientation method (COSMOS); and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r²MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
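As a rough illustration of closed-form susceptibility inversion, the sketch below implements a plain l2-regularized (Tikhonov) dipole inversion in k-space. It is not the MCF or COSMOS algorithm from the study; the field orientation, grid size, and regularization weight are assumptions made for the example.

```python
import numpy as np

# Minimal l2-regularized closed-form dipole inversion, loosely in the spirit
# of closed-form single-orientation QSM. Assumes B0 along the first axis.

def dipole_kernel(shape):
    """Unit dipole kernel in k-space (dimensionless frequencies)."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                             indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[k2 == 0] = 0.0                      # undefined DC term set to zero
    return d

def qsm_l2(field, lam=1e-3):
    """chi = argmin ||D*chi - field||^2 + lam*||chi||^2, solved in k-space."""
    D = dipole_kernel(field.shape)
    chi_k = D * np.fft.fftn(field) / (D**2 + lam)
    return np.real(np.fft.ifftn(chi_k))

# Forward-simulate a field from a random susceptibility map, then invert.
rng = np.random.default_rng(0)
chi_true = rng.standard_normal((16, 16, 16))
field = np.real(np.fft.ifftn(dipole_kernel(chi_true.shape) *
                             np.fft.fftn(chi_true)))
chi_hat = qsm_l2(field)
```

Streaking artifacts in real data come from the near-zero cone of the kernel, which is exactly where the regularization term dominates this closed-form solution.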


Introduction: The general strategy for anti-doping analysis starts with a screening step followed by a confirmatory step when a sample is suspected to be positive. The screening step should be fast, generic and able to highlight any sample that may contain a prohibited substance, avoiding false negative and reducing false positive results. The confirmatory step is a dedicated procedure comprising a selective sample preparation and detection mode. Aim: The purpose of the study is to develop rapid screening and selective confirmatory strategies to detect and identify 103 doping agents in urine. Methods: For the screening, urine samples were simply diluted by a factor of 2 with ultra-pure water and directly injected ("dilute and shoot") into the ultra-high-pressure liquid chromatography (UHPLC) system. The UHPLC separation was performed in two gradients (ESI positive and negative) from 5/95 to 95/5% MeCN/water containing 0.1% formic acid. The gradient analysis time is 9 min, including 3 min re-equilibration. Analyte detection was performed in full-scan mode on a quadrupole time-of-flight (QTOF) mass spectrometer by acquiring the exact mass of the protonated (ESI positive) or deprotonated (ESI negative) molecular ion. For the confirmatory analysis, urine samples were extracted on an SPE 96-well plate with mixed-mode cation exchange (MCX) sorbents for basic and neutral compounds or anion exchange (MAX) sorbents for acidic molecules. The analytes were eluted in 3 min (including 1.5 min re-equilibration) with a gradient from 5/95 to 95/5% MeCN/water containing 0.1% formic acid [Ann Toxicol Anal. 2009; 21(S1), Abstracts S1-25]. Analyte confirmation was performed in MS and MS/MS mode on a QTOF mass spectrometer. Results: In the screening and confirmatory analysis, basic and neutral analytes were analysed in positive ESI mode, whereas acidic compounds were analysed in negative mode. Analyte identification was based on retention time (tR) and exact mass measurement.
"Dilute and shoot" was used as a generic sample treatment in the screening procedure, but matrix effects (e.g., ion suppression) cannot be avoided. However, the sensitivity was sufficient for all analytes to reach the minimum required performance limit (MRPL) set by the World Anti-Doping Agency (WADA). To avoid time-consuming confirmatory analysis of false positive samples, a pre-confirmatory step was added, consisting of re-injecting the sample, acquiring MS/MS spectra and comparing them to reference material. For the confirmatory analysis, urine samples were extracted by SPE, allowing pre-concentration of the analyte. A fast chromatographic separation was developed, since only a single analyte has to be confirmed. A dedicated QTOF-MS and MS/MS acquisition was performed, scanning two functions in parallel within the same run. Low collision energy was applied in the first channel to obtain the protonated molecular ion (QTOF-MS), while a dedicated collision energy was set in the second channel to obtain fragment ions (QTOF-MS/MS). Enough identification points were obtained to compare the spectra with reference material and a negative urine sample. Finally, the entire process was validated and matrix effects were quantified. Conclusion: Thanks to the coupling of UHPLC with the QTOF mass spectrometer, high tR repeatability, sensitivity, mass accuracy and mass resolution over a broad mass range were obtained. The method was sensitive, robust and reliable enough to detect and identify doping agents in urine. Keywords: screening, confirmatory analysis, UHPLC, QTOF, doping agents
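The screening identification criterion described above (retention time plus exact mass) can be sketched as a simple matching rule. The reference masses, retention times, and tolerance values below are hypothetical placeholders, not values from the method.

```python
# Sketch of full-scan screening identification: match each detected feature
# against a reference list by exact mass (ppm tolerance) and retention time
# window. All reference values are invented for illustration.

REFERENCE = {
    # name: (hypothetical [M+H]+ m/z, hypothetical retention time in min)
    "analyte_A": (304.1543, 3.42),
    "analyte_B": (195.0877, 1.95),
}

def match(mz, tr, ppm_tol=5.0, tr_tol=0.2):
    """Return names of reference compounds matching a detected feature."""
    hits = []
    for name, (ref_mz, ref_tr) in REFERENCE.items():
        ppm_err = abs(mz - ref_mz) / ref_mz * 1e6   # mass error in ppm
        if ppm_err <= ppm_tol and abs(tr - ref_tr) <= tr_tol:
            hits.append(name)
    return hits
```

A feature passing this filter would then go to the pre-confirmatory MS/MS comparison described in the abstract.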


Multilocus enzyme electrophoresis (MLEE) and whole-cell protein electrophoresis (SDS-PAGE) were carried out in order to compare different methods for characterizing, by numerical taxonomy, five Candida species commonly isolated from the human oral cavity. The data revealed that sodium dodecyl sulfate-polyacrylamide gel electrophoresis is more efficient at grouping strains into their respective species, whereas MLEE has a much more limited ability to organize all strains into their respective species-specific clusters. The MLEE technique should therefore be reserved for surveys in which just one Candida species is involved.
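The numerical-taxonomy grouping underlying such comparisons can be sketched as follows: strains are coded as presence/absence band vectors, pairwise similarity is computed with the Jaccard coefficient, and strains are clustered by a similarity threshold. The band patterns, strain names, and threshold are invented for illustration.

```python
# Numerical-taxonomy-style grouping of electrophoretic profiles.

def jaccard(a, b):
    """Jaccard similarity of two presence/absence band vectors."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def group(profiles, threshold=0.7):
    """Single-linkage grouping: a strain joins the first cluster containing
    a member at least `threshold` similar, else starts a new cluster."""
    clusters = []
    for name, bands in profiles.items():
        for cluster in clusters:
            if any(jaccard(bands, profiles[m]) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical band patterns (1 = band present at that position).
profiles = {
    "albicans_1": [1, 1, 1, 0, 0, 1],
    "albicans_2": [1, 1, 1, 0, 1, 1],
    "glabrata_1": [0, 0, 1, 1, 1, 0],
}
clusters = group(profiles)
```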


Imaging mass spectrometry (IMS) is an emergent and innovative approach for measuring the composition, abundance and regioselectivity of molecules within an investigated area of fixed dimension. Although it provides unprecedented molecular information compared with conventional MS techniques, enhancement of the protein signature obtained by IMS is still necessary and challenging. This paper demonstrates the combination of conventional organic washes with an optimized aqueous buffer for tissue section preparation before matrix-assisted laser desorption/ionization (MALDI) IMS of proteins. This buffer wash, based on a solution of 500 mM ammonium formate in water-acetonitrile (9:1, v/v; 0.1% trifluoroacetic acid, 0.1% Triton), has been shown to significantly enhance the protein signature in profiling and IMS (~fourfold) when used after organic washes (70% EtOH followed by 90% EtOH), improving the quality and number of ion images obtained from mouse kidney and 14-day mouse fetus whole-body tissue sections, while maintaining reproducibility similar to conventional tissue rinsing. Although some protein losses were observed, data mining demonstrated that these were primarily low-abundance signals and that the number of new peaks found was greater with the described procedure. The proposed buffer has thus proved highly efficient for tissue section preparation, providing novel and complementary information for direct on-tissue MALDI analysis compared with conventional organic rinsing alone.


A field survey on schistosomiasis was carried out in 1998 in the municipality of Pedro de Toledo, a low-endemicity area in the state of São Paulo, Brazil. According to the parasitologic Kato-Katz method, the prevalence rate was 1.6%, with an infection intensity of 40.9 eggs per gram of stool. By the immunofluorescence test (IFT) for detection of IgG and IgM antibodies in serum (IgG-IFT and IgM-IFT, respectively), prevalence indices of 33.2% and 33.5% were observed. To assess the impact of the schistosomiasis control program in the area, the parasitologic and serologic data obtained in 1998, analyzed according to age, sex, and residence zone, were compared with data from an epidemiologic study carried out in 1980, when the prevalence indices were 22.8% and 55.5% by Kato-Katz and IgG-IFT, respectively. A significant fall in prevalence was observed, indicating that the control measures were effective. Nonetheless, residual transmission was observed, demonstrating the need for a joint effort including new approaches to better understand the real situation and improve control of the disease in low-endemicity areas.
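A standard way to test whether such a fall in prevalence (22.8% in 1980 vs. 1.6% in 1998 by Kato-Katz) is statistically significant is a two-proportion z-test. The sketch below uses hypothetical sample sizes, since the survey's actual denominators are not given in the abstract.

```python
import math

# Two-proportion z-test for a change in prevalence between two surveys.
# Sample sizes (n=1000 per survey) are invented for illustration.

def two_prop_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(228, 1000, 16, 1000)               # 22.8% vs 1.6%
significant = abs(z) > 1.96                       # two-sided, alpha = 0.05
```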


INTRODUCTION. The role of turbine-based NIV ventilators (TBV) versus ICU ventilators with the NIV mode activated (ICUV) to deliver NIV in cases of severe respiratory failure remains debated. OBJECTIVES. To compare the response time and pressurization capacity of TBV and ICUV during simulated NIV with normal and increased respiratory demand, under normal and obstructive respiratory mechanics. METHODS. In a two-chamber lung model, a ventilator simulated normal (P0.1 = 2 mbar, respiratory rate RR = 15/min) or increased (P0.1 = 6 mbar, RR = 25/min) respiratory demand. NIV was simulated by connecting the lung model (compliance 100 ml/mbar; resistance 5 or 20 mbar/l/s) to a dummy head equipped with a naso-buccal mask. Connections allowed intentional leaks (29 ± 5% of the insufflated volume). The ventilators tested (Servo-i, Maquet; V60 and Vision, Philips Respironics) were connected to the mask via a standard circuit. Applied pressure support levels (PSL) were 7 mbar for normal and 14 mbar for increased demand. Airway pressure and flow were measured in the ventilator circuit and in the simulated airway. Ventilator performance was assessed by determining the trigger delay (Td, ms), the pressure-time product at 300 ms (PTP300, mbar·s) and the inspiratory tidal volume (VT, ml), compared by three-way ANOVA for the effects of inspiratory effort, resistance and ventilator. Differences between ventilators for each condition were tested by one-way ANOVA and contrasts (JMP 8.0.1, p < 0.05). RESULTS. Inspiratory demand and resistance had a significant effect throughout all comparisons. Ventilator data are shown in Tables 1 (normal demand) and 2 (increased demand): (a) different from Servo-i, (b) different from V60. CONCLUSION. In this NIV bench study, with leaks, the trigger delay was shorter for TBV with normal respiratory demand. By contrast, it was shorter for ICUV when respiratory demand was high.
ICUV afforded better pressurization (PTP300) with increased demand and PSL, particularly with increased resistance. TBV provided a higher inspiratory VT (i.e., downstream from the leaks) with normal demand, and a significantly (although minimally) lower VT with increased demand and PSL.
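The two bench metrics named above, trigger delay and PTP300, can be computed from an airway-pressure trace as sketched below. The synthetic pressure signal, the 0.5 mbar trigger-detection threshold, and the timing values are illustrative assumptions, not the study's measurement procedure.

```python
import numpy as np

# Compute trigger delay and PTP300 from a simulated airway-pressure trace:
# trigger delay = time from effort onset until pressure rises above baseline;
# PTP300 = area of the pressure curve above baseline over the first 300 ms.

dt = 0.001                                    # 1 kHz sampling (s)
t = np.arange(0, 0.5, dt)
baseline = 5.0                                # PEEP, mbar (hypothetical)

# Synthetic response: pressurization starts 80 ms after effort onset (t=0)
# and rises exponentially toward baseline + 14 mbar of pressure support.
paw = np.full_like(t, baseline)
late = t >= 0.08
paw[late] = baseline + 14 * (1 - np.exp(-(t[late] - 0.08) / 0.05))

rise = np.argmax(paw > baseline + 0.5)        # first sample above threshold
trigger_delay_ms = t[rise] * 1e3              # effort onset is at t = 0

mask = t <= 0.3                               # first 300 ms
ptp300 = float(np.sum(np.clip(paw[mask] - baseline, 0, None)) * dt)  # mbar*s
```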


The generation of an antigen-specific T-lymphocyte response is a complex multi-step process. Upon T-cell receptor-mediated recognition of antigen presented by activated dendritic cells, naive T-lymphocytes enter a program of proliferation and differentiation, during the course of which they acquire effector functions and may ultimately become memory T-cells. A major goal of modern immunology is to precisely identify and characterize effector and memory T-cell subpopulations that may be most efficient in disease protection. Sensitive methods are required to address these questions in exceedingly low numbers of antigen-specific lymphocytes recovered from clinical samples, and not manipulated in vitro. We have developed new techniques to dissect immune responses against viral or tumor antigens. These allow the isolation of various subsets of antigen-specific T-cells (with major histocompatibility complex [MHC]-peptide multimers and five-color FACS sorting) and the monitoring of gene expression in individual cells (by five-cell reverse transcription-polymerase chain reaction [RT-PCR]). We can also follow their proliferative life history by flow-fluorescence in situ hybridization (FISH) analysis of average telomere length. Recently, using these tools, we have identified subpopulations of CD8+ T-lymphocytes with distinct proliferative history and partial effector-like properties. Our data suggest that these subsets descend from recently activated T-cells and are committed to become differentiated effector T-lymphocytes.


The experimental program was carried out within the framework of the TRUEFOOD project, funded by the EU for the years 2007-2010. The main objective of this activity was to evaluate the contents of ascorbic acid (vitamin C), total polyphenols, phenolic acids, and flavonoids in tomato and lettuce samples produced under different field conditions (organic and conventional production). To obtain the results, analytical methods based on high-performance liquid chromatography (HPLC) and ultra-high-performance liquid chromatography (UHPLC) coupled to diode array detection (DAD) and mass spectrometry (MSn) were used. For ascorbic acid, a fast method was developed that allowed the determination of this compound in different plant matrices with minimal sample pretreatment, using a HILIC stationary phase (Fluorinated Stationary Phase). The methods developed for the analysis of phenolic compounds allowed the analyses to be performed quickly, in order to process the maximum number of samples and obtain representative results. A complete characterization of the tomato and lettuce extracts was performed, extending the knowledge described in the literature on their phenolic composition. In the case of lettuce, four phenolic compounds were identified that had never before been described and quantified in this vegetable. The precise determination of the vitamin C and phenolic compound contents in the analyzed samples made it possible to compare the effects of different cultivation techniques on the nutritional characteristics of the vegetables under study. The analytical methods developed and the results derived from the project will soon be published in prestigious scientific journals.


Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny, to account for dating uncertainty in the biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information; and (2) the lower frequency of terminal dispersal events inferred by DEC.
Uncertainty in divergence time estimation influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that agree with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC, because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to the dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
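The core of a parametric DEC-style model can be sketched as a continuous-time Markov chain over geographic ranges: with two areas, the ranges {A, B, AB} change by dispersal (range expansion) and local extinction (range contraction), and the probability of moving between ranges along a branch of length t is given by the matrix exponential of the rate matrix. The rates and branch length below are illustrative, and this toy model omits the empty range allowed in full DEC; a stratified model would rescale the dispersal rate per time slice.

```python
import numpy as np
from scipy.linalg import expm

# Toy DEC-style anagenetic model over two areas (states: A, B, AB).
# d = dispersal (range expansion) rate, e = local extinction rate.
# Rates are illustrative, not estimates from the Sapindaceae dataset.
states = ["A", "B", "AB"]
d, e = 0.1, 0.05
Q = np.array([
    [-d,   0.0,  d   ],   # A  -> AB by dispersal into B
    [0.0,  -d,   d   ],   # B  -> AB by dispersal into A
    [e,    e,   -2*e ],   # AB -> A or B by local extinction
])

# Range-transition probabilities along a branch of length t = 10 (e.g., Myr).
P = expm(Q * 10.0)
p_A_to_AB = P[0, 2]       # probability a lineage starting in A ends in AB
```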


BACKGROUND: Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. However, large amounts of the victim's epithelial cells contaminate the sperm present on the swabs and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim's DNA, and then to physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim's fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim's fraction, and then digest the residual victim's DNA with a nuclease. METHODS: The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons were also performed on timed post-coital vaginal swabs and on evidence collected from sexual assault cases. RESULTS: For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern, with Erase providing superior profiles. CONCLUSIONS: In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods.


This review covers two important techniques, high-resolution nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS), used to characterize food products and to detect possible adulteration of wine, fruit juices, and olive oil, all important products of the Mediterranean Basin. Emphasis is placed on the complementary use of SNIF-NMR (site-specific natural isotopic fractionation nuclear magnetic resonance) and IRMS (isotope-ratio mass spectrometry) in association with chemometric methods for detecting adulteration.
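A minimal chemometric screen of the kind applied to SNIF-NMR/IRMS data can be sketched as a PCA-based outlier score: authentic samples define a reference cloud, and a new sample is flagged as suspect when its Mahalanobis-style distance in the principal-component basis is large. The two-feature data here are simulated stand-ins (e.g., two isotope ratios), not real measurements.

```python
import numpy as np

# PCA/Mahalanobis outlier score for adulteration screening (illustrative).
rng = np.random.default_rng(1)
authentic = rng.normal([0.0, 0.0], 0.1, size=(50, 2))  # simulated reference set

mean = authentic.mean(axis=0)
cov = np.cov((authentic - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)        # principal axes of the cloud

def score(sample):
    """Mahalanobis-style distance of a sample in the PCA basis:
    large values mean the sample lies far from the authentic cluster."""
    z = eigvecs.T @ (np.asarray(sample) - mean)
    return float(np.sqrt(np.sum(z**2 / eigvals)))

suspect = [0.6, 0.6]      # far from the authentic cloud: adulterated-looking
ok = [0.05, -0.02]        # within the authentic cloud
```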