909 results for automated NOE assignment


Relevance: 30.00%

Abstract:

A complete analysis of the H-1 and C-13 NMR spectra of the trypanocidal sesquiterpene lactone eremantholide C and two of its analogues is described. These structurally similar sesquiterpene lactones were submitted to H-1 NMR, C-13 (H-1) NMR, gCOSY, gHSQC, gHMBC, J-resolved and DPFGSE-NOE NMR techniques. Detailed analysis of these results, correlated with computational calculations (molecular mechanics), led to the total and unequivocal assignment of all H-1 and C-13 NMR data. All H-1/H-1 coupling constants and signal multiplicities were determined, and previous ambiguities were eliminated. Copyright (C) 2008 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer programs to automatically identify the underlying cause of death. OBJECTIVE: This work compares the underlying causes of death processed, respectively, by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation was performed using the input data file for the ACME system, which included deaths that occurred in the State of S. Paulo from June to December 1993, totalling 129,104 death certificate records. The differences between the underlying causes selected by the ACME and SCB systems in the month of June, when judged to be SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: Processing of the underlying causes of death by the ACME and SCB systems yielded 3,278 differences, which were analysed and ascribed to a lack of answers to dialogue boxes during processing, to deaths due to human immunodeficiency virus [HIV] disease, for which neither system had a specific provision, to coding and/or keying errors, and to actual problems. Detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that each system interpreted some mortality coding rules differently, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were genuine SCB errors. CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, warrant the use of this version of the SCB system for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.

Relevance: 30.00%

Abstract:

In the context of investigating the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study examines the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, that allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term between-finger variability. The issues addressed in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, and the influence of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Strategies are also presented for assigning between-finger variability when these elements (or, for the finger number, the mark's position with respect to other marks) cannot be conclusively determined from the mark. These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
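The score-based LR described above reduces to a ratio of two densities evaluated at the questioned-comparison score. The sketch below illustrates this with kernel density estimates in Python; the array names, the simulated score distributions and the use of scipy's gaussian_kde are illustrative assumptions, not the modelling choices of the study.

# Minimal sketch of a score-based likelihood ratio, assuming two arrays of
# AFIS scores are already available (illustrative names, not from the paper):
#   same_source_scores    - comparisons sharing the mark's minutia configuration
#   between_finger_scores - ~10,000 comparisons against non-matching sources
import numpy as np
from scipy.stats import gaussian_kde

def score_likelihood_ratio(questioned_score, same_source_scores, between_finger_scores):
    """Assign LR = f(score | same source) / f(score | different source)."""
    numerator_density = gaussian_kde(same_source_scores)
    denominator_density = gaussian_kde(between_finger_scores)  # between-finger variability
    return numerator_density(questioned_score)[0] / denominator_density(questioned_score)[0]

# Illustrative use with simulated scores (real scores would come from the AFIS):
rng = np.random.default_rng(0)
same = rng.normal(2500, 300, 500)
diff = rng.normal(800, 200, 10_000)   # sample size suggested by the study
print(score_likelihood_ratio(2300.0, same, diff))

In practice the denominator scores would be drawn from the finger number/general pattern combination indicated by the mark, as the study recommends.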

Relevance: 30.00%

Abstract:

The aim of this study was to assess and apply a microsatellite multiplex system for parentage determination in alpacas. An approach to parentage testing based on 10 microsatellites was evaluated in a population of 329 unrelated alpacas from different geographical zones in Peru. All microsatellite markers, amplified in two multiplex reactions, were highly polymorphic, with a mean of 14.5 alleles per locus (range six to 28) and an average expected heterozygosity (H-E) of 0.8185 (range 0.698-0.946). The total parentage exclusion probability was 0.999456 for excluding a candidate parent from parentage of an arbitrary offspring given only the genotype of the offspring, and 0.999991 given the genotypes of the offspring and the other parent. In a test case of parentage assignment, the microsatellite panel assigned parentage for 38 of 45 offspring to 10 sires, with LOD scores ranging from 2.19 x 10^13 to 1.34 x 10^15 and Delta values ranging from 2.80 x 10^12 to 1.34 x 10^15, with an estimated pedigree error rate of 15.5%. The performance of this multiplex panel of markers suggests that it will be useful for parentage testing of alpacas.
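As a rough illustration of how a likelihood-based parentage statistic is built from such markers, the sketch below computes a single-parent log-likelihood ratio per locus (candidate parent versus an unrelated individual under Hardy-Weinberg proportions) and sums it over loci. The genotype encoding, the allele frequencies and the log base are illustrative assumptions; the study's software and its exact LOD/Delta definitions may differ.

# Simplified sketch of a single-parent LOD-type score; illustrative only, not
# the software used in the study. Genotypes are (allele, allele) tuples and
# freqs maps allele -> population frequency at one locus.
import math

def transmission_prob(offspring, candidate, freqs):
    """P(offspring genotype | candidate is a true parent, other parent random)."""
    a, b = offspring
    share = lambda allele: candidate.count(allele) / 2.0   # prob. candidate transmits allele
    if a == b:
        return share(a) * freqs[a]
    return share(a) * freqs[b] + share(b) * freqs[a]

def random_prob(offspring, freqs):
    """P(offspring genotype) under Hardy-Weinberg for an unrelated individual."""
    a, b = offspring
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

def lod(offspring_loci, candidate_loci, freqs_per_locus):
    """Sum of per-locus log10 likelihood ratios (zero-probability loci skipped)."""
    total = 0.0
    for off, cand, freqs in zip(offspring_loci, candidate_loci, freqs_per_locus):
        num, den = transmission_prob(off, cand, freqs), random_prob(off, freqs)
        if num > 0 and den > 0:
            total += math.log10(num / den)
    return total

# One-locus example: offspring (120, 124), candidate sire (120, 124)
print(lod([(120, 124)], [(120, 124)], [{120: 0.2, 124: 0.3, 128: 0.5}]))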

Relevance: 30.00%

Abstract:

Protozoan parasites cause thousands of deaths each year in developing countries. The genome projects of these parasites opened a new era in the identification of therapeutic targets; however, a putative function could be predicted for fewer than half of the protein-coding genes. In this work, all Trypanosoma cruzi proteins containing predicted transmembrane spans were processed through an automated computational routine and further analyzed in order to assign the most probable function. The analysis consisted of dissecting each predicted protein into different regions. More than 5,000 sequences were processed, and the predicted biological functions were grouped into 19 categories according to the hits obtained after analysis. One focus of interest, owing to the scarce information available on trypanosomatids, is the proteins involved in signal-transduction processes; here we identified 54 proteins belonging to this group, which were analyzed individually. The results show that, by means of a simple pipeline, it was possible to attribute probable functions to sequences annotated as coding for "hypothetical proteins." We also successfully identified the majority of candidates participating in the signal-transduction pathways of T. cruzi.
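A minimal sketch of this kind of region-wise dissection is given below, assuming transmembrane span coordinates have already been predicted; the function names, the keyword-to-category mapping and the toy sequence are illustrative, not the paper's 19-category scheme.

# Minimal sketch of dissecting a membrane protein into regions, assuming
# predicted transmembrane span coordinates are available (e.g. TMHMM-like
# output). Names and the category mapping are illustrative, not the paper's.
def dissect(sequence, tm_spans):
    """Split a protein into alternating non-TM / TM regions for separate analysis."""
    regions, pos = [], 0
    for start, end in sorted(tm_spans):          # 1-based inclusive coordinates
        if start - 1 > pos:
            regions.append(("non-TM", sequence[pos:start - 1]))
        regions.append(("TM", sequence[start - 1:end]))
        pos = end
    if pos < len(sequence):
        regions.append(("non-TM", sequence[pos:]))
    return regions

def categorize(region_hits):
    """Collapse per-region similarity hits into one functional category (illustrative)."""
    for keyword, category in [("kinase", "signal transduction"),
                              ("phosphatase", "signal transduction"),
                              ("transporter", "transport")]:
        if any(keyword in hit.lower() for hit in region_hits):
            return category
    return "hypothetical protein"

seq = "MKTLLVAGAVLLSACSSHHHQQQLLLVVVAAGGWWKKRR"   # toy sequence
print(dissect(seq, [(8, 20)]))
print(categorize(["putative serine/threonine kinase"]))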

Relevance: 30.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms for each group independently and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
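As a rough illustration of the incremental Monte-Carlo idea, the toy sketch below runs a greedy word-length search on a three-signal datapath, using few samples per error estimate early on and tightening the sample count (i.e., the confidence level) only when the search stalls near a solution. The datapath, cost model, noise budget and sample schedule are illustrative assumptions, not HOPLITE's implementation.

# Toy greedy word-length search with an "incremental" Monte-Carlo error
# estimator: relaxed sample counts early, tightened near convergence.
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def noise_power(word_lengths, n_samples):
    """Monte-Carlo estimate of output error power for a toy 3-signal datapath."""
    a, b, c = (rng.uniform(-1, 1, n_samples) for _ in range(3))
    exact = a * b + c
    quantized = (quantize(a, word_lengths[0]) * quantize(b, word_lengths[1])
                 + quantize(c, word_lengths[2]))
    return float(np.mean((exact - quantized) ** 2))

def greedy_search(noise_budget=1e-6, start_bits=16):
    wl = [start_bits] * 3
    samples = 1_000                               # relaxed confidence at first
    while True:
        candidates = []
        for i in range(3):
            trial = wl.copy()
            trial[i] -= 1                         # try shaving one bit off signal i
            if trial[i] > 0 and noise_power(trial, samples) <= noise_budget:
                candidates.append((sum(trial), trial))
        if candidates:
            wl = min(candidates)[1]               # keep the cheapest feasible move
        elif samples >= 100_000:
            return wl                             # no move survives at full confidence
        else:
            samples *= 10                         # tighten confidence near the optimum

print(greedy_search())

An interpolative sensitivity estimate could be slotted in place of the inner Monte-Carlo call in the same skeleton.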

Relevance: 30.00%

Abstract:

Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by introducing an uninformative filter that eliminates uninformative annotations, controlled vocabularies to accurately reflect both the functional assignments and the evidence supporting them, and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate of reassignment of automated assignments by manual curation. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and then computational methods are iteratively modified and improved based on the results of manual curation.
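The "uninformative filter" mentioned above can be pictured as a simple screen over similarity-hit descriptions, as in the sketch below; the word list and the fallback label are illustrative guesses, not the FANTOM2 vocabulary.

# Minimal sketch of an uninformative filter: before transferring a name from a
# similarity hit, discard descriptions that carry no functional information so
# they cannot overwrite an informative assignment. Word list is illustrative.
import re

UNINFORMATIVE = re.compile(
    r"hypothetical|unknown|unnamed|uncharacteri[sz]ed|putative protein|"
    r"cdna clone|expressed sequence|open reading frame",
    re.IGNORECASE,
)

def informative_name(hit_descriptions):
    """Return the first hit description that passes the filter, else a fallback."""
    for description in hit_descriptions:       # assumed sorted by similarity score
        if not UNINFORMATIVE.search(description):
            return description
    return "unclassifiable transcript"

hits = ["unnamed protein product", "hypothetical protein FLJ12345",
        "ATP-dependent RNA helicase DDX3X"]
print(informative_name(hits))                  # -> the helicase annotation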

Relevance: 30.00%

Abstract:

Integer programming (IP), simulation, and rules of thumb have been integrated to develop a simulation-based heuristic for short-term fleet assignment in the car rental industry. It generates a plan for car movements and a set of booking limits intended to produce high revenue over a given planning horizon. Three different scenarios were used to validate the heuristic. The heuristic's mean revenue was significantly higher than the historical revenue in all three scenarios. The time to run the heuristic for each experiment was within the three-hour limit set for the decision-making process, even though the heuristic is not fully automated. These findings demonstrate that the heuristic provides better plans (plans that yield higher profit) for the dynamic allocation of the fleet than the historical decision processes did. Another contribution of this effort is the integration of IP and rules of thumb to search for better performance under stochastic conditions.
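The integer-programming component of such a heuristic can be pictured as a small repositioning model like the sketch below, written with the PuLP modelling library; the locations, costs, demand figures and demand-coverage constraints are illustrative, not the paper's formulation, and the booking-limit rules of thumb are omitted.

# Toy sketch of the IP piece: decide overnight car movements between locations
# so that forecast next-day demand is covered at minimum transfer cost.
# All data are illustrative. Requires the PuLP package.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

locations = ["airport", "downtown", "suburb"]
stock     = {"airport": 30, "downtown": 10, "suburb": 25}   # cars on hand tonight
demand    = {"airport": 20, "downtown": 25, "suburb": 15}   # forecast bookings
cost      = {(i, j): 12 for i in locations for j in locations if i != j}  # per move

move = {(i, j): LpVariable(f"move_{i}_{j}", lowBound=0, cat="Integer")
        for (i, j) in cost}

model = LpProblem("fleet_repositioning", LpMinimize)
model += lpSum(cost[k] * move[k] for k in move)             # minimize transfer cost
for loc in locations:
    inflow  = lpSum(move[(i, loc)] for i in locations if i != loc)
    outflow = lpSum(move[(loc, j)] for j in locations if j != loc)
    model += stock[loc] + inflow - outflow >= demand[loc]   # cover forecast demand
    model += outflow <= stock[loc]                          # cannot move cars we lack

model.solve()
plan = {k: int(v.value()) for k, v in move.items() if v.value()}
print(plan)   # e.g. move 10 cars airport->downtown and 5 suburb->downtown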

Relevance: 20.00%

Abstract:

PURPOSE: To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using spectral-domain OCT (SD-OCT) and standard automated perimetry (SAP). METHODS: Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP) and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and the global indices of SAP. Subsequently, the following MLCs were tested using parameters from the SD-OCT and SAP: Bagging (BAG), Naive Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), AdaBoost M1 (ADA), Support Vector Machine Linear (SVML) and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with those of MLCs using OCT+SAP data. RESULTS: Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p<0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p=0.19). CONCLUSION: Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved diagnostic accuracy compared with OCT data alone.
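For readers wanting to reproduce this kind of evaluation, the sketch below trains one of the listed classifiers (Random Forest) on a combined feature matrix and reports a cross-validated aROC with scikit-learn; the simulated feature matrix and the specific cross-validation settings are assumptions, since the clinical OCT and SAP data are not public.

# Minimal sketch of the Random Forest ("RAN") evaluation on combined OCT + SAP
# features; the feature matrix here is simulated. Requires scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_glaucoma, n_healthy, n_features = 62, 48, 20        # cohort sizes from the abstract
X = np.vstack([rng.normal(0.0, 1.0, (n_healthy, n_features)),
               rng.normal(0.8, 1.0, (n_glaucoma, n_features))])   # OCT + SAP parameters
y = np.array([0] * n_healthy + [1] * n_glaucoma)      # 1 = glaucomatous eye

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
print(f"aROC (OCT+SAP, Random Forest): {roc_auc_score(y, scores):.3f}")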

Relevance: 20.00%

Abstract:

A practical method for the structural assignment of 3,4-O-benzylidene-D-ribono-1,5-lactones and analogues using conventional NMR techniques and NOESY measurements in solution is described. 2-O-Acyl-3,4-O-benzylidene-D-ribono-1,5-lactones were prepared in good yields by acylation of Zinner's lactone with acyl chlorides under mildly basic conditions. Structural determination of 2-O-(4-nitrobenzoyl)-3,4-O-benzylidene-D-ribono-1,5-lactone was achieved by single-crystal X-ray diffraction, which supports the results based on spectroscopic data.

Relevance: 20.00%

Abstract:

We report the synthesis and total NMR characterization of 5-thia-1-azabicyclo[4.2.0]oct-2-ene-2-carboxylic acid-3-[[[(4''-nitrophenoxy)carbonyl]oxy]-methyl]-8-oxo-7[(2-thienyloxoacetyl)amino]-diphenylmethyl ester-5-dioxide (5), a new cephalosporin derivative. This compound can be used as the carrier of a wide range of drugs containing an amino group. The preparation of the intermediate product, 5-thia-1-azabicyclo[4.2.0]oct-2-ene-2-carboxylic acid-3-[methyl-4-(6-methoxyquinolin-8-ylamino) pentylcarbamate]-8-oxo-7-[(2-thienyloxoacetyl)amino]-diphenylmethyl ester-5-dioxide (6), as well as the synthesis of the antimalarial primaquine prodrug 5-thia-1-azabicyclo[4.2.0]oct-2-ene-2-carboxylic acid-3-[methyl-4-(6-methoxyquinolin-8-ylamino) pentylcarbamate]-8-oxo-7-[(2-thienyloxoacetyl)amino]-5-dioxide (7) are also described, together with their total H-1- and C-13-NMR assignments.

Relevance: 20.00%

Abstract:

Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set made to optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
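Supervised light-curve classification of this kind starts from per-star attributes extracted from the time series. The sketch below extracts a few such attributes (dominant frequency, peak power, amplitude) from a simulated, irregularly sampled light curve using astropy's Lomb-Scargle periodogram; the attribute set and the simulated data are illustrative, not the CoRoT pipeline's feature definition.

# Minimal sketch of feature extraction feeding a supervised light-curve
# classifier. The light curve is simulated; requires astropy and numpy.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 150, 3000))                 # days, long-run-like baseline
flux = 1.0 + 0.02 * np.sin(2 * np.pi * 1.7 * t) + rng.normal(0, 0.005, t.size)

ls = LombScargle(t, flux)
frequency, power = ls.autopower(maximum_frequency=10)  # cycles per day
f0 = frequency[np.argmax(power)]

features = {
    "dominant_frequency": float(f0),
    "peak_power": float(power.max()),
    "amplitude": float(np.ptp(flux) / 2),
}
print(features)   # these numbers would form one row of the classifier's input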

Relevance: 20.00%

Abstract:

We develop an automated spectral synthesis technique, autoMOOG, for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) of metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. The technique is designed to operate on relatively strong features visible in even low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate the method by comparison with 913 stars that have existing high-resolution and low- to medium-resolution spectra and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] ≲ -2.0) we successfully recover both the metallicity and the carbon abundance, where possible, with an accuracy of ~0.20 dex. At higher metallicities, owing to issues with continuum placement in the spectral normalization performed prior to running autoMOOG, the overall metallicity of a star is generally underestimated, although the carbon abundance is still successfully recovered. As a result, the method is recommended only for samples of stars already known to be of sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued.
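Conceptually, an automated fit of this sort compares the observed, normalized spectrum against a grid of synthetic spectra and adopts the abundances of the best match. The toy sketch below does this with fabricated Gaussian "lines" around the CH G band purely to show the grid search; the line model, grid and noise level are invented and do not represent MOOG syntheses or autoMOOG's fitting strategy.

# Toy sketch of a chi-square grid search for [Fe/H] and [C/Fe]; the "synthetic"
# spectra are fabricated Gaussian absorption features, for illustration only.
import numpy as np

wave = np.linspace(4290, 4320, 600)                      # region around the CH G band

def synth(feh, cfe):
    """Fake synthetic spectrum: line depths scale with the two abundances."""
    fe_line = (1 + feh / 4.0) * 0.5 * np.exp(-0.5 * ((wave - 4294.1) / 0.15) ** 2)
    ch_band = (1 + (feh + cfe) / 4.0) * 0.4 * np.exp(-0.5 * ((wave - 4310.0) / 2.0) ** 2)
    return 1.0 - fe_line - ch_band

rng = np.random.default_rng(3)
observed = synth(-2.5, +0.7) + rng.normal(0, 0.01, wave.size)   # the "unknown" star

grid = [(feh, cfe) for feh in np.arange(-4.0, 0.01, 0.1)
                    for cfe in np.arange(-0.5, 2.01, 0.1)]
chi2 = [np.sum((observed - synth(feh, cfe)) ** 2) for feh, cfe in grid]
best_feh, best_cfe = grid[int(np.argmin(chi2))]
print(f"[Fe/H] ~ {best_feh:.1f}, [C/Fe] ~ {best_cfe:.1f}")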