853 results for automated correlation optimized warping


Relevance:

100.00%

Publisher:

Abstract:

The application of automated correlation optimized warping (ACOW) to the correction of retention time shift in the chromatographic fingerprints of Radix Puerariae thomsonii (RPT) was investigated. Twenty-seven samples were extracted from 9 batches of RPT products. The fingerprints of the 27 samples were established by the HPLC method. Because there is a retention time shift in the established fingerprints, the quality of these samples cannot be correctly evaluated by using similarity estimation and principal component analysis (PCA). Thus, the ACOW method was used to align these fingerprints. In the ACOW procedure, the warping parameters, which have a significant influence on the alignment result, were optimized by an automated algorithm. After correcting the retention time shift, the quality of these RPT samples was correctly evaluated by similarity estimation and PCA. It is demonstrated that ACOW is a practical method for aligning the chromatographic fingerprints of RPT. The combination of ACOW, similarity estimation, and PCA is shown to be a promising method for evaluating the quality of Traditional Chinese Medicine.
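
The abstract describes two steps: aligning the fingerprints and then evaluating them by similarity estimation and PCA. As a rough, hypothetical illustration of the alignment idea only (real COW/ACOW warps segment lengths with dynamic programming and, in ACOW, selects the segment length and slack automatically), the following Python sketch shifts each chromatogram segment to the lag that maximizes its correlation with a reference fingerprint; seg_len and max_shift are illustrative parameters, not values from the study:

    import numpy as np

    def align_by_segment_shift(sample, reference, seg_len=100, max_shift=10):
        """Shift each segment of `sample` to the lag that best correlates with
        the corresponding segment of `reference` (a simplified stand-in for COW)."""
        aligned = np.copy(sample)
        for start in range(0, len(reference) - seg_len + 1, seg_len):
            ref_seg = reference[start:start + seg_len]
            best_r, best_lag = -np.inf, 0
            for lag in range(-max_shift, max_shift + 1):
                lo, hi = start + lag, start + lag + seg_len
                if lo < 0 or hi > len(sample):
                    continue
                r = np.corrcoef(sample[lo:hi], ref_seg)[0, 1]
                if r > best_r:
                    best_r, best_lag = r, lag
            aligned[start:start + seg_len] = sample[start + best_lag:
                                                    start + best_lag + seg_len]
        return aligned

After alignment, each fingerprint can be compared with the reference through its correlation coefficient (the similarity estimate), and the set of aligned fingerprints can be projected onto principal components.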

Relevance:

100.00%

Publisher:

Abstract:

In this work, the volatile chromatographic profiles of roasted Arabica coffees, previously analyzed for their sensorial attributes, were explored by principal component analysis. The volatile extraction technique used was solid-phase microextraction, and the correlation optimized warping algorithm was used to align the gas chromatographic profiles. Fifty-four compounds were found to be related to the sensorial attributes investigated. The volatiles pyrrole, 1-methyl-pyrrole, cyclopentanone, dihydro-2-methyl-3-furanone, furfural, 2-ethyl-5-methyl-pyrazine, 2-ethenyl-n-methyl-pyrazine and 5-methyl-2-propionyl-furan were important for differentiating the coffee beverages according to flavour, cleanliness and overall quality. Two figures of merit, sensitivity and specificity (or selectivity), were used to interpret the sensory attributes studied.
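
A minimal sketch (with hypothetical labels, not the paper's data) of how the two figures of merit quoted above are computed for one sensory attribute treated as a binary class:

    import numpy as np

    def sensitivity_specificity(y_true, y_pred, positive=1):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_pred == positive) & (y_true == positive))
        fn = np.sum((y_pred != positive) & (y_true == positive))
        tn = np.sum((y_pred != positive) & (y_true != positive))
        fp = np.sum((y_pred == positive) & (y_true != positive))
        return tp / (tp + fn), tn / (tn + fp)

    # e.g. classifying coffees as "high overall quality" (1) or not (0)
    sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
    print(sens, spec)   # 0.67, 0.67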

Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies.

In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise terms of each group independently and then combines the results at the end. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method exploits the fact that, although a given confidence level must be guaranteed for the final results of the optimization, more relaxed levels, and therefore considerably fewer simulation samples, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to 240 for small and medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and provide a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
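
A hedged sketch of the kind of Monte-Carlo-driven greedy word-length search discussed above (this is not HOPLITE's API; the toy dataflow, the error budget and all names are illustrative assumptions):

    import numpy as np

    def quantize(x, frac_bits):
        """Round x onto a fixed-point grid with `frac_bits` fractional bits."""
        step = 2.0 ** -frac_bits
        return np.round(x / step) * step

    def mc_error(frac_bits, n_samples=10000):
        """Monte-Carlo RMS error of a toy dataflow y = a*b + c under quantization."""
        rng = np.random.default_rng(0)
        a, b, c = rng.uniform(-1.0, 1.0, (3, n_samples))
        exact = a * b + c
        prod = quantize(quantize(a, frac_bits[0]) * quantize(b, frac_bits[1]),
                        frac_bits[2])
        approx = prod + quantize(c, frac_bits[3])
        return np.sqrt(np.mean((exact - approx) ** 2))

    def greedy_wordlengths(n_signals=4, start_bits=16, noise_budget=1e-3):
        """Shrink one word-length at a time while the error budget still holds."""
        bits = [start_bits] * n_signals
        improved = True
        while improved:
            improved = False
            for i in range(n_signals):
                trial = list(bits)
                trial[i] -= 1
                if trial[i] > 0 and mc_error(trial) <= noise_budget:
                    bits, improved = trial, True
        return bits

    print(greedy_wordlengths())   # fractional bits chosen for the 4 toy signals

The interpolative and incremental methods of the thesis attack the cost of the many mc_error calls such a loop makes, either by replacing most of them with an interpolated sensitivity estimate or by running the early ones with fewer samples.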

Relevance:

30.00%

Publisher:

Abstract:

A network of Kuramoto oscillators with different natural frequencies is optimized for enhanced synchronizability. All node inputs are normalized by the node connectivity, and some important properties of the network structure are determined in this case: (i) optimized networks present a strong anti-correlation between the natural frequencies of adjacent nodes; (ii) this anti-correlation should be as high as possible, since the average path length between nodes remains as small as in random networks; and (iii) high anti-correlation is obtained without any relation between the nodes' natural frequencies and their degree of connectivity. We also propose a network construction model with which it is shown that high anti-correlation and small average paths may be achieved by randomly rewiring a fraction of the links of a totally anti-correlated network, and that these networks present optimal synchronization properties. (C) 2008 Elsevier B.V. All rights reserved.
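
A small illustrative sketch (not the paper's code) of the quantity at the centre of findings (i) and (ii): the correlation between natural frequencies of adjacent nodes, computed over the edges of the network. A ring with alternating frequencies, used here as a toy example, is fully anti-correlated:

    import numpy as np

    def edge_frequency_correlation(adjacency, omega):
        """Pearson correlation of (omega_i, omega_j) over all edges i-j."""
        i_idx, j_idx = np.nonzero(np.triu(adjacency, k=1))
        return np.corrcoef(omega[i_idx], omega[j_idx])[0, 1]

    n = 10
    A = np.zeros((n, n), dtype=int)
    for k in range(n):                      # ring topology
        A[k, (k + 1) % n] = A[(k + 1) % n, k] = 1
    omega = np.array([1.0 if k % 2 == 0 else -1.0 for k in range(n)])
    print(edge_frequency_correlation(A, omega))   # -1.0: total anti-correlation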

Relevance:

30.00%

Publisher:

Abstract:

An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized and then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
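
For reference, the reproducibility figure quoted above can be computed as below; this sketch assumes the percent difference is taken relative to the mean of the two repeat measurements, which is one common convention:

    def percent_difference(tbv_scan1_ml, tbv_scan2_ml):
        """Absolute difference between repeat TBVs as a percentage of their mean."""
        mean = (tbv_scan1_ml + tbv_scan2_ml) / 2.0
        return abs(tbv_scan1_ml - tbv_scan2_ml) / mean * 100.0

    print(percent_difference(1200.0, 1215.0))   # ~1.2%, hypothetical volumes in mL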

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The aim of this study was to test the correlation between Fourier-domain (FD) optical coherence tomography (OCT) macular and retinal nerve fibre layer (RNFL) thickness and visual field (VF) loss on standard automated perimetry (SAP) in chiasmal compression. Methods: A total of 35 eyes with permanent temporal VF defects and 35 controls underwent SAP and FD-OCT (3D OCT-1000; Topcon Corp.) examinations. Macular thickness measurements were averaged for the central area and for each quadrant and half of that area, whereas RNFL thickness was determined for six sectors around the optic disc. VF loss was estimated in six sectors of the VF and in the central 16 test points of the VF. The correlation between VF loss and OCT measurements was tested with Spearman's correlation coefficients and with linear regression analysis. Results: Macular and RNFL thickness parameters correlated strongly with SAP VF loss. Correlations were generally stronger between VF loss and quadrantic or hemianopic macular thickness than with sectoral RNFL thickness. For the macular parameters, the strongest correlation was observed between macular thickness in the inferonasal quadrant and VF loss in the superior temporal central quadrant (rho = 0.78; P < 0.001), whereas for the RNFL parameters the strongest correlation was observed between the superonasal optic disc sector and the central temporal VF defect (rho = 0.60; P < 0.001).
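
A minimal sketch (with hypothetical numbers, not the study's data) of the statistics used above: Spearman's rho between an OCT thickness parameter and VF loss in the corresponding region, together with an ordinary linear regression:

    import numpy as np
    from scipy import stats

    thickness_um = np.array([250.0, 242.0, 260.0, 231.0, 224.0, 255.0, 238.0, 246.0])
    vf_loss_db   = np.array([ 12.0,  15.0,   8.0,  20.0,  24.0,  10.0,  17.0,  13.0])

    rho, p_value = stats.spearmanr(thickness_um, vf_loss_db)
    slope, intercept, r, p_lin, se = stats.linregress(thickness_um, vf_loss_db)
    print(f"rho = {rho:.2f} (P = {p_value:.3f}); slope = {slope:.2f} dB per um")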

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. To evaluate the relationship between pattern electroretinogram (PERG) amplitude, macular and retinal nerve fiber layer (RNFL) thickness measured by optical coherence tomography (OCT), and visual field (VF) loss on standard automated perimetry (SAP) in eyes with temporal hemianopia from chiasmal compression. METHODS. Forty-one eyes from 41 patients with permanent temporal VF defects from chiasmal compression and 41 healthy subjects underwent transient full-field and hemifield (temporal or nasal) stimulation PERG, SAP, and time-domain OCT macular and RNFL thickness measurements. Comparisons were made using Student's t-test. Deviation from normal VF sensitivity for the central 18° of the VF was expressed in 1/Lambert units. Correlations between measurements were verified by linear regression analysis. RESULTS. PERG and OCT measurements were significantly lower in eyes with temporal hemianopia than in normal eyes. A significant correlation was found between VF sensitivity loss and full-field or nasal, but not temporal, hemifield PERG amplitude. Likewise, a significant correlation was found between VF sensitivity loss and most OCT parameters. No significant correlation was observed between OCT and PERG parameters, except for nasal hemifield amplitude. A significant correlation was observed between several macular and RNFL thickness parameters. CONCLUSIONS. In patients with chiasmal compression, PERG amplitude and OCT thickness measurements were significantly related to VF loss, but not to each other. OCT and PERG quantify neuronal loss differently, but both technologies are useful in understanding the structure-function relationship in patients with chiasmal compression. (ClinicalTrials.gov number, NCT00553761.) (Invest Ophthalmol Vis Sci. 2009;50:3535-3541) DOI: 10.1167/iovs.08-3093
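
A short sketch of the sensitivity-loss calculation mentioned above. Assuming the usual 10,000-asb (1 lambert) maximum stimulus of the perimeter, sensitivity in dB converts to linear 1/Lambert units as 10**(dB/10), and the loss is the difference between a normative value and the measured one; the numbers below are hypothetical:

    def db_to_inverse_lambert(sensitivity_db):
        """Convert perimetric sensitivity from dB to 1/Lambert (10,000-asb maximum assumed)."""
        return 10.0 ** (sensitivity_db / 10.0)

    def sensitivity_loss(normal_db, measured_db):
        """Deviation from normal sensitivity, expressed in 1/Lambert."""
        return db_to_inverse_lambert(normal_db) - db_to_inverse_lambert(measured_db)

    print(sensitivity_loss(32.0, 20.0))   # loss at one test point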

Relevance:

30.00%

Publisher:

Abstract:

Concerns have been raised about the reproducibility of brachial artery reactivity (BAR) testing, because subjective decisions regarding the location of interfaces may influence the measurement of very small changes in lumen diameter. We studied 120 consecutive patients undergoing BAR to determine whether an automated technique could be applied and whether experience influenced reproducibility between two observers, one experienced and one inexperienced. Digital cineloops were measured both automatically, using software that detects the leading edge of the endothelium and tracks it in sequential frames, and manually, by averaging a set of three point-to-point measurements. There was a high correlation between the automated and manual techniques for both observers, although less variability was present with expert readers. The overall limits of agreement for interobserver concordance were 0.13 +/- 0.65 mm for the manual and 0.03 +/- 0.74 mm for the automated measurement. For intraobserver concordance, the limits of agreement were -0.07 +/- 0.38 mm for observer 1 and -0.16 +/- 0.55 mm for observer 2. We conclude that BAR measurements were highly concordant between observers, although more concordant using the automated method, and that experience does affect concordance. Care must be taken to ensure that the same segments are measured between observers and serially.
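
Limits of agreement such as those quoted above are conventionally computed as the mean difference +/- 1.96 times the SD of the paired differences (assumed here); a minimal sketch with hypothetical diameters:

    import numpy as np

    def limits_of_agreement(measure_a_mm, measure_b_mm):
        """Bias and 95% half-width of the differences between two methods."""
        diff = np.asarray(measure_a_mm) - np.asarray(measure_b_mm)
        return diff.mean(), 1.96 * diff.std(ddof=1)

    manual    = np.array([4.10, 3.95, 4.32, 4.05, 4.20])   # hypothetical, mm
    automated = np.array([4.02, 3.90, 4.28, 4.10, 4.15])   # hypothetical, mm
    bias, half_width = limits_of_agreement(manual, automated)
    print(f"{bias:.2f} +/- {half_width:.2f} mm")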

Relevance:

30.00%

Publisher:

Abstract:

Location-based services have given new impetus to the creativity of mobile application developers. The widespread availability of devices with built-in positioning capabilities has led to the development of applications that manage and present information based on the user's position. Since then, the mobile market has seen the emergence of new categories of applications that take advantage of this capability. Among them, remote device monitoring stands out, having gained increasing importance in both the consumer and business sectors. This dissertation begins by presenting the state of the art of the different positioning systems, categorized by their effectiveness in indoor and outdoor environments, as well as different near-real-time communication protocols. An analysis of the current state of the mobile market is also provided. The market currently comprises different mobile platforms whose unique characteristics make them compete with one another to expand their market share, so a brief study of the most relevant mobile operating systems of today is included. A more in-depth look is then taken at the architecture of Apple's mobile platform, iOS, which served as the basis for the development of an optimized solution for locating and monitoring mobile devices. Monitoring implies an intensive use of energy and bandwidth that today's mobile devices are not able to sustain. Given the high energy consumption of GPS compared with the limited autonomy of these devices, a study is presented of solutions that allow GPS usage to be managed in an optimized way. The high cost of the data plans offered by mobile operators is also considered, and solutions aimed at minimizing bandwidth usage are explored. From this work emerges the EyeGotcha application, which, in addition to locating other mobile device users in an optimized way, also makes it possible to monitor their actions based on a set of predefined rules. These actions are reported to the monitoring entities automatically, in the form of alerts. With a view to commercializing the application, a business model is presented that can generate revenues capable of covering the maintenance costs of the services on which the mobile application depends.
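
A tiny, purely hypothetical sketch of the kind of predefined monitoring rule described above (the rule, names and thresholds are illustrative; the actual application targets iOS):

    import math

    def outside_allowed_zone(lat, lon, centre_lat, centre_lon, radius_m):
        """Return True when a reported position falls outside a circular zone."""
        # Equirectangular approximation: adequate for radii of a few kilometres.
        dx = math.radians(lon - centre_lon) * math.cos(math.radians(centre_lat)) * 6371000.0
        dy = math.radians(lat - centre_lat) * 6371000.0
        return math.hypot(dx, dy) > radius_m

    if outside_allowed_zone(41.160, -8.620, 41.150, -8.610, radius_m=1000.0):
        print("ALERT: monitored device left the allowed zone")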

Relevance:

30.00%

Publisher:

Abstract:

The solubilities of two C-tetraalkylcalix[4]resorcinarenes, namely C-tetramethylcalix[4]resorcinarene and C-tetrapentylcalix[4]resorcinarene, in supercritical carbon dioxide (SCCO2) were measured in a flow-type apparatus over the temperature range (313.2 to 333.2) K and at pressures from (12.0 to 35.0) MPa. The C-tetraalkylcalix[4]resorcinarenes were synthesized using our optimized procedure and fully characterized by means of gel permeation chromatography and infrared and nuclear magnetic resonance spectroscopy. Their solubilities in SCCO2 were determined by analysis of the extracts using an HPLC method with ultraviolet (UV) detection adapted by our team. Four semiempirical density-based models, and the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules, were applied to correlate the solubility of the calix[4]resorcinarenes in SCCO2. The physical properties required for the modeling were estimated and reported.
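
The abstract does not name the four density-based models used; as a generic illustration only, the sketch below fits the classic Chrastil-type correlation ln S = k*ln(rho) + a/T + b to made-up solubility data:

    import numpy as np

    rho_g_L = np.array([600.0, 700.0, 800.0, 650.0, 750.0])     # CO2 density (hypothetical)
    T_K     = np.array([313.2, 313.2, 313.2, 333.2, 333.2])     # temperature
    S_g_L   = np.array([0.02, 0.05, 0.11, 0.04, 0.09])          # solubility (hypothetical)

    X = np.column_stack([np.log(rho_g_L), 1.0 / T_K, np.ones_like(T_K)])
    k, a, b = np.linalg.lstsq(X, np.log(S_g_L), rcond=None)[0]
    print(f"k = {k:.2f}, a = {a:.0f} K, b = {b:.2f}")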

Relevance:

30.00%

Publisher:

Abstract:

Sulfadiazine is an antibiotic of the sulfonamide group and is used as a veterinary drug in fish farming. Monitoring it in the tanks is fundamental to control the applied doses and avoid environmental dissemination. Pursuing this goal, we included a novel potentiometric design in a flow-injection assembly. The electrode body was a stainless steel needle veterinary syringe of 0.8-mm inner diameter. A selective PVC membrane acted as the sensory surface. Its composition, the length of the electrode, and other flow variables were optimized. The best performance was obtained for sensors of 1.5-cm length and a membrane composition of 33% PVC, 66% o-nitrophenyloctyl ether, 1% ion exchanger, and a small amount of a cationic additive. The sensor exhibited Nernstian slopes of 61.0 mV decade⁻¹ down to 1.0×10⁻⁵ mol L⁻¹, with a limit of detection of 3.1×10⁻⁶ mol L⁻¹ in flowing media. All necessary pH/ionic strength adjustments were performed online by merging the sample plug with a buffer carrier of 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, pH 4.9. The sensor exhibited the advantages of a fast response time (less than 15 s), a long operational lifetime (60 days), and good selectivity against chloride, nitrite, acetate, tartrate, citrate, and ascorbate. The flow setup was successfully applied to the analysis of aquaculture waters, and the analytical results were validated against those obtained with liquid chromatography-tandem mass spectrometry procedures. The sampling rate was about 84 samples per hour and recoveries ranged from 95.9 to 106.9%.
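
A Nernstian slope such as the 61.0 mV decade⁻¹ quoted above is obtained from the calibration line of measured potential against the logarithm of concentration; a minimal sketch with hypothetical readings:

    import numpy as np
    from scipy import stats

    conc_mol_L = np.array([1e-5, 1e-4, 1e-3, 1e-2])
    emf_mV     = np.array([152.0, 213.5, 274.0, 335.5])   # hypothetical potentials

    slope, intercept, r, p, se = stats.linregress(np.log10(conc_mol_L), emf_mV)
    print(f"slope = {slope:.1f} mV per decade (r = {r:.4f})")   # ~61 mV/decade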

Relevance:

30.00%

Publisher:

Abstract:

Enterococci are increasingly responsible for nosocomial infections worldwide. This study was undertaken to compare the identification and susceptibility profiles of Enterococcus spp. obtained with an automated MicroScan system, a PCR-based assay and the disk diffusion assay. We evaluated 30 clinical isolates of Enterococcus spp. Isolates were identified by the MicroScan system and by the PCR-based assay. The presence of antibiotic resistance genes (vancomycin, gentamicin, tetracycline and erythromycin) was also determined by PCR. Antimicrobial susceptibilities to vancomycin (30 µg), gentamicin (120 µg), tetracycline (30 µg) and erythromycin (15 µg) were tested by the automated system and by the disk diffusion method, and were interpreted according to the criteria recommended in the CLSI guidelines. Concerning Enterococcus identification, the overall agreement between the data obtained by the PCR method and by the automated system was 90.0% (27/30); for all isolates of E. faecium and E. faecalis the agreement was 100%. Resistance frequencies were higher in E. faecium than in E. faecalis. The resistance rates obtained were highest for erythromycin (86.7%), followed by vancomycin (80.0%), tetracycline (43.3%) and gentamicin (33.3%). The comparison between disk diffusion and the automated system revealed agreement for the majority of the antibiotics, with category agreement rates of > 80%. In the PCR-based assay, the vanA gene was detected in 100% of vancomycin-resistant enterococci. This assay is simple to conduct and reliable for the identification of clinically relevant enterococci. The data obtained reinforce the need for improvement of the automated system to identify some enterococci.
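
A trivial sketch of the agreement figures reported above: the percentage of isolates for which two methods give the same result (identification or susceptibility category). The example labels are made up to reproduce the 27/30 case:

    def percent_agreement(results_a, results_b):
        """Share of paired results that match, as a percentage."""
        pairs = list(zip(results_a, results_b))
        return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

    ids_pcr  = ["E. faecalis"] * 27 + ["E. faecium"] * 3      # hypothetical labels
    ids_auto = ["E. faecalis"] * 27 + ["E. gallinarum"] * 3   # hypothetical labels
    print(percent_agreement(ids_pcr, ids_auto))               # 90.0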

Relevance:

30.00%

Publisher:

Abstract:

The study was performed at OCAS, the Steel Research Centre of ArcelorMittal for the Industry market. The major aim of this research was to obtain an optimized tensile testing methodology with in-situ H-charging to reveal hydrogen embrittlement in various high-strength steels. The second aim of this study was the mechanical characterization of the hydrogen effect on high-strength carbon steels with varying microstructure, i.e. ferrite-martensite and ferrite-bainite grades. The optimal parameters for H-charging, which influence the tensile test results (sample geometry, type of electrolyte, charging method, effect of steel type, etc.), were defined and applied to slow strain rate testing, incremental step loading and constant load testing. To better understand the initiation and propagation of cracks during tensile testing with in-situ H-charging, and to correlate them with crystallographic orientation, some materials were analyzed in the SEM in combination with the EBSD technique. The introduction of a notch on the tensile samples makes it possible to reach significantly improved reproducibility of the results. Comparing the various steel grades reveals that Dual Phase (ferrite-martensite) steels are more sensitive to hydrogen-induced cracking than the FB (ferritic-bainitic) ones. This higher sensitivity to hydrogen was reflected in the reduced failure times, increased creep rates and enhanced crack initiation (SEM) observed for the Dual Phase steels in comparison with the FB steels.
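
The abstract does not define a numerical index, but hydrogen sensitivity in such tensile tests is often summarized by the relative loss of a ductility measure (for example elongation or time to failure) with and without in-situ charging; a purely illustrative sketch:

    def embrittlement_index(value_uncharged, value_charged):
        """Relative loss of a ductility measure due to in-situ H-charging."""
        return (value_uncharged - value_charged) / value_uncharged

    # hypothetical elongation-to-failure values (%); a higher index means more H-sensitive
    print(embrittlement_index(18.0, 6.5))   # ~0.64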