928 results for multi-way analysis
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
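As a rough sketch of the incremental idea above (relaxed confidence levels, and therefore fewer Monte-Carlo samples per evaluation, while the search is still far from a solution), the following Python fragment runs a greedy word-length reduction on a toy datapath. It is not HOPLITE's API: the sample-size rule, the error budget and the y = x1*x2 + x3 datapath are hypothetical placeholders.

```python
import numpy as np

def required_samples(rel_error, z):
    """Rule-of-thumb sample count for a given relative half-width and z-score
    (an illustrative rule only, not a rigorous confidence-interval derivation)."""
    return int((z / rel_error) ** 2)

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def mc_error(frac_bits, n_samples, rng):
    """Monte-Carlo estimate of the quantization error power of a toy datapath
    y = x1*x2 + x3, standing in for the real system under optimization."""
    x = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
    ref = x[:, 0] * x[:, 1] + x[:, 2]
    xq = quantize(x, frac_bits)
    out = quantize(quantize(xq[:, 0] * xq[:, 1], frac_bits) + xq[:, 2], frac_bits)
    return np.mean((out - ref) ** 2)

def incremental_search(error_budget, rng, max_bits=24):
    """Greedy word-length reduction: loose confidence requirements (few samples)
    early in the search, tightened as the solution is approached."""
    frac_bits = max_bits
    for z, rel_err in [(1.0, 0.20), (1.64, 0.10), (1.96, 0.05)]:  # relaxed -> strict
        n = required_samples(rel_err, z)
        while frac_bits > 1 and mc_error(frac_bits - 1, n, rng) < error_budget:
            frac_bits -= 1  # keep shaving bits while the estimated error fits
    # Final verification at the strictest level; undo any overshoot
    while mc_error(frac_bits, required_samples(0.05, 1.96), rng) > error_budget:
        frac_bits += 1
    return frac_bits

rng = np.random.default_rng(0)
print("fractional bits kept:", incremental_search(1e-8, rng))
```

The point of the sketch is only the schedule of confidence levels: early stages use a couple of dozen samples per candidate, while the final verification uses the full sample count.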
Abstract:
Combined with other observational techniques, polarimetry in the visible or near infrared makes it possible to study the morphology of magnetic fields at the periphery of many star-forming regions. Inside molecular clouds the field morphology is known from submillimetre polarimetry, but rarely for the same regions. An intermediate spatial scale is usually missing, which prevents a proper comparison of the morphology of the Galactic magnetic field with that found inside molecular clouds. -- This thesis provides the means needed to carry out this type of multi-scale analysis in order to better understand the role that magnetic fields may play in star-formation processes. The first analysis deals with the region GF 9. It is followed by a study of the magnetic field morphology in the filaments OMC-2 and OMC-3, and then by a multi-scale analysis of the Orion A molecular cloud complex, of which OMC-2 and OMC-3 are part. -- The synthesis of the results covering GF 9 and Orion A is as follows. The statistical approaches used show that, at large spatial scales, the magnetic field morphology is poloidal in the GF 9 region and probably helical in the Orion A region. At the spatial scale of molecular cloud envelopes, the magnetic fields appear aligned with the fields located at their periphery. At the scale of the cores, the poloidal magnetic field surrounding the GF 9 region appears to be dragged by the rotating core, and ambipolar diffusion does not currently seem to be effective there. In Orion A, the field morphology is barely detectable in the active star-forming sites of OMC-2, or is very strongly constrained by the effects of gravity in OMC-1. Probable effects of turbulence are not detected in any of the observed regions. -- The multi-scale analyses therefore suggest that, regardless of the evolutionary stage and mass range of the star-forming regions, the Galactic magnetic field undergoes changes in its morphology at spatial scales comparable to those of protostellar cores, in the same way that the structural properties of molecular clouds follow self-similarity laws down to scales comparable to those of the cores.
Abstract:
Recent bonding systems have been advocated as multi-purpose bonding agents. The aim of this study was to determine whether some of these bonding systems can be combined with composite resins from different manufacturers. This investigation tested the shear bond strength of three bonding systems: Scotchbond Multi-Purpose (3M Dental Products), Optibond Light Cure (Kerr) and Optibond Dual Cure (Kerr), when each of them was associated with the composite resins Z100 (3M Dental Products), Prisma APH (Dentsply) and Herculite XRV (Kerr). Seventy-two flat dentin bonding sites were prepared to 600 grit on human premolars mounted in acrylic resin. The teeth were assigned at random to 9 groups of 8 samples each. A split die with a 3 mm diameter was placed over the surface of the dentin treated with one of the adhesive systems, and the selected composite resin was inserted and light cured. The split mold was removed and all samples were thermocycled and stored in 37 °C water for 24 hours before testing. Shear bond strength was determined using an Instron universal testing machine. Some failures were examined under the SEM. Data were analysed by one-way analysis of variance, which demonstrated a significant difference (p < 0.05) in mean shear bond strength among Optibond Light Cure (15.446 MPa), Scotchbond Multi-Purpose (13.339 MPa) and Optibond Dual Cure (10.019 MPa). These values did not depend on the composite resin used. The association between bonding system and composite resin was statistically significant (p < 0.05), and the best results were obtained when the composite resins Z100 and Herculite were used with the adhesive system Optibond Light Cure, and when the composite resin APH was used with the adhesive system Scotchbond Multi-Purpose.
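A one-way analysis of variance of this kind can be reproduced with standard statistical tooling; the sketch below uses hypothetical shear-bond-strength values in MPa, not the study's raw data.

```python
from scipy import stats

# Hypothetical shear bond strength samples (MPa), one list per adhesive system
optibond_lc = [15.1, 16.0, 14.8, 15.9, 15.3, 15.6, 15.2, 15.7]
scotchbond_mp = [13.0, 13.8, 12.9, 13.5, 13.2, 13.6, 13.1, 13.6]
optibond_dc = [10.2, 9.8, 10.5, 9.7, 10.1, 10.3, 9.9, 10.0]

# One-way analysis of variance across the three bonding systems
f_stat, p_value = stats.f_oneway(optibond_lc, scotchbond_mp, optibond_dc)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```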
Abstract:
PURPOSE: To prospectively evaluate, for the depiction of simulated hypervascular liver lesions in a phantom, the effect of a low tube voltage, high tube current computed tomographic (CT) technique on image noise, contrast-to-noise ratio (CNR), lesion conspicuity, and radiation dose. MATERIALS AND METHODS: A custom liver phantom containing 16 cylindric cavities (four cavities each of 3, 5, 8, and 15 mm in diameter) filled with various iodinated solutions to simulate hypervascular liver lesions was scanned with a 64-section multi-detector row CT scanner at 140, 120, 100, and 80 kVp, with corresponding tube current-time product settings of 225, 275, 420, and 675 mAs, respectively. The CNRs for six simulated lesions filled with different iodinated solutions were calculated. A figure of merit (FOM) for each lesion was computed as the ratio of CNR² to effective dose (ED). Three radiologists independently graded the conspicuity of the 16 simulated lesions. An anthropomorphic phantom was scanned to evaluate the ED. Statistical analysis included one-way analysis of variance. RESULTS: Image noise increased by 45% with the 80-kVp protocol compared with the 140-kVp protocol (P < .001). However, the lowest ED and the highest CNR were achieved with the 80-kVp protocol. The FOM results indicated that, at a constant ED, a reduction of tube voltage from 140 to 120, 100, and 80 kVp increased the CNR by factors of at least 1.6, 2.4, and 3.6, respectively (P < .001). At a constant CNR, the corresponding reductions in ED were by factors of 2.5, 5.5, and 12.7, respectively (P < .001). The highest lesion conspicuity was achieved with the 80-kVp protocol. CONCLUSION: The CNR of simulated hypervascular liver lesions can be substantially increased and the radiation dose reduced by using an 80-kVp, high tube current CT technique.
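The figure of merit used in the study, CNR² divided by effective dose, is straightforward to compute from region-of-interest statistics; the ROI means, noise and dose values below are hypothetical and purely illustrative.

```python
def cnr(mean_lesion_hu, mean_liver_hu, sd_noise_hu):
    """Contrast-to-noise ratio of a lesion against the background liver."""
    return abs(mean_lesion_hu - mean_liver_hu) / sd_noise_hu

def figure_of_merit(cnr_value, effective_dose_msv):
    """Dose-normalized figure of merit, FOM = CNR^2 / ED."""
    return cnr_value ** 2 / effective_dose_msv

# Hypothetical ROI measurements for an 80 kVp and a 140 kVp protocol
cnr_80 = cnr(mean_lesion_hu=180.0, mean_liver_hu=120.0, sd_noise_hu=18.0)
cnr_140 = cnr(mean_lesion_hu=150.0, mean_liver_hu=120.0, sd_noise_hu=12.4)
print(figure_of_merit(cnr_80, effective_dose_msv=4.2),
      figure_of_merit(cnr_140, effective_dose_msv=5.3))
```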
Abstract:
Hypothyroidism is a complex clinical condition found in both humans and dogs, thought to be caused by a combination of genetic and environmental factors. In this study we present a multi-breed analysis of predisposing genetic risk factors for hypothyroidism in dogs using three high-risk breeds: the Gordon Setter, the Hovawart and the Rhodesian Ridgeback. Using a genome-wide association approach and meta-analysis, we identified a major hypothyroidism risk locus shared by these breeds on chromosome 12 (p = 2.1 × 10⁻¹¹). Further characterisation of the candidate region revealed a shared ~167 kb risk haplotype (4,915,018-5,081,823 bp), tagged by two SNPs in almost complete linkage disequilibrium. This breed-shared risk haplotype includes three genes (LHFPL5, SRPK1 and SLC26A8) and does not extend to the dog leukocyte antigen (DLA) class II gene cluster located in the vicinity. These three genes have not previously been identified as candidate genes for hypothyroid disease, but have functions that could potentially contribute to the development of the disease. Our results implicate the potential involvement of novel genes and pathways in the development of canine hypothyroidism, raising new possibilities for screening, breeding programmes and treatments in dogs. This study may also contribute to our understanding of the genetic etiology of human hypothyroid disease, which is one of the most common endocrine disorders in humans.
Abstract:
This work explores the multi-element capabilities of inductively coupled plasma mass spectrometry with collision/reaction cell technology (CCT-ICP-MS) for the simultaneous determination of both spectrally interfered and non-interfered nuclides in wine samples using a single set of experimental conditions. The influence of the cell gas type (i.e. He, He+H2 and He+NH3), the cell gas flow rate and the sample pre-treatment (i.e. water dilution or acid digestion) on the background-equivalent concentration (BEC) of several nuclides covering the mass range from 7 to 238 u has been studied. The results show that operating the collision/reaction cell at a compromise cell gas flow rate (i.e. 4 mL min⁻¹) improves BEC values for interfered nuclides without a significant effect on the BECs of non-interfered nuclides, with the exception of the light elements Li and Be. Among the different cell gas mixtures tested, the use of He or He+H2 is preferred over He+NH3 because NH3 generates new spectral interferences. No significant influence of the sample pre-treatment methodology (i.e. dilution or digestion) on the multi-element capabilities of CCT-ICP-MS for the simultaneous analysis of interfered and non-interfered nuclides was observed. Nonetheless, sample dilution should be kept to a minimum to ensure that light nuclides (e.g. Li and Be) can be quantified in wine. Finally, a direct 5-fold aqueous dilution is recommended for the simultaneous trace and ultra-trace determination of spectrally interfered and non-interfered elements in wine by CCT-ICP-MS. The use of the CCT is mandatory for interference-free ultra-trace determination of Ti and Cr. Only Be could not be determined when using the CCT, owing to a deteriorated limit of detection compared with conventional ICP-MS.
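For reference, a background-equivalent concentration is conventionally obtained by dividing the blank (background) signal by the calibration sensitivity; the sketch below assumes hypothetical count rates and a 1 µg/L standard, not the paper's measurements.

```python
def bec(i_blank_cps, i_standard_cps, c_standard_ug_l):
    """Background-equivalent concentration: the analyte concentration whose
    net signal equals the background (blank) signal."""
    sensitivity = (i_standard_cps - i_blank_cps) / c_standard_ug_l  # cps per ug/L
    return i_blank_cps / sensitivity

# Hypothetical count rates for the same nuclide with and without the cell gas
print(bec(i_blank_cps=1.2e4, i_standard_cps=6.0e5, c_standard_ug_l=1.0))  # no cell
print(bec(i_blank_cps=2.0e2, i_standard_cps=2.5e5, c_standard_ug_l=1.0))  # He cell
```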
Abstract:
Several studies have analyzed discretionary accruals to address earnings-smoothing behaviors in the banking industry. We argue that the characteristic link between accruals and earnings may be nonlinear, since both the incentives to manipulate income and the practical way to do so depend partially on the relative size of earnings. Given a sample of 15,268 US banks over the period 1996–2011, the main results in this paper suggest that, depending on the size of earnings, bank managers tend to engage in earnings-decreasing strategies when earnings are negative (“big-bath”), use earnings-increasing strategies when earnings are positive, and use provisions as a smoothing device when earnings are positive and substantial (“cookie-jar” accounting). This evidence, which cannot be explained by the earnings-smoothing hypothesis, is consistent with the compensation theory. Neglecting nonlinear patterns in the econometric modeling of these accruals may lead to misleading conclusions regarding the characteristic strategies used in earnings management.
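One simple way to let the accrual-earnings slope differ across earnings regimes is a piecewise regression with regime interaction terms; the sketch below uses simulated data and is only an illustration of the idea, not the paper's econometric specification.

```python
import numpy as np

rng = np.random.default_rng(1)
earnings = rng.normal(0.01, 0.03, size=500)           # pre-provision earnings / assets
neg = (earnings < 0).astype(float)                     # loss ("big-bath") regime
high = (earnings > 0.03).astype(float)                 # high-earnings ("cookie-jar") regime

# Simulated discretionary provisions with regime-dependent slopes
provisions = (0.1 * earnings - 0.5 * neg * earnings + 0.4 * high * earnings
              + rng.normal(0, 0.002, size=500))

# Piecewise OLS: the slope on earnings is allowed to change in each regime
X = np.column_stack([np.ones_like(earnings), earnings,
                     neg * earnings, high * earnings])
beta, *_ = np.linalg.lstsq(X, provisions, rcond=None)
print("base slope, extra slope if loss, extra slope if high earnings:", beta[1:])
```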
Abstract:
Entrepreneurship education has emerged as a popular research domain in academia, given its aim of enhancing and developing entrepreneurial qualities in undergraduates that change their behaviour and even their entrepreneurial inclination, and may ultimately result in the formation of new businesses as well as new job opportunities. This study investigates Colombian students' entrepreneurial qualities and the influence of entrepreneurship education during their studies.
Abstract:
In this work, we discuss the use of multi-way principal component analysis combined with comprehensive two-dimensional gas chromatography to study the volatile metabolites of the saprophytic fungus Memnoniella sp., isolated in vivo by headspace solid-phase microextraction. This fungus has been identified as being able to induce plant resistance against pathogens, possibly through its volatile metabolites. An adequate culture medium was inoculated, and its headspace was then sampled with a solid-phase microextraction fiber and chromatographed every 24 h over seven days. Processing the raw chromatograms with multi-way principal component analysis allowed the determination of the inoculation period during which the concentration of volatile metabolites was maximized, as well as the discrimination of the relevant peaks from the complex culture-medium background. Several volatile metabolites not previously described in the literature on biocontrol fungi were observed, as well as sesquiterpenes and aliphatic alcohols. These results stress that, due to the complexity of multidimensional chromatographic data, multivariate tools might be mandatory even for apparently trivial tasks, such as determining the temporal profile of metabolite production and extinction. However, compared with conventional gas chromatography, the more complex data processing yields a considerable improvement in the information obtained from the samples.
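In its simplest form, multi-way PCA of such data unfolds each two-dimensional chromatogram into a row vector and applies ordinary PCA to the resulting sample-by-variable matrix; the sketch below uses a randomly generated stack of chromatograms as a stand-in for the real measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stack of GC x GC chromatograms: one matrix per daily sampling
# (7 days x 200 first-dimension points x 50 second-dimension points)
rng = np.random.default_rng(2)
chromatograms = rng.random((7, 200, 50))

# Multi-way PCA in its simplest (unfolded) form: flatten each chromatogram
# into a row vector and run ordinary PCA, which mean-centres the variables
unfolded = chromatograms.reshape(7, -1)
pca = PCA(n_components=2)
scores = pca.fit_transform(unfolded)             # one point per incubation day
loadings = pca.components_.reshape(2, 200, 50)   # refold to locate relevant peaks

print(scores.shape, loadings.shape, pca.explained_variance_ratio_)
```

The scores trace how the headspace composition evolves over the seven days, while the refolded loadings point to the retention-time regions responsible for that evolution.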
Abstract:
Objectives: To analyze the effects of low-level laser therapy (LLLT), 670 nm, at doses of 4 and 7 J/cm², on the repair of surgical wounds covered by occlusive dressings. Background Data: The effect of LLLT on the healing process of covered wounds is not well defined. Materials and Methods: For the histologic analysis with HE staining, 50 male Wistar rats were submitted to surgical incisions and divided into 10 groups (n=5): control; stimulated with 4 and 7 J/cm² daily, for 7 and 14 days, with or without occlusion. Reepithelization and the numbers of leukocytes, fibroblasts, and fibrocytes were obtained with an image processor. For the biomechanical analysis, 25 rats were submitted to a surgical incision and divided into five groups (n=5): treated for 14 days with and without occlusive dressing, and the sham group. Samples of the lesions were collected and submitted to the tensile test. One-way analysis of variance was performed, followed by post hoc analysis: a Tukey test was used on the biomechanical data, and the Tamhane test on the histologic data. A significance level of 5% was chosen (p ≤ 0.05). Results: The 4 and 7 J/cm² laser, with and without occlusive dressing, did not significantly alter the reepithelization rate of the wounds. The 7 J/cm² laser significantly reduced the number of leukocytes. The number of fibroblasts was higher in the groups treated with laser for 7 days, and significantly so in the covered 4 J/cm² laser group. Conclusions: Greater interference of the laser treatment was noted with 7 days of stimulation, and the occlusive dressing did not alter its biostimulatory effects.
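The post hoc step described here can be reproduced with standard tools; the sketch below applies a Tukey HSD comparison to hypothetical tensile-load values, not the study's data (the Tamhane test used for the histologic data is not shown).

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical maximum tensile loads (N) for three of the biomechanical groups
loads = np.array([8.1, 7.9, 8.4, 8.0, 8.2,      # sham
                  9.6, 9.9, 9.4, 9.8, 9.7,      # laser, covered
                  9.1, 9.3, 8.9, 9.2, 9.0])     # laser, uncovered
groups = ['sham'] * 5 + ['laser+dressing'] * 5 + ['laser'] * 5

# Tukey's HSD compares every pair of group means after a one-way ANOVA
print(pairwise_tukeyhsd(loads, groups, alpha=0.05))
```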
Abstract:
Mixed martial arts (MMA) has become a fast-growing worldwide form of martial arts competition, requiring a high level of skill, physical conditioning, and strategy, and involving a synthesis of combat while standing or on the ground. This study quantified the effort-pause (EP) ratio and classified effort segments of stand-up or groundwork development to identify the number of actions performed per round in MMA matches. 52 MMA athletes participated in the study (M age = 24 yr., SD = 5; average experience in MMA = 5 yr., SD = 3). A one-way analysis of variance with repeated measurements was conducted to compare the type of action across the rounds. A chi-squared test was applied across the percentages to compare proportions of different events. Only one significant difference (p < .05) was observed among rounds: time in groundwork of low intensity was longer in the second than in the third round. When the interval between rounds was not considered, the EP ratio (high-intensity effort to low-intensity effort plus pauses) was 1:2 to 1:4. This ratio lies between the ratios typical for judo, wrestling, karate, and taekwondo and reflects the combination of ground and stand-up techniques. Most of the matches ended in the third round, involving high-intensity actions, predominantly executed during groundwork combat.
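The EP ratio itself is a simple quotient of coded segment durations; the sketch below uses hypothetical durations for a single round, not the study's time-motion data.

```python
# Minimal illustration of the effort-pause (EP) ratio from coded time segments
# (durations in seconds; all values are hypothetical)
high_intensity = [12, 8, 15, 10]     # strikes, takedown attempts, scrambles
low_intensity = [40, 35, 50, 30]     # low-intensity stand-up / groundwork
pauses = [10, 8, 12]                 # referee breaks within the round

ep_ratio = sum(high_intensity) / (sum(low_intensity) + sum(pauses))
print(f"EP ratio = 1:{1 / ep_ratio:.1f}")  # e.g. about 1:4 at the low-effort end
```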
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Geographic Information Science and Systems
Abstract:
Dissertation submitted to obtain the Master's degree in Industrial Engineering and Management
Abstract:
Based on the presentation and discussion at the 3rd Winter School on Technology Assessment, December 2012, Universidade Nova de Lisboa (Portugal), Caparica Campus, PhD programme on Technology Assessment
Abstract:
In this paper I review a series of theoretical concepts that are relevant to the integrated assessment of agricultural sustainability but that are not generally included in the curriculum of the various scientific disciplines dealing with the quantitative analysis of agriculture. I first illustrate, with plain narratives and concrete examples, that sustainability is an extremely complex issue requiring the simultaneous consideration of several aspects, which cannot be reduced to a single indicator of performance. Next, I justify this obvious need for multi-criteria analysis with theoretical concepts dealing with the epistemological predicament of complexity, starting from classic philosophical lessons and arriving at recent developments in complex systems theory, in particular Rosen's theory of the modelling relation, which is essential for analyzing the quality of any quantitative representation. The implications of these theoretical concepts are then illustrated with applications of multi-criteria analysis to the sustainability of agriculture. I wrap up by pointing out the crucial difference between "integrated assessment" and "integrated analysis". An integrated analysis is a set of indicators and analytical models generating an analytical output. An integrated assessment is much more than that: it is about finding an effective way to deal with three key issues: (i) legitimacy – how to handle the unavoidable existence of legitimate but contrasting points of view about the different meanings given by social actors to the word "development"; (ii) pertinence – how to handle in a coherent way scientific analyses referring to different scales and dimensions; and (iii) credibility – how to handle the unavoidable existence of uncertainty and genuine ignorance when dealing with the analysis of future scenarios.