957 results for "Statistical approach"


Relevance:

30.00%

Abstract:

Mulch materials of different origins have been introduced into the agricultural sector in recent years as alternatives to standard polyethylene because of its environmental impact. This study aimed to evaluate the multivariate response of mulch materials over three consecutive years in a processing tomato (Solanum lycopersicum L.) crop in Central Spain. Two biodegradable plastic mulches (BD1, BD2), one oxo-biodegradable material (OB), two types of paper (PP1, PP2), and one barley straw cover (BS) were compared against two control treatments (standard black polyethylene [PE] and manual weed control [MW]). A total of 17 variables relating to yield, fruit quality, and weed control were investigated. Several multivariate statistical techniques were applied, including principal component analysis, cluster analysis, and discriminant analysis. A group of mulch materials comprising OB and BD2 was found to be comparable to black polyethylene with respect to all the variables considered. The weed control variables were found to be an important source of discrimination. The two paper mulches tested never shared the same treatment group: PP2 presented a multivariate response closer to the biodegradable plastics, while PP1 was more similar to BS and MW. Based on our multivariate approach, the materials OB and BD2 can be used as effective, more environmentally friendly alternatives to polyethylene mulches.
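
A minimal sketch of the kind of multivariate workflow the abstract describes (standardization, principal component analysis and hierarchical clustering of the mulch treatments), assuming NumPy, SciPy and scikit-learn are available; the data matrix is a random placeholder, not the study's measurements.

```python
# Sketch of a PCA + hierarchical-clustering workflow similar to the one
# described in the abstract. Data are random placeholders, not the study's.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
treatments = ["PE", "MW", "BD1", "BD2", "OB", "PP1", "PP2", "BS"]
# 8 mulch treatments x 17 yield/quality/weed-control variables (fake values)
X = rng.normal(size=(len(treatments), 17))

Z = StandardScaler().fit_transform(X)          # put all variables on one scale
scores = PCA(n_components=2).fit_transform(Z)  # 2-D ordination of treatments

# Ward hierarchical clustering on the standardized data
groups = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
for name, (pc1, pc2), g in zip(treatments, scores, groups):
    print(f"{name}: PC1={pc1:+.2f} PC2={pc2:+.2f} cluster={g}")
```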

Relevance:

30.00%

Abstract:

We present an undergraduate course on concurrent programming where formal models are used at different stages of the learning process. The main practical difference from other approaches is that the ability to develop correct concurrent software rests on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of a particular programming language. Using a resource-centric rather than a language-centric approach has benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software designs, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and support the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion of the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis showing that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
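
The course itself targets Java and a specific formal notation for shared resources, neither of which is reproduced here; as a rough illustration of what a "shared resource" with explicit operation preconditions looks like, the following Python sketch implements a monitor-style bounded buffer.

```python
# Illustrative monitor-style "shared resource": a bounded buffer whose
# operations block until their preconditions hold (not the course's notation).
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        self._lock = threading.Condition()

    def put(self, item):
        with self._lock:
            # precondition: buffer not full
            while len(self._items) == self._capacity:
                self._lock.wait()
            self._items.append(item)
            self._lock.notify_all()

    def get(self):
        with self._lock:
            # precondition: buffer not empty
            while not self._items:
                self._lock.wait()
            item = self._items.popleft()
            self._lock.notify_all()
            return item

if __name__ == "__main__":
    buf = BoundedBuffer(2)
    buf.put("x")
    print(buf.get())
```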

Relevance:

30.00%

Abstract:

The mechanical behavior of granular materials has traditionally been approached through two theoretical and computational frameworks: macromechanics and micromechanics. Macromechanics focuses on continuum-based models. Consequently, it is assumed that the matter in the granular material is homogeneous and continuously distributed over its volume, so that the smallest element cut from the body possesses the same physical properties as the body. In particular, it has some equivalent mechanical properties, represented by complex and non-linear constitutive relationships. Engineering problems are usually solved using computational methods such as FEM or FDM. Micromechanics, on the other hand, is the analysis of heterogeneous materials at the level of their individual constituents. In granular materials, if the properties of the particles are known, a micromechanical approach can lead to a predictive response of the whole heterogeneous material. Two classes of numerical techniques can be distinguished: computational micromechanics, which consists of applying continuum mechanics to each of the phases of a representative volume element and then solving the equations numerically, and atomistic methods (DEM), which consist of applying rigid-body dynamics together with interaction potentials to the particles. Statistical mechanics approaches lie between micromechanics and macromechanics: they seek to determine the expected macroscopic properties of a granular system, starting from a micromechanical analysis of the features of the particles and their interactions. The main objective of this paper is to introduce this approach.
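
As a rough illustration of the atomistic (DEM-like) viewpoint mentioned above, the following toy sketch integrates two rigid particles interacting through a linear contact spring; all parameters are illustrative and not drawn from the paper.

```python
# Minimal 1-D DEM-style sketch: two rigid particles interacting through a
# linear contact spring, integrated with an explicit scheme (toy parameters).
import numpy as np

radius, mass, k = 0.01, 1e-3, 1e4           # m, kg, N/m (illustrative)
x = np.array([0.0, 0.019])                  # slightly overlapping particles
v = np.array([0.1, -0.1])                   # approaching velocities, m/s
dt = 1e-6

for _ in range(2000):
    overlap = 2 * radius - (x[1] - x[0])    # contact only if overlap > 0
    f = k * overlap if overlap > 0 else 0.0 # repulsive contact force
    a = np.array([-f, f]) / mass            # equal and opposite accelerations
    v += a * dt
    x += v * dt

print("final positions:", x, "final velocities:", v)
```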

Relevance:

30.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise sources for each group independently and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which imply a considerably smaller number of samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
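
As a rough illustration of the Monte-Carlo, simulation-based side of word-length evaluation discussed above (not of HOPLITE or the thesis' actual algorithms), the following sketch quantizes a toy datapath at several fractional word-lengths and measures the resulting round-off noise against a double-precision reference.

```python
# Monte-Carlo sketch of fixed-point round-off noise estimation for a small
# datapath (y = a*x + b*x**2); word-lengths and the datapath are illustrative.
import numpy as np

def quantize(values, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(values * scale) / scale

rng = np.random.default_rng(1)
a, b = 0.7, -0.3
x = rng.uniform(-1.0, 1.0, size=100_000)     # Monte-Carlo input samples

y_ref = a * x + b * x**2                     # double-precision reference

for frac_bits in (6, 8, 10, 12):
    xq = quantize(x, frac_bits)
    yq = quantize(quantize(a * xq, frac_bits) + quantize(b * xq**2, frac_bits),
                  frac_bits)
    noise_power = np.mean((yq - y_ref) ** 2)
    print(f"{frac_bits} fractional bits -> round-off noise power "
          f"{noise_power:.3e}")
```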

Relevance:

30.00%

Abstract:

Single-stranded regions in RNA secondary structure are important for RNA–RNA and RNA–protein interactions. We present a probability profile approach for the prediction of these regions based on a statistical algorithm for sampling RNA secondary structures. For the prediction of phylogenetically determined single-stranded regions in secondary structures of representative RNA sequences, the probability profile offers substantial improvement over the minimum free energy structure. In designing antisense oligonucleotides, a practical problem is how to select a secondary structure for the target mRNA from the optimal structure(s) and the many suboptimal structures with similar free energies. By summarizing the information from a statistical sample of probable secondary structures in a single plot, the probability profile not only presents a solution to this dilemma, but also reveals 'well-determined' single-stranded regions through the assignment of probabilities as measures of confidence in predictions. In an antisense application to the rabbit β-globin mRNA, a significant correlation between hybridization potential predicted by the probability profile and the degree of inhibition of in vitro translation suggests that the probability profile approach is valuable for the identification of effective antisense target sites. Coupling computational design with the DNA–RNA array technique provides a rational, efficient framework for antisense oligonucleotide screening. This framework has the potential for high-throughput applications to functional genomics and drug target validation.
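
A minimal sketch of how a probability profile can be assembled from a statistical sample of secondary structures: each sampled structure marks which positions are base-paired, and the profile is the fraction of samples in which each position is unpaired. The structures below are toy examples, not output of the actual sampling algorithm.

```python
# Sketch of building a single-strandedness probability profile from a
# statistical sample of secondary structures (toy data, 0-based positions).
import numpy as np

seq_len = 12
# Hypothetical sample of structures (e.g. drawn from a Boltzmann-weighted
# sampler); in each structure the listed positions are base-paired.
sampled_structures = [
    {0, 1, 2, 9, 10, 11},
    {0, 1, 2, 3, 8, 9, 10, 11},
    {1, 2, 9, 10},
]

paired_counts = np.zeros(seq_len)
for structure in sampled_structures:
    for pos in structure:
        paired_counts[pos] += 1

p_single = 1.0 - paired_counts / len(sampled_structures)
for pos, p in enumerate(p_single):
    print(f"position {pos}: P(single-stranded) = {p:.2f}")
```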

Relevance:

30.00%

Abstract:

A statistical modeling approach is proposed for use in searching large microarray data sets for genes that have a transcriptional response to a stimulus. The approach is unrestricted with respect to the timing, magnitude or duration of the response, or the overall abundance of the transcript. The statistical model accommodates systematic heterogeneity in expression levels. Corresponding data analyses provide gene-specific information, and the approach provides a means for evaluating the statistical significance of such information. To illustrate this strategy, we derived a model to depict the profile expected for a periodically transcribed gene and used it to look for budding yeast transcripts that adhere to this profile. Using objective criteria, this method identifies 81% of the known periodic transcripts and 1,088 genes that show significant periodicity in at least one of the three data sets analyzed. However, only one-quarter of these genes show significant oscillations in at least two data sets and can be classified as periodic with high confidence. The method provides estimates of the mean activation and deactivation times, induced and basal expression levels, and statistical measures of the precision of these estimates for each periodic transcript.
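
The paper's model (with activation and deactivation times) is not reproduced in the abstract; as a simpler illustration of fitting a periodic expression profile, the following sketch fits a cosine model to a synthetic time series by linear least squares, assuming a known period.

```python
# Sketch of fitting a periodic expression profile y(t) = m + A*cos(w t - phi)
# by linear least squares (cosine/sine design matrix); synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
period = 60.0                                  # assumed cell-cycle period (min)
t = np.arange(0, 120, 7, dtype=float)          # sampling times
w = 2 * np.pi / period
y = 5.0 + 2.0 * np.cos(w * t - 1.0) + rng.normal(0, 0.4, t.size)

# Linear model: y = m + c1*cos(wt) + c2*sin(wt)
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
(m, c1, c2), *_ = np.linalg.lstsq(X, y, rcond=None)

amplitude = np.hypot(c1, c2)
phase = np.arctan2(c2, c1)                     # peak time = phase / w
print(f"mean level {m:.2f}, amplitude {amplitude:.2f}, "
      f"peak at t = {phase / w:.1f} min")
```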

Relevance:

30.00%

Abstract:

We present an approach for assessing the significance of sequence and structure comparisons by using nearly identical statistical formalisms for both sequence and structure. Doing so involves an all-vs.-all comparison of protein domains [taken here from the Structural Classification of Proteins (scop) database] and then fitting a simple distribution function to the observed scores. Using this distribution, we can attach a statistical significance to each comparison score in the form of a P value, the probability that a better score would occur by chance. As expected, we find that the scores for sequence matching follow an extreme-value distribution. The agreement, moreover, between the P values that we derive from this distribution and those reported by standard programs (e.g., blast and fasta) validates our approach. Structure comparison scores also follow an extreme-value distribution when the statistics are expressed in terms of a structural alignment score (essentially the sum of reciprocated distances between aligned atoms minus gap penalties). We find that the traditional metric of structural similarity, the rms deviation in atom positions after fitting aligned atoms, follows a different distribution of scores and does not perform as well as the structural alignment score. Comparison of the sequence and structure statistics for pairs of proteins known to be distantly related shows that structural comparison is able to detect approximately twice as many distant relationships as sequence comparison at the same error rate. The comparison also indicates that there are very few pairs with significant similarity in terms of sequence but not structure, whereas many pairs have significant similarity in terms of structure but not sequence.
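
A minimal sketch of the statistical formalism described above, assuming SciPy is available: fit an extreme-value (Gumbel) distribution to a background of comparison scores and convert an observed score into a P value. The scores are synthetic, not real alignment scores.

```python
# Fit a Gumbel (extreme-value) distribution to background comparison scores
# and compute the P value of an observed score (synthetic data).
from scipy.stats import gumbel_r

background_scores = gumbel_r.rvs(loc=30.0, scale=5.0, size=50_000,
                                 random_state=3)

loc, scale = gumbel_r.fit(background_scores)   # fit the null distribution

observed = 55.0
p_value = gumbel_r.sf(observed, loc=loc, scale=scale)  # P(score >= observed)
print(f"fitted loc={loc:.2f}, scale={scale:.2f}, P-value={p_value:.2e}")
```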

Relevance:

30.00%

Abstract:

This paper presents a new approach to the delineation of local labour markets based on evolutionary computation. The main objective is the regionalisation of a given territory into functional regions based on commuting flows. According to the relevant literature, such regions are defined so that (a) their boundaries are rarely crossed in daily journeys to work, and (b) a high degree of intra-area movement exists. Our proposal merges municipalities into functional regions by maximizing a fitness function that measures aggregate intra-region interaction under constraints of inter-region separation and minimum size. Results are presented for real data from the latest Census of Population for the Region of Valencia. A comparison between the results obtained with the most widely used official method (that of the British Travel-to-Work Areas) and those from our approach is also presented, showing important improvements both in the number of distinct market areas identified that meet the statistical criteria and in the degree of aggregate intra-market interaction.
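
A rough sketch of a fitness function in the spirit described above (reward aggregate intra-region commuting, penalise regions below a minimum size); the flow matrix, weights and penalty form are illustrative, not the paper's exact formulation.

```python
# Toy fitness function for a candidate regionalisation: share of commuting
# flows that stay inside a region, minus a penalty for undersized regions.
import numpy as np

rng = np.random.default_rng(4)
n = 8
flows = rng.integers(0, 200, size=(n, n))      # commuting flows between munis
np.fill_diagonal(flows, 0)

def fitness(assignment, flows, min_size=2, penalty=0.25):
    assignment = np.asarray(assignment)
    same_region = assignment[:, None] == assignment[None, :]
    intra_share = flows[same_region].sum() / flows.sum()
    # penalise every region smaller than the minimum allowed size
    _, sizes = np.unique(assignment, return_counts=True)
    too_small = np.count_nonzero(sizes < min_size)
    return intra_share - penalty * too_small

candidate = [0, 0, 1, 1, 1, 2, 2, 2]           # one candidate regionalisation
print("fitness:", round(fitness(candidate, flows), 3))
```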

Relevance:

30.00%

Abstract:

The effect of foundation embedment on settlement calculation is a widely researched topic on which there is no scientific consensus regarding the magnitude of the settlement reduction. In this paper, a non-linear three-dimensional finite element analysis has been performed with the aim of evaluating this effect. For this purpose, 1800 models were run considering different variables, such as the depth and dimensions of the foundation and the Young’s modulus and Poisson’s ratio of the soil. The settlements from models with foundations at surface level and at depth were then compared and the relationship between them established. Statistical analysis of these data allowed two new expressions for the embedment influence factor of a foundation, with a mean maximum error of 1.80%, to be proposed and compared with commonly used corrections. The proposed equations were validated by comparing the settlements calculated with the proposed influence factors against the true settlements measured in several real foundations. From the comprehensive study of all modelled cases, an improved approach, compared with those proposed by other authors, is proposed for the calculation of the true elastic settlements of an embedded foundation.
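
A sketch of how an embedment influence factor enters an elastic settlement calculation; the correction function below is a simple placeholder, since the abstract does not reproduce the two proposed expressions.

```python
# Sketch of applying an embedment influence factor to an elastic settlement
# estimate: s = q * B * (1 - nu**2) / E * I_shape * I_emb. The embedment
# factor here is a placeholder, not the expressions proposed in the paper.
def elastic_settlement(q_kpa, b_m, e_kpa, nu, i_shape=0.88, i_emb=1.0):
    """Immediate settlement of a flexible rectangular footing, in metres."""
    return q_kpa * b_m * (1.0 - nu**2) / e_kpa * i_shape * i_emb

def embedment_factor(depth_m, b_m):
    """Placeholder reduction: up to ~1B depth -> mild linear reduction."""
    ratio = min(depth_m / b_m, 1.0)
    return 1.0 - 0.2 * ratio          # illustrative 0-20 % reduction only

q, b, e, nu = 150.0, 2.0, 20_000.0, 0.3
s_surface = elastic_settlement(q, b, e, nu)
s_embedded = elastic_settlement(q, b, e, nu, i_emb=embedment_factor(1.0, b))
print(f"surface: {s_surface*1000:.1f} mm, embedded: {s_embedded*1000:.1f} mm")
```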

Relevance:

30.00%

Abstract:

The elemental analysis of Spanish palm dates by inductively coupled plasma atomic emission spectrometry and inductively coupled plasma mass spectrometry is reported for the first time. To complete the information about the mineral composition of the samples, C, H, and N were determined by elemental analysis. Dates from Israel, Tunisia, Saudi Arabia, Algeria and Iran were also analyzed. The elemental composition was used in multivariate statistical analysis to discriminate the dates according to their geographical origin. A total of 23 elements (As, Ba, C, Ca, Cd, Co, Cr, Cu, Fe, H, In, K, Li, Mg, Mn, N, Na, Ni, Pb, Se, Sr, V, and Zn) at concentrations from major to ultra-trace levels were determined in 13 date samples (flesh and seeds). A careful inspection of the results indicates that the Spanish samples show higher concentrations of Cd, Co, Cr, and Ni than the remaining ones. Multivariate statistical analysis of the results, for both flesh and seed, indicates that the proposed approach can be successfully applied to discriminate the Spanish date samples from the rest of the samples tested.
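
A minimal sketch of origin discrimination from elemental concentrations using linear discriminant analysis, assuming scikit-learn is available; the concentration matrix is random placeholder data, not the reported measurements.

```python
# Sketch of discriminating samples by geographical origin from elemental
# concentrations with LDA (placeholder data, not the study's results).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
elements = ["Cd", "Co", "Cr", "Cu", "Fe", "Mn", "Ni", "Sr", "Zn"]
origins = ["Spain"] * 5 + ["Tunisia"] * 4 + ["Israel"] * 4
X = rng.normal(size=(len(origins), len(elements)))    # fake concentrations

Xs = StandardScaler().fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(Xs, origins)
print("training accuracy:", lda.score(Xs, origins))
print("predicted origin of first sample:", lda.predict(Xs[:1])[0])
```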

Relevance:

30.00%

Abstract:

Statistical machine translation (SMT) is an approach to machine translation (MT) that uses statistical models whose parameters are estimated from the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations produced, provided the system is trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses has been trained and subsequently used for the English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system has been assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
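
The abstract does not name the three automatic measures used; as an illustration of score-based MT evaluation, the following sketch computes sentence-level BLEU with NLTK for a hypothetical Spanish output against a reference translation.

```python
# Sentence-level BLEU as one example of an automatic MT evaluation measure
# (illustrative sentences; not the dissertation's test data or metrics).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["el sistema traduce la frase correctamente".split()]
hypothesis = "el sistema traduce correctamente la frase".split()

smooth = SmoothingFunction().method1
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(f"sentence-level BLEU: {score:.3f}")
```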

Relevance:

30.00%

Abstract:

This paper examines the determinants of foreign direct investment (FDI) under free trade agreements (FTAs) from a new institutional perspective. First, the determinants of FDI are discussed theoretically from a new institutional perspective. Then, FDI is analyzed statistically at the aggregate level. Kernel density estimation of firm size reveals some evidence of "structural changes" after FTAs, as characterized by the investing firms' paid-up capital stock. Statistical tests of the mean and variance of the size distribution confirm this in the case of FTAs with Asian partner countries. For FTAs with South American partner countries, the presence of FTAs seems to promote larger-scale FDI. These results remain correlational rather than causal, and further statistical analysis would be needed to infer causality. The policy implication is that participants should consider the "institutional" aspects of FTAs; that is, firm size matters as a determinant of FDI. Future work along this line is needed to study "firm heterogeneity."
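
A sketch of the kind of aggregate analysis described: kernel density estimation of the (log) firm-size distribution before and after an FTA, plus tests on its mean and variance, assuming SciPy is available; the data are synthetic placeholders.

```python
# KDE of the firm-size distribution before/after an FTA, plus mean and
# variance tests (synthetic log paid-up-capital data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
log_size_before = rng.normal(loc=4.0, scale=1.0, size=300)
log_size_after = rng.normal(loc=4.3, scale=1.3, size=250)

kde_before = stats.gaussian_kde(log_size_before)
kde_after = stats.gaussian_kde(log_size_after)
grid = np.linspace(0, 9, 5)
print("KDE before:", np.round(kde_before(grid), 3))
print("KDE after: ", np.round(kde_after(grid), 3))

t_stat, t_p = stats.ttest_ind(log_size_before, log_size_after, equal_var=False)
lev_stat, lev_p = stats.levene(log_size_before, log_size_after)
print(f"difference in means: t={t_stat:.2f}, p={t_p:.3f}")
print(f"difference in variances (Levene): W={lev_stat:.2f}, p={lev_p:.3f}")
```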

Relevance:

30.00%

Abstract:

Performing organization: Dept. of Statistics, University of Michigan.

Relevance:

30.00%

Abstract:

Many studies on birds focus on the collection of data through an experimental design suitable for investigation within a classical analysis of variance (ANOVA) framework. Although many findings are confirmed by one or more experts, expert information is rarely used in conjunction with the survey data to enhance the explanatory and predictive power of the model. We explore this neglected aspect of ecological modelling through a study on Australian woodland birds, focusing on the potential impact of different intensities of commercial cattle grazing on bird density in woodland habitat. We examine a number of Bayesian hierarchical random effects models, which cater for overdispersion and a high frequency of zeros in the data, using WinBUGS, and explore the variation between and within different grazing regimes and species. The impact and value of expert information is investigated through the inclusion of priors that reflect the experience of 20 experts in the field of bird responses to disturbance. Results indicate that expert information moderates the survey data, especially in situations where there are few or no data. When experts agreed, credible intervals for predictions were tightened considerably. When experts failed to agree, results were similar to those obtained in the absence of expert information. Overall, we found that without expert opinion our knowledge was quite weak. The fact that the survey data are, in general, quite consistent with expert opinion shows that we do know something about birds and grazing, and that we could learn much faster if we used this approach more widely in ecology, where data are scarce. Copyright (c) 2005 John Wiley & Sons, Ltd.
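
The study used hierarchical models fitted in WinBUGS, which are not reproduced here; as a much simpler illustration of how an informative expert prior moderates sparse survey data, the following sketch performs a Normal-Normal conjugate update with hypothetical numbers.

```python
# How an informative expert prior moderates sparse data: conjugate
# Normal-Normal update (hypothetical densities, not the study's model).
import numpy as np

def posterior(prior_mean, prior_sd, data, obs_sd):
    """Posterior mean/sd for a Normal mean with known observation sd."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / obs_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean +
                            data_prec * (np.mean(data) if n else 0.0))
    return post_mean, np.sqrt(post_var)

expert_prior = (2.0, 0.5)        # experts expect ~2 birds/ha under low grazing
sparse_data = [3.1]              # almost no survey data at this site
rich_data = [3.1, 2.9, 3.3, 3.0, 3.2, 2.8, 3.1, 3.0]

for data in (sparse_data, rich_data):
    mean, sd = posterior(*expert_prior, data, obs_sd=0.8)
    print(f"n={len(data)}: posterior mean {mean:.2f} +/- {sd:.2f}")
```

With one observation the posterior sits between the expert prior and the data; with many observations the data dominate, mirroring the moderating effect described in the abstract.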

Relevance:

30.00%

Abstract:

The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed and, ultimately, to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when the regularity restrictions are imposed. (c) 2004 Elsevier B.V. All rights reserved.
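
A toy sketch of a Metropolis-Hastings step inside a Gibbs sampler drawing from a truncated conditional, with a simple sign constraint standing in for the regularity restrictions; the target is a bivariate normal, not the translog distance-function posterior.

```python
# Metropolis-Hastings within Gibbs for a toy constrained posterior:
# bivariate normal with correlation rho, truncated to beta1 >= 0.
import numpy as np

rng = np.random.default_rng(7)
rho = 0.6                                   # correlation of the toy posterior
n_iter, beta1, beta2 = 5000, 0.5, 0.0
draws = np.empty((n_iter, 2))

for i in range(n_iter):
    # Gibbs step: beta2 | beta1 is an ordinary (untruncated) normal
    beta2 = rng.normal(rho * beta1, np.sqrt(1 - rho**2))

    # MH step: beta1 | beta2 is normal truncated to beta1 >= 0
    proposal = beta1 + rng.normal(0.0, 0.5)
    if proposal >= 0.0:                     # reject anything outside the region
        log_ratio = (-(proposal - rho * beta2) ** 2
                     + (beta1 - rho * beta2) ** 2) / (2 * (1 - rho**2))
        if np.log(rng.uniform()) < log_ratio:
            beta1 = proposal
    draws[i] = beta1, beta2

print("posterior means under the constraint:", draws[2000:].mean(axis=0))
```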