965 results for Computational time
Abstract:
The estimation of power losses due to wind turbine wakes is crucial to understanding overall wind farm economics. This is especially true for large offshore wind farms, as wakes represent the primary source of losses in available power, given the regular arrangement of rotors, their generally larger diameter and the lower ambient turbulence level, all of which conspire to dramatically affect wake expansion and, consequently, the power deficit. Simulation of wake effects in offshore wind farms (in reasonable computational time) is currently feasible using CFD tools. An elliptic CFD model based on the actuator disk method and various RANS turbulence closure schemes is tested and validated using power ratios extracted from the Horns Rev and Nysted wind farms, collected as part of the EU-funded UPWIND project. The primary focus of the present work is on turbulence modeling, as turbulent mixing is the main mechanism for flow recovery inside wind farms. A higher-order approach, based on the anisotropic RSM model, is tested to better take into account the imbalance in the length scales inside and outside of the wake, which is not well reproduced by current two-equation closure schemes.
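For reference, a minimal sketch of the classical one-dimensional momentum relations on which actuator-disk representations of a rotor are usually based (the specific source-term formulation used in the paper is not given in the abstract, so this is only the standard textbook form):

\[
T = \tfrac{1}{2}\,\rho\,A\,C_T\,U_\infty^{2}, \qquad C_T = 4a(1-a), \qquad U_d = (1-a)\,U_\infty ,
\]

where \(T\) is the thrust applied as a momentum sink over the disk of area \(A\), \(U_\infty\) is the upstream reference wind speed, \(U_d\) the velocity at the disk and \(a\) the axial induction factor.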
Abstract:
The wake effect is one of the most important aspects to be analyzed during the engineering phase of every wind farm, since it entails a significant power deficit and an increase in turbulence levels, with a consequent reduction in turbine lifetime. It depends on the wind farm design, the wind turbine type and the atmospheric conditions prevailing at the site. Traditionally, industry has used analytical models, quick and robust, which allow wind farm engineering to be carried out flexibly at the preliminary stages. However, new models based on Computational Fluid Dynamics (CFD) are needed. These models must increase the accuracy of the output variables while avoiding an increase in computational time. Among them, elliptic models based on the actuator disk technique have come into widespread use in recent years. These models present three important problems when applied as-is to large wind farms: the estimation of the reference wind speed upstream of each rotor disk, turbulence modeling and computational time. In order to minimize the consequences of these problems, this PhD Thesis proposes solutions implemented in the open-source CFD solver OpenFOAM and adapted to each type of site: a correction of the reference wind speed for the general elliptic models, a semi-parabolic model for large offshore wind farms and a hybrid model for wind farms in complex terrain. All the models are validated in terms of power ratios against experimental data derived from real operating wind farms.
Abstract:
This thesis aims to introduce some fundamental concepts underlying option valuation theory, including the implementation of computational tools. In many cases an analytical solution for option pricing does not exist, so the following numerical methods are used: binomial trees, Monte Carlo simulations and finite difference methods. First, an algorithm based on Hull and Wilmott is written for every method. Then these algorithms are improved in different ways. For the binomial tree, both speed and memory usage are significantly improved by using only one vector instead of a whole price-storing matrix. Computational time in Monte Carlo simulations is reduced by implementing a parallel algorithm (in C) which improves speed by a factor equal to the number of processors used. Furthermore, the MatLab code for Monte Carlo was made faster by vectorizing the simulation process. Finally, the obtained option values are compared with those obtained using popular finite difference methods, and it is discussed which algorithm is more appropriate for which purpose.
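As an illustration of the single-vector idea described above, here is a minimal sketch (not the thesis code) of a Cox–Ross–Rubinstein binomial pricer for a European call that overwrites one vector of option values during backward induction instead of storing the full price matrix; the function name and parameter values are illustrative only.

```python
import numpy as np

def crr_european_call(S0, K, r, sigma, T, n):
    """CRR binomial price of a European call using a single value vector."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)

    # payoffs at maturity: S0 * u^j * d^(n-j), with j = 0..n up-moves
    j = np.arange(n + 1)
    values = np.maximum(S0 * u**j * d**(n - j) - K, 0.0)

    # backward induction, reusing the same vector at every step
    for _ in range(n):
        values = disc * (p * values[1:] + (1.0 - p) * values[:-1])
    return values[0]

print(crr_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500))
```

The memory footprint is O(n) instead of O(n^2), which is the kind of gain the abstract refers to.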
Abstract:
This article presents a new and computationally efficient method of analysis of a railway track modelled as a continuous beam of 2N spans supported by elastic vertical springs. The main feature of this method is its significant reduction in computational effort with respect to standard matrix methods of structural analysis. In this article, the whole structure is considered to be a repetition of a single unit. The analysis presented is applied to a simple railway track model, i.e. to a repetitive beam supported on vertical springs (sleepers). The proposed method of analysis is based on the general theory of spatially periodic structures. The main feature of this theory is the possibility of applying the Discrete Fourier Transform (DFT) in order to reduce a large system of q(2N + 1) linear stiffness equilibrium equations to a set of 2N + 1 uncoupled systems of q equations each. In this way, a dramatic reduction of the computational effort of solving the large system of equations is achieved. This fact is particularly important in the analysis of railway track structures, in which N is very large (several thousand) and q is very small (q = 2: vertical displacement and rotation). The proposed method allows us to easily obtain the exact solution given by Samartín [1], i.e. the continuous beam railway track response. The comparison between the proposed method and other methods of analysis of railway tracks, such as Lorente de Nó and Zimmermann-Timoshenko, clearly shows the accuracy of the results obtained with the proposed method, even for low values of N. In addition, identical results have been found between the proposed and the Lorente methods, although the proposed method seems simpler to apply and computationally more efficient than the Lorente one. Small but significant differences occur between these two methods and the one developed by Zimmermann-Timoshenko. This article also presents a detailed sensitivity analysis of the vertical displacement of the sleepers. Although standard matrix methods of structural analysis can handle this railway model, one of the objectives of this article is to show the efficiency of the DFT method with respect to standard matrix structural analysis. A comparative analysis between standard matrix structural analysis and the proposed method (DFT), in terms of computational time, input, output and software programming, is carried out. Finally, a URL link to a MatLab computer program listing based on the proposed method is given.
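A brief sketch of the reduction the article exploits, assuming the usual block-circulant form of the stiffness matrix of a structure made of 2N + 1 identical cells with q degrees of freedom each:

\[
\hat{\mathbf K}_k\,\hat{\mathbf u}_k = \hat{\mathbf f}_k, \qquad
\hat{\mathbf K}_k = \sum_{j} \mathbf K_j\, e^{-2\pi i\, jk/(2N+1)}, \qquad k = 0,\dots,2N,
\]

where the \(\mathbf K_j\) are the q x q blocks coupling a cell to its j-th neighbour and the hats denote DFTs over the cell index; each of the 2N + 1 transformed systems is only q x q, which is the source of the computational saving.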
Abstract:
The aim of the present research is to explore new specification techniques for Neural Networks, based on graphs, to be used in the optimization and simplification of network architectures and their computational complexity. We have focused our attention on a well-known class of Neural Networks: the Recursive Neural Networks, also known as Hopfield networks. The general problem of constructing the synaptic matrix associated with a Recursive Neural Network by imposing a given set of vectors as fixed points is far from completely solved; the number of prototype vectors (learning patterns) which can be stored using Hebb's law is rather limited, and the memory quickly reaches saturation if new prototypes are continuously acquired over time. Hebb's law therefore needs to be revised in order to allow new prototypes to be stored at the expense of older ones. Some approaches to this problem have already been developed. We have developed a new way of implementing a Recursive Neural Network in order to solve these problems: the synaptic matrix is obtained by superposing the components of the prototype vectors over the vertices of a graph, which may also be interpreted as a coloring of that graph. When training is finished, the adjacency matrix of the resulting graph, i.e. the matrix of weights, presents certain properties for which such matrices will be called tetrahedral. The energy associated with any possible state of the net is represented as a point (a,b) in R2. All the energy points associated with state vectors at the same Hamming distance from the zero vector lie on the same energy line in R2. The state-vector space can therefore be classified into n classes, one for each of the n possible distances from a state vector to the zero vector. The (n x n) matrix of weights can in turn be reduced to an n-vector of weights; in this way, both the computational time and the memory space required to store the weights are simplified and optimized. In the recall stage, a parameter vector is introduced and used to control the capacity of the net: we prove that the larger its i-th component, the smaller the number of fixed points lying on the i-th energy line. Once the capacity of the net has been controlled by this parameter, we introduce a second parameter, defined as the relative weight vector deviation, in order to markedly reduce the number of spurious states. Throughout the text we develop an example that corroborates the theoretical results; the algorithms are given in pseudocode and have also been implemented with the Mathematica 2.2 package, and are shown in a supplementary volume to the text.
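For context, the standard Hebb prescription whose limited capacity the abstract alludes to (not the modified rule proposed in the thesis) is, for p bipolar prototypes \(\boldsymbol\xi^{\mu} \in \{-1,+1\}^{n}\),

\[
W = \frac{1}{n}\sum_{\mu=1}^{p} \boldsymbol\xi^{\mu}\,(\boldsymbol\xi^{\mu})^{\top}, \qquad W_{ii}=0,
\]

and reliable retrieval is commonly quoted to break down at roughly \(p \approx 0.14\,n\) prototypes, which is the saturation problem the proposed graph-based construction addresses.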
Abstract:
Automatic 2D-to-3D conversion is an important application for filling the gap between the increasing number of 3D displays and the still scant 3D content. However, existing approaches have an excessive computational cost that complicates their practical application. In this paper, a fast automatic 2D-to-3D conversion technique is proposed, which uses a machine learning framework to infer the 3D structure of a query color image from a training database of color and depth images. Assuming that photometrically similar images have analogous 3D structures, a depth map is estimated by searching for the most similar color images in the database and fusing the corresponding depth maps. Large databases are desirable to achieve better results, but the computational cost also increases. A clustering-based hierarchical search, using compact SURF descriptors to characterize the images, is proposed to drastically reduce search times. A significant improvement in computational time has been obtained with respect to other state-of-the-art approaches, while maintaining the quality of the results.
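A minimal sketch of the retrieve-and-fuse step described above, with entirely hypothetical array names and a simple median fusion (the paper's clustering-based hierarchical SURF search is not reproduced here):

```python
import numpy as np

def estimate_depth(query_desc, db_descs, db_depths, k=5):
    """Retrieve the k most similar database images by descriptor distance
    and fuse their depth maps into a depth estimate for the query image."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)   # photometric similarity proxy
    nearest = np.argsort(dists)[:k]                         # k best matches
    return np.median(db_depths[nearest], axis=0)            # simple depth fusion

# toy usage: 100 database images, 64-D descriptors, 120x160 depth maps
rng = np.random.default_rng(0)
db_descs = rng.normal(size=(100, 64))
db_depths = rng.uniform(0.5, 10.0, size=(100, 120, 160))
depth = estimate_depth(db_descs[0] + 0.01, db_descs, db_depths)
print(depth.shape)
```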
Abstract:
Clinicians demand fast and reliable numerical results of cardiovascular biomechanics simulations for their urgent pre-surgery decisions. For many years researchers have worked on different numerical methods in order to win clinicians' confidence in their results. Precise but expensive and time-consuming methodologies, however, create a gap between numerical biomechanics and hospital personnel, while simulation simplifications aimed at reducing computational time may produce unrealistic outcomes. The main objective of the present investigation is to combine the concepts of autoregulation and impedance of the circulatory system, fluid-structure interaction and idealized three-dimensional arterial geometries, without loss of realism, in order to propose a computationally affordable methodology that yields correct and complete results with moderate computation times. Pressure boundary conditions based on prescribed pressure histories are critical and problematic in numerical simulations of the cardiovascular system, in which a specific arterial site is of interest and the rest of the network is neglected but represented by a boundary condition: such histories are difficult to know in detail, and the results are very sensitive to small variations in them. The proposed methodology is a pressure boundary condition that keeps the numerical simplicity of imposing a pressure at the outlets while including more sophisticated concepts, namely autoregulation, to impose the downstream flow demand over the cardiac cycle, and impedance, to represent the effect of the rest of the circulatory system on the modeled arteries. The incorporation of autoregulation and impedance turns the pressure boundary conditions into active and dynamic boundary conditions, which receive feedback from the results during the numerical calculations and compare them with the physiological requirements; the impedance boundary condition, in turn, defines the shape of the pressure history curves applied at the outlets, so that these histories are obtained iteratively as results of the calculation rather than imposed. The proposed method is applied to an idealized geometry of a healthy aortic arch and to an idealized Stanford type A dissection, considering the interaction of the arterial walls with the pulsatile blood flow. The effect of the surrounding tissues is also incorporated and studied in the models. The simulations continue with a fluid-structure interaction analysis of a patient-specific geometry of an elderly individual obtained from a computed tomography scan. Finally, motivated by the statistics on mortality rates in Stanford type B dissection, three models of a fenestrated dissection sac are studied and discussed; applying the developed boundary condition, the author proposes an alternative hypothesis regarding the decrease in mortality rates in patients with fenestrations.
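As a reference for the impedance idea mentioned above, the usual form of an impedance outlet boundary condition relates the outlet pressure to the flow history through a convolution (the thesis's specific autoregulation feedback loop is not reproduced here):

\[
p(t) = \int_{t-T}^{t} z(t-\tau)\, q(\tau)\, d\tau
\quad\Longleftrightarrow\quad
P(\omega) = Z(\omega)\, Q(\omega),
\]

where \(q\) is the volumetric flow leaving the outlet, \(z\) the impedance kernel representing the truncated downstream vasculature, and \(T\) the cardiac period.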
Abstract:
In structural analysis and design, engineering problems frequently arise that require the dynamic analysis of large finite element models, reaching millions of degrees of freedom and involving very large data files. The complexity and size of the analyses grow sharply when parametric studies are required. This problem has traditionally been approached in several ways: first, by increasing the computing power and storage capacity of the computer systems involved in the analysis; second, by simplifying the parametric analyses, reducing their number or level of detail; and finally, by employing methods complementary to finite elements that reduce the number of variables and the execution time while keeping the results close to the actual behaviour of the structure. Following this third option, we propose a reduction method based on a simplified analysis that provides a solution for the dynamic response of the structure in the complex modal subspace using a very small volume of data. In this way, parametric analyses can be performed varying multiple parameters so as to obtain a solution very close to the desired objective. We propose not only variations of local mass, stiffness and damping properties, but also the addition of degrees of freedom to the original structure, for the calculation of both the steady-state and the transient response. Additionally, the simple implementation of the procedure allows exhaustive control over the problem variables and the implementation of improvements such as different ways of obtaining the eigenvalues or the removal of damping limitations of the original structure. The purpose of the method is similar to that of the Guyan method or other model order reduction techniques used in structural dynamics. However, although the method can be used in conjunction with such techniques to obtain the advantages of both, the present procedure does not condense the system of equations; it uses the information of the complete system of equations but processes only the response at the appropriate variables of the points of interest to the analyst. That interest may come from the need to obtain the response of a large structure at specific locations, or from the need to modify the structure at certain positions in order to change its behaviour (acceleration, velocity or displacement response) under dynamic loads. The procedure is therefore particularly suitable for selecting the optimum value of several parameters in large structures (on the order of hundreds of thousands of modes), such as the location of added elements, stiffnesses, masses or viscous damping values, in preliminary studies in which different solutions are devised and optimized and which, for large structures, may require an extremely high number of simulations to reach the optimum solution. After introducing the required tools and developing the procedure, a case study is proposed using the finite element model (FEM) of the MILANO UAV developed by the Instituto Nacional de Técnica Aeroespacial. Due to the addition of a piece of equipment, requirements are imposed on the accelerations at the left wing tip and on the displacements at the right wing tip, in the presence of the lift produced by a continuous sinusoidal wind gust. The proposed modification consists of adding the equipment at the left wing tip, either through a rigid attachment or through a dynamic response reduction system with variable mass, stiffness and damping properties. The study of the results allows the parameters of the attenuation system to be optimized by means of multiple dynamic analyses, so that the requirements imposed with the modification are fulfilled as well as possible. The results are compared with those obtained using a commercial finite element analysis program, showing very close agreement between the complete and the reduced models. The influence of several factors, such as the modal damping of the original structure, the number of modes retained in the truncation or the precision provided by the frequency sweep, is analyzed in detail. Finally, the efficiency of the proposed method in terms of computational time and data volume is assessed in comparison with other approaches. It can therefore be concluded that the proposed method is a useful and efficient option for the parametric analysis of local modifications in large structures.
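For orientation, the classical truncated modal frequency-response expression that this kind of reduction manipulates is sketched below for the proportionally damped case (the thesis works in the complex modal subspace, so its expressions differ in detail):

\[
\mathbf u(\omega) \;\approx\; \sum_{r=1}^{m}
\frac{\boldsymbol\phi_r\,\boldsymbol\phi_r^{\top}\,\mathbf f(\omega)}
{\omega_r^{2}-\omega^{2}+2\,i\,\zeta_r\,\omega_r\,\omega},
\]

where only the m retained modes, and only the rows of \(\boldsymbol\phi_r\) corresponding to the analyst's points of interest, need to be stored, which is what keeps the data volume small.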
Abstract:
In this work we propose an algorithm for the resolution of definite descriptions in Spanish through the structure of the dialogue, by defining an anaphoric accessibility space. The algorithm is based on the hypothesis that anaphora resolution is related to dialogue structure. Thus, anaphora resolution improves if an accessibility space is specified for each type of definite description according to the dialogue structure. Using this anaphoric accessibility space reduces both the processing time and the possibility of selecting an erroneous antecedent. Moreover, the definition of this accessibility space depends only on the textual structure of the dialogue itself.
Abstract:
This paper introduces a novel MILP approach for the design of distillation column sequences for zeotropic mixtures that explicitly includes configurations ranging from conventional to fully thermally coupled sequences, as well as divided wall columns with a single wall. The model is based on the use of two superstructure levels. In the upper level, a superstructure that includes all the basic sequences of separation tasks is postulated. The lower level is an extended tree that explicitly includes the different thermal states and compositions of the feed to a given separation task. In that way, it is possible to optimize a priori all the possible separation tasks involved in the superstructure. A set of logical relationships relates the feasible sequences to the optimized tasks in the extended tree, resulting in a MILP to select the optimal sequence. The performance of the model in terms of robustness and computational time is illustrated with several examples.
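A toy sketch of the kind of logical linking between sequence selection and pre-optimized tasks that the abstract describes, using the open-source pulp package; the sequences, tasks and costs below are invented for illustration and do not come from the paper:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# invented data: each candidate sequence uses a set of separation tasks
# whose costs are assumed to have been optimized beforehand (extended tree)
task_cost = {"A/BC": 10.0, "AB/C": 12.0, "B/C": 6.0, "A/B": 5.0}
sequences = {"direct": ["A/BC", "B/C"], "indirect": ["AB/C", "A/B"]}

prob = LpProblem("sequence_selection", LpMinimize)
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in sequences}   # sequence chosen?
z = {t: LpVariable(f"z_{t}", cat=LpBinary) for t in task_cost}   # task used?

prob += lpSum(task_cost[t] * z[t] for t in task_cost)            # minimize total cost
prob += lpSum(y.values()) == 1                                   # pick exactly one sequence
for s, tasks in sequences.items():
    for t in tasks:
        prob += z[t] >= y[s]                                     # a sequence implies its tasks

prob.solve()
print({s: int(y[s].value()) for s in sequences})
```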
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
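For reference, a minimal NumPy/SciPy sketch of the standard Gaussian-mixture E-step that the multiresolution kd-tree variant accelerates (the kd-tree version evaluates essentially the same responsibilities node-wise on groups of voxels rather than voxel-by-voxel); array names are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, weights, means, covs):
    """Responsibilities of each mixture component for each data point."""
    n, k = X.shape[0], len(weights)
    resp = np.empty((n, k))
    for j in range(k):
        resp[:, j] = weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
    resp /= resp.sum(axis=1, keepdims=True)   # normalize over components
    return resp

# toy usage: 1000 two-dimensional points, 3 components
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
w = np.full(3, 1.0 / 3.0)
mu = rng.normal(size=(3, 2))
cov = [np.eye(2)] * 3
print(e_step(X, w, mu, cov).shape)
```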
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of the mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions which are not within the model database boundaries. Also, due to their static nature, the impact of changing conditions within the mill on the power draw cannot be determined using such models. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is determined by the size of the smallest particle present in the model and by the material properties (stiffness). For small particles the computational time step will be short, whilst for large particles it will be larger. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of the mill whose charge is characterised by different size distributions, while preserving constant charge mass and mill speed. (C) 2004 Elsevier Ltd. All rights reserved.
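The dependence of the critical time step on the smallest particle that the abstract describes is commonly estimated with the Rayleigh time-step formula, quoted here as a standard rule of thumb rather than as the expression used in the paper:

\[
\Delta t_{R} \;=\; \frac{\pi R_{\min}}{0.1631\,\nu + 0.8766}\,\sqrt{\frac{\rho}{G}},
\]

where \(R_{\min}\) is the smallest particle radius, \(\rho\) the particle density, \(G\) the shear modulus and \(\nu\) Poisson's ratio; the stable integration step is usually taken as a fraction of \(\Delta t_{R}\), so halving the smallest particle size roughly halves the admissible time step.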
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
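The metachain estimate described above amounts to pooling the post-burn-in samples of the replicated runs; assuming M runs with n_j retained trees each, the estimated posterior probability of a bipartition b is simply

\[
\hat P(b) \;=\; \frac{1}{\sum_{j=1}^{M} n_j}\,\sum_{j=1}^{M}\sum_{i=1}^{n_j}\mathbf 1\!\left[\,b \in \tau_{ji}\,\right],
\]

where \(\tau_{ji}\) is the i-th sampled tree of run j and \(\mathbf 1[\cdot]\) indicates whether the tree contains the bipartition.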
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution and hence numerical integration must be used to obtain maximum likelihood estimates for the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring an amount of processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computational time has significant implications for the approach preferred by researchers. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women over three waves of the HILDA survey using a GLMM.
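For concreteness, the multinomial logit random effects model referred to above is usually written, with category 1 as the reference and individual-specific random effects \(\mathbf u_i\), as

\[
\Pr(y_{it}=c \mid \mathbf x_{it}, \mathbf u_i) \;=\;
\frac{\exp(\mathbf x_{it}^{\top}\boldsymbol\beta_c + u_{ic})}
{1+\sum_{k=2}^{C}\exp(\mathbf x_{it}^{\top}\boldsymbol\beta_k + u_{ik})},
\qquad c = 2,\dots,C,
\]

with \(\mathbf u_i \sim N(\mathbf 0,\boldsymbol\Sigma)\); the likelihood integrates over \(\mathbf u_i\), which has no closed form and is what adaptive Gaussian quadrature or MCMC must approximate.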
Abstract:
The successful development and optimisation of optically-driven micromachines will be greatly enhanced by the ability to computationally model the optical forces and torques applied to such devices. In principle, this can be done by calculating the light-scattering properties of such devices. However, while fast methods exist for scattering calculations for spheres and axisymmetric particles, optically-driven micromachines will almost always be more geometrically complex. Fortunately, such micromachines will typically possess a high degree of symmetry, typically discrete rotational symmetry. Many current designs for optically-driven micromachines are also mirror-symmetric about a plane. We show how such symmetries can be used to reduce the computational time required by orders of magnitude. Similar improvements are also possible for other highly-symmetric objects such as crystals. We demonstrate the efficacy of such methods by modelling the optical trapping of a cube, and show that even simple shapes can function as optically-driven micromachines.
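A sketch of how the discrete rotational symmetry enters the scattering calculation: if the scatterer has p-fold rotational symmetry about the beam axis, the T-matrix in a vector spherical wave function basis is block-sparse, coupling only azimuthal orders that differ by a multiple of p,

\[
T_{n'm',\,nm} = 0 \quad \text{unless} \quad m' - m \equiv 0 \pmod{p},
\]

so only a fraction of the matrix columns needs to be computed, and mirror symmetry about a plane roughly halves the work again; the exact bookkeeping used in the paper may differ from this generic statement.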