981 results for Simplified Models.
Abstract:
Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip, which has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging/cooling costs and adversely affects the performance and reliability of a computing system. It also reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem. However, most of these studies are based on theoretically idealized assumptions or simplified models. While such models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond these models or assumptions. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the current extensive theoretical research, dealing with increasingly critical thermal problems and enabling the continuous evolution of high performance computing systems.
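As a rough illustration of the kind of reactive policy described above, the following is a minimal sketch, not the dissertation's actual platform code: the processor is represented by a toy lumped thermal model, and the controller drops to a lower DVFS level whenever an assumed peak temperature constraint is reached, returning to a higher level once there is thermal headroom. All constants (temperature limit, frequency levels, thermal parameters) are illustrative assumptions.

```python
# Toy reactive DTM loop: all constants below are illustrative, not the
# dissertation's platform values.
T_PEAK = 80.0                        # assumed peak temperature constraint (deg C)
T_AMB = 40.0                         # ambient temperature (deg C)
FREQS = [1.6, 2.0, 2.4, 2.8]         # assumed available DVFS levels (GHz)

def power(freq_ghz):
    return 5.0 + 6.0 * freq_ghz ** 3     # toy dynamic-power model, roughly ~ f^3

def simulate(steps=5000, dt=0.01, r_th=0.6, tau=2.0):
    temp, level, work = T_AMB, len(FREQS) - 1, 0.0
    for _ in range(steps):
        # reactive policy: throttle at the limit, speed up once there is headroom
        if temp >= T_PEAK and level > 0:
            level -= 1
        elif temp < T_PEAK - 5.0 and level < len(FREQS) - 1:
            level += 1
        freq = FREQS[level]
        # lumped thermal model: dT/dt = (P * R_th - (T - T_amb)) / tau
        temp += dt * (power(freq) * r_th - (temp - T_AMB)) / tau
        work += freq * dt                # throughput proxy: cycles executed
    return round(temp, 1), round(work, 1)

print(simulate())                        # final temperature and accumulated work
```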
Abstract:
Recrystallization processes in marine sediments can alter the extent to which biogenic calcite composition serves as a proxy of oceanic chemical and isotopic history. Models of calcite recrystallization developed to date have resulted in significant insights into these processes, but are not completely adequate to describe the conditions of recrystallization. Marine sediments frequently have concentration gradients in interstitial dissolved calcium, magnesium, and strontium, which have probably evolved during sediment accumulation. Realistic, albeit simplified, models of the temporal evolution of interstitial water profiles of Ca, Mg, and Sr were used with several patterns of recrystallization rate variation to predict the composition of recrystallized inorganic calcite. Comparison of predictions with measured Mg/Ca and Sr/Ca ratios in severely altered calcite samples from several Deep Sea Drilling Project sites demonstrates that models incorporating temporal variation in interstitial water composition more successfully predict observed calcite compositions than do models which rely solely on present-day interstitial water chemistry. Temporal changes in interstitial composition are particularly important in interpreting Mg/Ca ratios in conjunction with Sr/Ca ratios. Estimates of Mg distribution coefficients from previous observations in marine sediments, which are much lower than those from laboratory studies of inorganic calcite, are confirmed by these results. Evaluation of the effects of diagenetic alteration of biogenic calcium carbonate sediment must be a site-specific process, taking into account accumulation history, present interstitial chemistry and its variation in the past, and sample depths and ages.
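The basic partitioning relation underlying such predictions can be sketched as follows: calcite recrystallizing at a given time records the instantaneous pore-water ratio scaled by a distribution coefficient, and the bulk recrystallized composition is that value averaged over the recrystallization-rate history. The distribution coefficient, pore-water ratios and rate pattern below are purely illustrative and are not values from this study.

```python
# (element/Ca)_calcite = D * (element/Ca)_fluid, averaged over the
# recrystallization-rate history; all values below are illustrative.
import numpy as np

def bulk_recrystallized_ratio(times, fluid_ratio, rate, D):
    """Recrystallization-rate-weighted average of D * (element/Ca)_fluid."""
    weights = rate / np.trapz(rate, times)
    return np.trapz(D * fluid_ratio * weights, times)

t = np.linspace(0.0, 20.0, 200)                  # Myr since deposition (illustrative)
sr_ca_fluid = 0.009 * (1.0 + 0.05 * t)           # hypothetical evolving pore-water Sr/Ca
rate = np.exp(-t / 5.0)                          # hypothetical decaying recrystallization rate
print(bulk_recrystallized_ratio(t, sr_ca_fluid, rate, D=0.05))
```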
Abstract:
This paper is aimed at investigating the effect of web openings on the plastic bending behaviour and section moment capacity of a new cold-formed steel beam known as the LiteSteel beam (LSB) using numerical modelling. Different LSB sections with varying circular hole diameters and spacings were considered. A simplified but appropriate numerical modelling technique was developed for the modelling of monosymmetric sections such as LSBs subject to bending, and was used to simulate a series of section moment capacity tests of LSB flexural members with web openings. The buckling and ultimate strength behaviour was investigated in detail, and the modelling technique was further improved through a comparison of numerical and experimental results. This paper describes the simplified finite element modelling technique used in this study, which includes all the significant behavioural effects affecting the plastic bending behaviour and section moment capacity of LSB sections with web holes. Numerical and test results and associated findings are also presented.
Abstract:
In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a large-scale, continuously observing network. To describe these problems, a general linear equation system is presented to unify the various geometry-free, geometry-based and geometry-constrained TCAR models, along with the state transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their floating ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational system models and statistical assumptions. The TCAR performance benefits outlined in data analyses from the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
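As a toy illustration of the simplest integer solution mentioned above, geometry-free rounding, the following sketch estimates and rounds a single-frequency float ambiguity from a carrier-phase and a code observation. It deliberately omits the multi-frequency combinations and the LAMBDA search of the full TCAR framework, and all observation values are fabricated.

```python
# Geometry-free rounding of a single-frequency float ambiguity; observation
# values are fabricated for illustration.
C = 299_792_458.0                 # speed of light (m/s)
LAMBDA_L1 = C / 1575.42e6         # GPS L1 wavelength (~0.19 m)

def rounded_ambiguity(phase_cycles, code_range_m, wavelength=LAMBDA_L1):
    n_float = phase_cycles - code_range_m / wavelength
    return round(n_float), n_float

true_range = 2.0e7                                     # ~20,000 km (illustrative)
phase = true_range / LAMBDA_L1 + 12345.0               # carrier phase (cycles), ambiguity 12345
code = true_range + 0.02                               # pseudorange with a 2 cm error
# Note: real code noise (decimetres to metres) far exceeds the L1 wavelength,
# which is why TCAR cascades through wide-lane combinations rather than
# rounding narrow-lane ambiguities directly.
print(rounded_ambiguity(phase, code))
```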
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are 'hybrid' in nature, in that they are a composition of components whose individual properties may be easily described but whose performance as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described, and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches will be considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis-adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows that statistical efficiency is often the only criterion used to judge an algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how the individual algorithms contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampling scheme, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate in this simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
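The high-dimensional instability of importance sampling referred to above can be illustrated with a small numerical sketch: a standard normal target, a slightly wider normal proposal, and the effective sample size of the normalized weights, which collapses as the dimension grows. The target, proposal and sample sizes are illustrative and are not the settings analysed in the thesis.

```python
# Effective sample size of importance weights as dimension grows; target and
# proposal are illustrative choices.
import numpy as np

def effective_sample_size(d, n=5000, proposal_sd=1.5, rng=np.random.default_rng(0)):
    x = rng.normal(0.0, proposal_sd, size=(n, d))
    # log weight = log N(x; 0, I) - log N(x; 0, sd^2 I); the (2*pi) terms cancel
    log_w = -0.5 * np.sum(x ** 2, axis=1) \
            - (-0.5 * np.sum((x / proposal_sd) ** 2, axis=1) - d * np.log(proposal_sd))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)          # collapses towards 1 as d increases

for d in (1, 5, 20, 50):
    print(d, round(effective_sample_size(d), 1))
```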
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption have been limited to initially uniform populations and have ignored spatial variations. By ignoring spatial variations, these previous studies were greatly simplified by the translational invariance of the lattice. These earlier corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells, which are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion-wave-type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
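A minimal sketch of the kind of comparison described above is given below: direct simulation of a one-dimensional exclusion process with proliferation is set against the traditional mean-field update that invokes the independence assumption. The rates, lattice size and initial condition are illustrative, and the corrected pair-correlation models themselves are not reproduced.

```python
# 1D exclusion process with proliferation vs. the traditional (independence-
# assumption) mean-field update; rates and lattice size are illustrative.
import numpy as np

L, P_MOVE, P_BIRTH, STEPS, REPS = 200, 1.0, 0.05, 200, 100
rng = np.random.default_rng(1)

def simulate():
    occ = np.zeros(L, dtype=bool)
    occ[90:110] = True                           # occupied central block
    for _ in range(STEPS):
        for i in rng.permutation(np.flatnonzero(occ)):
            if rng.random() < P_MOVE:            # attempted move, aborted if target occupied
                j = (i + rng.choice((-1, 1))) % L
                if not occ[j]:
                    occ[i], occ[j] = False, True
                    i = j
            if rng.random() < P_BIRTH:           # attempted birth onto a random neighbour
                j = (i + rng.choice((-1, 1))) % L
                if not occ[j]:
                    occ[j] = True
    return occ

def mean_field():
    c = np.zeros(L)
    c[90:110] = 1.0
    for _ in range(STEPS):
        left, right = np.roll(c, 1), np.roll(c, -1)
        motility = 0.5 * P_MOVE * (left + right - 2 * c)   # exclusion cancels for unbiased motion
        birth = 0.5 * P_BIRTH * (left + right) * (1 - c)   # independence assumption enters here
        c = c + motility + birth
    return c

avg = np.mean([simulate() for _ in range(REPS)], axis=0)
print("simulated density :", avg[95:105].round(2))
print("mean-field density:", mean_field()[95:105].round(2))
```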
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on the Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are first recommended to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full scale fire tests and finite element analyses. This paper presents the details of this study, including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
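The two-step structure of the proposed method can be sketched as follows: an idealised hot-flange time-temperature profile for a wall configuration is combined with a critical hot-flange temperature for the applied load ratio, and the failure time is the time at which the profile reaches that critical temperature. The two functions used below are hypothetical stand-ins, not the fitted equations developed in the paper.

```python
# Hypothetical stand-ins for the paper's two sets of equations; values are
# illustrative only.
def hot_flange_temperature(t_minutes):
    # hypothetical idealised profile: 20 C ambient rising at 8 C/min
    return 20.0 + 8.0 * t_minutes

def critical_hot_flange_temperature(load_ratio):
    # hypothetical relation: higher load ratios fail at lower temperatures
    return 700.0 - 500.0 * load_ratio

def failure_time(load_ratio, dt=0.1):
    t, t_crit = 0.0, critical_hot_flange_temperature(load_ratio)
    while hot_flange_temperature(t) < t_crit:
        t += dt
    return t

print(failure_time(0.4))   # fire resistance rating (minutes) for a 0.4 load ratio
```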
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant approach to developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
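The Kalman-smoothing step used to condition a state-space prior on data can be illustrated with a scalar sketch. The full emulator described above additionally models the innovation terms with Gaussian processes over model parameters and inputs, which is not reproduced here, and all numbers below are illustrative.

```python
# Scalar Kalman filter plus Rauch-Tung-Striebel smoother for an AR(1) state
# observed with noise; parameter values are illustrative.
import numpy as np

def kalman_smoother(y, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    n = len(y)
    m_f, p_f = np.zeros(n), np.zeros(n)          # filtered means and variances
    m, p = m0, p0
    for t in range(n):
        m, p = a * m, a * a * p + q              # predict
        k = p / (p + r)                          # Kalman gain
        m, p = m + k * (y[t] - m), (1 - k) * p   # update
        m_f[t], p_f[t] = m, p
    m_s, p_s = m_f.copy(), p_f.copy()            # backward (RTS) smoothing pass
    for t in range(n - 2, -1, -1):
        p_pred = a * a * p_f[t] + q
        g = a * p_f[t] / p_pred
        m_s[t] += g * (m_s[t + 1] - a * m_f[t])
        p_s[t] += g * g * (p_s[t + 1] - p_pred)
    return m_s, p_s

rng = np.random.default_rng(0)
truth = 0.3 * np.cumsum(rng.normal(size=50))     # illustrative latent trajectory
y = truth + rng.normal(scale=0.7, size=50)       # noisy "design data"
means, variances = kalman_smoother(y)
print(np.round(means[:5], 2))
```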
Abstract:
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon the material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays, the transit time of each ray being defined by the proportions of bone and marrow through which it propagates. A transit time spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receiving ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared to predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both the position and amplitude of the spectral peaks. Solid volume fraction was calculated from the TTS; agreement of the true (geometric calculation) values with predicted (computer simulation) and experimentally derived values was R² = 99.9% and R² = 97.3%, respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment.
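The deconvolution step can be sketched as follows: under the parallel sonic-ray concept the received signal is the input pulse convolved with the transit time spectrum, so a regularised frequency-domain division of the output by the input recovers the TTS. The pulse, the two-peak test spectrum and the regularisation constant below are illustrative, not the experimental signals used in the study.

```python
# FFT-based deconvolution of a synthetic received signal; all signals are
# illustrative, not experimental data.
import numpy as np

n, dt = 1024, 1e-8                       # samples and sampling interval (10 ns)
t = np.arange(n) * dt
pulse = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 2e-6) ** 2) / (5e-7) ** 2)

tts_true = np.zeros(n)                   # two illustrative transit-time peaks
tts_true[300], tts_true[420] = 0.7, 0.3

received = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(tts_true)))

def deconvolve(output, inp, eps=1e-3):
    inp_f, out_f = np.fft.fft(inp), np.fft.fft(output)
    # Tikhonov-style regularised division to suppress noise amplification
    return np.real(np.fft.ifft(out_f * np.conj(inp_f) / (np.abs(inp_f) ** 2 + eps)))

tts_est = deconvolve(received, pulse)
print(np.argsort(tts_est)[-2:])          # indices of the two recovered peaks
```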
Abstract:
Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles might occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means to reconcile models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
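A simple sketch of how volume exclusion alters diffusive transport: tagged agents on a periodic two-dimensional lattice attempt nearest-neighbour moves that are aborted if the target site is occupied, and their mean-squared displacement is compared across crowding densities. Lattice size, densities and step counts are illustrative.

```python
# Tagged-particle MSD on a periodic 2D lattice with exclusion, at low and high
# crowding; all parameters are illustrative.
import numpy as np

def msd_at_density(density, size=50, steps=400, rng=np.random.default_rng(2)):
    n = int(density * size * size)
    flat = rng.choice(size * size, n, replace=False)
    pos = np.stack(np.unravel_index(flat, (size, size)), axis=1)   # n x 2 coordinates
    occupied = set(map(tuple, pos))
    disp = np.zeros((n, 2))
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for _ in range(steps):
        for i in rng.permutation(n):
            step = moves[rng.integers(4)]
            new = (pos[i] + step) % size
            if tuple(new) not in occupied:       # volume exclusion: abort if occupied
                occupied.remove(tuple(pos[i]))
                occupied.add(tuple(new))
                pos[i] = new
                disp[i] += step                  # unwrapped displacement
    return float(np.mean(np.sum(disp ** 2, axis=1)))

for rho in (0.05, 0.5):
    print(f"density {rho}: mean-squared displacement = {msd_at_density(rho):.1f}")
```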
Abstract:
Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the use of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and serving as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that, in principle, the patterns produced in the model could also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions: the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time lead to a subsequent reduction in speed. Two simple mathematical models show that patterns in this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for the adaptive behavior of parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or stay, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because they trade a reduction in mean fitness for a reduced variance in fitness, and fitness is multiplicative across generations and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population-genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies, conservative versus diversifying and within- versus between-generation bet hedging. In addition, this model shows that these divisions are in fact false dichotomies.
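The multiplicative-fitness argument behind bet hedging can be made concrete with a short numerical sketch: a strategy with a lower arithmetic-mean fitness but no variance attains a higher geometric-mean (long-run) growth rate than a higher-mean, high-variance competitor. The fitness values are illustrative, not taken from the thesis.

```python
# Arithmetic vs geometric mean fitness for two illustrative strategies.
import numpy as np

rng = np.random.default_rng(3)
generations = 10_000

# "risky" strategy: mean fitness ~1.1, but large variance between generations
risky = rng.choice([0.5, 1.7], size=generations)
# "bet-hedging" strategy: lower mean fitness (1.05), but no variance
hedger = np.full(generations, 1.05)

for name, w in (("risky", risky), ("bet-hedging", hedger)):
    print(name,
          "arithmetic mean:", round(w.mean(), 3),
          "geometric mean:", round(float(np.exp(np.mean(np.log(w)))), 3))
```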
Abstract:
Starting with the Levinthal paradox, a brief introduction to the protein folding problem is presented. The existing theories of protein folding, including the folding funnel scenario, are discussed. After briefly reviewing different simulation studies of model proteins, we discuss our recent work on the dynamics of folding of the model HP-36 (the chicken villin headpiece) protein using a simplified hydropathy scale. Special attention has been paid to the statics and dynamics of contact formation among the hydrophobic residues. The results obtained from this simple model appear to be surprisingly similar to several features observed in the folding of real proteins. The account concludes with a discussion of future problems.
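The contact-formation bookkeeping mentioned above can be sketched as follows: residues are classified as hydrophobic or polar via a crude binary hydropathy assignment, and hydrophobic contacts are counted between residues that are close in space but separated in sequence. The sequence fragment, coordinates and distance cutoff are illustrative and are not taken from the HP-36 study.

```python
# Counting hydrophobic contacts in a toy conformation; the classification,
# sequence and cutoff are illustrative.
import numpy as np

HYDROPHOBIC = set("AVLIMFWC")               # crude binary hydropathy classification

def hydrophobic_contacts(sequence, coords, cutoff=7.5, min_separation=3):
    contacts = 0
    for i in range(len(sequence)):
        for j in range(i + min_separation, len(sequence)):
            if sequence[i] in HYDROPHOBIC and sequence[j] in HYDROPHOBIC:
                if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                    contacts += 1
    return contacts

# Illustrative 10-residue fragment with random "C-alpha" coordinates
rng = np.random.default_rng(4)
seq = "MLSDEDFKAV"
xyz = np.cumsum(rng.normal(scale=2.0, size=(len(seq), 3)), axis=0)
print(hydrophobic_contacts(seq, xyz))
```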