638 results for Analytical models
Abstract:
In order to describe the total mineralogical diversity within primitive extraterrestrial materials, individual interplanetary dust particles (IDPs) collected from the stratosphere as part of the JSC Cosmic Dust Curatorial Program were analyzed using a var ...
Abstract:
CI chondrites are used pervasively in the meteorite literature as a cosmochemical reference point for bulk compositions [1], isotope analyses [2] and, within certain models of meteorite evolution, as an important component of an alteration sequence within the carbonaceous chondrite subset [3]. More recently, the chemical variability of CI chondrite matrices (which comprise >80% of the meteorite) has been cited in discussions about the "chondritic" nature of spectroscopic data from the P/comet Halley missions [4] and of chemical data from related materials such as interplanetary dust particles [5]. Most CI chondrites have been studied as bulk samples (e.g. major and trace element abundances), and considerable effort has also been focussed on accessory phases such as magnetites, olivine, sulphates and carbonates [6-8]. A number of early studies showed that the primary constituents of CI matrices are layer silicates, and the most definitive structural study on powdered samples identified two minerals: montmorillonite and serpentine [9]. In many cases, as with the study by Bass [9], the relative scarcity of most CI chondrites restricts such bulk analyses to the Orgueil meteorite. The electron microprobe/SEM has been used on petrographic sections to more precisely define the "bulk" composition of at least four CI matrices [3], and as recently summarised by McSween [3], these data define a compositional trend quite different to that obtained for CM chondrite matrices. These "defocussed-beam" microprobe analyses average major element compositions over matrix regions ~100 µm in diameter and provide only an approximation to silicate mineral composition(s), because the grain sizes are much less than the diameter of the beam. In order to (a) more precisely define the major element compositions of individual mineral grains within CI matrices, and (b) complement previous TEM studies [11,12], we have undertaken an analytical electron microscopy (AEM) study of Alais and Orgueil matrices.
Abstract:
A mineralogical survey of chondritic interplanetary dust particles (IDPs) showed that these micrometeorites differ significantly in form and texture from components of carbonaceous chondrites and contain some mineral assemblages which do not occur in any meteorite class [1]. Models of chondritic IDP mineral evolution generally ignore the typical (ultra-)fine grain size of constituent minerals, which ranges between 0.002 and 0.1 µm [2]. The chondritic porous (CP) subset of chondritic IDPs is probably debris from short-period comets, although evidence for a cometary origin is still circumstantial [3]. If CP IDPs represent dust from regions of the Solar System in which comet accretion occurred, it can be argued that pervasive mineralogical evolution of IDP dust has been arrested due to cryogenic storage in comet nuclei. Thus, preservation in CP IDPs of "unusual meteorite minerals", such as oxides of tin, bismuth and titanium [4], should not be dismissed casually. These minerals may contain specific information about processes that occurred in regions of the solar nebula, and early Solar System, which spawned the IDP parent bodies such as comets and C, P and D asteroids [6]. It is not fully appreciated that the apparent disparity between the mineralogy of CP IDPs and carbonaceous chondrite matrix may also be caused by the choice of electron-beam techniques with different analytical resolution. For example, Mg-Si-Fe distributions of CI matrix obtained by "defocussed-beam" microprobe analyses are displaced towards lower Fe values when using analytical electron microscope (AEM) data, which resolve individual mineral grains of various layer silicates and magnetite in the same matrix [6,7]. In general, "unusual meteorite minerals" in chondritic IDPs, such as metallic titanium, TinO2n-1 (Magnéli phases) and anatase [8], add to the mineral database of fine-grained Solar System materials and provide constraints on processes that occurred in the early Solar System.
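The analytical-resolution effect described above can be made concrete with a minimal sketch: a broad ("defocussed") beam averages an Fe-poor layer silicate with Fe-rich magnetite, while AEM resolves each grain separately, so the averaged point plots at higher Fe on a Mg-Si-Fe ternary diagram. All compositions below are hypothetical illustrations, not data from the cited studies.

```python
def ternary(mg, si, fe):
    """Normalise Mg, Si, Fe atomic abundances to ternary fractions summing to 1."""
    total = mg + si + fe
    return (mg / total, si / total, fe / total)

# Hypothetical atomic abundances (arbitrary units):
silicate  = (1.5, 1.8, 0.6)   # an Fe-poor layer silicate grain
magnetite = (0.0, 0.0, 3.0)   # Fe3O4 contributes only Fe among the three

# AEM resolves the silicate grain alone; a broad beam averages both phases.
aem_point  = ternary(*silicate)
mixed      = tuple(a + b for a, b in zip(silicate, magnetite))
beam_point = ternary(*mixed)

# beam_point is displaced towards higher Fe (and lower Mg, Si) than aem_point.
```

This is why grain-resolving AEM data for the same matrix plot at lower Fe values than broad-beam microprobe averages.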
Abstract:
In this article, we analyze the stability and the associated bifurcations of several types of pulse solutions in a singularly perturbed three-component reaction-diffusion equation that has its origin as a model for gas discharge dynamics. Due to the richness and complexity of the dynamics generated by this model, it has in recent years become a paradigm model for the study of pulse interactions. A mathematical analysis of pulse interactions is based on detailed information on the existence and stability of isolated pulse solutions. The existence of these isolated pulse solutions was established in previous work. Here, the pulse solutions are studied via an Evans function associated with the linearized stability problem. Evans functions for stability problems in singularly perturbed reaction-diffusion models can be decomposed into a fast and a slow component, and their zeroes can be determined explicitly by the NLEP method. In the context of the present model, we have extended the NLEP method so that it can be applied to multi-pulse and multi-front solutions of singularly perturbed reaction-diffusion equations with more than one slow component. The bulk of this article is devoted to the analysis of the stability characteristics and the bifurcations of the pulse solutions. Our methods enable us to obtain explicit, analytical information on the various types of bifurcations, such as saddle-node bifurcations, Hopf bifurcations in which breathing pulse solutions are created, and bifurcations into travelling pulse solutions, which can be both subcritical and supercritical.
Abstract:
Developers and policy makers are consistently at odds over the debate as to whether impact fees increase house prices. This debate continues despite the extensive body of theoretical and empirical international literature that discusses the passing on to home buyers of impact fees, and the corresponding increase in housing prices. In attempting to quantify this impact, over a dozen empirical studies have been carried out in the US and Canada since the 1980s. However, the methodologies used vary greatly, as do the results. Despite similar infrastructure funding policies in numerous developed countries, no such empirical works exist outside of the US/Canada. The purpose of this research is to analyse the existing econometric models in order to identify, compare and contrast the theoretical bases, methodologies, key assumptions and findings of each. This research will assist in identifying whether further model development is required and/or whether any of these models have external validity and are readily transferable outside of the US. The findings conclude that there is very little explicit rationale behind the various model selections and that significant model deficiencies appear still to exist.
Abstract:
The emergence of highly chloroquine (CQ)-resistant P. vivax in Southeast Asia has created an urgent need for an improved understanding of the mechanisms of drug resistance in these parasites, the development of robust tools for defining the spread of resistance, and the discovery of new antimalarial agents. The ex vivo Schizont Maturation Test (SMT), originally developed for the study of P. falciparum, has been modified for P. vivax. We retrospectively analysed the results from 760 parasite isolates assessed by the modified SMT to investigate the relationship between parasite growth dynamics and parasite susceptibility to antimalarial drugs. Previous observations of the stage-specific activity of CQ against P. vivax were confirmed, and shown to have profound consequences for interpretation of the assay. Using a nonlinear model, we show that increased duration of the assay and a higher proportion of ring stages in the initial blood sample were associated with decreased effective concentration (EC50) values of CQ, and we identify a threshold beyond which these associations no longer hold. Thus, the starting composition of parasites in the SMT and the duration of the assay can have a profound effect on the calculated EC50 for CQ. Our findings indicate that EC50 values from assays with a duration of less than 34 hours do not truly reflect the sensitivity of the parasite to CQ; the same is true of assays in which the proportion of ring-stage parasites at the start does not exceed 66%. Application of this threshold modelling approach suggests that similar issues may occur for susceptibility testing of amodiaquine and mefloquine. The statistical methodology which has been developed also provides a novel means of detecting stage-specific drug activity for new antimalarials.
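To make the EC50 quantity concrete: it is the drug concentration at which the measured response (here, schizont maturation relative to a drug-free control) falls to 50%. The study fits a full nonlinear model; the sketch below only illustrates the idea with log-linear interpolation, and all concentrations and responses are hypothetical.

```python
import math

def ec50_from_curve(concs, survival):
    """Estimate EC50 by log-linear interpolation: the concentration at which
    the response crosses 50% of the drug-free control."""
    pairs = list(zip(concs, survival))
    for (c0, s0), (c1, s1) in zip(pairs, pairs[1:]):
        if s0 >= 0.5 >= s1:
            t = (s0 - 0.5) / (s0 - s1)          # fraction of the interval
            # interpolate in log-concentration space
            return math.exp(math.log(c0) + t * (math.log(c1) - math.log(c0)))
    raise ValueError("response never crosses 50% of control")

# Hypothetical dose-response data (concentrations in nM):
concs    = [1, 10, 100, 1000]
survival = [0.95, 0.80, 0.30, 0.05]   # maturation relative to control
ec50 = ec50_from_curve(concs, survival)   # ~40 nM for these numbers
```

A shift in assay duration or starting stage composition moves the whole response curve, and hence the interpolated EC50, which is the sensitivity issue the abstract quantifies.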
Abstract:
A synthesis is presented of the predictive capability of a family of near-wall wall-normal-free Reynolds stress models (which are completely independent of wall topology, i.e., of the distance from the wall and the normal-to-the-wall orientation) for oblique-shock-wave/turbulent-boundary-layer interactions. For the purpose of comparison, results are also presented using a standard low-turbulence-Reynolds-number k–ε closure and a Reynolds stress model that uses geometric wall normals and wall distances. Studied shock-wave Mach numbers are in the range MSW = 2.85–2.9 and incoming boundary-layer-thickness Reynolds numbers are in the range Reδ0 = 1–2×10⁶. Computations were carefully checked for grid convergence. Comparison with measurements shows satisfactory agreement, improving on results obtained using a k–ε model, and highlights the relative importance of redistribution and diffusion closures, indicating directions for future modeling work.
Abstract:
The use of Bayesian methodologies for solving optimal experimental design problems has increased. Many of these methods have been found to be computationally intensive for design problems that require a large number of design points. A simulation-based approach that can be used to solve optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables is presented. The approach involves the use of lower dimensional parameterisations that consist of a few design variables, which generate multiple design points. Using this approach, one simply has to search over a few design variables, rather than searching over a large number of optimal design points, thus providing substantial computational savings. The methodologies are demonstrated on four applications, including the selection of sampling times for pharmacokinetic and heat transfer studies, and involve nonlinear models. Several Bayesian design criteria are also compared and contrasted, as well as several different lower dimensional parameterisation schemes for generating the many design points.
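One hypothetical instance of such a lower-dimensional parameterisation (not necessarily one used in the paper): a geometric schedule in which just two design variables, the first sampling time and a spacing ratio, generate any number of sampling times, so the optimiser searches over two variables instead of n separate design points.

```python
def geometric_schedule(t_first, ratio, n):
    """Two design variables generate n sampling times:
    t_i = t_first * ratio**i, for i = 0..n-1."""
    return [t_first * ratio**i for i in range(n)]

# Searching over (t_first, ratio) replaces searching over 8 separate times.
times = geometric_schedule(0.25, 1.8, 8)
```

An optimal design search would then evaluate the Bayesian design criterion at each candidate (t_first, ratio) pair, a two-dimensional problem regardless of how many sampling times are required.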
Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
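The tutorial itself writes its models in Java and generates tests with ModelJUnit; purely to illustrate the concepts, a toy EFSM with extended state, a guarded action, and a random-walk test generator can be sketched as follows (all names are hypothetical, and this is not ModelJUnit's API).

```python
import random

class SwitchCounter:
    """Toy EFSM: the extended state is (on, count); increment is guarded."""
    def __init__(self):
        self.on = False
        self.count = 0

    def power(self):
        self.on = not self.on
        return True                 # always enabled

    def increment(self):
        if not self.on:             # guard: only counts while switched on
            return False
        self.count += 1
        return True

def random_walk(model, steps, seed=0):
    """Minimal stochastic test generation: choose actions at random,
    record which were enabled, and check an invariant at every step."""
    rng = random.Random(seed)
    trace = []
    for _ in range(steps):
        action = rng.choice([model.power, model.increment])
        trace.append((action.__name__, action()))
        assert model.count >= 0     # the model invariant acts as the oracle
    return trace
```

In online testing the chosen actions would be executed against the system under test and its responses compared with the model; in offline testing the trace is saved and replayed later.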
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crashes and their contributing factors. For instance, distracted and impaired driving account for a significant proportion of crash occurrence, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant, and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and 'apparent' random influences that reflect largely behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that such a model represents a more realistic depiction of reality than state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
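The postulated three-process decomposition can be illustrated by simulation. Assuming, purely hypothetically, that each site's crash rate is the sum of a fixed network-feature component plus site-specific spatial and behavioral components, the resulting counts are overdispersed relative to a single Poisson process, which is exactly the variability a one-process NB model absorbs into a single dispersion parameter rather than explaining.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
counts = []
for _ in range(5000):                       # 5000 hypothetical sites
    network    = 2.0                        # observed network features
    spatial    = rng.uniform(0.0, 2.0)      # unobserved spatial effect
    behavioral = rng.expovariate(1.0)       # 'apparent' random influences
    counts.append(poisson(network + spatial + behavioral, rng))

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# var > mean: mixing three processes produces overdispersed counts.
```

A model that estimates the three components separately could attribute the excess variance to its sources; the single-process NB fit sees only the aggregate.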
Abstract:
Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or with generating units having many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can pose difficulties for the assessment of such systems in many studies. A clustering procedure using the nearest-centroid sorting method was previously applied to the IEEE-RTS load model, and proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method to include the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load and loss of energy expectations (LOLE, LOEE) are used as indicators to evaluate the application.