967 results for Equations of state (EoS) models
Abstract:
Pennsylvania Department of Transportation, Harrisburg
Abstract:
Mode of access: Internet.
Abstract:
Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. Their primary drawback is sensitivity to initialisation: accurate and robust results often require initialisation close to the true object in the image, and automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are thin layers of elastic material that cover the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterises the progression of osteoarthritis. The state of the art in cartilage segmentation is 2D semi-automated algorithms, which require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that allows us to automatically provide a good initialisation for deformable models of the patella cartilage, by utilising the strong spatial relationship of the cartilage to the underlying bone.
Abstract:
The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We combine dynamic switching (modelled by a hidden Markov model) with a linear dynamical system in a hybrid switching state space model (SSSM), and discuss the practical details of training such models with the variational EM algorithm of [Ghahramani and Hinton, 1998]. The performance of the SSSM is evaluated on several financial data sets and the model is shown to improve on a number of existing benchmark methods.
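The generative structure described above can be made concrete with a minimal sketch: a hidden Markov chain selects the active regime, and each regime drives its own linear-Gaussian dynamics. All parameter values below are illustrative assumptions, not taken from the paper, and the state is kept scalar for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: two regimes, each an AR(1) linear dynamical
# system, with a hidden Markov chain switching between them.
P = np.array([[0.98, 0.02],   # regime transition probabilities
              [0.03, 0.97]])
A = [0.99, 0.90]              # state dynamics per regime
Q = [0.02, 0.20]              # process noise variance per regime
R = 0.05                      # observation noise variance

T = 500
s = np.zeros(T, dtype=int)    # hidden regime path
x = np.zeros(T)               # latent state
y = np.zeros(T)               # observed series

for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    x[t] = A[s[t]] * x[t - 1] + np.sqrt(Q[s[t]]) * rng.normal()
    y[t] = x[t] + np.sqrt(R) * rng.normal()
```

Exact inference in this model is intractable because the regime and state posteriors couple; the variational EM algorithm cited above breaks this coupling by alternating between an HMM-style pass over the discrete regimes and Kalman smoothing over the continuous state.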
Abstract:
This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split into two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
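The single-time-step emulation described in the second part can be sketched as follows: run the Lorenz 63 simulator at a set of design points, fit a Gaussian process to the one-step map, then apply the emulator to its own output for sequential prediction. This is a minimal sketch assuming an RBF kernel, a zero mean function, and a forward-Euler simulator step; the kernel length-scale, design and step size are illustrative choices, not those of the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 63 simulator."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z),
                                  x * y - beta * z])

def rbf(XA, XB, ls=5.0):
    """Squared-exponential covariance between two sets of points."""
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Design: evaluate the simulator at 200 input points.
X = rng.uniform([-20.0, -25.0, 0.0], [20.0, 25.0, 50.0], size=(200, 3))
Y = np.array([lorenz63_step(x) for x in X])

# Zero-mean GP emulator of the one-step map, one output per dimension.
K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, Y)

def emulate(state):
    """Posterior mean prediction of the next state."""
    return (rbf(state[None, :], X) @ alpha)[0]

# Sequential prediction: repeatedly feed the emulator its own output.
em_state = np.array([1.0, 1.0, 25.0])
sim_state = em_state.copy()
for _ in range(100):
    em_state = emulate(em_state)
    sim_state = lorenz63_step(sim_state)
print("emulator:", em_state, "simulator:", sim_state)
```

Because Lorenz 63 is chaotic, small emulation errors compound under repeated application, which is why the report assesses the emulator against the exact simulator over a number of runs.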
Abstract:
The equation of state for dense fluids has been derived within the framework of the Sutherland and Katz potential models. The equation quantitatively agrees with experimental data on the isothermal compression of water when extrapolated into the high-pressure region. It establishes an explicit relationship between the thermodynamic experimental data and the effective parameters of the molecular potential.
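The abstract does not reproduce the derived equation itself, but the Sutherland model it names has a standard form: a hard core of diameter \(\sigma\) with an attractive inverse-power tail of depth \(\varepsilon\),

```latex
U(r) =
\begin{cases}
\infty, & r < \sigma, \\
-\varepsilon \left( \sigma / r \right)^{n}, & r \ge \sigma,
\end{cases}
```

where the exponent \(n\) (classically \(n = 6\) for dispersion forces) controls the range of the attraction. The claimed explicit relationship is then between measured \(p\)-\(V\)-\(T\) isotherms and the effective parameters \(\sigma\), \(\varepsilon\) and \(n\).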
Abstract:
The purpose of this inquiry was to investigate the impact of a large, urban school district's experience in implementing a mandated school improvement plan and to examine how that plan was perceived, interpreted, and executed by those charged with the task. The research addressed the following questions: First, by whom was the district implementation plan designed, and what factors were considered in its construction? Second, what impact did the district implementation plan have on those charged with its implementation? Third, what impact did the district plan have on the teaching and learning practices of a particular school? Fourth, what aspects of the implementation plan were perceived as most and least helpful by school personnel in achieving stated goals? Last, what were the intended and unintended consequences of an externally mandated and directed plan for improving student achievement? The implementation process was measured against Fullan's model as expounded upon in The Meaning of Educational Change (1982) and The New Meaning of Educational Change (1990). The Banya implementation model (1993) was also considered, because it added a dimension not adequately addressed by Fullan. A case study was used as the methodological framework of this qualitative study. Sources of data used in this inquiry included document analysis, participant observations in situ, follow-up interviews, the "long" interview, and triangulation. The study was conducted over a twelve-month period. Findings were obtained from the content analysis of interview transcripts of multiple participants. Results were described and interpreted using the Fullan and Banya models as the descriptive framework. A cross-case comparison of the multiple perspectives of the same phenomena by various participants was constructed. The study concluded that the school district's implementation plan to improve student achievement was closely aligned to Fullan's model, although not intentionally. The research also showed that where there was common understanding at all levels of the organization as to the expectations for teachers, the level of support to be provided, and the availability of resources, successful implementation occurred. The areas where successful implementation did not occur were those where the complexity of the changes was underestimated and processes for dealing with unintended consequences were not considered or adequately addressed. The unique perspectives of the various participants, from the superintendent to the classroom teacher, are described. Finally, recommendations for enhancement of implementation are offered and possible topics for further research studies are postulated.
Abstract:
The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined together to achieve a behavioral model of the whole system. Previously proposed high frequency (HF) models of power converters are based on circuit models that are only related to the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain appropriate physics-based models for power conversion systems, which not only can represent the steady state behavior of the components, but also can predict their high frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms and circuit topologies [7]. One application of the developed modeling technique is the design of new sets of topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best matching topology, with an inherent reduction of switching losses that can be utilized to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as decreased size and weight of the package, optimized interactions with neighboring components and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions as well as the design of attenuation measures and enclosures.
Abstract:
We propose a mathematically well-founded approach for locating the source (initial state) of density functions evolved within a nonlinear reaction-diffusion model. The reconstruction of the initial source is an ill-posed inverse problem, since the solution is highly unstable with respect to measurement noise. To address this instability, we introduce a regularization procedure based on the nonlinear Landweber method for the stable determination of the source location. This amounts to solving a sequence of well-posed forward reaction-diffusion problems. The developed framework is general, and as a special instance we consider the problem of source localization of brain tumors. We show numerically that the sources of the initial tumor cell densities are reconstructed well on imaging data consisting of both simple and complex geometric structures.
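The nonlinear Landweber method referenced here has, in its standard form, the fixed-point iteration

```latex
u_{k+1} = u_k + \omega \, F'(u_k)^{*}\left( y^{\delta} - F(u_k) \right),
```

where \(F\) maps a candidate initial source \(u\) to the observed (noisy) density data \(y^{\delta}\), \(F'(u_k)^{*}\) is the adjoint of the linearised forward map, and \(\omega\) is a step size. Each iteration requires only well-posed forward solves, matching the description above, and regularisation comes from stopping the iteration early, typically once the residual falls below \(\tau\delta\) for a noise level \(\delta\) (the discrepancy principle).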
Abstract:
This chapter discusses the growth and nature of community enterprise and in particular the sub-set of asset-based community development trusts and reviews their contribution to urban regeneration in Britain. Three models are presented and illustrated with case studies.
Abstract:
An investigation into karst hazard in southern Ontario has been undertaken with the intention of leading to the development of predictive karst models for this region. The reason these are not currently feasible is a lack of sufficient karst data, though this is not entirely due to a lack of karst features. Geophysical data was collected at Lake on the Mountain, Ontario as part of this karst investigation, in order to test the long-standing hypothesis that Lake on the Mountain was formed by a sinkhole collapse. Sub-bottom acoustic profiling data was collected in order to image the lake-bottom sediments and bedrock. Vertical bedrock features interpreted as solutionally enlarged fractures were taken as evidence for karst processes on the lake bottom. Additionally, the bedrock topography shows a narrower and more elongated basin than was previously identified, lying parallel to a mapped fault system in the area. This suggests that Lake on the Mountain formed over a fault zone, which also supports the sinkhole hypothesis, as a fault zone would provide groundwater pathways for karst dissolution to occur. Previous sediment cores suggest that Lake on the Mountain formed at some point during the Wisconsinan glaciation, with glacial meltwater and glacial loading as potential contributing factors to sinkhole development. A probabilistic karst model for the state of Kentucky, USA, has been generated using the Weights of Evidence method. This model is presented as an example of the predictive capabilities of this kind of data-driven modelling technique and to show how such models could be applied to karst in Ontario. The model was able to classify 70% of the validation dataset correctly while minimizing false positive identifications; this is moderately successful and could stand to be improved. Finally, improvements to the current karst model of southern Ontario are suggested, with the goal of increasing investigation into karst in Ontario and streamlining the reporting system for sinkholes, caves, and other karst features so as to improve the current Ontario karst database.
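As background to the Kentucky model, the Weights of Evidence method scores each binary evidence layer (e.g. soluble bedrock) by how strongly its presence and absence are associated with known occurrences (e.g. mapped sinkholes). The sketch below uses the standard formulas on a hypothetical gridded study area; the layer, counts and probabilities are illustrative, not the thesis data.

```python
import numpy as np

def weights_of_evidence(evidence, known):
    """Standard Weights of Evidence for one binary evidence layer.

    evidence : bool array, True where the layer is present in a cell
    known    : bool array, True where a training occurrence falls
    Returns (W+, W-, contrast C = W+ - W-).
    """
    p_b_d = (evidence & known).sum() / known.sum()        # P(B | D)
    p_b_nd = (evidence & ~known).sum() / (~known).sum()   # P(B | ~D)
    w_plus = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical 10,000-cell grid: occurrences are made more likely
# where the evidence layer is present, so W+ > 0 and W- < 0.
rng = np.random.default_rng(2)
evidence = rng.random(10_000) < 0.3
known = rng.random(10_000) < np.where(evidence, 0.15, 0.05)
print(weights_of_evidence(evidence, known))
```

Under an assumption of conditional independence between layers, the posterior log-odds of an occurrence in a cell is the prior log-odds plus the sum of the applicable weights across all layers, which is what yields the probabilistic hazard map.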
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software: "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is derived from common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling; its purpose is to reduce the computational overheads of the traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique, automated dynamic control of the solution process, is achieved through the use of artificial intelligence techniques and is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
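The abstract does not publish SMARTFIRE's actual rule base, but the flavour of production-rule relaxation control can be sketched as follows: after each solver sweep, a rule inspects the recent residual history for a variable and damps or loosens its under-relaxation factor accordingly. All thresholds and factors here are illustrative assumptions, not the paper's control rules.

```python
def adjust_relaxation(relax, residuals, grow=1.05, shrink=0.5,
                      lo=0.05, hi=0.95):
    """Illustrative production rule for under-relaxation control.

    relax     : current under-relaxation factor for one variable
    residuals : recent residual history for that variable
    """
    if len(residuals) < 2:
        return relax                          # not enough history yet
    if residuals[-1] > residuals[-2]:
        return max(lo, relax * shrink)        # diverging: damp hard
    if residuals[-1] < 0.9 * residuals[-2]:
        return min(hi, relax * grow)          # converging: ease off
    return relax                              # stalled: leave as is
```

A controller of this kind would run inside each time step, monitoring every solved variable and retuning the factors until the convergence criterion is met, rather than relying on a fixed, hand-tuned control set.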
Abstract:
This paper reviews the construction of quantum field theory on a 4-dimensional spacetime by combinatorial methods, and discusses the recent developments in the direction of a combinatorial construction of quantum gravity.
Abstract:
This research looks into forms of state crime taking place around the U.S.-Mexico border. On the Mexican side of the border, violent corruption and criminal activities stemming from state actors' complicity with drug trafficking organisations have produced widespread violence and human casualties, while forcing many to cross the border, legally or illegally, in fear for their lives. Upon their arrival on the U.S. side of the border, these individuals are treated as criminal suspects. They are held in immigration detention facilities, interrogated and categorised as inadmissible ‘economic migrants’ or ‘drug offenders’, only to be denied asylum status and deported to dangerous and violent zones in Mexico. These individuals were persecuted and victimised by the state during the 2007-2012 counter-narcotics operations on one side of the border, while criminalised and punished by a categorising anti-immigration regime on the other. This thesis examines this border crisis, as injurious actions against border residents have been executed by the states in both legal and illegal forms, in violation of criminal law and human rights conventions. The ethnographic research uses data to develop a nuanced understanding of individuals' experiences of state victimisation on both sides of the border. In contributing to state crime scholarship, it presents a multidimensional theoretical lens, using organised crime theoretical models and critical criminology concepts to explain the role of the state in producing multiple insecurities that exclude citizens and non-citizens through criminalisation processes.