997 results for Projected models
Abstract:
In the world today there are many ways in which we measure, count and determine whether something is worth the effort or not. In Australia and many other countries, new government legislation is requiring government-funded entities to become more transparent in their practice and to develop a more cohesive narrative about their worth, or impact, for the betterment of society. This places the executives of such entities in a position of needing evaluative thinking and practice to guide how they may build the narrative that documents and demonstrates this type of impact. In thinking about where to start, executives, project and program managers may consider this workshop as a professional development opportunity to explore both the intended and unintended consequences of performance models as tools of evaluation. This workshop will offer participants an opportunity to unpack the place of performance models as an evaluative tool through the following:
· What shape does an ethical, sound and valid performance measure for an organization or personnel take?
· What role does cultural specificity play in the design and development of a performance model for an organization or for personnel?
· How are stakeholders able to identify risk during the design and development of such models?
· When and where will dissemination strategies be required?
· And so what? How can you determine that your performance model implementation has made a difference now or in the future?
Abstract:
We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11-based network carrying TCP-controlled file downloads or full-duplex packet telephone calls. In each case the analytical models utilize the attempt probabilities from a well-known fixed-point based saturation analysis. For TCP-controlled file downloads, following Bruno et al. (In Networking '04, LNCS 2042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, analysis of the evolution between the embedded instants is done by using saturation analysis to provide state-dependent attempt probabilities. We show that, in spite of its simplicity, our model works well, by comparing various simulated quantities, such as collision probability, with values predicted from our model. Next we consider N constant bit rate VoIP calls terminating at N STAs. We model the number of STAs that have an uplink voice packet as a Markov renewal process embedded at so-called channel slot boundaries. Analysis of the evolution over a channel slot is done using saturation analysis as before. We find that again the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of coders. We consider N1 calls originating from Type 1 codecs and N2 calls originating from Type 2 codecs. For G.711 and G.729 voice coders, we show that the analytical model again provides accurate results in comparison with simulations.
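The "well-known fixed-point based saturation analysis" referred to above is of the Bianchi type. A minimal sketch of such a fixed point, which returns the per-slot attempt probability and the conditional collision probability for a given number of saturated stations, is shown below; the backoff parameters are illustrative 802.11b-style values and are not taken from the paper.

```python
# Sketch of a Bianchi-style saturation fixed point: for n saturated stations,
# iterate between the attempt probability tau and the conditional collision
# probability p until convergence.  CWmin and the number of backoff stages
# are illustrative assumptions.

def saturation_fixed_point(n, cw_min=32, max_stage=5, tol=1e-10, max_iter=10_000):
    """Return (tau, p) for n saturated stations."""
    w = cw_min
    tau = 0.1                                    # initial guess
    for _ in range(max_iter):
        p = 1.0 - (1.0 - tau) ** (n - 1)         # collision if any other STA attempts
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (w + 1) + p * w * (1.0 - (2.0 * p) ** max_stage)
        )
        if abs(tau_new - tau) < tol:
            return tau_new, p
        tau = tau_new
    return tau, p

if __name__ == "__main__":
    for n in (2, 5, 10, 20):
        tau, p = saturation_fixed_point(n)
        print(f"n={n:3d}  attempt prob tau={tau:.4f}  collision prob p={p:.4f}")
```

In the models above, the number of contending stations changes between embedded instants, so the analysis plugs in the attempt probabilities computed for the current number of active stations, which is what makes them state-dependent.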
Abstract:
The electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, beyond claims of relative accuracy. The steady-state electric field in the DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations of using different empirical conductivity models suggested in the literature for HVDC cable applications. It is shown explicitly that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
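For orientation, the governing relation and one representative empirical conductivity form of the kind compared in such studies are sketched below; the exponential form and its coefficients are an illustrative example from the literature, not necessarily one of the specific models investigated in the paper.

```latex
% Steady-state field in the insulation follows from charge conservation with a
% field- and temperature-dependent conductivity; one representative empirical
% form (sigma_0, alpha, beta are material fits):
\begin{equation*}
  \nabla \cdot \bigl(\sigma(T,E)\,\mathbf{E}\bigr) = 0,
  \qquad
  \sigma(T,E) = \sigma_0\, e^{\alpha T} e^{\beta |E|} .
\end{equation*}
% Because sigma rises with the load-dependent temperature near the conductor,
% the peak stress can migrate from the conductor towards the insulation screen.
```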
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
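For context, one classic example of the "simple empirical schemes" for clear-sky longwave radiation mentioned above is a Brunt-type formula; the fitted constants vary between data sets and are not taken from this thesis.

```latex
% Brunt-type empirical estimate of clear-sky downwelling longwave radiation,
% with T the screen-level air temperature, e the water-vapour pressure,
% sigma the Stefan-Boltzmann constant, and a, b empirically fitted constants:
\begin{equation*}
  L\!\downarrow_{\mathrm{clear}} = \sigma T^{4}\left(a + b\sqrt{e}\right)
\end{equation*}
```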
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
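The "well-known heuristic technique" family referred to above builds on directly-follows counts and a dependency measure between activities. A minimal, self-contained sketch of that measure is shown below; the toy event log and activity names are made up, and the subsequent step of structuring the discovered model is not shown.

```python
# Directly-follows counts and the dependency measure used by heuristic
# process-discovery techniques (e.g. the Heuristics Miner).  The toy log is
# illustrative only.  Requires Python 3.10+ for itertools.pairwise.
from collections import Counter
from itertools import pairwise

log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "approve", "check", "notify"],
]

follows = Counter()
for trace in log:
    follows.update(pairwise(trace))           # |a > b|: a directly followed by b

def dependency(a, b):
    ab, ba = follows[(a, b)], follows[(b, a)]
    return (ab - ba) / (ab + ba + 1)          # in (-1, 1); near 1 means strong a -> b

activities = sorted({act for trace in log for act in trace})
for a in activities:
    for b in activities:
        if follows[(a, b)]:
            print(f"{a} -> {b}: dependency = {dependency(a, b):+.2f}")
```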
Abstract:
We study quench dynamics and defect production in the Kitaev and the extended Kitaev models. For the Kitaev model in one dimension, we show that in the limit of slow quench rate, the defect density n ∼ 1/√τ, where 1/τ is the quench rate. We also compute the defect correlation function by providing an exact calculation of all independent nonzero spin correlation functions of the model. In two dimensions, where the quench dynamics takes the system across a critical line, we elaborate on the results of earlier work [K. Sengupta, D. Sen, and S. Mondal, Phys. Rev. Lett. 100, 077204 (2008)] to discuss the unconventional scaling of the defect density with the quench rate. In this context, we outline a general proof that for a d-dimensional quantum model, where the quench takes the system through a (d−m)-dimensional gapless (critical) surface characterized by correlation length exponent ν and dynamical critical exponent z, the defect density n ∼ 1/τ^(mν/(zν+1)). We also discuss the variation of the shape and spatial extent of the defect correlation function with both the rate of quench and the model parameters and compute the entropy generated during such a quenching process. Finally, we study the defect scaling law, entropy generation and defect correlation function of the two-dimensional extended Kitaev model.
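The scaling quoted above follows from the standard Kibble-Zurek freeze-out argument; a short sketch consistent with the statement in the abstract is:

```latex
% Linear quench at rate 1/\tau across a (d-m)-dimensional gapless surface with
% correlation-length exponent \nu and dynamical exponent z.  The dynamics freeze
% when the remaining time to criticality equals the instantaneous relaxation time:
\begin{equation*}
  \hat t \sim \tau^{z\nu/(z\nu+1)}, \qquad
  \hat\xi \sim \hat t^{\,1/z} \sim \tau^{\nu/(z\nu+1)} .
\end{equation*}
% Only the m directions transverse to the critical surface contribute defects, so
\begin{equation*}
  n \sim \hat\xi^{-m} \sim \tau^{-m\nu/(z\nu+1)} .
\end{equation*}
```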
Abstract:
This paper presents the results of shaking table tests on model reinforced soil retaining walls in the laboratory. The influence of backfill relative density on the seismic response was studied through a series of laboratory model tests on retaining walls. Construction of the model retaining walls in the laminar box mounted on the shaking table, the instrumentation and the results from the shaking table tests are described in detail. Three types of walls, wrap-faced and rigid-faced reinforced soil walls and unreinforced rigid-faced walls, constructed to different densities were tested for a relatively small excitation. Wrap-faced walls were further tested for higher base excitation at different frequencies and relative densities. It is observed from these tests that the effect of backfill density on the seismic performance of reinforced retaining walls is pronounced only at very low relative density and at the higher base excitation. The walls constructed with higher backfill relative density showed smaller face deformations and larger acceleration amplifications compared to the walls constructed with lower densities when tested at higher base excitation. The response of wrap- and rigid-faced retaining walls is not much affected by the backfill relative density when tested at smaller base excitation. The effects of facing rigidity were evaluated to a limited extent. Displacements in wrap-faced walls are many times higher compared to rigid-faced walls. The results obtained from this study are helpful in understanding the relative performance of reinforced soil retaining walls constructed to different relative densities when subjected to smaller and higher base excitations, for the range of relative density employed in the testing program.
Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis. Typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, the full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
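The ANN-based tagging step can be pictured as an ordinary supervised classifier over per-jet variables. The sketch below uses scikit-learn with synthetic data and hypothetical feature choices, rather than the ROOT-based tooling and the actual CMS input variables used in the thesis.

```python
# Sketch of ANN-based signal/background separation for b-jet tagging.
# The feature set and the Gaussian toy data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical per-jet discriminating variables (e.g. impact-parameter
# significance, secondary-vertex mass, track multiplicity).
signal     = rng.normal(loc=[2.5, 1.8, 6.0], scale=[1.0, 0.6, 2.0], size=(n, 3))
background = rng.normal(loc=[0.5, 0.9, 4.0], scale=[1.0, 0.6, 2.0], size=(n, 3))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])          # 1 = signal, 0 = background
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print(f"test accuracy: {net.score(X_te, y_te):.3f}")
```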
Abstract:
Cosmological inflation is the dominant paradigm in explaining the origin of structure in the universe. According to the inflationary scenario, there has been a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-gaussian features in the statistics of primordial perturbations. We find that the level of non-gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information on the non-gaussian statistics can therefore place constraining bounds on the curvaton interactions.
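As a point of reference for the size of these effects, the benchmark result quoted in the curvaton literature for a quadratic curvaton potential in the sudden-decay approximation is sketched below; this is the standard literature formula, not a result specific to this thesis.

```latex
% Non-linearity parameter in the curvaton scenario (quadratic potential,
% sudden-decay approximation), with r the curvaton share of the energy
% density at the time of its decay:
\begin{equation*}
  f_{\mathrm{NL}} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6},
  \qquad
  r = \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\mathrm{decay}},
\end{equation*}
% so a curvaton that is subdominant at decay (small r) gives f_NL ~ 5/(4r) >> 1.
```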
Abstract:
Relatively few studies have addressed water management and adaptation measures in the face of changing water balances due to climate change. The current work studies the impact of climate change on the performance of a multipurpose reservoir and derives adaptive policies for possible future scenarios. The method developed in this work is illustrated with a case study of Hirakud reservoir on the Mahanadi river in Orissa, India, which is a multipurpose reservoir serving flood control, irrigation and power generation. Climate change effects on annual hydropower generation and four performance indices (reliability with respect to three reservoir functions, viz. hydropower, irrigation and flood control; resiliency; vulnerability; and deficit ratio with respect to hydropower) are studied. Outputs from three general circulation models (GCMs) for three scenarios each are downscaled to monsoon streamflow in the Mahanadi river for two future time slices, 2045-65 and 2075-95. Increased irrigation demands, rule curves dictated by an increased need for flood storage, and downscaled projections of streamflow from the ensemble of GCMs and scenarios are used for projecting future hydrologic scenarios. It is seen that hydropower generation and reliability with respect to hydropower and irrigation are likely to decrease in the future in most scenarios, whereas the deficit ratio and vulnerability are likely to increase as a result of climate change if the standard operating policy (SOP) using current rule curves for flood protection is employed. An optimal monthly operating policy is then derived using stochastic dynamic programming (SDP) as an adaptive policy for mitigating the impacts of climate change on reservoir operation. The objective of this policy is to maximize reliabilities with respect to the multiple reservoir functions of hydropower, irrigation and flood control. In variations of this adaptive policy, increasing weightage is given to maximizing reliability with respect to hydropower for two extreme scenarios. It is seen that by marginally sacrificing reliability with respect to irrigation and flood control, hydropower reliability and generation can be increased for future scenarios. This suggests that reservoir rules for flood control may have to be revised in basins where climate change projects an increasing probability of droughts. However, it is also seen that power generation cannot be restored to current levels, due in part to the large projected increases in irrigation demand. This suggests that future water balance deficits may limit the success of adaptive policy options.
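The performance indices named above have standard definitions in the style of Hashimoto et al. (1982). A minimal sketch of how they can be computed from a simulated release/demand series is given below; the monthly series is synthetic and purely illustrative.

```python
# Reservoir performance indices (reliability, resiliency, vulnerability,
# deficit ratio) computed from a release/demand series, following the usual
# Hashimoto-style definitions.  The monthly series below is synthetic.
import numpy as np

def performance_indices(release, demand):
    release, demand = np.asarray(release, float), np.asarray(demand, float)
    deficit = np.maximum(demand - release, 0.0)
    fail = deficit > 0                                    # failure periods

    reliability = 1.0 - fail.mean()                       # share of satisfactory periods
    # Resiliency: probability that a failure period is followed by a success.
    recoveries = np.sum(fail[:-1] & ~fail[1:])
    resiliency = recoveries / fail[:-1].sum() if fail[:-1].any() else 1.0
    # Vulnerability: mean relative deficit over the failure periods.
    vulnerability = (deficit[fail] / demand[fail]).mean() if fail.any() else 0.0
    deficit_ratio = deficit.sum() / demand.sum()
    return reliability, resiliency, vulnerability, deficit_ratio

release = [90, 100, 60, 80, 100, 40, 100, 100, 70, 100, 100, 100]
demand  = [100] * 12
rel, res, vul, dr = performance_indices(release, demand)
print(f"reliability={rel:.2f}  resiliency={res:.2f}  "
      f"vulnerability={vul:.2f}  deficit ratio={dr:.2f}")
```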
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average error in prediction) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
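A minimal sketch of the empirical-modelling idea, using the simplest of the three learners (linear regression) over binary flag settings, is given below. The flag names, the stand-in "measurement" function and the exhaustive search are illustrative assumptions; the real models also include heuristic settings and microarchitectural parameters.

```python
# Empirical performance model over compiler-flag settings, fitted from a small
# number of "measurements" and then used to search for good configurations.
# Flag names and the synthetic measurement function are made up for illustration.
from itertools import product
import numpy as np
from sklearn.linear_model import LinearRegression

flags = ["unroll", "inline", "vectorize", "prefetch"]
rng = np.random.default_rng(1)

def measure(cfg):
    """Stand-in for running/simulating the program at one flag configuration."""
    effects = np.array([-8.0, -5.0, -12.0, 3.0])          # made-up per-flag effects
    interaction = -6.0 * cfg[1] * cfg[2]                   # made-up inline*vectorize term
    return 100.0 + cfg @ effects + interaction + rng.normal(0.0, 1.0)

# Fit the model from a handful of sampled configurations (the cheap part).
train_cfgs = rng.integers(0, 2, size=(8, len(flags)))
runtimes = np.array([measure(c) for c in train_cfgs])
model = LinearRegression().fit(train_cfgs, runtimes)

# Use the model (not new measurements) to rank all 2^4 flag settings.
all_cfgs = np.array(list(product([0, 1], repeat=len(flags))))
best = all_cfgs[np.argmin(model.predict(all_cfgs))]
print("predicted-best flags:", {f: int(v) for f, v in zip(flags, best)})
```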