809 results for Performance model


Relevance: 40.00%

Abstract:

X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
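
A minimal single-slice channelized Hotelling observer (CHO) sketch may help fix ideas; the abstract does not give the msCHO equations, so the channel matrix, data shapes and variable names below are illustrative assumptions rather than the study's implementation:

```python
# Minimal single-slice channelized Hotelling observer (CHO) sketch.
# The thesis's msCHO extends this idea across slices and adds
# anthropomorphic channels and internal noise (not modelled here).
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """signal_imgs, noise_imgs: (n_images, n_pixels) ROI arrays.
    channels: (n_pixels, n_channels) channel matrix (e.g. Gabor-like channels)."""
    # Channelize: reduce each image to a small vector of channel responses.
    vs = signal_imgs @ channels            # (n_signal, n_channels)
    vn = noise_imgs @ channels             # (n_noise, n_channels)
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    # Average intra-class channel covariance.
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    # Hotelling template and detectability index d' = sqrt(dv' S^-1 dv).
    w = np.linalg.solve(S, dv)
    d_prime = np.sqrt(dv @ w)
    return w, d_prime
```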

Relevance: 40.00%

Abstract:

The objective of the thesis was to develop a model for monitoring competitors’ financial performance for management reporting. The research consisted of selecting the comparison group and the performance metrics as well as the actual construction of the model; a brief analysis of the current situation was also made. The aim was to improve the quality of financial reporting in the case organization by adding observation of the external business environment to the management reports. The comparison group for the case company was selected to include five companies, all involved in power-equipment engineering and project-type business. The most limiting factor in selecting the comparison group was the availability of quarterly financial reporting. The most suitable performance metrics were defined to be the development of revenue, order backlog and EBITDA. These metrics should be monitored systematically on a quarterly basis and reported to company management in a brief and informative way. The monitoring model is a spreadsheet construction whose key characteristics are usability, flexibility and simplicity; it acts as a centralized store for competitor financial information as well as a reporting tool. The current market situation is strongly affected by the economic boom of recent years, and future challenges can clearly be seen in declining order backlogs. The case company has performed well relative to its comparison group during the observation period, since its business volume and profitability have developed most favourably.

Relevance: 40.00%

Abstract:

The purpose of this study is to view credit risk from the financier’s point of view in a theoretical framework. Results and aspects of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank’s internal use, or a risk management tool with which a company can indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four company categories and employs their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier’s credit risk, define five significant risk classes and produce a risk classification model. The study is confidential until 15.10.2012.

Relevance: 40.00%

Abstract:

Bearing performance significantly affects the dynamic behaviors and estimated working life of a rotating system. A common bearing type is the ball bearing, which has been investigated in numerous published studies. The complexity of the ball bearing models described in the literature varies, and model complexity is naturally related to computational burden. In particular, the inclusion of centrifugal forces and gyroscopic moments significantly increases the system degrees of freedom and lengthens solution time. On the other hand, at low or moderate rotating speeds these effects can be neglected without significant loss of accuracy. The objective of this paper is to present guidelines for selecting a suitable bearing model, based on three case studies. To this end, two ball bearing models were implemented: one considers high-speed forces, and the other neglects them. Both models were used to study three structures, and the simulation results were compared.
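
As a rough order-of-magnitude check of whether high-speed effects matter for a given application, the classical rolling-bearing expressions for ball centrifugal force and gyroscopic moment can be evaluated; the sketch below uses those textbook expressions with illustrative numbers, not the formulation or data of the paper:

```python
# Order-of-magnitude check of ball high-speed effects using classical
# rolling-bearing expressions: Fc = 0.5*m*dm*wm^2, Mg = J*wR*wm*sin(beta).
# All parameter values are illustrative assumptions.
import math

m = 0.03                                # ball mass [kg]
d = 0.012                               # ball diameter [m]
dm = 0.065                              # bearing pitch diameter [m]
beta = math.radians(15)                 # contact angle
w_i = 3000 * 2 * math.pi / 60           # inner-ring speed for 3000 rpm [rad/s]

gamma = d * math.cos(beta) / dm
w_m = 0.5 * w_i * (1 - gamma)           # cage (orbital) speed, outer ring fixed
w_R = 0.5 * w_i * (dm / d) * (1 - gamma**2)   # ball spin speed about its own axis
J = 0.4 * m * (d / 2) ** 2              # moment of inertia of a solid sphere

Fc = 0.5 * m * dm * w_m ** 2            # centrifugal force on one ball [N]
Mg = J * w_R * w_m * math.sin(beta)     # gyroscopic moment [N*m]
print(f"Fc = {Fc:.1f} N, Mg = {Mg * 1e3:.2f} N*mm")
```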

Relevance: 40.00%

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are asking for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work: testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is therefore reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software functions correctly; many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects such as performance or security apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools: we offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community also suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.

Relevance: 40.00%

Abstract:

Volatility plays a central role in various theoretical and practical applications in financial markets, including portfolio theory, derivatives pricing and financial risk management. Both theoretical and practical applications require good estimates and forecasts of asset return volatility. The goal of this study is to examine the forecast performance of one of the more recent volatility measures, model-free implied volatility. Model-free implied volatility is extracted from prices in the option markets, and it aims to provide an unbiased estimate of the market’s expectation of the future level of volatility. Since it is extracted from option prices, model-free implied volatility should contain all the relevant information that market participants have. Moreover, model-free implied volatility requires less restrictive assumptions than the commonly used Black-Scholes implied volatility, which means that it should be a less biased estimate of the market’s expectations and therefore also a better forecast of future volatility. The forecast performance of model-free implied volatility is evaluated by comparing it to that of Black-Scholes implied volatility and a GARCH(1,1) forecast. Weekly forecasts over a six-year period were calculated for the forecast variable, the German stock market index DAX; the data consisted of price observations for DAX index options. Forecast performance was measured using econometric methods designed to capture the bias, accuracy and information content of the forecasts. The results of the study suggest that the forecast performance of model-free implied volatility is superior to that of the GARCH(1,1) forecast. However, the results also suggest that it is not as good as the forecast performance of Black-Scholes implied volatility, contrary to the theory-based hypotheses. The results of this study are consistent with the majority of prior research on the subject.
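
For reference, the GARCH(1,1) benchmark used in such comparisons follows the standard variance recursion; a minimal sketch with illustrative (not estimated) parameters and simulated data:

```python
# One-step-ahead GARCH(1,1) variance forecast:
#   sigma2[t+1] = omega + alpha * eps[t]**2 + beta * sigma2[t]
# Parameter values and data below are illustrative, not estimates from DAX.
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.09, beta=0.90):
    eps = returns - returns.mean()
    sigma2 = np.empty(len(eps) + 1)
    sigma2[0] = eps.var()                         # initialise with sample variance
    for t in range(len(eps)):
        sigma2[t + 1] = omega + alpha * eps[t] ** 2 + beta * sigma2[t]
    return sigma2[-1]                             # next-period variance forecast

# Annualised volatility forecast from ~6 years of weekly returns (simulated):
weekly_returns = np.random.normal(0.0, 0.02, 312)
vol_forecast = np.sqrt(52 * garch11_forecast(weekly_returns))
```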

Relevance: 40.00%

Abstract:

This research studied project performance measurement from the perspective of strategic management. The objective was to find a generic model for project performance measurement that emphasizes strategy and decision making. The research followed the guidelines of the constructive research methodology. As a result, the study suggests a model that measures projects with multiple metrics both during and after the project. Measurement after the project should be linked to the company's strategic performance measures, and the measurement should be conducted through centralized project portfolio management, e.g. using the project management office in the organization. After the project, the metrics measure the project’s actual benefit realization. During the project, the metrics are universal and measure the accomplished objectives in relation to cost, schedule and internal resource usage; the outcomes of these measures should be forecast using qualitative or stochastic methods. A solid theoretical background for the model was found in the literature covering performance measurement, projects and uncertainty. The study states that the model can be implemented in companies, a statement supported by empirical evidence from a single case study. The gathering of empirical evidence about the actual usefulness of the model in companies is left to future evaluative research.

Relevance: 40.00%

Abstract:

Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model’s qualities. The mean-variance framework, issues related to asset allocation decisions and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that examines whether a B–L-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: the in-sample data is used for calibrating the strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting the allocation towards riskier assets as the market turns bullish, but without overweighting investments with high beta. Based on the findings of the thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
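
The standard Black–Litterman posterior-return combination can be sketched as follows; the thesis's calibration of tau, the view-uncertainty matrix Omega and the VAR-based views is not given in the abstract, so every input below is hypothetical:

```python
# Minimal Black-Litterman posterior-return sketch (standard formulation):
#   mu_BL = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
import numpy as np

def black_litterman(pi, Sigma, P, Q, Omega, tau=0.05):
    """pi: (n,) equilibrium excess returns, Sigma: (n,n) covariance,
    P: (k,n) view pick matrix, Q: (k,) view returns, Omega: (k,k) view uncertainty."""
    tS_inv = np.linalg.inv(tau * Sigma)
    Om_inv = np.linalg.inv(Omega)
    A = tS_inv + P.T @ Om_inv @ P
    b = tS_inv @ pi + P.T @ Om_inv @ Q
    return np.linalg.solve(A, b)              # posterior expected returns

# Two asset classes, one relative view (asset 0 beats asset 1 by 1%);
# in a TAA setting Q could come from a VAR-model return forecast.
Sigma = np.array([[0.0004, 0.0001], [0.0001, 0.0009]])
pi = np.array([0.01, 0.02])
P = np.array([[1.0, -1.0]])
Q = np.array([0.01])
Omega = np.array([[0.0001]])
mu_bl = black_litterman(pi, Sigma, P, Q, Omega)
```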

Relevance: 40.00%

Abstract:

This thesis "Entitled performance of district industries centres in kerala :An application of augmented solow model.The first chapter deals with evolution of approaches for promoting small scale production and the growth of small scale industries in india.the developing countries face the problems like sluggish growth capital shortages high levels of unemployment,enoromous rural-urban economic disparities regional inequalities increasing concentration of capital and chronic difficulities in the export sector.Review of literature and methodology of the study are presented in the second chapter. In the third chapter an attempt has been made to make an in-depth study of the emergence and growth of district of district industries centres.In the chapter four an attempt was made to study the organisational structure of DICs functions and responsibilities assigned to the functional managers and performance of the functionaries.

Relevance: 40.00%

Abstract:

The performance of different correlation functionals has been tested for alkali metals, Li to Cs, interacting with cluster models simulating different active sites of the Si(111) surface. In all cases, the ab initio Hartree-Fock density has been obtained and used as a starting point. The electronic correlation energy is then introduced as an a posteriori correction to the Hartree-Fock energy using different correlation functionals. By making use of the ionic nature of the interaction and of different dissociation limits we have been able to prove that all functionals tested introduce the right correlation energy, although to a different extent. Hence, correlation functionals appear as an effective and easy way to introduce electronic correlation in the ab initio Hartree-Fock description of the chemisorption bond in complex systems where conventional configuration interaction techniques cannot be used. However, the calculated energies may differ by some tens of eV. Therefore, these methods can be employed to get a qualitative idea of how important correlation effects are, but they have some limitations if accurate binding energies are to be obtained.
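
In equation form, the a posteriori scheme described here evaluates a correlation functional non-self-consistently on the fixed Hartree-Fock density and adds the result to the Hartree-Fock energy (notation is mine, not the paper's):

```latex
E \;\approx\; E_{\mathrm{HF}} \;+\; E_{c}[\rho_{\mathrm{HF}}]
```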

Relevance: 40.00%

Abstract:

Caches are known to consume up to half of all system power in embedded processors. Co-optimizing the performance and power of the cache subsystem is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. In this project, we propose an analytical cache model that succinctly captures the miss performance of an application over the entire cache parameter space. Unlike exhaustive trace-driven simulation, our model requires that the program be simulated only once so that a few key characteristics can be obtained. Using these application-dependent characteristics, the model can span the entire cache parameter space consisting of cache size, associativity and cache block size. In our unified model, we are able to cater for direct-mapped, set-associative and fully associative instruction, data and unified caches. Validation against full trace-driven simulations shows that our model has a high degree of fidelity. Finally, we show how the model can be coupled with a power model for caches so that one can very quickly identify Pareto-optimal performance-power design points for rapid design space exploration.
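
One common way a single-simulation analytical model can span cache sizes is through an LRU stack-distance (reuse-distance) histogram collected in one pass over the trace; the sketch below illustrates that general idea and is not necessarily the model proposed in the paper (block size, trace format and function names are assumptions):

```python
# Single-pass characterization: record LRU stack distances once, then read
# off the fully associative miss count for any capacity from the histogram.
from collections import OrderedDict

def stack_distance_histogram(trace, block_size=32):
    """trace: iterable of byte addresses. Returns {distance: count},
    with -1 marking cold (first-reference) misses."""
    stack = OrderedDict()              # most recently used block kept last
    hist = {}
    for addr in trace:
        blk = addr // block_size
        if blk in stack:
            # distance = number of distinct blocks touched since last access
            dist = len(stack) - list(stack).index(blk) - 1
            stack.pop(blk)
        else:
            dist = -1
        hist[dist] = hist.get(dist, 0) + 1
        stack[blk] = None              # re-insert at the MRU position
    return hist

def misses_for_capacity(hist, num_blocks):
    """Fully associative LRU misses for a cache holding num_blocks blocks."""
    return sum(c for d, c in hist.items() if d == -1 or d >= num_blocks)
```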

Relevance: 40.00%

Abstract:

The climatology of the OPA/ARPEGE-T21 coupled general circulation model (GCM) is presented. The atmosphere GCM has a T21 spectral truncation and the ocean GCM has a 2°×1.5° average resolution. A 50-year climatic simulation is performed using the OASIS coupler, without flux correction techniques. The mean state and seasonal cycle for the last 10 years of the experiment are described and compared to the corresponding uncoupled experiments and to climatology when available. The model reasonably simulates most of the basic features of the observed climate. Energy budgets and transports in the coupled system, of importance for climate studies, are assessed and prove to be within available estimates. After an adjustment phase of a few years, the model stabilizes around a mean state where the tropics are warm and resemble a permanent ENSO, the Southern Ocean warms and almost no sea-ice is left in the Southern Hemisphere. The atmospheric circulation becomes more zonal and symmetric with respect to the equator. Once those systematic errors are established, the model shows little secular drift, the small remaining trends being mainly associated with horizontal physics in the ocean GCM. The stability of the model is shown to be related to qualities already present in the uncoupled GCMs used, namely a balanced radiation budget at the top of the atmosphere and a tight ocean thermocline.

Relevance: 40.00%

Abstract:

Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. In addition to comparison of the spatial extent, this now allows direct comparison between modelled and observed water surface elevations. Using a 12.5m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extent of the modelled and observed flood can be compared through a measure of fit between the two binary patterns of flooding. Water heights are extracted at points spaced at intervals of approximately 100m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
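
A measure of fit commonly used for binary flood patterns is the ratio of cells wet in both model and observation to cells wet in either; the abstract does not state which variant the study uses, so the sketch below is an assumption, with the calibration loop shown only as hypothetical comments:

```python
# Measure of fit between observed and modelled binary flood maps:
#   F = wet-in-both / wet-in-either
# Inputs are boolean grids at the model resolution.
import numpy as np

def flood_fit(observed_wet, modelled_wet):
    both = np.logical_and(observed_wet, modelled_wet).sum()
    either = np.logical_or(observed_wet, modelled_wet).sum()
    return both / either if either else np.nan

# Hypothetical calibration of the friction parameter by maximising F:
# runs = {n: run_lisflood(friction=n) for n in (0.02, 0.04, 0.06)}
# best_n = max(runs, key=lambda n: flood_fit(obs_grid, runs[n]))
```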

Relevance: 40.00%

Abstract:

Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of the preference distributions. We make the case for greater use of bounded functional forms and propose the use of the Marginal Likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment concerning GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada and Argentina, and into whether labelling and trade regimes should be based on the production process or on product composition.
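
To illustrate the "bounded functional form" idea, the sketch below simulates a mixed-logit choice probability with a single random coefficient drawn from a bounded (triangular) distribution; data, names and the one-attribute setup are hypothetical, and the paper's Bayesian Marginal Likelihood computation is not reproduced here:

```python
# Simulated mixed-logit choice probability with a bounded (triangular)
# coefficient distribution. All inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulated_choice_prob(X, chosen, a, b, c, n_draws=500):
    """X: (n_alts, n_attrs) alternative attributes; chosen: chosen alternative index.
    The single random coefficient on attribute 0 is ~ Triangular(a, c, b);
    a fuller model would add fixed coefficients on the remaining attributes."""
    betas = rng.triangular(a, c, b, size=n_draws)        # bounded draws
    V = betas[:, None] * X[:, 0][None, :]                # utilities: (n_draws, n_alts)
    P = np.exp(V - V.max(axis=1, keepdims=True))         # stable softmax over alternatives
    P /= P.sum(axis=1, keepdims=True)
    return P[:, chosen].mean()                           # simulated choice probability

# Example with three alternatives differing in one attribute (hypothetical data):
X = np.array([[1.0], [0.5], [0.0]])
p = simulated_choice_prob(X, chosen=0, a=-1.0, b=3.0, c=1.0)
```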