799 results for Utility-based performance measures
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beacon-less CSMA/CA as standardised in IEEE 802.15.4, a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, and relay nodes which only forward packets. We consider a detailed stochastic process at each node, and analyse this process taking into account the interaction with neighbouring nodes via certain unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling these analyses of the various nodes, we obtain fixed point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time average performance measures, such as packet discard probabilities and average queueing delays. Different analyses arise for networks with no hidden nodes and networks with hidden nodes. We apply this approach to the performance analysis of tree networks rooted at a data sink. Finally, we provide a validation of our analysis technique against simulations.
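To make the fixed-point coupling concrete, the sketch below iterates a generic per-node update until the coupled unknowns stop changing; the update function, damping factor and variable names are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def solve_fixed_point(update, x0, tol=1e-9, max_iter=10_000, damping=0.5):
    """Damped successive substitution for x = update(x).

    `update` maps the vector of per-node unknowns (e.g. channel sensing
    rates and collision probabilities) to a new estimate; damping helps
    the coupled system converge.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = damping * np.asarray(update(x)) + (1.0 - damping) * x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy coupling between two nodes' collision probabilities (illustrative only).
example = solve_fixed_point(lambda p: 1.0 - np.exp(-0.5 * p[::-1] - 0.1), [0.1, 0.1])
print(example)
```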
Abstract:
The performance of prediction models is often based on ``abstract metrics'' that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging ``big data'' domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application independent and dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
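As a minimal illustration of scale-independent error measurement, the sketch below computes two standard relative-error metrics (MAPE and CV(RMSE)); these are textbook measures used only as stand-ins here, since the paper's proposed suite is parameterized and application-specific.

```python
import numpy as np

def scale_independent_errors(y_true, y_pred):
    """Standard scale-independent error measures (illustrative; the paper's
    own suite is parameterized for each Smart Grid application)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    mape = np.mean(np.abs(resid / y_true)) * 100                     # mean absolute percentage error
    cvrmse = np.sqrt(np.mean(resid ** 2)) / np.mean(y_true) * 100    # coefficient of variation of RMSE
    return {"MAPE_%": mape, "CVRMSE_%": cvrmse}

# Example with hypothetical energy-use observations and predictions (kWh).
print(scale_independent_errors([10.0, 12.0, 9.5], [11.0, 11.5, 10.0]))
```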
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beaconless IEEE 802.15.4 (the ``ZigBee'' PHY and MAC), a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, relay nodes which only forward packets, and a data sink (base station). We consider a detailed stochastic process at each node, and analyse this process taking into account the interaction with neighbouring nodes via certain time averaged unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling the analyses at various nodes, we obtain fixed point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time average performance measures, such as packet discard probabilities and average queueing delays. The model incorporates packet generation at the sensor nodes and queues at the sensor nodes and relay nodes. We demonstrate the accuracy of our model by an extensive comparison with simulations. As an additional assessment of the accuracy of the model, we utilize it in an algorithm for sensor network design with quality-of-service (QoS) objectives, and show that designs obtained using our model actually satisfy the QoS constraints (as validated by simulating the networks), and the predictions are accurate to well within 10% as compared to the simulation results in a regime where the packet discard probability is low. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.
In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.
The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. The framework is flexible and readily accommodates new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.
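A minimal sketch of a discounted net-asset-value comparison between designs is given below; the cash-flow structure, discount rate and loss figures are generic assumptions for illustration only and are not the formulation developed in the study.

```python
def expected_net_asset_value(annual_benefit, construction_cost, expected_annual_loss,
                             discount_rate, horizon_years):
    """Generic life-cycle comparison: discounted benefits minus initial cost and
    discounted expected annual seismic losses (illustrative assumptions only)."""
    npv = -construction_cost
    for t in range(1, horizon_years + 1):
        npv += (annual_benefit - expected_annual_loss) / (1.0 + discount_rate) ** t
    return npv

# Hypothetical comparison: a cheaper design with higher expected losses versus
# a costlier design with lower expected losses (all figures in dollars).
print(expected_net_asset_value(1.2e6, 10e6, 0.15e6, 0.05, 50))
print(expected_net_asset_value(1.2e6, 11e6, 0.05e6, 0.05, 50))
```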
Abstract:
This paper is in two parts and addresses two ways of getting more information out of the RF signal from three-dimensional (3D) mechanically-swept medical ultrasound. The first topic is the use of non-blind deconvolution to improve the clarity of the data, particularly in the direction normal to the individual B-scans. The second topic is strain imaging. We present a robust and efficient approach to the estimation and display of axial strain information. For deconvolution, we calculate an estimate of the point-spread function at each depth in the image using Field II. This is used as part of an Expectation Maximisation (EM) framework in which the ultrasound scatterer field is modelled as the product of (a) a smooth function and (b) a fine-grain varying function. In the E step, a Wiener filter is used to estimate the scatterer field based on an assumed piecewise smooth component. In the M step, wavelet de-noising is used to estimate the piecewise smooth component from the scatterer field. For strain imaging, we use a quasi-static approach with efficient algorithms. Our contributions lie in robust and efficient 3D displacement tracking, point-wise quality weighting, and a stable display that shows not only strain but also an indication of the quality of the data at each point in the image. This enables clinicians to see where the strain estimate is reliable and where it is mostly noise. For deconvolution, we present in-vivo images and simulations with quantitative performance measures. With the blurred 3D data taken as 0dB, we get an improvement in signal to noise ratio of 4.6dB with a Wiener filter alone, 4.36dB with the ForWaRD method and 5.18dB with our EM algorithm. For strain imaging we show images based on 2D and 3D data and describe how full 3D analysis can be performed in about 20 seconds on a typical PC. We will also present initial results of our clinical study to explore the applications of our system in our local hospital. © 2008 IEEE.
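The sketch below shows a simplified one-dimensional frequency-domain Wiener deconvolution with a known point-spread function, roughly corresponding to the E step described above; the full EM framework also re-estimates a piecewise smooth component by wavelet de-noising, which is not reproduced here, and the pulse and scatterer data are synthetic.

```python
import numpy as np

def wiener_deconvolve(rf_line, psf, noise_to_signal=0.05):
    """Frequency-domain Wiener deconvolution of one RF line with a known
    point-spread function (a simplified stand-in for the E step; no depth
    dependence or wavelet-based M step is modelled)."""
    n = len(rf_line)
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(rf_line, n)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.fft.irfft(W * Y, n)

# Toy example: blur a sparse scatterer line with a Gaussian-modulated pulse.
t = np.linspace(-1, 1, 31)
psf = np.exp(-40 * t ** 2) * np.cos(20 * t)
scatterers = np.zeros(256)
scatterers[[60, 128, 200]] = [1.0, -0.7, 0.5]
blurred = np.convolve(scatterers, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)
```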
Abstract:
With concerns over climate change and the escalation of the worldwide population, sustainable development attracts more and more attention from academia, policy makers, and businesses in many countries. Sustainable manufacturing is an indispensable measure for achieving sustainable development, since manufacturing is one of the main energy consumers and greenhouse gas contributors. In previous research on production planning of manufacturing systems, environmental factors were rarely considered. This paper investigates the production planning problem under the performance measures of economy and environment with respect to seru production systems, a new manufacturing system praised as Double E (ecology and economy) in Japanese manufacturing industries. We propose a mathematical model with two objectives, minimizing carbon dioxide emission and makespan, for processing all product types by a seru production system. To solve this mathematical model, we develop an algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II). The computational results and analysis of three numerical examples confirm the effectiveness of our proposed algorithm. © 2014 Elsevier Ltd. All rights reserved.
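The core of NSGA-II is Pareto dominance between objective vectors; the sketch below filters candidate schedules to their non-dominated front for the two minimisation objectives (CO2 emission, makespan). It is an illustrative fragment only, not the full algorithm or the paper's model.

```python
def pareto_front(solutions):
    """Return the non-dominated solutions for two minimisation objectives,
    e.g. (CO2 emission, makespan). This is the dominance test at the heart
    of NSGA-II's non-dominated sorting (illustrative fragment only)."""
    front = []
    for i, (f1, f2) in enumerate(solutions):
        dominated = any(
            (g1 <= f1 and g2 <= f2) and (g1 < f1 or g2 < f2)
            for j, (g1, g2) in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append((f1, f2))
    return front

# Hypothetical candidate schedules as (CO2 emission, makespan) pairs.
print(pareto_front([(12.0, 80), (10.5, 95), (13.0, 78), (11.0, 90), (14.0, 100)]))
```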
Abstract:
This paper describes research into retrieval based on 3-dimensional shapes for use in the metal casting industry. The purpose of the system is to advise a casting engineer on the design aspects of a new casting by reference to similar castings which have been prototyped and tested in the past. The key aspects of the system are the orientation of the shape within the mould, the positions of feeders and chills, and particular advice concerning special problems, solutions, and possible redesign. The main focus of this research is the effectiveness of similarity measures based on 3-dimensional shapes. The approach adopted here is to construct similarity measures based on a graphical representation derived from a shape decomposition used extensively by experienced casting design engineers. The paper explains the graphical representation and discusses similarity measures based on it. Performance measures for the case-based reasoning (CBR) system are given, and the results for trials of the system are presented. The competence of the current case-base is discussed, with reference to a representation of cases as points in an n-dimensional feature space and its principal components visualization. A refinement of the case-base is performed as a result of the competence analysis, and the performance of the case-base before and after refinement is compared.
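A minimal sketch of the principal-components view of the case-base is given below, assuming cases are already encoded as numeric feature vectors; the feature matrix here is random and purely hypothetical.

```python
import numpy as np

def principal_components(case_features, n_components=2):
    """Project cases (rows = castings, columns = shape/graph-derived features)
    onto their first principal components for competence visualisation.
    The feature encoding is assumed, not the paper's representation."""
    X = np.asarray(case_features, float)
    Xc = X - X.mean(axis=0)
    # SVD of the centred matrix gives the principal directions in Vt.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

cases = np.random.default_rng(0).normal(size=(20, 6))   # 20 hypothetical cases, 6 features
print(principal_components(cases)[:3])
```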
Abstract:
The aim of this paper is to investigate the performance and persistence of 20 iShares MSCI country-specific exchange-traded funds (ETFs) in comparison with the S&P 500 index over the period July 2001 to June 2006. Several studies have analysed mutual fund performance in past years, but very little is known about ETFs. In our analysis the Sharpe, Treynor and Sortino ratios are used as risk-adjusted performance measures. To evaluate performance persistence, and therefore whether there is any relationship between past and future performance, we apply the Spearman Rank Correlation Coefficient and the Winner-Loser Contingency Table. The main findings are at two levels. First, ETFs can beat the U.S. market index based on risk-adjusted performance measures. Second, there is evidence of ETF performance persistence based on annual returns.
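The three risk-adjusted ratios can be computed from periodic excess returns as in the sketch below; these are the textbook definitions, and the study's exact annualisation, benchmark and risk-free choices may differ.

```python
import numpy as np

def risk_adjusted_ratios(fund_returns, market_returns, risk_free=0.0):
    """Sharpe, Treynor and Sortino ratios from periodic returns
    (textbook definitions; illustrative data only)."""
    r = np.asarray(fund_returns, float) - risk_free     # fund excess returns
    m = np.asarray(market_returns, float) - risk_free   # benchmark excess returns
    beta = np.cov(r, m)[0, 1] / np.var(m, ddof=1)
    downside = np.sqrt(np.mean(np.minimum(r, 0.0) ** 2))  # downside deviation
    return {
        "Sharpe": r.mean() / r.std(ddof=1),
        "Treynor": r.mean() / beta,
        "Sortino": r.mean() / downside,
    }

# Hypothetical monthly returns for an ETF and the S&P 500.
etf = [0.02, -0.01, 0.015, 0.03, -0.005]
sp500 = [0.015, -0.012, 0.01, 0.025, 0.0]
print(risk_adjusted_ratios(etf, sp500, risk_free=0.001))
```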
Abstract:
Improvement in the quality of end-of-life (EOL) care is a priority health care issue since serious deficiencies in quality of care have been reported across care settings. Increasing pressure is now focused on Canadian health care organizations to be accountable for the quality of palliative and EOL care delivered. Numerous domains of quality EOL care upon which to create accountability frameworks are now published, with some derived from the patient/family perspective. There is a need to reach common ground on the domains of quality EOL care valued by patients and families in order to develop consistent performance measures and set priorities for health care improvement. This paper describes a meta-synthesis study to develop a common conceptual framework of quality EOL care integrating attributes of quality valued by patients and their families. © 2005 Centre for Bioethics, IRCM.
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting with a high risk of bias.
Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction, the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to assess test performance more reliably in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
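For reference, the standard diagnostic accuracy metrics reported above can be derived from a 2x2 table as in the sketch below; the counts used are illustrative and are not the study's data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from a 2x2 table comparing an
    index test (e.g. SeptiFast PCR) against a reference standard (e.g. blood
    culture). Counts here are hypothetical, not the study's results."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "LR+": sensitivity / (1.0 - specificity),   # positive likelihood ratio
        "LR-": (1.0 - sensitivity) / specificity,   # negative likelihood ratio
        "PPV": tp / (tp + fp),                      # positive predictive value
        "NPV": tn / (tn + fn),                      # negative predictive value
    }

print(diagnostic_accuracy(tp=40, fp=110, fn=40, tn=730))
```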
Abstract:
Traditional approaches to evaluating performance in hotels have mainly used financial measures. Building on Speckbacher et al. (2003), this Work Project aims to design and propose a Balanced Scorecard Type II as a performance measurement/management system for the hospitality industry, based on data collected at the Luxury Brand Hotels of the Pestana Group. The main contribution is to better align the vision, strategy and financial and non-financial performance measures in this category of hotels, in particular those of the Pestana Group, and by doing so lead their managers to focus on what is really critical and, consequently, improve overall performance.
Abstract:
Preface: My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach in representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. With the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the Prospect Theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. The research in behavioral finance is relatively new and its potential remains to be explored. The three essays composing my thesis propose to use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit. Assuming a single risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's four moments. The specification highlights the preference for odd moments and the aversion to even moments. Qualitatively, optimal portfolios are analyzed in terms of firm characteristics and, in a setting that reflects real-time asset allocation, a systematic over-performance is obtained compared to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors presenting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor contingent on the loss aversion level. Therefore, the dynamic strategies reduce to a replicating portfolio using exchange-traded, well-selected options and the risky stock.
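As a minimal illustration of gain/loss preferences with loss aversion, the sketch below implements the classic Kahneman-Tversky value function; the thesis extends such preferences with time-varying extremal (crash and windfall) thresholds, which are not modelled here, and the parameter values are the conventional estimates.

```python
def kt_value(x, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Kahneman-Tversky value function over gains and losses relative to a
    reference point. Classic specification with conventional parameters;
    the thesis's own utility over extremal outcomes is not reproduced here."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * (-x) ** beta

# Value assigned to a range of relative returns around the reference point.
print([round(kt_value(r), 4) for r in (-0.10, -0.01, 0.0, 0.01, 0.10)])
```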
Abstract:
The present study explored processing strategies used by individuals when they begin to read a new script. Stimuli were artificial words created from symbols and based on an alphabetic system. The words were presented to Grade Nine and Ten students, with variations included in the difficulty of orthography and word familiarity, and then scores were recorded on the mean number of trials for defined learning variables. Qualitative findings revealed that subjects learned parts of the visual and auditory features of words prior to hooking up the visual stimulus to the word's name. Performance measures which appear to affect the rate of learning were as follows: auditory short-term memory, auditory delayed short-term memory, visual delayed short-term memory, and word attack or decoding skills. Qualitative data emerging in verbal reports by the subjects revealed that the strategies they perceived themselves to use were graphic, phonetic decoding, and word reading.
Abstract:
As one of the key indicators of a firm's ability to successfully leverage its resources and capabilities in the international context, export performance has been one of the most extensively studied phenomena. A plethora of studies have been conducted to provide a better understanding of the factors (firm- or environment-specific) and behaviours (e.g., export strategy) that make exporting a successful venture. Following the comprehensive literature review undertaken in this study, the current state of the export performance literature can be summarised as (i) methodologically fragmented, in that there is a variety of analytical and methodological approaches; (ii) conceptually diverse, in that a large number of determinants have been identified as having a direct or indirect influence on the firm's export performance, and a large number of indicators have been used to conceptualise and operationalise export performance measures; and (iii) inconclusive, in that studies have produced inconsistent results on the impact of different determinants on export performance.
Abstract:
Despite the generally positive contribution of supply management capabilities to firm performance, their respective routines require more depth of assessment. Using the resource-based view, we examine four routine bundles comprising ostensive and performative aspects of supply management capability: supply management integration, coordinated sourcing, collaboration management and performance assessment. Using structural equation modelling, we measure supply management capability empirically as a second-order latent variable and estimate its effect on a series of financial and operational performance measures. The routines-based approach allows us to demonstrate a different, more fine-grained approach to assessing consistent bundles of homogeneous patterns of activity across firms. The results suggest that supply management capability is formed of internally consistent routine bundles, which are significantly related to financial performance, mediated by operational performance. Our results confirm an indirect effect on firm performance for the ‘core’ routines forming the architecture of a supply management capability. Supply management capability primarily improves the operational performance of the business, which is subsequently translated into improved financial performance. The study is significant for practice as it offers a different view of the face-valid rationale that supply management directly influences firm financial performance. We confound this assumption, prompting caution against placing too much importance on directly assessing supply management capability using the financial performance of the business.