799 results for Utility-based performance measures
Abstract:
The purpose of this study was to compare the effects of two commonly utilised sleepiness countermeasures: a nap break and an active rest break. The effects of the countermeasures were evaluated by physiological (EEG), subjective, and driving performance measures. Participants completed two hours of simulated driving, followed by a 15-minute nap break or a 15-minute active rest break, then completed the final hour of simulated driving. The nap break reduced EEG and subjective sleepiness. The active rest break did not reduce EEG sleepiness, with sleepiness levels eventually increasing, although it produced an immediate reduction in subjective sleepiness. No difference was found between the two breaks for the driving performance measure. The immediate reduction of subjective sleepiness after the active rest break could leave drivers with erroneous perceptions of their sleepiness, particularly as physiological sleepiness increases after the break.
Abstract:
Purpose This study seeks to extend the existing literature on value creation by specifically focusing on service brand value creation (SBVC) and the role of brand marketing. Design/methodology/approach The authors first develop a model of SBVC and simultaneously investigate SBVC from the firm perspective (service brand value offering – SBVO) and from the customer perspective (service brand perceived value-in-use – SBPVI). Subsequently, they investigate the effects of SBVO on SBPVI and integrate the moderating role of service brand marketing capability (SBMC) in the SBVO-SBPVI relationship. SBVO is viewed as the firm's interpretation of, and responsiveness to, customer requirements via the delivery of a superior-performing value offering through the service brand, while SBPVI is the customers' perceived value-in-use of the firm's service brand. The contributions of SBVC to customer-based performance outcomes are then investigated. Hypotheses were tested using a sample of senior managers of service firms in Cambodia and their customers. A survey was used to gather data via a drop-and-collect approach. Findings Results indicated that SBVO is positively related to SBPVI and SBPVI is positively related to customer-based performance. Notably, the results revealed that SBMC enhances the positive relationship between the firm's SBVO and the customers' SBPVI. Originality/value The paper extends the previous literature on value creation to capture SBVC. More significantly, the premise of the theoretical framework provides a breakthrough in the current SBVC literature, which has so far neglected to take a dyadic (firm-customer) approach to understanding value creation and, more specifically, SBVC. The model is expanded by looking at the contingency role of SBMC in communicating value to customers.
Abstract:
Objectives The purpose of this study was to determine the relative benefit of nap and active rest breaks for reducing driver sleepiness. Methods Participants were 20 healthy young adults (20-25 years), including 8 males and 12 females. A counterbalanced within-subjects design was used such that each participant completed both conditions on separate occasions, a week apart. The effects of the countermeasures were evaluated by established physiological (EEG theta and alpha absolute power), subjective (Karolinska Sleepiness Scale), and driving performance measures (Hazard Perception Task). Participants woke at 5am and undertook a simulated driving task for two hours; each participant then had either a 15-minute nap opportunity or a 15-minute active rest break that included 10 minutes of brisk walking, followed by another hour of simulated driving. Results The nap break reduced EEG theta and alpha absolute power and eventually reduced subjective sleepiness levels. In contrast, the active rest break did not reduce EEG theta and alpha absolute power, with power levels eventually increasing; it did, however, produce an immediate reduction in subjective sleepiness, which then increased during the final hour of simulated driving. No difference was found between the two breaks for hazard perception performance. Conclusions Only the nap break produced a significant reduction in physiological sleepiness. The immediate reduction of subjective sleepiness following the active rest break could leave drivers with erroneous perceptions of their sleepiness, particularly as physiological sleepiness continued to increase after the break.
Abstract:
The ‘Centro case’ confirmed that each individual director is responsible for financial governance and must be able to ‘read and understand’ financial statements. Despite the centrality of director financial literacy to directors' duties, the practitioner and academic literature have failed to clearly define or provide evidence-based, reliable measures of director financial literacy. This paper seeks to address this weakness by presenting the initial results of a Delphi study unpacking the conceptualisation of director financial literacy. We have found that director financial literacy involves more than reading and understanding financial statements. Rather, it encompasses capabilities in applying accounting concepts to the analysis and evaluation of financial statements. As such, director financial literacy may be more accurately described as ‘director accounting literacy’.
Abstract:
Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and a benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation in an ordinary optimization software package to find the optimal robust solution. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model reduces tracking error while improving the portfolio performance measures.
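As an illustration of the basic (non-robust) linear formulation, the sketch below minimises the mean absolute tracking error of a long-only, fully invested portfolio with scipy's linprog. The asset-return matrix, index returns and constraints are illustrative assumptions, not the paper's robust model, which additionally guards against uncertainty in the return data.

```python
import numpy as np
from scipy.optimize import linprog

def min_abs_tracking_error(R, r_index):
    """Minimise (1/T) * sum_t |R_t w - r_index_t| over long-only weights w
    that sum to one.  R is a (T, n) matrix of asset returns, r_index the
    length-T vector of index returns."""
    T, n = R.shape
    # decision vector x = [w_1..w_n, d_1..d_T] with d_t >= |R_t w - r_index_t|
    c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])
    I = np.eye(T)
    A_ub = np.block([[R, -I], [-R, -I]])          # linearised absolute values
    b_ub = np.concatenate([r_index, -r_index])
    A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T), method="highs")
    return res.x[:n], res.fun

# toy example with simulated returns (illustrative data only)
rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(250, 6))        # six assets, 250 days
w_true = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1])
r_index = R @ w_true + rng.normal(0.0, 0.001, size=250)
w, te = min_abs_tracking_error(R, r_index)
print(np.round(w, 3), te)
```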
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the Receiver Operating Characteristic (ROC) curve (AUC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0), eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and the comparison of the distributions of predicted probability of presence for presences versus absences.
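For concreteness, the helper below computes the four error rates discussed above (FNR, FPR, FOR, FDR) from observed presence/absence labels, predicted probabilities of presence, and a presence threshold; the function and variable names are illustrative, and the toy arrays are not data from the Bradypus variegatus example.

```python
import numpy as np

def confusion_rates(y_true, p_pred, threshold=0.5):
    """Error rates for presence/absence predictions.
    y_true: 1 = observed presence, 0 = (pseudo-)absence.
    p_pred: predicted probability of presence."""
    y_hat = (p_pred >= threshold).astype(int)
    tp = np.sum((y_hat == 1) & (y_true == 1))
    fp = np.sum((y_hat == 1) & (y_true == 0))
    fn = np.sum((y_hat == 0) & (y_true == 1))
    tn = np.sum((y_hat == 0) & (y_true == 0))
    return {
        "FNR": fn / (fn + tp),   # observed presences predicted as absent
        "FPR": fp / (fp + tn),   # observed absences predicted as present
        "FOR": fn / (fn + tn),   # predicted absences that were in fact presences
        "FDR": fp / (fp + tp),   # predicted presences that were in fact absences
    }

# toy illustration
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p_pred = np.array([0.9, 0.6, 0.3, 0.7, 0.4, 0.2, 0.1, 0.1])
print(confusion_rates(y_true, p_pred, threshold=0.5))
```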
Abstract:
Performance measures for monitoring and comparing the reproductive performance of northern Australian beef herds.
Abstract:
This dissertation examines the short- and long-run impacts of timber prices and other factors affecting NIPF owners' timber harvesting and timber stocking decisions. The utility-based Faustmann model provides testable hypotheses about the exogenous variables retained in the timber supply analysis. The timber stock function, derived from a two-period biomass harvesting model, is estimated using a two-step GMM estimator based on balanced panel data from 1983 to 1991. Timber supply functions are estimated using a Tobit model adjusted for heteroscedasticity and non-normality of errors, based on panel data from 1994 to 1998. Results show that if specification analysis of the Tobit model is ignored, inconsistency and bias can have a marked effect on parameter estimates. The empirical results show that the owner's age is the single most important factor determining timber stock, while timber price is the single most important factor in the harvesting decision. The results of the timber supply estimations can be interpreted using a utility-based Faustmann model of a forest owner who values growing timber in situ.
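As a minimal reference point for the censored-regression machinery involved, the sketch below fits a standard Tobit model (left-censored at zero) by maximum likelihood on simulated data. It assumes homoscedastic normal errors, so it omits the heteroscedasticity and non-normality adjustments the dissertation applies; all variable names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X):
    """Negative log-likelihood of a Tobit model with left-censoring at zero.
    params = [beta_0, ..., beta_k, log_sigma]."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.where(censored,
                  norm.logcdf(-xb / sigma),                     # P(y* <= 0)
                  norm.logpdf((y - xb) / sigma) - np.log(sigma))
    return -ll.sum()

# simulated data: latent y* = 1 + 2x + e, observed y = max(y*, 0)
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.maximum(1.0 + 2.0 * x + rng.normal(scale=1.5, size=500), 0.0)
X = np.column_stack([np.ones_like(x), x])

res = minimize(tobit_negloglik, x0=np.zeros(X.shape[1] + 1),
               args=(y, X), method="BFGS")
print("beta:", res.x[:-1], " sigma:", np.exp(res.x[-1]))
```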
Abstract:
This paper deals with a batch service queue with multiple vacations. The system consists of a single server and a waiting room of finite capacity. Arrival of customers follows a Markovian arrival process (MAP). The server is unavailable for occasional intervals of time called vacations, and when it is available, customers are served in batches of maximum size ‘b’ with a minimum threshold value ‘a’. We obtain the queue length distributions at various epochs along with some key performance measures. Finally, some numerical results are presented.
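To give a concrete feel for such a system, the discrete-event sketch below simulates a simplified version of this queue and estimates two performance measures (mean queue length and loss probability). It assumes Poisson arrivals rather than a MAP, and exponential batch-service and vacation times; the parameter names and values are purely illustrative.

```python
import random

def simulate_batch_queue(lam=1.0, mu=0.4, theta=0.8, a=2, b=5, capacity=10,
                         horizon=200_000.0, seed=1):
    """Single-server batch-service queue with multiple vacations (simplified):
    Poisson arrivals (rate lam), exponential batch services (rate mu) of at
    most b customers started only when at least a are waiting, exponential
    vacations (rate theta) otherwise, and a finite waiting room."""
    rng = random.Random(seed)
    t = 0.0
    q = 0                                  # customers waiting (batch in service excluded)
    next_arrival = rng.expovariate(lam)
    next_server = rng.expovariate(theta)   # end of the initial vacation
    area = 0.0                             # time-integral of q, for the mean queue length
    arrived = lost = 0

    while t < horizon:
        t_next = min(next_arrival, next_server)
        area += q * (t_next - t)
        t = t_next
        if next_arrival <= next_server:    # an arrival occurs
            arrived += 1
            if q < capacity:
                q += 1
            else:
                lost += 1                  # waiting room full: customer is lost
            next_arrival = t + rng.expovariate(lam)
        else:                              # a batch service or a vacation ends
            if q >= a:                     # enough customers waiting: start the next batch
                q -= min(q, b)
                next_server = t + rng.expovariate(mu)
            else:                          # too few waiting: take another vacation
                next_server = t + rng.expovariate(theta)

    return {"mean_queue_length": area / t,
            "loss_probability": lost / max(arrived, 1)}

print(simulate_batch_queue())
```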
Abstract:
We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and the performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow is bounded by that of the flow with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.
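For reference, the sketch below solves the classic single-cell saturation fixed point underlying Bianchi's analysis: the per-slot attempt probability tau and collision probability p for n saturated nodes with minimum contention window W and m backoff stages. It is the standard single-rate model, not the generalized formulation of this paper, and the parameter values are illustrative.

```python
def bianchi_fixed_point(n, W=32, m=5, tol=1e-10):
    """Solve p = 1 - (1 - tau(p))**(n - 1), where tau(p) is the per-slot
    attempt probability of a node under binary exponential backoff."""
    def tau(p):
        denom = (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)
        if denom == 0.0:                       # removable point at p = 0.5
            p += 1e-9
            denom = (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)
        return 2 * (1 - 2 * p) / denom

    def g(p):                                  # collision probability seen by a node
        return 1.0 - (1.0 - tau(p)) ** (n - 1)

    lo, hi = 0.0, 0.999999                     # g is decreasing, so g(p) - p has one root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > mid:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return p, tau(p)

for n in (5, 10, 20, 50):
    p, t = bianchi_fixed_point(n)
    print(f"n={n:3d}  collision prob p={p:.4f}  attempt prob tau={t:.4f}")
```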
Abstract:
Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a characterization of nearest-neighbor collaborative filtering that allows us to disaggregate global recommender performance measures into contributions made by each individual rating. In particular, we formulate three roles (scouts, promoters, and connectors) that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected, respectively. These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community.
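As background for the rating-level analysis described above, the sketch below implements a plain user-based nearest-neighbour collaborative filtering prediction (mean-centred ratings, Pearson-style similarity over co-rated items). This is only the underlying prediction step that such role-based disaggregation would operate on, not the paper's scouts/promoters/connectors machinery; the data and parameter names are illustrative.

```python
import numpy as np

def predict_rating(R, user, item, k=5):
    """User-based nearest-neighbour prediction of R[user, item].
    R is a (users x items) array of ratings with np.nan where unrated."""
    rated = ~np.isnan(R)
    means = np.nanmean(R, axis=1)                # each user's mean rating
    sims = []
    for v in range(R.shape[0]):
        if v == user or not rated[v, item]:
            continue
        common = rated[user] & rated[v]          # items both users have rated
        if common.sum() < 2:
            continue
        a = R[user, common] - means[user]
        b = R[v, common] - means[v]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append((float(a @ b) / denom, v))
    top = sorted(sims, reverse=True)[:k]         # the k most similar neighbours
    if not top:
        return means[user]
    num = sum(s * (R[v, item] - means[v]) for s, v in top)
    den = sum(abs(s) for s, v in top)
    return means[user] + num / den

# toy ratings matrix (rows = users, columns = items), np.nan = unrated
R = np.array([[5, 4, np.nan, 1],
              [4, 5, 4, 2],
              [1, 2, 1, 5],
              [5, 5, 4, np.nan]])
print(predict_rating(R, user=0, item=2))
```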
Abstract:
Evaluation practices have pervaded the Finnish society and welfare state. At the same time the term effectiveness has become a powerful organising concept in welfare state activities. The aim of the study is to analyse how the outcome-oriented society came into being through historical processes, to answer the question of how social policy and welfare state practices were brought under the governance of the concept of ‘effectiveness’. Discussions about social imagination, Michel Foucault's conceptions of the history of the present and of governmentality, genealogy and archaeology, along with Ian Hacking's notions of dynamic nominalism and styles of reasoning, are used as the conceptual and methodological starting points for the study. In addition, Luc Boltanski's and Laurent Thévenot's ideas of ‘orders of worth’, regimes of evaluation in everyday life, are employed. Usually, evaluation is conceptualised as an autonomous epistemic culture and practice (evaluation as epistemic practice), but evaluation is here understood as knowledge-creation processes elementary to different epistemic practices (evaluation in epistemic practices). The emergence of epistemic cultures and styles of reasoning about the effectiveness or impacts of welfare state activities are analysed through Finnish social policy and social work research. The study uses case studies which represent debates and empirical research dealing with the effectiveness and quality of social services and social work. While uncertainty and doubts over the effects and consequences of welfare policies have always been present in discourses about social policy, the theme has not been acknowledged much in social policy research. To resolve these uncertainties, eight styles of reasoning about such effects have emerged over time. These are the statistical, goal-based, needs-based, experimental, interaction-based, performance measurement, auditing and evidence-based styles of reasoning. Social policy research has contributed in various ways to the creation of these epistemic practices. The transformation of the welfare state, starting at the end of the 1980s, increased market-orientation and trimmed public welfare responsibilities, and led to the adoption of the New Public Management (NPM) style of leadership. Due to these developments the concept of effectiveness made a breakthrough, and new accountabilities with their knowledge tools for performance measurement and auditing and evidence-based styles of reasoning became more dominant in the ruling of the welfare state. Social sciences and evaluation have developed a heteronomous relation with each other, although there still remain divergent tendencies between them. Key words: evaluation, effectiveness, social policy, welfare state, public services, sociology of knowledge
Abstract:
We provide a comparative performance analysis of network architectures for beacon-enabled Zigbee sensor clusters using the CSMA/CA MAC defined in the IEEE 802.15.4 standard, organised as (i) a star topology, and (ii) a two-hop topology. We provide analytical models for obtaining performance measures such as mean network delay and mean node lifetime. We find that the star topology is substantially superior to the two-hop topology in both delay and lifetime performance.
Abstract:
In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to some soft constraint on signal-to-noise ratio (SNR). How this minimum norm property can be used for finding the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
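To illustrate the minimum-norm idea numerically, the sketch below simulates a uniform linear array plus a reference sensor placed at varying offsets along the extended line of the array, computes the least-squares weights that predict the reference sensor from the array outputs, and reports the offset with the smallest weight norm. The signal model (two narrowband plane waves in white noise), array geometry, SNR and all parameter values are illustrative assumptions, not the paper's exact setup or optimality criterion.

```python
import numpy as np

def snapshots(positions, doas_deg, n_snap=2000, snr_db=10, seed=0):
    """Narrowband snapshots for sensors at the given positions (in wavelengths)
    receiving plane waves from the given directions, plus white noise."""
    rng = np.random.default_rng(seed)
    doas = np.deg2rad(doas_deg)
    A = np.exp(2j * np.pi * np.outer(positions, np.sin(doas)))   # steering matrix
    amp = 10 ** (snr_db / 20)
    S = amp * (rng.standard_normal((len(doas), n_snap))
               + 1j * rng.standard_normal((len(doas), n_snap))) / np.sqrt(2)
    N = (rng.standard_normal((len(positions), n_snap))
         + 1j * rng.standard_normal((len(positions), n_snap))) / np.sqrt(2)
    return A @ S + N

M, d0 = 8, 0.5                                  # 8-sensor half-wavelength ULA
array_pos = np.arange(M) * d0
doas = [10.0, 16.0]                             # two closely spaced sources

results = []
for offset in np.arange(0.5, 5.01, 0.5):        # candidate reference-sensor offsets
    pos = np.concatenate([array_pos, [array_pos[-1] + offset]])
    X = snapshots(pos, doas)
    Xa, x_ref = X[:M], X[M]
    # least-squares weights predicting the reference sensor from the array outputs
    w, *_ = np.linalg.lstsq(Xa.T, x_ref, rcond=None)
    results.append((np.linalg.norm(w), offset))

best_norm, best_offset = min(results)
print(f"minimum-norm reference offset: {best_offset} wavelengths (||w|| = {best_norm:.3f})")
```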