880 results for Evaluation models
Abstract:
Optimal allocation of water resources among various stakeholders often involves considerable complexity and several conflicting goals, which leads naturally to multi-objective optimization. To aid water managers in effective decision-making, there is a need not only for effective multi-objective mathematical models but also for efficient Pareto optimal solutions to real-world problems. This study proposes a swarm-intelligence-based multi-objective technique, the elitist-mutated multi-objective particle swarm optimization (EM-MOPSO) technique, for arriving at efficient Pareto optimal solutions to multi-objective water resources management problems. The EM-MOPSO technique is applied to a case study of a multi-objective reservoir operation problem. Model performance is evaluated by comparison with a non-dominated sorting genetic algorithm (NSGA-II) model, and the EM-MOPSO method is found to perform better. The developed method can serve as an effective aid for multi-objective decision-making in integrated water resources management.
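As an illustrative aside, the core bookkeeping behind any MOPSO-style method of this kind is the maintenance of an external archive of non-dominated solutions. The sketch below is a hypothetical toy, not the EM-MOPSO formulation or the study's reservoir model: the two objectives, the random-sampling stand-in for the particle updates, and all parameters are made up for illustration.

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def update_archive(archive, candidate):
    """Keep only non-dominated objective vectors (the Pareto front estimate)."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # candidate is dominated, discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive

# Toy bi-objective evaluation of a monthly release schedule x (illustrative only):
# f1 = squared irrigation deficit, f2 = negative hydropower proxy.
def evaluate(x, demand, head=50.0):
    f1 = np.sum(np.maximum(demand - x, 0.0) ** 2)
    f2 = -np.sum(x * head)
    return np.array([f1, f2])

rng = np.random.default_rng(0)
demand = rng.uniform(20, 60, size=12)
archive = []
for _ in range(200):                        # random sampling stands in for PSO updates
    x = rng.uniform(0, 80, size=12)
    archive = update_archive(archive, evaluate(x, demand))
print(f"{len(archive)} non-dominated schedules found")
```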
Abstract:
The New Zealand White rabbit has been widely used as a model of limbal stem cell deficiency (LSCD). Current techniques for experimental induction of LSCD utilize caustic chemicals, or organic solvents applied in conjunction with a surgical limbectomy. While generally successful in depleting epithelial progenitors, the depth and severity of injury is difficult to control using chemical-based methods. Moreover, the anterior chamber can be easily perforated while surgically excising the corneal limbus. In the interest of creating a safer and more defined LSCD model, we have therefore evaluated a mechanical debridement technique based upon use of the AlgerBrush II rotating burr. An initial comparison of debridement techniques was conducted in situ using 24 eyes in freshly acquired New Zealand White rabbit cadavers. Techniques for comparison (4 eyes each) included: (1) non-wounded control, (2) surgical limbectomy followed by treatment with 100% (v/v) n-heptanol to remove the corneal epithelium (1-2 minutes), (3) treatment of both limbus and cornea with n-heptanol alone, (4) treatment of both limbus and cornea with 20% (v/v) ethanol (2-3 minutes), (5) a 2.5-mm rounded burr applied to both the limbus and cornea, and (6) a 1-mm pointed burr applied to the limbus, followed by the 2.5-mm rounded burr applied to the cornea. All corneas were excised and processed for histology immediately following debridement. A panel of four assessors subsequently scored the degree of epithelial debridement within the cornea and limbus using masked slides. The 2.5-mm burr most consistently removed the corneal and limbal epithelia. Islands of limbal epithelial cells were occasionally retained following surgical limbectomy/heptanol treatment, or use of the 1-mm burr. Limbal epithelial cells were consistently retained following treatment with either ethanol or n-heptanol alone, with ethanol being the least effective treatment overall. The 2.5-mm burr method was subsequently evaluated in the right eye of 3 live rabbits by weekly clinical assessments (photography and slit lamp examination) for up to 5 weeks, followed by histological analyses (hematoxylin & eosin stain, periodic acid-Schiff stain and immunohistochemistry for keratin 3 and 13). All 3 eyes that had been completely debrided using the 2.5-mm burr displayed symptoms of ocular surface failure as defined by retention of a prominent epithelial defect (~40% of corneal surface at 5 weeks), corneal neovascularization (2 to 3 quadrants), reduced corneal transparency and conjunctivalization of the corneal surface (demonstrated by the presence of goblet cells and/or staining for keratin 13). In conclusion, our findings indicate that the AlgerBrush II rotating burr is an effective method for the establishment of ocular surface failure in New Zealand White rabbits. In particular, we recommend use of the 2.5-mm rotating burr for improved efficiency of epithelial debridement and safety compared to surgical limbectomy.
Abstract:
In the world today there are many ways in which we measure, count and determine whether something is worth the effort or not. In Australia and many other countries, new government legislation is requiring government-funded entities to become more transparent in their practice and to develop a more cohesive narrative about the worth, or impact, for the betterment of society. This places the executives of such entities in a position of needing evaluative thinking and practice to guide how they may build the narrative that documents and demonstrates this type of impact. In thinking about where to start, executives, project and program managers may consider this workshop as a professional development opportunity to explore both the intended and unintended consequences of performance models as tools of evaluation. This workshop will offer participants an opportunity to unpack the place of performance models as an evaluative tool through the following:
· What shape does an ethical, sound and valid performance measure for an organization or personnel take?
· What role does cultural specificity play in the design and development of a performance model for an organization or for personnel?
· How are stakeholders able to identify risk during the design and development of such models?
· When and where will dissemination strategies be required?
· And so what? How can you determine that your performance model implementation has made a difference now or in the future?
Abstract:
We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11 based network carrying TCP controlled file downloads or full-duplex packet telephone calls. In each case the analytical models utilize the attempt probabilities from a well-known fixed-point based saturation analysis. For TCP controlled file downloads, following Bruno et al. (In Networking '04, LNCS 2042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, the evolution between the embedded instants is analysed by using saturation analysis to provide state-dependent attempt probabilities. We show that, in spite of its simplicity, our model works well, by comparing various simulated quantities, such as collision probability, with values predicted from our model. Next we consider N constant bit rate VoIP calls terminating at N STAs. We model the number of STAs that have an up-link voice packet as a Markov renewal process embedded at so-called channel slot boundaries. The evolution over a channel slot is analysed using saturation analysis as before. We find that again the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of codecs. We consider N1 calls originating from Type 1 codecs and N2 calls originating from Type 2 codecs. For G711 and G729 voice codecs, we show that the analytical model again provides accurate results in comparison with simulations.
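To make the AP-bottleneck capacity criterion concrete, the following sketch computes the largest number of calls for which the aggregate downlink packet arrival rate into the AP stays below its average service rate. The packetization interval and the assumed AP service rate are illustrative placeholders, not values derived in the paper.

```python
# Back-of-the-envelope version of the AP-bottleneck bound described above:
# the WLAN can carry roughly the largest N for which the aggregate downlink
# packet arrival rate into the AP stays below the AP's average service rate.
# All numbers here are hypothetical placeholders.

def max_calls(packet_interval_s, ap_service_rate_pps):
    """Largest N with N / packet_interval <= AP service rate (packets/s)."""
    n = 1
    while n / packet_interval_s <= ap_service_rate_pps:
        n += 1
    return n - 1

# G.711-style packetization: one voice packet every 20 ms per call; assume
# (hypothetically) the AP can serve about 600 downlink packets per second.
print(max_calls(packet_interval_s=0.020, ap_service_rate_pps=600))  # -> 12
```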
Abstract:
The electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, beyond claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations on the use of different empirical conductivity models suggested in the literature for HV DC cable applications. It is explicitly shown that certain models give rise to erroneous results in electric field and temperature computations, and it is pointed out that using these models in the design or evaluation of cables will lead to errors.
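As a rough illustration of why the choice of empirical law matters, the sketch below evaluates two commonly cited functional forms for field- and temperature-dependent DC conductivity at conductor-side and screen-side conditions of a loaded cable. The functional forms and all coefficients are placeholders for illustration, not fitted values from the paper or from any specific cable material.

```python
import numpy as np

# Two empirical DC-conductivity laws of the kind compared in the paper
# (coefficients are placeholders, not fitted values):
#   model A: sigma = sigma0 * exp(alpha*T + beta*E)
#   model B: sigma = sigma0 * exp(alpha*T) * sinh(gamma*E) / (gamma*E)
def sigma_A(E, T, sigma0=1e-16, alpha=0.084, beta=6.5e-8):
    return sigma0 * np.exp(alpha * T + beta * E)

def sigma_B(E, T, sigma0=1e-16, alpha=0.084, gamma=2e-7):
    return sigma0 * np.exp(alpha * T) * np.sinh(gamma * E) / (gamma * E)

# Compare the two laws at conductor-side vs screen-side conditions
# (hot, high field at the conductor; cooler, lower field at the screen).
for label, (E, T) in {"conductor": (25e6, 70.0), "screen": (15e6, 50.0)}.items():
    print(label, f"A: {sigma_A(E, T):.2e} S/m", f"B: {sigma_B(E, T):.2e} S/m")
```

Even for the same boundary conditions, the two laws predict conductivities that differ by factors of several, which in turn changes the computed steady-state field distribution.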
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches however intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
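For readers unfamiliar with the heuristic family alluded to above, the toy sketch below computes directly-follows counts and a Heuristics-Miner-style dependency measure from a small, made-up event log. It is only the first ingredient of such a discovery technique, not the paper's full "discover and structure" pipeline, and the log and thresholds are illustrative.

```python
from collections import Counter

# Toy event log: each trace is the ordered list of activities of one case.
log = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["a", "b", "c", "d"],
    ["a", "e", "d"],
]

# Directly-follows counts |x > y|.
df = Counter()
for trace in log:
    for x, y in zip(trace, trace[1:]):
        df[(x, y)] += 1

# Heuristics-Miner-style dependency measure in (-1, 1):
#   dep(x, y) = (|x>y| - |y>x|) / (|x>y| + |y>x| + 1)
def dependency(x, y):
    a, b = df[(x, y)], df[(y, x)]
    return (a - b) / (a + b + 1)

print(dependency("a", "b"))  # ~0.67: a is reliably followed by b
print(dependency("b", "c"))  # 0.25: b and c occur in both orders
```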
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic-exhaust-originated particulate matter at the local street scale, studied with a combination of a dispersion model and an aerosol process model; and 3) analysis of the meteorological origins, especially temperature inversions, of selected high particulate matter concentration situations in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows seemed to have no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), whereas the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate the total number concentrations, while condensation and coagulation had only a minimal effect on the Aitken mode number concentrations. The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average error in prediction) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
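A minimal sketch of the modeling idea follows, using ordinary least squares on synthetic data; the paper additionally evaluates adaptive regression splines and radial basis function networks, and uses measured performance rather than the simulated values assumed here.

```python
import numpy as np

# Learn a mapping from compiler flag settings to measured performance from a
# small sample of configurations, then predict unmeasured ones.
# Synthetic data and plain least squares only, purely for illustration.
rng = np.random.default_rng(1)
n_flags, n_train = 8, 40

X_train = rng.integers(0, 2, size=(n_train, n_flags)).astype(float)
true_w = rng.normal(0, 1, size=n_flags)                        # hidden per-flag effect
y_train = X_train @ true_w + rng.normal(0, 0.1, size=n_train)  # "measured" performance

# Fit a linear model with an intercept term.
A = np.hstack([X_train, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict performance at an arbitrary, unmeasured flag configuration.
x_new = rng.integers(0, 2, size=n_flags).astype(float)
pred = np.append(x_new, 1.0) @ coef
print(f"predicted performance: {pred:.3f}")
```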
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
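For concreteness, the sketch below shows how quantile residuals are typically computed under the standard definition r_t = Φ⁻¹(F(y_t | θ̂)), i.e. the probability integral transform followed by the inverse normal CDF. The two-component Gaussian mixture and its parameters are illustrative stand-ins for an estimated model, not taken from the thesis.

```python
import numpy as np
from scipy import stats

# Quantile residuals for a "fitted" two-component Gaussian mixture. If the
# model is correctly specified, the residuals are approximately iid N(0, 1).
def mixture_cdf(y, w, mu, sd):
    return sum(wk * stats.norm.cdf(y, loc=mk, scale=sk)
               for wk, mk, sk in zip(w, mu, sd))

def quantile_residuals(y, w, mu, sd):
    u = mixture_cdf(y, w, mu, sd)           # probability integral transform
    return stats.norm.ppf(u)                # map back through inverse normal CDF

rng = np.random.default_rng(2)
# Simulate from the same mixture the "model" assumes -> residuals ~ N(0, 1).
comp = rng.choice(2, size=1000, p=[0.7, 0.3])
y = np.where(comp == 0, rng.normal(0, 1, 1000), rng.normal(3, 2, 1000))

r = quantile_residuals(y, w=[0.7, 0.3], mu=[0.0, 3.0], sd=[1.0, 2.0])
print(r.mean(), r.std())                    # should be close to 0 and 1
```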
Abstract:
The goal of this study was to examine the role of organizational causal attribution in understanding the relation of work stressors (work-role overload, excessive role responsibility, and unpleasant physical environment) and personal resources (social support and cognitive coping) to such organizational-attitudinal outcomes as work engagement, turnover intention, and organizational identification. In some analyses, cognitive coping was also treated as an organizational outcome. Causal attribution was conceptualized in terms of four dimensions: internality–externality (attributing the cause of one’s successes and failures to oneself, as opposed to external factors), stability (thinking that the cause of one’s successes and failures is stable over time), globality (perceiving the cause to be operative in many areas of one’s life), and controllability (believing that one can control the causes of one’s successes and failures). Several hypotheses were derived from Karasek’s (1989) Job Demands–Control (JD-C) model and from the Job Demands–Resources (JD-R) model (Demerouti, Bakker, Nachreiner & Schaufeli, 2001). Based on the JD-C model, a number of moderation effects were predicted, stating that the strength of the association of work stressors with the outcome variables (e.g. turnover intentions) varies as a function of the causal attribution; for example, an unpleasant work environment is more strongly associated with turnover intention among those with an external locus of causality than among those with an internal locus of causality. From the JD-R model, a number of hypotheses on the mediation model were derived. They were based on two processes posited by the model: an energy-draining process in which work stressors, along with a mediating effect of causal attribution for failures, deplete the nurses’ energy, leading to turnover intention, and a motivational process in which personal resources, along with a mediating effect of causal attribution for successes, foster the nurses’ engagement in their work, leading to higher organizational identification and to decreased intention to leave the nursing job. For instance, it was expected that the relationship between work stressors and turnover intention could be explained (mediated) by a tendency to attribute one’s work failures to stable causes. The data were collected from Finnish hospital nurses using e-questionnaires; overall, 934 nurses responded. Work stressors and personal resources were measured by five scales derived from the Occupational Stress Inventory-Revised (Osipow, 1998). Causal attribution was measured using the Occupational Attributional Style Questionnaire (Furnham, 2004). Work engagement was assessed through the Utrecht Work Engagement Scale (Schaufeli et al., 2002), turnover intention by the Van Veldhoven & Meijman (1994) scale, and organizational identification by the Mael & Ashforth (1992) measure. The results provided support for the role of causal attribution in the overall work stress process. Findings related to the moderation model can be summarized in three main points. First, external locus of causality, together with job level, moderated the relationship between work overload and cognitive coping; this interaction was evidenced only among nurses in non-supervisory positions. Second, external locus of causality and job level together moderated the relationship between physical environment and turnover intention.
An opposite pattern was found for this interaction: among nurses, externality exacerbated the effect of perceived unpleasantness of the physical environment on turnover intention, whereas among supervisors internality produced the same effect. Third, job level also revealed a moderating effect of controllability attribution on the relationship between physical environment and cognitive coping. Findings related to the mediation model for the energetic process indicated that the partial model, in which work stressors also have a direct effect on turnover intention, fitted the data better. In the mediation model for the motivational process, an intermediate mediation effect, in which the effects of personal resources on turnover intention went through two mediators (i.e., causal dimensions and organizational identification), fitted the data better. Each dimension of causal attribution appeared to follow a somewhat unique pattern of mediation, not only for the energetic but also for the motivational process. Overall, the findings on the mediation models partly supported the two simultaneous underlying processes proposed by the JD-R model. While in the energetic process the dimension of externality mediated the relationship between stressors and turnover intention only partially, all dimensions of causal attribution appeared to have significant mediating effects in the motivational process. The general findings supported the moderating and mediating effects of causal attribution in the work stress process. The study contributes to several research traditions, including the interaction approach and the JD-C and JD-R models. However, many potential functions of organizational causal attribution are yet to be evaluated by relevant academic and organizational research. Keywords: organizational causal attribution, optimistic / pessimistic attributional style, work stressors, organisational stress process, stressors in nursing profession, hospital nursing, JD-R model, personal resources, turnover intention, work engagement, organizational identification.
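As a purely illustrative aside, a moderation hypothesis of the kind tested here corresponds to an interaction term in a regression model. The sketch below uses synthetic data and hypothetical variable names, not the study's measures or results.

```python
import numpy as np

# Does the effect of a work stressor on turnover intention depend on the
# externality of causal attribution? Operationally: test the interaction
# term in an OLS regression. All data below are simulated for illustration.
rng = np.random.default_rng(3)
n = 500
stressor = rng.normal(0, 1, n)              # e.g., unpleasant physical environment
externality = rng.normal(0, 1, n)           # external locus of causality
# Simulate an interaction: the stressor matters more when externality is high.
turnover = (0.3 * stressor + 0.1 * externality
            + 0.4 * stressor * externality + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), stressor, externality, stressor * externality])
beta, *_ = np.linalg.lstsq(X, turnover, rcond=None)
print("interaction coefficient:", round(beta[3], 2))   # recovers roughly 0.4
```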
Abstract:
The cultural appropriateness of human service processes is a major factor in determining the effectiveness of their delivery. Sensitivity to issues of culture is particularly critical in dealing with family disputes, which are generally highly emotive and require difficult decisions to be made regarding children, material assets and ongoing relationships. In this article we draw on findings from an evaluation of the Family Relationship Centre at Broadmeadows (FRCB) to offer some insights into and suggestions about managing cultural matters in the current practice of family dispute resolution (FDR) in Australia. The brief for the original research was to evaluate the cultural appropriateness of FDR services offered to culturally and linguistically diverse (CALD) communities living within the FRCB’s catchment area, specifically members of the Lebanese, Turkish and Iraqi communities. The conclusions of the evaluations were substantially positive. The work of the Centre was found to illustrate many aspects of best practice but also raised questions worthy of future exploration. The current article reports on issues of access, retention and outcomes obtained by CALD clients at various stages of the FRCB service.
Abstract:
Hybrid innovations, or new products that combine two existing product categories into one, are increasingly popular in today’s marketplace. Despite this proliferation, few studies address them. The purpose of this thesis is to examine consumer evaluation of hybrid innovations by focusing on consumer categorization of such innovations and on factors contributing positively and negatively to their evaluation. This issue is examined by means of three studies. The first study addresses the proportion of consumers categorizing hybrid products as single- versus dual-purpose, what contributes to such a categorization, what differences can be found between the two groups, and whether categorization can and should be included in models of innovation adoption. The second study expands the scope by including motivation as a predictor of consumer evaluation and examines two cognitive and affective factors and their differential impact on innovation evaluation. Finally, the third study examines the product comparisons that single- versus dual-purpose categorization induces. These three essays together build a broader understanding of hybrid innovation evaluation. The thesis uses theories from both psychology and marketing to examine the issues at hand. Conceptual combination and analogical learning theories from psychology are used to understand categorization and knowledge transfer. From marketing, consumer behavior and innovation adoption studies are drawn on to better understand the link between categorization and product evaluation and the factors contributing to product evaluation. The main results of the thesis are that (1) most consumers categorize hybrid products as single- rather than dual-purpose products, (2) consumers who categorize them as dual-purpose find them more attractive, (3) motivation has a significant effect on consumer evaluation of innovations: cognitive factors promote an emphasis on product net benefits, whereas affective factors induce consumers to consider product meaning in the form of categorization and perceived product complexity, (4) categorization constrains subsequent product evaluation, and (5) categorization can and should be included in models of innovation adoption. Maria Sääksjärvi is associated with CERS, the Center for Relationship Marketing and Service Management at the Swedish School of Economics and Business Administration.
Abstract:
Instrument landing systems (ILS) are normally designed assuming the site around them to be flat. Uneven terrain results in undulations in the glide slope. In recent years, models have been developed for predicting such aberrations as a simpler alternative to experimental methods. Such modeling normally assumes the ground to be perfectly conducting. A method is presented for accounting for imperfect terrain conductivity within the framework of the uniform theory of diffraction (UTD). A single impedance wedge formulation is developed into a form that resembles the standard form of UTD, with only one extra term in the diffraction coefficient. This extends the applicability of the standard UTD formulation and software packages to the case of imperfectly conducting terrain. The method has been applied to a real airport site in India, and improved agreement with measured glide slope parameters is demonstrated.
Abstract:
XVIII IUFRO World Congress, Ljubljana 1986.
Abstract:
In meteorology, observations and forecasts of a wide range of phenomena, for example snow, clouds, hail, fog, and tornadoes, can be categorical, that is, they can only have discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that the true snow extent was not known, so we were forced simply to measure the agreement between different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)], compared with ceilometer observations of the Helsinki Testbed, was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example the accuracy and the timeliness of the particular data and methods. In this vein, we tentatively discuss how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source for validation. Results show that they are of reasonable quality, and their use for case studies can be warmly recommended. Last, the use of cluster analysis on meteorological in-situ measurements was explored. The AutoClass algorithm was used to construct compact representations of synoptic conditions of fog at Finnish airports.
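As an illustration of the bootstrap idea mentioned above, the sketch below builds a confidence interval for the agreement rate between two categorical products using simple i.i.d. resampling. The thesis itself accounts for spatial and temporal correlation (which this toy version ignores), and the data here are synthetic.

```python
import numpy as np

# Bootstrap confidence interval for the agreement rate between two categorical
# products (e.g., "snow" / "no snow" from two satellite analyses). This toy
# version resamples observations independently; block-type resampling would be
# needed to respect spatial and temporal correlation.
def bootstrap_agreement_ci(a, b, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    agreements = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample with replacement
        agreements[i] = np.mean(a[idx] == b[idx])
    return np.quantile(agreements, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(4)
product_a = rng.integers(0, 2, size=400)            # 1 = snow, 0 = no snow
flip = rng.random(400) < 0.15                       # ~15% synthetic disagreement
product_b = np.where(flip, 1 - product_a, product_a)

print(np.mean(product_a == product_b), bootstrap_agreement_ci(product_a, product_b))
```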