Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models through an experimental case study. Deconstruction planning is under pressure to reduce operating costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need for a formal procedure that contractors can use to select the most appropriate deconstruction plan. Because numerous factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted in previous research to support deconstruction technique selection, and it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also indicate that whether the ANP model provides more accurate results should be examined in further research.
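The abstract does not reproduce the computation, but the core of AHP is deriving criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency check. A minimal sketch in Python; the matrix values are illustrative, not taken from the paper:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    """Derive AHP priority weights from a reciprocal pairwise-comparison
    matrix via the principal eigenvector, and report the consistency ratio."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)            # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalize to priorities
    # Saaty: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI tabulated, n <= 6 here)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / ri if ri else 0.0
    return w, cr

# Hypothetical comparison of three criteria (cost, environment, safety):
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)  # CR < 0.1 is conventionally considered acceptable
```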
Abstract:
Satellite data are increasingly used to provide observation-based estimates of the effects of aerosols on climate. The Aerosol-cci project, part of the European Space Agency's Climate Change Initiative (CCI), was designed to provide essential climate variables for aerosols from satellite data. Eight algorithms, developed for the retrieval of aerosol properties using data from AATSR (4), MERIS (3) and POLDER, were evaluated to determine their suitability for climate studies. The primary result from each of these algorithms is the aerosol optical depth (AOD) at several wavelengths, together with the Ångström exponent (AE), which describes the spectral variation of the AOD for a given wavelength pair. Other aerosol parameters that can be retrieved from satellite observations are not considered in this paper. The AOD and AE (AE only for Level 2) were evaluated against independent collocated observations from the ground-based AERONET sun photometer network and against "reference" satellite data provided by MODIS and MISR. Tools used for the evaluation were developed for daily products as produced by the retrieval with a spatial resolution of 10 × 10 km² (Level 2) and daily or monthly aggregates (Level 3). These tools include statistics for L2 and L3 products compared with AERONET, as well as scoring based on spatial and temporal correlations. In this paper we describe their use in a round robin (RR) evaluation of four months of data, one month for each season in 2008. The amount of data was restricted to only four months because of the large effort made to improve the algorithms, and to evaluate the improvement and current status before larger data sets are processed. Evaluation criteria are discussed. Results presented show the current status of the European aerosol algorithms in comparison with AERONET, MODIS, and MISR data. The comparison leads to a preliminary conclusion that the scores are similar, including those for the references, but the coverage of AATSR needs to be enhanced and further improvements are possible for most algorithms. None of the algorithms, including the references, outperforms all others everywhere. AATSR data can be used for the retrieval of AOD and AE over land and ocean. PARASOL and one of the MERIS algorithms have been evaluated over ocean only and both algorithms provide good results.
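The paper's evaluation tools are described only as statistics and correlation-based scoring; as a hedged illustration, validating retrieved AOD against collocated AERONET observations typically reduces to metrics like these (the array contents below are invented, not Aerosol-cci data):

```python
import numpy as np

def aod_validation_stats(aod_sat: np.ndarray, aod_aeronet: np.ndarray) -> dict:
    """Common statistics for comparing collocated satellite and ground-based
    AOD values: sample size, mean bias, RMSE, and Pearson correlation."""
    diff = aod_sat - aod_aeronet
    r = np.corrcoef(aod_sat, aod_aeronet)[0, 1]
    return {
        "n": aod_sat.size,
        "bias": diff.mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
        "pearson_r": r,
    }

# Illustrative collocated AOD values at 550 nm:
sat = np.array([0.12, 0.30, 0.22, 0.45, 0.18])
aer = np.array([0.10, 0.28, 0.25, 0.40, 0.20])
print(aod_validation_stats(sat, aer))
```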
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared with cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes. Conclusions: Consistent with previous research, collisions following a barrier impact are not infrequent and substantially increase driver injury risk. The results suggest that exit-angle-based crash test criteria alone are not sufficient to predict second collision occurrence in real-world barrier crashes.
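The abstract names binary logistic regression for second-event occurrence but gives no model details; a minimal sketch of that kind of model, with entirely hypothetical predictors and labels (not the NASS/CDS data), might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical predictors mirroring those named in the abstract:
# barrier lateral stiffness (kN/m), vehicle type (0=car, 1=truck/SUV),
# pre-impact tracking (0=non-tracking, 1=tracking).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(50, 500, 200),   # barrier lateral stiffness
    rng.integers(0, 2, 200),     # vehicle type
    rng.integers(0, 2, 200),     # pre-impact tracking condition
])
y = rng.integers(0, 2, 200)      # second event occurred? (made-up labels)

model = LogisticRegression().fit(X, y)
# Exponentiated coefficients read as odds ratios per unit change in a predictor.
print(np.exp(model.coef_))
```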
Abstract:
Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over twenty-five years old, the data used in that research are no longer representative of the currently installed barriers or the US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. The analysis included 1,383 (596,331 weighted) real-world barrier midsection impacts selected from thirteen years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS). For each suitable case, the scene diagram and available scene photographs were used to determine roadside and barrier-specific variables not available in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of seven compared with cases with no second event. Twenty-four full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from NCHRP Report 350. It was found that the NCHRP Report 350 exit angle criterion alone was not sufficient to predict second collision occurrence for real-world barrier crashes.
Abstract:
PURPOSE Our main objective was to prospectively determine the prognostic value of [(18)F]fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) after two cycles of rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone given every 14 days (R-CHOP-14) under standardized treatment and PET evaluation criteria. PATIENTS AND METHODS Patients with any stage of diffuse large B-cell lymphoma were treated with six cycles of R-CHOP-14 followed by two cycles of rituximab. PET/CT examinations were performed at baseline, after two cycles (and after four cycles if the patient was PET-positive after two cycles), and at the end of treatment. PET/CT examinations were evaluated locally and by central review. The primary end point was event-free survival at 2 years (2-year EFS). RESULTS Median age of the 138 evaluable patients was 58.5 years with a WHO performance status of 0, 1, or 2 in 56%, 36%, or 8% of the patients, respectively. By local assessment, 83 PET/CT scans (60%) were reported as positive and 55 (40%) as negative after two cycles of R-CHOP-14. Two-year EFS was significantly shorter for PET-positive compared with PET-negative patients (48% v 74%; P = .004). Overall survival at 2 years was not significantly different, with 88% for PET-positive versus 91% for PET-negative patients (P = .46). By using central review and the Deauville criteria, 2-year EFS was 41% versus 76% (P < .001) for patients who had interim PET/CT scans after two cycles of R-CHOP-14 and 24% versus 72% (P < .001) for patients who had PET/CT scans at the end of treatment. CONCLUSION Our results confirmed that an interim PET/CT scan has limited prognostic value in patients with diffuse large B-cell lymphoma homogeneously treated with six cycles of R-CHOP-14 in a large prospective trial. At this point, interim PET/CT scanning is not ready for clinical use to guide treatment decisions in individual patients.
Abstract:
The Leximancer system is a relatively new method for transforming lexical co-occurrence information from natural language into semantic patterns in an unsupervised manner. It employs two stages of co-occurrence information extraction, semantic and relational, using a different algorithm for each stage. The algorithms used are statistical, but they employ nonlinear dynamics and machine learning. This article is an attempt to validate the output of Leximancer, using a set of evaluation criteria taken from content analysis that are appropriate for knowledge discovery tasks.
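Leximancer's internal algorithms are not detailed in the abstract; as a loose, hedged illustration of the raw material its first stage works from, lexical co-occurrence statistics can be gathered with a sliding window over text (a toy sketch, not Leximancer's actual method):

```python
from collections import Counter

def cooccurrence_counts(tokens: list[str], window: int = 3) -> Counter:
    """Count unordered word pairs appearing within `window` tokens of each
    other; such counts are the raw input for semantic pattern extraction."""
    pairs = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            if v != w:
                pairs[tuple(sorted((w, v)))] += 1
    return pairs

text = "knowledge discovery from text requires semantic pattern discovery".split()
print(cooccurrence_counts(text).most_common(5))
```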
Abstract:
Purpose - Despite the increasing sophistication of new product development (NPD) research, the reliance on traditional approaches to studying NPD has left several areas in need of further research. The authors propose addressing some of these gaps, especially the limited focus on consumer brands, the evaluation criteria used across different project-review points in the NPD process, and the distinction between "kills", "successes", and "failures". Moreover, they propose investigating how screening criteria change across project-review points, using real-time NPD projects. Design/methodology/approach - A postal survey generated 172 usable questionnaires from a sample of European, North American, Far Eastern and Australian consumer packaged-goods firms, providing data on 314 new product projects covering different development and post-commercialization review points. Findings - The results confirm that acceptance-rejection criteria vary through the NPD process. However, financial criteria dominate across all the project-review points. Initial screening is coarse, focusing predominantly on financial criteria. Fit with organizational, product, brand, promotional, and market requirements dominates at the detailed screen and pre-development evaluation points. At pre-launch, decision-makers focus on product, brand, and promotional criteria. Commercial fit, production synergies, and the reliability of the firm's market intelligence are significant discriminators in the post-launch review. Moreover, the importance of marketing and channel issues makes the criteria for screening consumer brands different from those used in industrial markets. Originality/value - Although largely descriptive and based on a relatively small sample of consumer goods firms, the study offers new insights into NPD project evaluation behavior. Future, larger-scale investigations covering a broader spectrum of consumer product sectors are needed to validate the results and to explain the reasons behind managers' decisions. © Emerald Group Publishing Limited.
Abstract:
In this paper a Hierarchical Analytical Network Process (HANP) model is demonstrated for evaluating alternative technologies for generating electricity from municipal solid waste (MSW) in India. The technological alternatives and evaluation criteria for the HANP study are characterised by reviewing the literature and consulting experts in the field of waste management. Technologies reviewed in the context of India include landfill, anaerobic digestion, incineration, pelletisation and gasification. To investigate the sensitivity of the result, we examine variations in expert opinions and carry out an Analytical Hierarchy Process (AHP) analysis for comparison. We find that anaerobic digestion is the preferred technology for generating electricity from MSW in India. Gasification is indicated as the preferred technology in the AHP model, owing to its exclusion of criteria dependencies, and in the HANP analysis when a high priority is placed on net output and retention time. We conclude that HANP successfully provides a structured framework for recommending which technologies to pursue in India, and that the adoption of such tools is critical at a time when key investments in infrastructure are being made. The presented methodology is therefore thought to have wider potential for investors, policy makers, researchers and plant developers in India and elsewhere. © 2013 Elsevier Ltd. All rights reserved.
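The abstract does not show the ANP mechanics. The step that distinguishes ANP-style models from AHP is raising a column-stochastic weighted supermatrix of criteria/alternative interdependencies to a limit to obtain global priorities. A hedged sketch with a made-up supermatrix, not the paper's data:

```python
import numpy as np

def anp_limit_priorities(supermatrix: np.ndarray, tol: float = 1e-9) -> np.ndarray:
    """Raise a column-stochastic supermatrix to successive powers until it
    converges; any column of the limit matrix then gives global priorities.
    (Assumes a primitive matrix, so the power sequence converges.)"""
    m = supermatrix.copy()
    while True:
        m_next = m @ supermatrix
        if np.abs(m_next - m).max() < tol:
            return m_next[:, 0]
        m = m_next

# Toy 3x3 weighted supermatrix (columns sum to 1), purely illustrative:
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
print(anp_limit_priorities(W))
```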
Abstract:
Guest editorial
Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial boards of several international journals and co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.

Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector

This special issue focuses on holistic, applied research on performance measurement in energy sector management, publishing relevant applied work to bridge the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil-fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimates and confidence intervals. The author reveals that, in the sample, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of the oil refineries in the sample could improve their efficiency further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA), to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first is a traditional DEA model for analyzing cost-only efficiency. The second includes (inverse) quality by modelling total customer minutes lost as an input. The third is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria weighting approach for evaluating energy and climate policy. The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated into the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered, classified under four objectives: climate change mitigation, energy effectiveness, socioeconomics, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in firm efficiency following an acquisition, and only weak evidence of efficiency improvements brought by the new shareholder. The author also discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of joint ventures on the technical efficiency of the pipeline companies. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
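The editorial names DEA repeatedly without formulas. The basic input-oriented CCR efficiency of a single unit solves a small linear program: minimize theta subject to the peer combination using no more than theta times the unit's inputs while producing at least its outputs. A hedged sketch using scipy, with invented refinery data rather than any of the papers' samples:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X: np.ndarray, Y: np.ndarray, k: int) -> float:
    """Input-oriented CCR efficiency of unit k.
    X: inputs (n_units x n_inputs), Y: outputs (n_units x n_outputs).
    Solves: min theta s.t. sum_j lam_j * x_j <= theta * x_k,
                           sum_j lam_j * y_j >= y_k, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    # Input constraints: X^T lam - theta * x_k <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -Y^T lam <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # theta = 1 means the unit lies on the frontier

# Five hypothetical refineries: 2 inputs (capital, energy), 1 output (gasoline).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
print([round(dea_ccr_input(X, Y, k), 3) for k in range(len(X))])
```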
Abstract:
Well-prepared, adaptive and sustainably developing specialists are an important competitive advantage, but also one of the main challenges for businesses. One option the education system has for creating and developing staff adequate to these needs is the development of projects with topics drawn from the real economy ("Practical Projects"). Objective assessment is an essential driver and motivator, and rests on a system of well-chosen, well-defined and specific criteria and indicators. One approach to more objective evaluation of practical projects is to derive more objective weights for the criteria. A natural and reasonable way to do this is to accumulate the opinions of proven experts and subsequently derive the weights from the accumulated data. The preparation and conduct of a survey among recognized experts in the field of project-based learning in mathematics, informatics and information technologies is described. Processing the accumulated data with AHP allowed us to determine the weights of the evaluation criteria objectively and hence to achieve the desired objectivity. ACM Computing Classification System (1998): K.3.2.
Abstract:
Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. How to manage this subjectivity so as to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory with rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle the vagueness of a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are finally calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity throughout the decision-making process.
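The abstract doesn't reproduce the entropy weighting step. In the standard information-entropy method, which the paper adapts to rough numbers, criteria whose scores vary more across alternatives receive higher weight. A plain (non-rough) sketch with invented scores:

```python
import numpy as np

def entropy_weights(scores: np.ndarray) -> np.ndarray:
    """Entropy weighting over an (alternatives x criteria) matrix of positive
    scores: criteria whose normalized scores have lower Shannon entropy
    (more spread across alternatives) receive higher weight."""
    m, _ = scores.shape
    p = scores / scores.sum(axis=0)               # column-wise proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(m)  # entropy per criterion, in [0, 1]
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

# Four design concepts scored on three criteria (made-up data); the second
# criterion has identical scores everywhere, so it gets zero weight.
S = np.array([[7.0, 5.0, 9.0],
              [6.0, 5.0, 4.0],
              [8.0, 5.0, 6.0],
              [5.0, 5.0, 7.0]])
print(entropy_weights(S))
```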
Abstract:
Aim: To evaluate the clinical performance of a composite resin (CR) and a resin-modified glass-ionomer cement (RMGIC) for the treatment of abfraction lesions. Methods: Thirty patients with abfraction lesions in at least two premolar teeth were selected and invited to participate in this study. All restorations were made within the same clinical time frame. One tooth was restored with the CR Z100™ (3M, St. Paul, MN, USA), and the other was restored with the RMGIC Vitremer™ (3M). The restorations were assessed immediately and at 1, 6 and 12 months after restoration, using modified US Public Health Service (USPHS) criteria: marginal integrity, marginal discoloration, wear, retention, secondary caries and hypersensitivity. The statistical analysis was based on the Friedman ANOVA and Mann-Whitney tests, with p<0.05 considered statistically significant. Results: Both materials demonstrated satisfactory clinical performance after one year. In the individual analysis of each material, there was a significant difference (p<0.05) in the marginal integrity and wear criteria for both CR and RMGIC, with RMGIC exhibiting more deterioration one year after restoration. Comparing the two materials, a significant difference was found only for marginal discoloration, with the RMGIC restorations showing the worse prognosis after a year of evaluation. There was no significant difference in retention, caries or hypersensitivity between CR and RMGIC. Conclusions: CR exhibited the better clinical performance according to the cost-effectiveness and evaluation criteria used in this study.
Abstract:
Hazardous materials are substances that, if not regulated, can pose a threat to human populations and their environmental health, safety or property when transported in commerce. About 1.5 million tons of hazardous material shipments are transported by truck in the US annually, with a steady increase of approximately 5% per year. The objective of this study was to develop a routing tool for hazardous material transport that facilitates reduced environmental impacts and fewer transportation difficulties, yet still finds paths that remain attractive to shipping carriers in terms of trucking cost. The study started with the identification of inhalation hazard impact zones and explosion protective areas around the locations of hypothetical hazardous material releases, considering different parameters (i.e., chemical characteristics, release quantities, atmospheric conditions, etc.). Results showed that, depending on the quantity released, the chemical, and the atmospheric stability (a function of wind speed, meteorology, sky cover, and the time and location of accidents), the consequences of these incidents can differ. The study was extended by selecting additional evaluation criteria, because health risk would not be the only concern in route selection. Transportation difficulties (i.e., road blockage and congestion) were incorporated as an important factor because of their indirect impact/cost on the users of transportation networks. Trucking costs were also considered one of the primary criteria in selecting hazardous material paths; otherwise the suggested routes would not have been convincing for the shipping companies. The final criterion was the proximity of public places to the routes. The approach evolved from a simple framework into a comprehensive and efficient GIS-based tool able to investigate the transportation network of any given study area and capable of generating the best routing options for cargoes. The tool uses a multi-criteria decision-making method that considers the priorities of the decision makers in choosing cargo routes. Comparison of the routing options based on each criterion, and on the overall suitability of each path with regard to all the criteria, showed that tools like the one proposed in this study can give decision makers insight in the area of hazardous material transport. The tool shows the probable consequences of choosing each path in an easily understandable way, in the form of maps and tables, which makes the tradeoffs between costs and risks considerably simpler: in some cases, compromising slightly on trucking cost may drastically decrease the probable health risk and/or traffic difficulties. This will not only reward the community by making cities safer places to live, but can also benefit shipping companies by allowing them to advertise as environmentally friendly carriers.
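The abstract names the multi-criteria routing idea but not its mechanics. One common pattern, offered here only as a hedged sketch and not necessarily the paper's method, is to collapse per-edge criteria (risk, congestion, cost) into a single weighted cost reflecting the decision maker's priorities and then run a shortest-path search. The network and weights below are hypothetical:

```python
import heapq

# Hypothetical road network: edge -> (health_risk, congestion, trucking_cost),
# each criterion pre-scaled to [0, 1] so the criteria are commensurable.
EDGES = {
    ("A", "B"): (0.8, 0.2, 0.3), ("A", "C"): (0.2, 0.5, 0.6),
    ("B", "D"): (0.4, 0.3, 0.2), ("C", "D"): (0.1, 0.6, 0.5),
}
WEIGHTS = (0.5, 0.2, 0.3)  # decision-maker priorities: risk, congestion, cost

def composite(edge: tuple) -> float:
    """Weighted-sum scalarization of the edge's criteria scores."""
    return sum(w * v for w, v in zip(WEIGHTS, EDGES[edge]))

def best_route(start: str, goal: str) -> tuple[float, list[str]]:
    """Dijkstra's algorithm over the composite edge costs."""
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for (u, v) in EDGES:
            if u == node and v not in seen:
                heapq.heappush(frontier, (cost + composite((u, v)), v, path + [v]))
    raise ValueError("no route found")

print(best_route("A", "D"))  # picks A->C->D here, since the risk weight dominates
```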
Abstract:
Objectives
The procurement research of the Sydney Opera House FM Exemplar Project aims to develop innovative methods and guidelines for the procurement of FM services, applicable to iconic and/or performing arts centre facilities, or facilities with similar FM functions. The initial procurement report in June 2005 analysed the strategic objectives and operational requirements that provide ‘demand statements’ as evaluation criteria in the service procurement process. The subsequent interim procurement report in September 2005 discussed the elements contributing to the criteria for decision-making in the service procurement process. This procurement report concentrates on research into procurement strategies and innovative methods using a case study approach. The objectives of this report are:
• to investigate service procurement methods and processes in iconic and/or performing arts centre facilities;
• to showcase FM innovation at Sydney Opera House through a case study;
• to establish a preliminary decision-making framework and guidelines for selecting appropriate FM procurement routes, providing a useful model for the FM community.
Findings
Findings from this procurement research are presented as follows:
• FM innovation and experience of Sydney Opera House
• Innovative procurement methods and processes, drawn from a case study of Sydney Opera House as exemplar
• An integrated performance framework linking maintenance service functions to high-level organisational objectives and strategies
• Procurement methods and contract outcomes, focusing on the building maintenance and cleaning services of Sydney Opera House
• Multi-dimensional assessment of service providers
• General decision-making strategies and guidelines for selecting appropriate FM procurement routes
Further Research
Whilst the Sydney Opera House case study emphasises the experience of Sydney Opera House, a study of procurement strategies and methods from published research and FM good practice will supply facilities managers with alternative procurement routes. Further research on the procurement theme will develop a final decision-making model for the procurement of FM services, drawn from the evaluation of the case study outcomes, as well as FM good practice and findings from current published research.