885 results for Methods for Multi-criteria Evaluation


Relevance:

40.00%

Abstract:

In recent decades, the need to evaluate public sector organizations has arisen more and more often, and many new methods have appeared, raising the need for their classification both in practice and in research. Based on the classification attempts found in the literature and on the perspectives of the evaluation field, the author proposes a classification framework for the evaluation methods of public sector organizations. The dimensions of the classification include the situation of the evaluator, the role of the evaluation, and the approach to knowledge. The author illustrates the content of the framework with examples, indicating the model's applicability in practice. The framework can also help in determining the focus and scope of research projects.

Relevance:

40.00%

Abstract:

Environmentally conscious construction has received significant research attention in recent decades. Even though the construction literature is rich in studies that emphasize the importance of environmental impact during the construction phase, most previous studies failed to combine environmental analysis with other project performance criteria, mainly because they overlooked the multi-objective nature of construction projects. To achieve environmentally conscious construction, multiple objectives and their relationships need to be analyzed successfully in the complex construction environment. The complex construction system is composed of changing project conditions that affect the relationship between the time, cost and environmental impact (TCEI) of construction operations, yet this effect is still unknown to construction professionals. Studying it is vital to fulfilling multiple project objectives and achieving environmentally conscious construction. This research proposes an analytical framework to analyze the impact of changing project conditions on the relationship of TCEI, with greenhouse gas (GHG) emissions as the environmental impact category. The methodology utilizes multi-agent systems, multi-objective optimization, the analytic network process, and system dynamics tools to study the relationships of TCEI and support decision-making under the influence of project conditions. Life cycle assessment (LCA) is applied to the evaluation of environmental impact in terms of GHG. The mixed-method approach allowed for the collection and analysis of qualitative and quantitative data: structured interviews with professionals in the highway construction field were conducted to gain their perspectives on decision-making under certain project conditions, while quantitative data were collected from the Florida Department of Transportation (FDOT) for highway resurfacing projects. The collected data were used to test the framework, which yielded statistically significant results in simulating project conditions and optimizing TCEI. The results showed that changes in project conditions had a significant impact on the TCEI-optimal solutions. The correlations between TCEI suggested that the objectives affect each other positively, but with different strengths. The findings will assist contractors in visualizing the impact of their decisions on the relationship of TCEI.
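The TCEI trade-off at the core of the framework can be illustrated with a small sketch. The dissertation's actual model couples multi-agent systems, ANP and system dynamics, none of which is reproduced here; the snippet below only shows, for hypothetical plan data, how candidate construction plans can be screened for Pareto-optimal time-cost-emissions combinations.

    # Minimal sketch of the TCEI trade-off: keep only the Pareto-optimal plans,
    # i.e. those not dominated on all three criteria at once. The plan values
    # are hypothetical, not drawn from the FDOT dataset.
    plans = {
        "crew_A_night": (12.0, 410_000, 95.0),   # (days, USD, t CO2e)
        "crew_B_day":   (15.0, 360_000, 88.0),
        "crew_C_rush":  (9.0,  520_000, 120.0),
        "crew_D_slow":  (16.0, 380_000, 90.0),   # dominated by crew_B_day
    }

    def dominates(a, b):
        """True if plan a is at least as good as plan b on every criterion
        (all minimized) and strictly better on at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    pareto = {name: tcei for name, tcei in plans.items()
              if not any(dominates(other, tcei)
                         for other in plans.values() if other != tcei)}

    for name, (t, c, e) in sorted(pareto.items()):
        print(f"{name}: {t:.0f} days, ${c:,.0f}, {e:.0f} t CO2e")

The dominated plan (slower, costlier and dirtier than an alternative) is removed; the survivors are the trade-off frontier a decision-maker would weigh.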

Relevance:

40.00%

Abstract:

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is unacceptably high. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for the wind-resistant design of low-rise buildings and for wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale testing at the Wall of Wind (WoW) and small-scale testing in a Boundary Layer Wind Tunnel (BLWT), combined with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided a new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the roof deck, to attics, to room partitions. The effects on the IP of sudden breaching, of existing dominant openings on building envelopes, and of compartmentalization of the building interior were systematically investigated. The results indicated that: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction was necessary for low-wind-speed testing facilities; for example, a building without volume correction experienced a response four times faster and exhibited 30-40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated the IP, whereas one-sided openings aggravated it; (iii) larger dominant openings produced a higher IP on the building envelope, and an off-center opening on the wall exhibited 30-40% higher IP than center-located openings; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The part of the study aimed at wind-driven natural ventilation indicated that: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 than for A_inlet/A_outlet < 1; this in effect reduced the mixing of air inside the building and hence the ventilation effectiveness; (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale and CFD-based IP responses. Comparisons with ASCE 7-10 consistently demonstrated that the code underestimates peak positive and suction IP.
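The reported sensitivity of IP to the inlet/outlet area ratio can be illustrated with a standard steady-state mass balance between a windward inlet and a leeward outlet. This is the textbook orifice-equation relation, not the study's measured data, and all pressures below are hypothetical:

    def internal_pressure(a_in, a_out, p_windward, p_leeward):
        """Steady-state internal pressure from a mass balance across a windward
        inlet and a leeward outlet with equal discharge coefficients:
        Cd*A_in*sqrt(p_w - p_i) = Cd*A_out*sqrt(p_i - p_l), which solves to
        p_i = (A_in**2 * p_w + A_out**2 * p_l) / (A_in**2 + A_out**2)."""
        return (a_in**2 * p_windward + a_out**2 * p_leeward) / (a_in**2 + a_out**2)

    # Hypothetical surface pressures (Pa): windward +50, leeward -30.
    print(internal_pressure(1.5, 0.5, 50.0, -30.0))  # A_in/A_out > 1 -> IP near windward value
    print(internal_pressure(0.5, 1.5, 50.0, -30.0))  # A_in/A_out < 1 -> IP near leeward value

Consistent with the finding above, a larger inlet pulls the internal pressure toward the higher windward value (42 Pa vs. -22 Pa in this toy case).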

Relevance:

40.00%

Abstract:

In the article "Menu Analysis: Review and Evaluation" by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: "Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give." There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, are one way; the physical document that a patron actually holds in his or her hand is another. These are the most common senses of the word. The author concentrates primarily on the latter, and uses counts of the number of items sold from a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach: how does one approach it, and how does one qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. "Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not," says Kotschevar. "The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment," he further offers. The author wants you to know that assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria against which to gauge a particular menu item. In the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar presents at least three different matrix evaluation methods, defined as the Miller method, the Smith and Kasavana method, and the Pavesic method, and offers illustrated examples of each in table format. These are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references analysis methods which aren't matrix based, such as the Hayes and Huffman goal value analysis. The author sees no one method as better than another, and suggests that combining two or more of the methods is beneficial.
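Of the matrix methods reviewed, the Smith and Kasavana approach (menu engineering) lends itself to a compact sketch: items are classified by menu-mix popularity against average contribution margin (CM). The thresholds below follow the commonly cited form of the method (70% of an equal menu share for popularity, sales-weighted average CM), and the sample items are hypothetical:

    # Minimal sketch of the Smith-Kasavana (menu engineering) matrix.
    items = {  # name: (units sold, selling price, food cost)
        "grilled snapper": (180, 24.00, 9.50),
        "house burger":    (310, 14.00, 6.00),
        "duck confit":     (60,  28.00, 10.00),
        "veggie wrap":     (90,  11.00, 5.50),
    }

    total_sold = sum(n for n, _, _ in items.values())
    total_cm = sum(n * (price - cost) for n, price, cost in items.values())
    avg_cm = total_cm / total_sold             # CM benchmark (sales-weighted)
    pop_threshold = 0.70 * (1.0 / len(items))  # 70% of an equal menu share

    for name, (n, price, cost) in items.items():
        popular = (n / total_sold) >= pop_threshold
        high_cm = (price - cost) >= avg_cm
        label = {(True, True): "star", (True, False): "plowhorse",
                 (False, True): "puzzle", (False, False): "dog"}[(popular, high_cm)]
        print(f"{name}: mix {n/total_sold:.1%}, CM ${price - cost:.2f} -> {label}")

Each quadrant of the matrix then suggests an action: keep and promote stars, reprice or reposition puzzles and plowhorses, and consider dropping dogs.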

Relevance:

40.00%

Abstract:

Since the establishment of the evaluation system in 1975, the junior colleges in the Republic of China (Taiwan) have gone through six formal evaluations. Evaluation in schooling, like quality control in business, should be a systematic, formal, and continual process; it can doubtless serve as a strategy for refining the quality of education. The purpose of this research is to explore the current practice of junior college evaluation in Taiwan, providing insight into the development and quality of the current evaluation system. Moreover, this study also identifies the sources of problems with the current evaluation system and provides suggestions for improvement. To attain these purposes, the research was undertaken in both theoretical and practical ways. First, theoretically, a literature review established the theories of educational evaluation and, following the course and principles of their development, a view of current practice in Taiwan. Second, in practice, questionnaires were used to analyze the views of evaluation committee members, junior college presidents, and administrators on evaluation models, methods, contents, organization, functions, criteria, grade reports, and other issues, together with their suggestions for improvement. The summary of findings concludes that most evaluators and evaluatees think the evaluation can help the colleges explore their difficulties and problems. In addition, significant differences were found between the two groups regarding evaluation methods, contents, organization, functions, criteria, grade reports and other issues; the analysis of these data forms the basis for an improved method of evaluation for junior colleges in Taiwan.

Relevance:

40.00%

Abstract:

Microarray platforms have been around for many years, and while new technologies are on the rise in laboratories, microarrays are still prevalent. For the analysis of microarray data to identify differentially expressed (DE) genes, many methods have been proposed and modified for improvement, but the most popular ones, such as Significance Analysis of Microarrays (SAM), samroc, fold change, and rank product, are far from perfect. Which method is most powerful ultimately depends on the characteristics of the sample and the distribution of the gene expressions. The most practiced method is usually SAM or samroc, but when the data are skewed, the power of these methods decreases. Building on the observation that the median is a better measure of central tendency than the mean when data are skewed, this thesis modifies the test statistics of the SAM and fold change methods. The study shows that the median-modified fold change method improves the power for identifying DE genes in many cases when the data follow a lognormal distribution.
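The core modification studied in the thesis, replacing the mean with the median in the fold-change statistic, can be sketched as follows. The expression matrix is simulated, and a single aberrant array stands in for the skew that motivates the median version:

    import numpy as np

    def fold_change(control, treatment, center=np.mean):
        """Per-gene fold change on log2 expression: the difference of the chosen
        measure of central tendency between groups. center=np.mean gives the
        classic statistic; center=np.median gives the median modification."""
        return center(treatment, axis=1) - center(control, axis=1)

    rng = np.random.default_rng(0)
    # Hypothetical log2 expression: 200 genes x 5 arrays per group.
    control = rng.normal(5.0, 1.0, size=(200, 5))
    treatment = rng.normal(5.0, 1.0, size=(200, 5))
    treatment[:, 0] += 3.0   # one aberrant high-signal array skews the data
    treatment[:10] += 1.0    # 10 genes truly up-regulated by 1 log2 unit

    fc_mean = fold_change(control, treatment, center=np.mean)
    fc_median = fold_change(control, treatment, center=np.median)
    # The aberrant array shifts every mean-based FC by 3/5 = 0.6 log2 units,
    # while the median-based FC of a null gene stays near zero.
    print(round(float(fc_mean[100]), 2), round(float(fc_median[100]), 2))

This toy run illustrates the mechanism only; it does not reproduce the thesis's power comparisons.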

Relevance:

40.00%

Abstract:

The objective of this study was to develop a GIS-based multi-class index overlay model to determine areas susceptible to inland flooding during extreme precipitation events in Broward County, Florida. Data layers used in the method include Airborne Laser Terrain Mapper (ALTM) elevation data, excess precipitation depth determined through a Soil Conservation Service (SCS) Curve Number (CN) analysis, and the slope of the terrain. The method includes a calibration procedure that uses "weights and scores" criteria obtained from records of Hurricane Irene (1999), a reported 100-year precipitation event, together with Doppler radar data and documented flooding locations. Results are displayed in maps of eastern Broward County depicting flooding scenarios for a 100-year, 24-hour storm under different soil saturation conditions. As expected, the multi-class index overlay analysis showed that the potential for inland flooding increases under higher antecedent moisture conditions. The proposed method shows potential as a predictive tool for flooding susceptibility based on a relatively simple approach.
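The mechanics of a multi-class index overlay are straightforward to sketch: each input layer is reclassified to a common score scale, and the scored layers are combined as a weighted sum. The weights, class breaks and toy rasters below are hypothetical, not the values calibrated against the Hurricane Irene records:

    import numpy as np

    def reclassify(raster, bins):
        """Map continuous cell values to scores 1..len(bins)+1 using bin edges."""
        return np.digitize(raster, bins) + 1

    rng = np.random.default_rng(1)
    elevation = rng.uniform(0.0, 8.0, size=(4, 4))       # m above sea level (ALTM)
    excess_precip = rng.uniform(0.0, 10.0, size=(4, 4))  # inches (SCS-CN analysis)
    slope = rng.uniform(0.0, 5.0, size=(4, 4))           # percent

    scores = {
        # lower elevation/slope and higher excess precipitation -> higher score
        "elevation": 6 - reclassify(elevation, bins=[1.0, 2.0, 4.0, 6.0]),
        "excess_precip": reclassify(excess_precip, bins=[2.0, 4.0, 6.0, 8.0]),
        "slope": 6 - reclassify(slope, bins=[0.5, 1.0, 2.0, 3.0]),
    }
    weights = {"elevation": 0.5, "excess_precip": 0.3, "slope": 0.2}

    susceptibility = sum(weights[k] * scores[k] for k in scores)
    print(np.round(susceptibility, 2))  # 1 (low) .. 5 (high) flood susceptibility

In a GIS, the same weighted sum runs cell-by-cell over the full county rasters; calibration then adjusts the weights and class breaks until the high-score cells match the documented flooding locations.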

Relevance:

40.00%

Abstract:

Human genetics has been experiencing a wave of discoveries thanks to the development of several technologies, such as genome-wide association studies (GWAS), whole-exome sequencing, and whole-genome sequencing. Despite the massive discovery of new variants associated with human diseases, several key challenges emerge after the genetic discovery itself. GWAS is good at identifying the locus associated with a patient phenotype, but the actual causal variants responsible for the phenotype are often elusive. Another challenge is that even when the causal mutations are already known, their underlying biological effect may remain largely ambiguous. Functional evaluation plays a key role in solving both of these challenges: identifying the causal variants responsible for the phenotype, and developing biological insight into the disease-causing mutations.

We adopted various methods to characterize the effects of variants identified in human genetic studies, including patient genetic and phenotypic data, RNA chemistry, molecular biology, virology, and multi-electrode array and primary neuronal culture systems. Chapter 1 gives a broad introduction to the motivation for and challenges of functional evaluation in human genetic studies, and the background of several genetic discoveries, such as hepatitis C treatment response, for which we performed functional characterization.

Chapter 2 focuses on the characterization of causal variants following the GWAS of hepatitis C treatment response. We characterized a non-coding SNP (rs4803217) of IL28B (IFNL3) in high linkage disequilibrium (LD) with the discovery SNP identified in the GWAS. In this chapter, we used interdisciplinary approaches to characterize the effects of rs4803217 on RNA structure, disease association, and protein translation.

Chapter 3 describes another avenue of functional characterization following GWAS, focusing on the novel transcripts and proteins identified near the IL28B (IFNL3) locus. It has recently been speculated that this novel protein, named IFNL4, may affect HCV treatment response and clearance. In this chapter, we used molecular biology, virology, and patient genetic and phenotypic data to further characterize and understand the biology of IFNL4. The efforts in Chapters 2 and 3 provided new insights into the candidate causal variant(s) underlying the GWAS signal for HCV treatment response; however, more evidence is still required to establish the exact causal roles of these variants.

Chapter 4 aims to characterize a mutation already known to cause a disease (seizure) in a mouse model. We demonstrate the potential use of a multi-electrode array (MEA) system for the functional characterization of, and drug testing on, mutations found in neurological diseases such as seizure disorders. Functional characterization in neurological diseases is relatively challenging, and the available systematic tools are limited. This chapter presents exploratory research and an example of establishing a system for broader use in functional characterization and translational applications for mutations found in neurological diseases.

Overall, this dissertation spans a range of challenges in functional evaluation in human genetics. Functional characterization of human mutations is expected to become more central in human genetics, because many biological questions remain to be answered after the explosion of genetic discoveries. Recent advances in several technologies, including genome editing and pluripotent stem cells, are also expected to provide new tools for functional studies of human diseases.
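The linkage-disequilibrium statistics referred to in Chapter 2 follow from the standard population-genetics definitions. A minimal sketch, with hypothetical haplotype and allele frequencies rather than measured values for rs4803217, is:

    # D, D' and r^2 for two biallelic SNPs, from the standard definitions:
    # D = p_AB - p_A * p_B; D' = |D| / D_max; r^2 = D^2 / (p_A q_A p_B q_B).
    def ld_stats(p_ab, p_a, p_b):
        """p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies."""
        d = p_ab - p_a * p_b
        if d >= 0:
            d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
        else:
            d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
        d_prime = abs(d) / d_max
        r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
        return d, d_prime, r2

    # Hypothetical frequencies illustrating two SNPs in high LD:
    d, d_prime, r2 = ld_stats(p_ab=0.28, p_a=0.30, p_b=0.32)
    print(f"D = {d:.4f}, D' = {d_prime:.3f}, r^2 = {r2:.3f}")  # r^2 ~ 0.74

A high r^2 like this is what makes the candidate causal variant hard to separate statistically from the discovery SNP, motivating the functional work described above.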

Relevance:

40.00%

Abstract:

Purpose: Computed tomography (CT) is one of the standard diagnostic imaging modalities for evaluating a patient's medical condition. In comparison to other imaging modalities such as magnetic resonance imaging (MRI), CT is a fast-acquisition modality with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of independent values in Hounsfield units (HU); higher HU values represent higher density. High-density materials such as metal tend to erroneously increase the HU values around them due to reconstruction software limitations; this problem is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few clinically relevant examples of metal objects. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is significant because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes; however, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the dual-energy method.

Materials and Methods: The GSI-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the dual-energy imaging method was developed at Duke University. All three approaches were applied for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes; two sets of plans, multi-arc and single-arc, using the volumetric modulated arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the dual-energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR application, the percent error between the mean dose and the absolute dose ranged from 3.4-5.7% per fraction; with the GSI-based MAR algorithm, the error decreased to 0.09-2.3% per fraction. The percent difference between plans with and without the GSI-based MAR algorithm ranged from 1.7-4.2% per fraction. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum dose, and 1.4-1.7% for the mean doses. Homogeneity indexes (HI) of 0.068-0.065 for the dual-energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.

Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%, undermining the goal of radiation therapy to provide precise treatment; the GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method nearly achieved the desirable null value. In conclusion, the dual-energy method gave better dose calculation accuracy to the planning target volume (PTV) for images with metal artefacts than either with or without the GE MAR algorithm.
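The two summary metrics can be sketched directly. Homogeneity-index definitions vary across the literature, so the version assumed here is HI = (D_max - D_min) / D_mean, and all dose values below are hypothetical rather than the study's measurements:

    # Minimal sketch of the two summary metrics, with hypothetical doses.
    def percent_error(calculated, measured):
        """Percent error of the TPS-calculated dose against the measured dose."""
        return 100.0 * abs(calculated - measured) / measured

    def homogeneity_index(d_max, d_min, d_mean):
        """Assumed HI definition: (D_max - D_min) / D_mean for the PTV.
        Smaller is more homogeneous; 0 would be ideal."""
        return (d_max - d_min) / d_mean

    measured = 2.00  # Gy per fraction
    print(percent_error(calculated=2.09, measured=measured))  # no MAR:   ~4.5%
    print(percent_error(calculated=2.01, measured=measured))  # with MAR: ~0.5%
    print(homogeneity_index(d_max=2.10, d_min=1.96, d_mean=2.02))  # ~0.069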

Relevance:

40.00%

Abstract:

The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) time to O(n) time. However, a heuristic procedure is often applied first to reduce the original set of n points to a set of s < n points which contains the hull, and so accelerates the final hull-finding procedure. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages: first, for all practical purposes, it is linear; second, no explicit sorting of data is required; and third, the reduced set of s points forms an ordered set that can be pipelined directly into an O(n) time convex hull algorithm. Under these criteria, a fast (O(n)) preconditioner in principle yields a fast (approximately O(n)) convex hull for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration generated by the method against the most common convex hull algorithms; experiments on a dataset show an additional speed-up of at least four times compared with existing preconditioning methods.
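The paper's own ordered preconditioner is not reproduced here, but the general idea of discarding provably interior points before the final hull pass is classically illustrated by the Akl-Toussaint throw-away heuristic, sketched below:

    from math import atan2

    def cross(o, a, b):
        """Twice the signed area of triangle o-a-b (> 0 means a left turn)."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def akl_toussaint_filter(points):
        """Discard points strictly inside the polygon spanned by the extreme
        points in x and y; no such interior point can lie on the convex hull."""
        corners = [min(points), max(points),                 # min/max x
                   min(points, key=lambda p: (p[1], p[0])),  # min y
                   max(points, key=lambda p: (p[1], p[0]))]  # max y
        cx = sum(p[0] for p in corners) / 4.0
        cy = sum(p[1] for p in corners) / 4.0
        # Order the distinct corners counter-clockwise around their centroid.
        corners = sorted(set(corners), key=lambda p: atan2(p[1] - cy, p[0] - cx))
        k = len(corners)

        def inside(p):
            return all(cross(corners[i], corners[(i + 1) % k], p) > 0
                       for i in range(k))

        return [p for p in points if not inside(p)]

    pts = [(0, 0), (10, 1), (9, 9), (1, 8), (5, 4), (4, 6), (2, 2)]
    print(akl_toussaint_filter(pts))  # the interior point (5, 4) is dropped

Unlike the paper's method, this classic filter does not order the surviving points, so it cannot feed an O(n) hull pass directly; it only illustrates the reduction from n to s points.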


Relevance:

40.00%

Abstract:

Background: The move toward evidence-based education has led to increasing numbers of randomised trials in schools. However, the literature on recruitment to non-clinical trials is relatively underdeveloped compared to that on clinical trials. Recruitment to school-based randomised trials is challenging, even more so when the focus of the study is a sensitive issue such as sexual health. This article reflects on the challenges of recruiting post-primary schools, adolescent pupils and parents to a cluster randomised feasibility trial of a sexual health intervention, and the strategies employed to address them.

Methods: The Jack Trial was funded by the UK National Institute for Health Research (NIHR). It comprised a feasibility study of an interactive film-based sexual health intervention entitled If I Were Jack, recruiting over 800 adolescents from eight socio-demographically diverse post-primary schools in Northern Ireland. It aimed to determine the facilitators of and barriers to recruitment and retention in a school-based sexual health trial and to identify optimal multi-level strategies for an effectiveness study. As part of an embedded process evaluation, we conducted semi-structured interviews and focus groups with principals, vice-principals, teachers, pupils and parents recruited to the study, as well as classroom observations and a parents' survey.

Results: With reference to Social Learning Theory, we identified a number of individual, behavioural and environmental-level factors which influenced recruitment. Commonly identified facilitators included perceptions of the relevance and potential benefit of the intervention to adolescents, the credibility of the organisation and individuals running the study, the support offered by trial staff, and financial incentives. Key barriers were prior commitment to other research, lack of time and resources, and perceptions that the intervention was incompatible with pupil or parent needs or the school ethos.

Conclusions: Reflecting on the methodological challenges of recruiting to a school-based sexual health feasibility trial, this study highlights pertinent general and trial-specific facilitators and barriers to recruitment, which will prove useful for future trials with schools, adolescent pupils and parents.

Relevance:

40.00%

Abstract:

Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models, which would allow widespread implementation, are seldom examined and poorly understood. Existing analyses of care models make little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care, offering the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in randomised controlled trials (RCTs) to this evidence-based framework for quality maternal and newborn care, thus facilitating the identification of the characteristics of effective care.

Methods: A systematic review of RCTs of midwifery-led antenatal care models was conducted, with the characteristics of these models mapped and evaluated against the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.

Results: From 13,050 citations initially retrieved, we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0-32), with most models reporting fewer than half the characteristics associated with quality maternity care. Descriptions of care model characteristics were lacking in many studies, though better reported for the intervention arms. Organisation of care was the best-described component, while the underlying values and philosophy of care were poorly reported.

Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models: not only what is done, but why it is done, by whom, and how this differs from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies, so that the characteristics of individual models can be evaluated and the impact of any differences appraised.