13 results for Cuisine evaluation criteria
in Digital Commons at Florida International University
Abstract:
Defining what constitutes "a cuisine" requires a way to evaluate what makes it "distinctly unique and meritable." In this first of a two-part series, the author develops six criteria that can be used in such an evaluative process.
Abstract:
Hazardous materials are substances that, if not regulated, can pose a threat to human populations and their environmental health, safety, or property when transported in commerce. About 1.5 million tons of hazardous material shipments are transported by truck in the US annually, with a steady increase of approximately 5% per year. The objective of this study was to develop a routing tool for hazardous material transport that reduces environmental impacts and transportation difficulties while still finding paths that remain attractive to shipping carriers in terms of trucking cost. The study began with the identification of inhalation hazard impact zones and explosion protective areas around the locations of hypothetical hazardous material releases, considering different parameters (e.g., chemical characteristics, release quantities, and atmospheric conditions). Results showed that the consequences of these incidents differ depending on the release quantity, the chemical involved, and atmospheric stability (a function of wind speed, meteorology, sky cover, and the time and location of the accident). The study was then extended to additional evaluation criteria, since health risk is not the only concern in route selection. Transportation difficulties (i.e., road blockage and congestion) were incorporated as an important factor because of their indirect impact and cost on users of the transportation network. Trucking cost was also treated as a primary criterion in the selection of hazardous material paths; otherwise the suggested routes would not have been convincing to shipping companies. The final criterion was the proximity of public places to the routes. The approach evolved from a simple framework into an efficient GIS-based tool able to analyze the transportation network of any given study area and generate the best routing options for cargoes. The tool uses a multi-criteria decision-making method that considers the priorities of decision makers in choosing cargo routes. Comparison of the routing options based on each criterion, as well as the overall suitability of each path with respect to all criteria (using the multi-criteria decision-making method), showed that tools similar to the one proposed in this study can provide decision makers with insights in the area of hazardous material transport. The tool presents the probable consequences of each path in an easily understandable format of maps and tables, which makes the tradeoff between costs and risks considerably simpler: in some cases, a slight compromise on trucking cost may drastically decrease the probable health risk and/or traffic difficulties. This not only benefits the community by making cities safer places to live, but can also benefit shipping companies by allowing them to advertise as environmentally friendly carriers.
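The route comparison described in the abstract above rests on weighting and combining criterion scores. The following is a minimal sketch of a weighted-sum multi-criteria comparison of that general kind, assuming hypothetical routes, criterion values, and decision-maker weights; the dissertation's actual tool is GIS-based and considerably more elaborate.

```python
# Minimal sketch of weighted-sum multi-criteria scoring of candidate routes.
# Criterion names, raw values, and weights below are hypothetical illustrations.

def normalize(values):
    """Rescale raw criterion values to [0, 1]; 0 is best (lowest risk/cost)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def rank_routes(routes, weights):
    """routes: {name: {criterion: raw value}}; weights: {criterion: priority}.
    Returns routes sorted by composite score, best (lowest) first."""
    criteria = list(weights)
    names = list(routes)
    # Normalize each criterion across routes so different units are comparable.
    columns = {c: normalize([routes[n][c] for n in names]) for c in criteria}
    scores = {n: sum(weights[c] * columns[c][i] for c in criteria)
              for i, n in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    routes = {
        "route_A": {"health_risk": 8.0, "congestion": 3.0,
                    "trucking_cost": 420.0, "public_proximity": 6.0},
        "route_B": {"health_risk": 4.0, "congestion": 5.0,
                    "trucking_cost": 450.0, "public_proximity": 4.0},
    }
    weights = {"health_risk": 0.5, "congestion": 0.2,
               "trucking_cost": 0.2, "public_proximity": 0.1}
    print(rank_routes(routes, weights))  # route_B ranks first under these weights
```

With these illustrative weights, route_B wins despite its slightly higher trucking cost, mirroring the abstract's point that a small compromise on cost can buy a large reduction in risk.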
Abstract:
A study was conducted to investigate the effectiveness, as measured by performance on course posttests, of mindmapping versus traditional notetaking in a corporate training class. The purpose of this study was to increase knowledge concerning the effectiveness of mindmapping as an information encoding tool for enhancing learning. Corporations invest billions of dollars annually in training programs. Given this increased demand for effective and efficient workplace learning, continued reliance on traditional notetaking is questionable for the high-speed, continual learning required of workers. An experimental, posttest-only control group design was used to test the following hypotheses: (1) there is no significant difference in posttest scores on an achievement test, administered immediately after the course, between adult learners using mindmapping versus traditional notetaking methods in a training lecture, and (2) there is no significant difference in posttest scores on an achievement test, administered 30 days after the course, between adult learners using mindmapping versus traditional notetaking methods in a training lecture. After 1.5 hours of instruction on mindmapping, the treatment group used mindmapping throughout the course; the control group used traditional notetaking. T-tests were used to determine whether there were significant differences in mean posttest scores between the two groups. In addition, an attitudinal survey, a brain hemisphere dominance survey, course dynamics observations, and course evaluations were used to investigate preference for mindmapping, its perceived effect on test performance, and the effectiveness of the mindmapping instruction. The study's principal finding was that although the mindmapping group did not score significantly higher than the traditional notetaking group on the posttests administered immediately and 30 days after the course, it did score higher on both posttests and rated the course higher on every evaluation criterion. Less educated, right-brain-dominant learners reported a significantly positive learning experience. These results suggest that mindmapping enhances and reinforces the preconditions of learning. Recommendations for future study are provided.
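The group comparison in the abstract above is a standard independent-samples t-test. The sketch below shows that test using SciPy, with hypothetical posttest scores standing in for the two groups (these are not the study's data).

```python
# Minimal sketch of the independent-samples t-test used to compare mean
# posttest scores between the mindmapping and traditional notetaking groups.
# The score lists are hypothetical illustration data, not the study's data.
from scipy import stats

mindmapping_scores = [78, 85, 80, 90, 74, 88, 83]
notetaking_scores = [75, 82, 79, 85, 70, 84, 80]

t_stat, p_value = stats.ttest_ind(mindmapping_scores, notetaking_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# The null hypothesis of equal means is retained when p exceeds the chosen
# alpha (e.g., 0.05), mirroring the study's finding of no significant difference.
```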
Abstract:
The primary goal of this dissertation is the study of patterns of viral evolution inferred from serially-sampled sequence data, i.e., sequence data obtained from strains isolated at consecutive time points from a single patient or host. RNA viral populations have extremely high genetic variability, largely due to their astronomical population sizes within host systems, high replication rates, and short generation times. It is this aspect of their evolution that demands special attention and a different approach when studying the evolutionary relationships of serially-sampled sequence data. New methods for analyzing serially-sampled data were developed shortly after a groundbreaking HIV-1 study of several patients from whom viruses were isolated at recurring intervals over a period of 10 or more years. These methods assume a tree-like evolutionary model, whereas many RNA viruses can exchange genetic material with one another through a process called recombination. A genealogy involving recombination is best described by a network structure. A more general approach was implemented in a new computational tool, Sliding MinPD, which takes the sampling times of the input sequences into account and reconstructs the viral evolutionary relationships as a network structure with implicit representations of recombination events. The underlying network organization reveals unique patterns of viral evolution and could help explain the emergence of disease-associated mutants and drug-resistant strains, with implications for patient prognosis and treatment strategies. In order to comprehensively test the developed methods and to carry out comparison studies with other methods, synthetic data sets are critical. Therefore, appropriate sequence generators were also developed to simulate the evolution of serially-sampled recombinant viruses, new and more thorough evaluation criteria for recombination detection methods were established, and three major comparison studies were performed. The newly developed tools were also applied to "real" HIV-1 sequence data, and it was shown that the results represented within an evolutionary network structure can be interpreted in biologically meaningful ways.
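To illustrate the serial-sampling aspect of such an analysis, the sketch below links each sequence to its closest relative among sequences from earlier sampling times using a simple pairwise distance. This is only a rough, hypothetical illustration of the minimum-pairwise-distance idea; Sliding MinPD itself additionally screens for recombination and produces a full evolutionary network, and the sequences and distance measure shown here are invented.

```python
# Highly simplified sketch of linking serially-sampled sequences to their
# closest earlier-sampled relative by pairwise distance. The sequences and
# the distance measure are hypothetical illustrations only.

def hamming_fraction(a, b):
    """Proportion of mismatched sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def link_to_earlier(samples):
    """samples: list of (time, name, sequence) from one host.
    Returns {name: closest earlier-sampled name} for every non-initial sample."""
    links = {}
    for t, name, seq in samples:
        earlier = [(t2, n2, s2) for t2, n2, s2 in samples if t2 < t]
        if not earlier:
            continue  # the first time point has no candidate ancestors
        best = min(earlier, key=lambda e: hamming_fraction(seq, e[2]))
        links[name] = best[1]
    return links

if __name__ == "__main__":
    samples = [
        (0, "t0_a", "ACGTACGTAC"),
        (1, "t1_a", "ACGTACGTAT"),
        (1, "t1_b", "ACGAACGTAC"),
        (2, "t2_a", "ACGAACGTAT"),
    ]
    print(link_to_earlier(samples))
```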
Abstract:
Personalized recommender systems aim to assist users in retrieving and accessing interesting items by automatically acquiring user preferences from historical data and matching items to those preferences. In the last decade, recommendation services have gained great attention due to the problem of information overload. However, despite recent advances in personalization techniques, several critical issues in modern recommender systems have not been well studied. These issues include: (1) understanding the accessing patterns of users (i.e., how to effectively model users' accessing behaviors); (2) understanding the relations between users and other objects (i.e., how to comprehensively assess the complex correlations between users and entities in recommender systems); and (3) understanding the interest change of users (i.e., how to adaptively capture users' preference drift over time). To meet the needs of users in modern recommender systems, it is imperative to provide solutions to the aforementioned issues and to apply those solutions to real-world applications. The major goal of this dissertation is to provide integrated recommendation approaches that tackle the challenges of the current generation of recommender systems. In particular, three user-oriented aspects of recommendation techniques were studied: understanding accessing patterns, understanding complex relations, and understanding temporal dynamics. To this end, we made three research contributions. First, we presented various personalized user profiling algorithms to capture users' click behaviors at both coarse and fine granularities; second, we proposed graph-based recommendation models to describe the complex correlations in a recommender system; third, we studied temporal recommendation approaches to capture the preference changes of users by considering both long-term and short-term user profiles. In addition, a versatile recommendation framework was proposed in which the proposed recommendation techniques were seamlessly integrated. Different evaluation criteria were implemented in this framework for evaluating recommendation techniques in real-world recommendation applications. In summary, frequent changes in user interests and in the item repository lead to a series of user-centric challenges that are not well addressed in the current generation of recommender systems. My work proposed reasonable solutions to these challenges and provided insight into how to address them using a simple yet effective recommendation framework.
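The temporal aspect mentioned in the abstract above, combining long-term and short-term user profiles, can be pictured as a simple weighted blend of two preference vectors. The sketch below is a minimal, hypothetical illustration of that idea; the profile contents, items, and the blending weight alpha are invented, and the dissertation's framework also covers click-behavior profiling and graph-based models that are not shown here.

```python
# Minimal sketch of blending long-term and short-term user preference profiles,
# one common way to capture preference drift. All values are hypothetical.

def blend_profiles(long_term, short_term, alpha=0.7):
    """Weighted combination of two {feature: weight} profiles.
    Higher alpha favors recent (short-term) interests."""
    features = set(long_term) | set(short_term)
    return {f: alpha * short_term.get(f, 0.0) + (1 - alpha) * long_term.get(f, 0.0)
            for f in features}

def score_items(profile, items):
    """Dot-product relevance of each item's feature vector against the profile."""
    return sorted(
        ((name, sum(profile.get(f, 0.0) * w for f, w in feats.items()))
         for name, feats in items.items()),
        key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    long_term = {"sports": 0.8, "news": 0.5}
    short_term = {"movies": 0.9, "news": 0.2}
    items = {"item1": {"sports": 1.0}, "item2": {"movies": 1.0}, "item3": {"news": 1.0}}
    profile = blend_profiles(long_term, short_term, alpha=0.6)
    print(score_items(profile, items))  # the movie item rises to the top
```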
Abstract:
This dissertation examined the efficacy of family cognitive behavior treatment (FCBT) and group cognitive behavior treatment (GCBT) for reducing anxiety disorders in children and adolescents using several approaches: clinically significant change, equivalence testing, and analyses of variance. It also examined treatment specificity in terms of targeting family/parent (in FCBT) and peer/group (in GCBT) contextual variables using two main approaches: analyses of variance and structural equation modeling (SEM). The sample consisted of 143 children and their parents who presented to the Child Anxiety and Phobia Program housed within the Child and Family Psychosocial Research Center at Florida International University. Diagnostic interviews and questionnaires were administered to assess youth anxiety. Questionnaires were administered to assess child and parent views of family/parent and peer/group contextual variables. In terms of clinically significant change, results indicated that 84.6% of youth in FCBT and 71.2% of youth in GCBT no longer met diagnostic criteria for their primary/targeted anxiety disorder. In addition, results from analyses of variance indicated that FCBT and GCBT were both efficacious in reducing anxiety disorders in youth across both child and parent ratings. Results from both analyses of variance and structural equation modeling also indicated that there was no meaningful treatment specificity between FCBT and GCBT in terms of either family/parent or peer/group contextual variables. That is, children's social skills improved in GCBT, in which these skills were targeted, and in FCBT, in which they were not; parenting skills improved in FCBT, in which these skills were targeted, and in GCBT, in which they were not. Clinical implications and future research recommendations are discussed.
Abstract:
Elemental analysis can provide an important piece of evidence to assist in the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper, and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks, and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets, and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies showed smaller variation of elemental composition within a single source (i.e., sheet, pen, or cartridge) than between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profiles of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis by LA-ICP-MS and LIBS for the examination of documents and provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS, and LIBS) was conducted for glass analysis using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating samples from different sources, as well as the use of different match criteria (confidence intervals (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T2 test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability of the analytical measurements, and the number of elements measured. The study provides recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best performing match criteria for both analytical techniques, which can be applied now by forensic glass examiners.
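As a rough illustration of one of the interval-based match criteria listed above, the sketch below declares an association only when, for every measured element, the questioned item's mean falls within the known item's mean plus or minus k standard deviations (k = 4 here). The element names and replicate values are hypothetical, and real forensic protocols add refinements (minimum-variance floors, multivariate tests) that are omitted from this sketch.

```python
# Minimal sketch of an interval-based match criterion of the kind compared in
# the study: association requires the questioned mean to fall within the known
# mean +/- k standard deviations for every element. All values are hypothetical.
from statistics import mean, stdev

def matches(known, questioned, k=4):
    """known, questioned: {element: [replicate measurements]}."""
    for element in known:
        m_known, s_known = mean(known[element]), stdev(known[element])
        m_quest = mean(questioned[element])
        if not (m_known - k * s_known <= m_quest <= m_known + k * s_known):
            return False  # one excluded element is enough to exclude the pair
    return True

if __name__ == "__main__":
    known = {"Sr": [88.1, 88.4, 88.2], "Zr": [41.0, 40.7, 40.9]}
    questioned = {"Sr": [88.5, 88.3, 88.6], "Zr": [40.8, 41.1, 40.9]}
    print(matches(known, questioned))  # True for these illustrative replicates
```

Widening k lowers the Type 1 (false exclusion) rate at the cost of a higher Type 2 (false inclusion) rate, which is the tradeoff the interlaboratory error-rate comparison quantifies.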
Abstract:
In the article - Menu Analysis: Review and Evaluation - by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: "Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give." There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, are one way. The physical outline of the food, which a patron actually holds in his or her hand, is another. These are the most common meanings of the word menu. The author concentrates primarily on the latter and uses the number of items sold from a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach: how does one qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. "Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not," says Kotschevar. "The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment," he further offers. The author wants you to know that assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria against which to gauge a particular menu item. In the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar presents at least three different matrix evaluation methods: the Miller method, the Smith and Kasavana method, and the Pavesic method. He offers illustrated examples of each in table format; these are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references analysis methods which are not matrix based; the Hayes and Huffman Goal Value Analysis is one such method. The author sees no one method as better than another and suggests that combining two or more of the methods would be beneficial.
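As a concrete illustration of the matrix methods Kotschevar reviews, the sketch below implements a Smith and Kasavana-style menu engineering classification: each item is judged on popularity and contribution margin against benchmark values and labeled Star, Plowhorse, Puzzle, or Dog. The menu data are hypothetical, and the 70% popularity benchmark is the commonly cited rule of thumb rather than anything prescribed in the article itself.

```python
# Minimal sketch of a Smith and Kasavana-style menu engineering matrix:
# items are classified by whether their menu mix share (popularity) and their
# contribution margin fall above or below benchmark values. Data are hypothetical.

def menu_engineering(items):
    """items: {name: (units_sold, selling_price, food_cost)} -> {name: class}."""
    total_sold = sum(sold for sold, _, _ in items.values())
    popularity_benchmark = (1 / len(items)) * 0.70  # 70% of an equal share
    margins = {name: price - cost for name, (_, price, cost) in items.items()}
    # Weighted-average contribution margin across all units sold.
    avg_margin = sum(margins[name] * sold
                     for name, (sold, _, _) in items.items()) / total_sold
    labels = {(True, True): "Star", (True, False): "Plowhorse",
              (False, True): "Puzzle", (False, False): "Dog"}
    return {name: labels[(sold / total_sold >= popularity_benchmark,
                          margins[name] >= avg_margin)]
            for name, (sold, _, _) in items.items()}

if __name__ == "__main__":
    menu = {"steak": (120, 28.00, 12.00), "pasta": (300, 16.00, 4.50),
            "salmon": (60, 24.00, 10.00), "soup": (40, 8.00, 3.50)}
    print(menu_engineering(menu))
    # {'steak': 'Star', 'pasta': 'Plowhorse', 'salmon': 'Puzzle', 'soup': 'Dog'}
```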
Abstract:
This is the second of a two-part series on the evaluation of cuisines. The author establishes standards for cuisine and uses them to determine that Chinese food is superior to French.
Abstract:
Since the establishment of the evaluation system in 1975, the junior colleges in the Republic of China (Taiwan) have gone through six formal evaluations. Evaluation in schooling, like quality control in business, should be a systematic, formal, and continual process, and it can doubtless serve as a strategy for refining the quality of education. The purpose of this research is to explore the current practice of junior college evaluation in Taiwan, providing insight into the development and quality of the current evaluation system. Moreover, this study also identified the sources of problems with the current evaluation system and provided suggestions for improvement. In order to attain these purposes, the research was undertaken in both theoretical and practical ways. First, theoretically, a literature review was used to examine theories of educational evaluation and, following the course and principles of their development, to frame a view of current practice in Taiwan. Second, in practice, questionnaires were used to obtain and analyze the views of evaluation committee members, junior college presidents, and administrators on evaluation models, methods, contents, organization, functions, criteria, grade reports, and other aspects, together with their suggestions for improvement. The summary of findings concludes that most evaluators and evaluatees think evaluation can help the colleges explore their difficulties and problems. In addition, significant differences were found between the two groups regarding evaluation methods, contents, organization, functions, criteria, grade reports, and other aspects; analysis of these data forms the basis for an improved method of evaluation for junior colleges in Taiwan.
Abstract:
The objective of this study was to develop a GIS-based multi-class index overlay model to determine areas susceptible to inland flooding during extreme precipitation events in Broward County, Florida. Data layers used in the method include Airborne Laser Terrain Mapper (ALTM) elevation data, excess precipitation depth determined through a Soil Conservation Service (SCS) Curve Number (CN) analysis, and the slope of the terrain. The method includes a calibration procedure that uses "weights and scores" criteria obtained from Hurricane Irene (1999) records, a reported 100-year precipitation event, along with Doppler radar data and documented flooding locations. Results are displayed in maps of eastern Broward County depicting types of flooding scenarios for a 100-year, 24-hour storm based on soil saturation conditions. As expected, the multi-class index overlay analysis showed an increased potential for inland flooding under higher antecedent moisture conditions. The proposed method shows potential as a relatively simple predictive tool for flooding susceptibility.
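A multi-class index overlay of this kind can be summarized as scoring each data layer into classes, weighting the class scores, and summing them for every cell. The sketch below is a minimal, hypothetical illustration of that scheme; the layer names, class breaks, and weights are invented and are not the study's calibrated values.

```python
# Minimal sketch of a "weights and scores" multi-class index overlay for
# flooding susceptibility. All layer names, breaks, and weights are hypothetical.

def score(value, breaks, higher_is_worse=True):
    """Class score starting at 1. With higher_is_worse=False, lower raw values
    (e.g., low elevation, flat slope) receive the higher scores."""
    s = 1
    for b in breaks:
        if (value > b) == higher_is_worse:
            s += 1
    return s

def susceptibility_index(cell, weights, breaks, direction):
    """cell: {layer: raw value}. Returns the weighted sum of class scores."""
    return sum(weights[layer] * score(cell[layer], breaks[layer], direction[layer])
               for layer in weights)

if __name__ == "__main__":
    weights = {"elevation": 0.4, "excess_precip": 0.4, "slope": 0.2}
    breaks = {"elevation": [1.5, 3.0], "excess_precip": [2.0, 4.0], "slope": [0.5, 1.0]}
    direction = {"elevation": False, "excess_precip": True, "slope": False}
    cell = {"elevation": 2.0, "excess_precip": 4.5, "slope": 0.8}  # one raster cell
    print(round(susceptibility_index(cell, weights, breaks, direction), 2))  # 2.4
```

In a full implementation the same calculation would be applied to every raster cell, and the weights would be calibrated against observed flooding records such as those from Hurricane Irene.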
Abstract:
Elemental analysis can provide an important piece of evidence to assist in the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper, and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks, and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets, and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies showed smaller variation of elemental composition within a single source (i.e., sheet, pen, or cartridge) than between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profiles of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis by LA-ICP-MS and LIBS for the examination of documents and provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS, and LIBS) was conducted for glass analysis using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating samples from different sources, as well as the use of different match criteria (confidence intervals (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T2 test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability of the analytical measurements, and the number of elements measured. The study provides recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best performing match criteria for both analytical techniques, which can be applied now by forensic glass examiners.
Abstract:
Attention to green building is driven by the desire to reduce a building's running cost over its entire life cycle. At the same time, through the use of sustainable technologies and more environmentally friendly products, the building sector also contributes significantly to the sustainability efforts of society. Different certification systems have entered the market with the aim of measuring a building's sustainability; however, each system uses its own set of criteria for rating. The primary goal of this study is to identify a comprehensive set of criteria for measuring building sustainability and thereby to facilitate the comparison of existing rating methods. The collection and analysis of criteria identified through a comprehensive literature review led to the establishment of two additional categories besides the three pillars of sustainability. The comparative analyses presented in this thesis reveal strengths and weaknesses of the chosen green building certification systems - LEED, BREEAM, and DGNB.