985 results for Topic hotness evaluation


Relevance:

30.00%

Publisher:

Abstract:

The assessment of adenoids by x-ray imaging has been the topic of heated debate, but few studies have looked into the reliability of most existing radiographic parameters. Objective: This study aims to verify the intra-examiner and inter-examiner reproducibility of adenoid radiographic assessment methods. Materials and Methods: This is a cross-sectional case series study. Forty children of both genders, aged 4 to 14 years, were enrolled. They were selected based on complaints of nasal obstruction or mouth breathing and suspicion of pharyngeal tonsil hypertrophy. Cavum x-rays and orthodontic teleradiographs were assessed by two examiners in quantitative and categorical terms. Results: All quantitative parameters in both x-ray modes showed excellent intra- and inter-examiner reproducibility. Among the categorical parameters used in cavum x-ray assessment, C-Kurien, C-Wang, C-Fujioka, and C-Elwany performed relatively better than C-Cohen and C-Ysunza. As for orthodontic teleradiograph grading systems, C-McNamara proved more reliable than C-Holmberg. Conclusion: Most instruments showed adequate reproducibility levels. However, more research is needed to properly determine the accuracy and viability of each method.
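The abstract does not name the agreement statistic it used; for categorical gradings of this kind, Cohen's kappa is a standard inter-examiner measure. A minimal sketch in Python, with entirely hypothetical data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical gradings of the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items graded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two examiners grading 5 cavum x-rays
# as hypertrophic (1) or not (0).
kappa = cohens_kappa([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.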

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we address the problem of defining the product mix in order to maximise a system's throughput. This problem is well known to be NP-complete; therefore, most contributions on the topic focus on developing heuristics that can obtain good solutions in a short CPU time. In particular, constructive heuristics are available for the problem, such as those by Fredendall and Lea and by Aryanezhad and Komijan. We propose a new constructive heuristic based on the Theory of Constraints and the Knapsack Problem. The computational results indicate that the proposed heuristic yields better results than the existing heuristics.
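The paper's own heuristic is not reproduced here, but the classical Theory of Constraints ranking it builds on can be sketched: order products by contribution margin per minute on the bottleneck resource, then allocate bottleneck capacity greedily. All product data below are hypothetical:

```python
def toc_product_mix(products, capacity):
    """Greedy TOC heuristic: rank products by margin per minute on the
    bottleneck, then allocate bottleneck capacity in that order, up to
    each product's demand.

    products: list of (name, margin_per_unit, bottleneck_minutes_per_unit, demand)
    capacity: available minutes on the bottleneck resource
    """
    ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)
    mix, throughput = {}, 0.0
    for name, margin, minutes, demand in ranked:
        qty = min(demand, int(capacity // minutes))
        if qty > 0:
            mix[name] = qty
            capacity -= qty * minutes
            throughput += qty * margin
    return mix, throughput

# Bottleneck has 2400 minutes; two hypothetical products compete for it.
mix, total = toc_product_mix([("P", 45, 15, 100), ("Q", 60, 30, 50)], 2400)
```

The greedy ranking is what makes the heuristic fast but not guaranteed optimal, which is why the NP-complete problem invites knapsack-style refinements.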

Relevance:

30.00%

Publisher:

Abstract:

Dental caries is a common preventable childhood disease leading to severe physical, mental, and economic repercussions for children and their families if left untreated. A needs assessment in Harris County reported that 45.9% of second graders had untreated dental caries. To address this growing problem, the School Sealant Program (SSP), a primary preventive initiative, was launched by the Houston Department of Health and Human Services (HDHHS) to provide oral health education and underutilized preventive dental services to second-grade children from participating Local School Districts (LSDs). To determine the effectiveness and efficiency of the SSP, a program evaluation was conducted by the HDHHS between September 2007 and June 2008 for the Oral Health Education (OHE) component of the SSP. The objective of the evaluation was to assess short-term changes in the oral health knowledge of the participants and determine whether any such changes were due to the OHE sessions. An 8-item multiple-choice pre/post test was developed for this purpose and administered to the participants before and immediately after the OHE sessions. The present project analyzed pre- and post-test data of 1,088 second graders from 22 participating schools. Changes in the overall and topic-specific knowledge of the program participants before and after the OHE sessions were analyzed using the Wilcoxon signed-rank test. Results: The overall knowledge assessment showed a statistically significant (p < 0.001) increase in the dental health knowledge of the participants after the oral health education sessions. Participants in the higher-scoring category (7-8 correct responses) increased from 9.5% at baseline to 60.8% after the education sessions. Overall knowledge increased in all school regions, with the highest knowledge gains seen in the Central and South regions. Males and females had similar knowledge gains.
Significant knowledge differences were also found for each of the topic-specific categories (functions of teeth, healthy diet, healthy habits, dental sealants; p < 0.001), indicating an increase in the topic-specific knowledge of the participants after the health education sessions. Conclusions: The OHE sessions were successful in increasing the short-term oral health knowledge of the participants.
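The Wilcoxon signed-rank test used in the study works on paired pre/post differences. A pure-Python sketch of its W+ statistic, with hypothetical quiz scores:

```python
def wilcoxon_w(pre, post):
    """W+ statistic of the Wilcoxon signed-rank test for paired scores:
    drop zero differences, rank the absolute differences (tied values
    share the average rank), and sum the ranks of the positive ones."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied absolute differences.
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 1) / 2  # mean of positions i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    return sum(rank for d, rank in zip(diffs, ranks) if d > 0)

# Hypothetical pre/post quiz scores (0-8 scale) for five pupils.
w_plus = wilcoxon_w([3, 4, 2, 5, 6], [5, 6, 2, 6, 6])
```

In practice the statistic is compared to its null distribution (or a normal approximation) to obtain the p-value reported in the study.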

Relevance:

30.00%

Publisher:

Abstract:

Genetics education for physicians has been a popular publication topic in the United States and in Europe for over 20 years. Decreasing numbers of medical genetics professionals and an increasing volume of genetic information have created a dire need for increased genetics training in medical school and in clinical practice. This study aimed to assess how well pediatrics-focused primary care physicians apply their general genetics knowledge to clinical genetic testing, using scenario-based questions. We chose to focus specifically on knowledge of the diagnostic applicability of Chromosomal Microarray (CMA) technology in pediatrics because of its recent recommendation by the International Standard Cytogenomic Array (ISCA) Consortium as a first-tier genetic test for individuals with developmental disabilities and/or congenital anomalies. Proficiency in ordering baseline genetic testing was evaluated for eighty-one respondents from four pediatrics-focused residencies (categorical pediatrics, pediatric neurology, internal medicine/pediatrics, and family practice) at two large residency programs in Houston, Texas. Similar to other studies, we found an overall deficit of genetic testing knowledge, especially among family practice residents. Interestingly, residents who elected to complete a genetics rotation in medical school scored significantly better than expected, as well as better than residents who did not. We suspect that this insufficient knowledge among physicians regarding a baseline genetics work-up is leading to redundant (e.g. concurrent karyotype and CMA) and incorrect (e.g. ordering CMA to detect achondroplasia) genetic testing and is contributing to rising health care costs in the United States.
Our results provide specific teaching points upon which medical schools can focus education about clinical genetic testing and suggest that increased collaboration between primary care physicians and genetics professionals could benefit patient health care overall.

Relevance:

30.00%

Publisher:

Abstract:

Research in psychology has reported that, among the variety of possible assessment methodologies, summary evaluation offers a particularly adequate context for inferring text comprehension and topic understanding. However, grades obtained with this methodology are hard to quantify objectively. We therefore carried out an empirical study to analyze the decisions underlying human summary-grading behavior. The task consisted of expert evaluation of summaries produced in critically relevant contexts of summarization development, and the resulting data were modeled by means of Bayesian networks using an application called Elvira, which allows the predictive power (if any) of the resulting variables to be observed graphically. Thus, in this article, we analyzed summary-evaluation decision making in a computational framework.
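The study's actual networks are not reproduced here, but the kind of Bayesian-network reasoning described can be illustrated with a toy two-node model, Coverage → Grade, queried by enumeration. All probabilities below are invented for illustration:

```python
# Toy network: Coverage (good/poor) -> Grade (pass/fail).
# CPT numbers are illustrative, not taken from the study.
p_coverage = {"good": 0.6, "poor": 0.4}
p_grade_given = {
    ("good", "pass"): 0.9, ("good", "fail"): 0.1,
    ("poor", "pass"): 0.3, ("poor", "fail"): 0.7,
}

def p_coverage_given_grade(grade):
    """P(Coverage | Grade) by enumeration, i.e. Bayes' rule:
    joint P(c, grade) normalized over all coverage states."""
    joint = {c: p_coverage[c] * p_grade_given[(c, grade)] for c in p_coverage}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# How predictive is the grade of the summary's topic coverage?
posterior = p_coverage_given_grade("pass")
```

A posterior far from the prior indicates that the grade carries predictive information about the underlying variable, the property the study inspected graphically in Elvira.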

Relevance:

30.00%

Publisher:

Abstract:

The last decade has witnessed major advances in speech recognition technology. Today's commercial systems are able to recognize continuous speech from numerous speakers, with acceptable levels of error and without the need for an explicit adaptation procedure. Despite this progress, speech recognition is far from being a solved problem.
Most of these systems are adjusted to a particular domain, and their efficacy depends significantly, among many other aspects, on the similarity between the language model used and the task being addressed. This dependence is even more important in scenarios where the statistical properties of the language fluctuate over time, for example, in application domains involving spontaneous and multitopic speech. In recent years there has been an increasing effort to enhance speech recognition systems for such domains. This has been done, among other approaches, by means of automatic adaptation techniques. These techniques are applied to existing systems, especially since exporting a system to a new task or domain may be both time-consuming and expensive. Adaptation techniques require additional sources of information, and the spoken language can provide some of them. Speech not only conveys a message; it also provides information about the context in which the spoken communication takes place (e.g. the subject being talked about). Therefore, when we communicate through speech, it is feasible to identify the elements of the language that characterize the context and, at the same time, to track the changes that occur in those elements over time. This information can be extracted and exploited through information retrieval and machine learning techniques. This allows us, within the development of more robust speech recognition systems, to enhance the adaptation of language models to the conditions of the context, thus strengthening the recognition system for domains under changing conditions (such as potential variations in vocabulary, style and topic).
In this sense, the main contribution of this Thesis is the proposal and evaluation of a framework of topic-motivated contextualization based on the dynamic and unsupervised adaptation of language models for the enhancement of an automatic speech recognition system. This adaptation is based on a combined approach (from the perspective of both the information retrieval and machine learning fields) whereby we identify the topics being discussed in an audio recording. Topic identification, therefore, enables the system to adapt the language model according to the contextual conditions. The proposed framework can be divided into two major systems: a topic identification system and a dynamic language model adaptation system. This Thesis can be outlined from the perspective of the particular contributions made in each of the fields that compose the proposed framework: _ Regarding the topic identification system, we have focused on enhancing the document preprocessing techniques, in addition to contributing to the definition of more robust criteria for the selection of index-terms. – Within both information retrieval and machine learning based approaches, the efficiency of topic identification systems depends, to a large extent, on the preprocessing mechanisms applied to the documents. Among the many operations that make up the preprocessing procedure, an adequate selection of index-terms is critical to establish conceptual and semantic relationships between terms and documents. This process might also be weakened by a poor choice of stopwords or a lack of precision in defining stemming rules. In this regard, we compare and evaluate different criteria for preprocessing the documents, as well as for improving the selection of the index-terms. This allows us not only to reduce the size of the indexing structure but also to strengthen the topic identification process.
– One of the most crucial aspects in relation to the performance of topic identification systems is assigning different weights to different terms depending on their contribution to the content of the document. In this sense, we evaluate and propose alternative approaches to traditional weighting schemes (such as tf-idf) that allow us to improve the specificity of terms and to better identify the topics related to the documents. _ Regarding dynamic language model adaptation, we divide the contextualization process into different steps. – We propose supervised and unsupervised approaches for the generation of topic-based language models. The first is intended to generate topic-based language models by grouping the documents in the training set according to the original topic labels of the corpus. Nevertheless, a goal of this Thesis is to evaluate whether or not the use of these labels to generate language models is optimal in terms of recognition accuracy. For this reason, we propose a second, unsupervised approach, in which the objective is to group the data in the training set into automatic topic clusters based on the semantic similarity between the documents. By means of clustering approaches we expect to obtain a more cohesive association of the documents that are related by similar concepts, thus improving the coverage of the topic-based language models and enhancing the performance of the recognition system. – We develop various strategies to create a context-dependent language model. Our aim is for this model to reflect the semantic context of the current utterance, i.e. the most relevant topics being discussed. This model is generated by means of a linear interpolation between the topic-based language models related to the most relevant topics. The estimation of the interpolation weights is based mainly on the outcome of the topic identification process.
– Finally, we propose a methodology for the dynamic adaptation of a background language model. The adaptation process takes into account the context-dependent model as well as the information provided by the topic identification process. The scheme used for the adaptation is a linear interpolation between the background model and the context-dependent one. We also study different approaches to determine the interpolation weights used in this adaptation scheme. Once we have defined the basis of our topic-motivated contextualization framework, we propose its application to an automatic speech recognition system. We focus on two aspects: the contextualization of the language models used by the system, and the incorporation of semantic-related information into the topic-based adaptation process. To achieve this, we propose an experimental framework based on a two-stage recognition architecture. In the first stage of the architecture, information retrieval and machine learning techniques are used to identify the topics in a transcription of an audio segment. This transcription is generated by the recognition system using a background language model. According to the confidence in the topics that have been identified, the dynamic language model adaptation is carried out. In the second stage of the recognition architecture, the adapted language model is used to re-decode the utterance. To test the benefits of the proposed framework, we carry out an evaluation of each of the major systems mentioned above. The evaluation is conducted on speeches in the political domain using the EPPS (European Parliamentary Plenary Sessions) database from the European TC-STAR project. We analyse several performance metrics that allow us to compare the improvements of the proposed systems against the baseline ones.
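The linear-interpolation adaptation described in the thesis abstract can be sketched with unigram models: topic language models are mixed using the relevance weights from topic identification, and the result is interpolated with a background model. All vocabularies, probabilities, and weights below are hypothetical:

```python
def interpolate(models, weights):
    """Linear interpolation of unigram language models
    (dicts of word -> probability); weights must sum to 1."""
    vocab = set().union(*models)
    return {w: sum(lam * m.get(w, 0.0) for lam, m in zip(weights, models))
            for w in vocab}

# Topic LMs mixed with weights taken, in the thesis, from the topic
# identification step; here both the LMs and the weights are invented.
topic_lms = [{"budget": 0.5, "vote": 0.5}, {"climate": 0.6, "vote": 0.4}]
context_lm = interpolate(topic_lms, [0.7, 0.3])

# Adapt the background model toward the context-dependent model.
background = {"budget": 0.1, "vote": 0.2, "climate": 0.1, "the": 0.6}
adapted = interpolate([background, context_lm], [0.4, 0.6])
```

Because each input distribution sums to one and the weights sum to one, the adapted model is again a proper probability distribution; real systems apply the same scheme to n-gram models rather than unigrams.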

Relevance:

30.00%

Publisher:

Abstract:

In October 1998, the National Library of Medicine (NLM) launched a pilot project to learn about the role of public libraries in providing health information to the public and to generate information that would assist NLM and the National Network of Libraries of Medicine (NN/LM) in learning how best to work with public libraries in the future. Three regional medical libraries (RMLs), eight resource libraries, and forty-one public libraries or library systems from nine states and the District of Columbia were selected for participation. The pilot project included an evaluation component that was carried out in parallel with project implementation. The evaluation ran through September 1999. The results of the evaluation indicated that participating public librarians were enthusiastic about the training and information materials provided as part of the project and that many public libraries used the materials and conducted their own outreach to local communities and groups. Most libraries applied the modest funds to purchase additional Internet-accessible computers and/or upgrade their health-reference materials. However, few of the participating public libraries had health information centers (although health information was perceived as a top-ten or top-five topic of interest to patrons). Also, the project generated only minimal usage of NLM's consumer health database, known as MEDLINEplus, from the premises of the monitored libraries (patron usage from home or office locations was not tracked). The evaluation results suggested a balanced follow-up by NLM and the NN/LM, with a few carefully selected national activities, complemented by a package of targeted activities that, as of January 2000, are being planned, developed, or implemented. 
The results also highlighted the importance of building an evaluation component into projects like this one from the outset, to assure that objectives were met and that evaluative information was available on a timely basis, as was the case here.

Relevance:

30.00%

Publisher:

Abstract:

Recently, many efforts have been made in the academic world to adapt new degrees to the European Higher Education Area (EHEA). New technologies have been the most important factor in carrying out this adaptation. In particular, Web 2.0 tools have spread quickly, at all educational levels. Nevertheless, it is now necessary to evaluate whether all these efforts and changes, carried out in order to improve students' academic performance, have produced good results. Therefore, this paper focuses on studying the impact of the implementation of information and communication technologies (ICTs) in a subject belonging to a Master's programme at the University of Alicante in the 2010-2011 academic year: an elective course called "Advanced Visual Ergonomics" from the Master of Clinical Optometry and Vision. The methodology used to teach this course differs from the traditional one in many respects. For example, one of the resources used for the development of this course is a blog developed specifically to coordinate a series of virtual assignments, whose purpose is to have the student explore specific aspects of the current topic. The student then takes an active role by writing a personal assessment on the blog. The course planning also includes face-to-face lessons, in which the teacher presents certain issues in a more traditional way, that is, with a lecture supported by audiovisual materials such as PowerPoint slides. To evaluate the quality of the results achieved with this methodology, this work collects the personal assessments of the students who completed the course during this academic year. In particular, we want to know their opinion of the resources used, as well as of the methodology followed. The tool used to collect this information was a questionnaire.
This questionnaire evaluates different aspects of the course: general opinion, quality of the information received, satisfaction with the methodology followed, and the students' critical awareness. The design of this questionnaire is very important for obtaining conclusive information about the methodology followed in the course. The questionnaire has to have an adequate number of questions; if it has too many, it may bore students, who would then not pay enough attention. The questions should be well written, with a clear structure and message, to avoid confusion and ambiguity. The questions should be objective, without suggesting a desired answer. In addition, the questionnaire should be interesting, to encourage the students' engagement. In conclusion, the questionnaire developed for this subject provided good information for evaluating whether the methodology was a useful tool for teaching "Advanced Visual Ergonomics". Furthermore, the students' opinions collected by this questionnaire may be very helpful in improving this didactic resource.

Relevance:

30.00%

Publisher:

Abstract:

Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations provided that the system is trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses has been trained and subsequently used for the English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system has been assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
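The abstract does not name its three automatic measures, but most such metrics rest on n-gram overlap between a hypothesis and a reference translation. A sketch of clipped unigram precision, the 1-gram component of BLEU (example sentences invented):

```python
from collections import Counter

def clipped_unigram_precision(hypothesis, reference):
    """Clipped unigram precision: each hypothesis word counts as
    correct at most as many times as it appears in the reference."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum(min(count, ref[word]) for word, count in hyp.items())
    return overlap / sum(hyp.values())

p1 = clipped_unigram_precision("the cat sat on the mat",
                               "the cat is on the mat")
```

Full BLEU combines clipped precisions for 1- to 4-grams with a brevity penalty; clipping is what stops a degenerate hypothesis like "the the the" from scoring highly.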

Relevance:

30.00%

Publisher:

Abstract:

Enterprise systems interoperability (ESI) is currently an important topic for business. This situation is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question: what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that users will lack important implementation information because of representational deficiencies; that, due to ontological redundancy, the complexity of the specification is unnecessarily increased; and that users of the specification will have to bring in extra-model knowledge to understand constructs in the specification, due to instances of ontological excess.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The topic of bioenergy, biofuels and bioproducts remains at the top of the current political and research agenda. Identification of the optimum processing routes for biomass, in terms of efficiency, cost, environment and socio-economics, is vital as concern grows over the remaining fossil fuel resources, climate change and energy security. It is known that the only renewable way of producing conventional hydrocarbon fuels and organic chemicals is from biomass, but the problem remains of identifying the best product mix and the most efficient way of processing biomass to products. The aim is to move Europe towards a biobased economy, and it is widely accepted that biorefineries are key to this development. A methodology was required for the generation and evaluation of biorefinery process chains for converting biomass into one or more valuable products, one that properly considers performance, cost, environment, socio-economics and other factors that influence the commercial viability of a process. In this thesis a methodology to achieve this objective is described. The completed methodology includes process chain generation, process modelling and subsequent analysis and comparison of results in order to evaluate alternative process routes. A modular structure was chosen to allow greater flexibility and to let the user generate a large number of different biorefinery configurations. The significance of the approach is that the methodology is defined and is thus rigorous and consistent, and may be readily re-examined if circumstances change. There was a requirement for consistency in structure and use, particularly for multiple analyses. It was important that analyses could be quickly and easily carried out to consider, for example, different scales, configurations and product portfolios, and that previous outcomes could be readily reconsidered.
The result of the completed methodology is the identification of the most promising biorefinery chains from those considered as part of the European Biosynergy Project.
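The combinatorial heart of such a modular methodology, enumerating every process chain from per-stage module options and scoring each candidate, can be sketched as follows. The stage names, module options and efficiency figures below are purely illustrative assumptions, not data from the Biosynergy Project:

```python
from itertools import product

# Hypothetical module options with illustrative per-stage efficiencies
# (invented values, for demonstration only).
MODULES = {
    "pretreatment": {"drying": 0.9, "torrefaction": 0.85},
    "conversion":   {"fast_pyrolysis": 0.7, "gasification": 0.65},
    "upgrading":    {"hydrotreating": 0.8, "ft_synthesis": 0.75},
}

def generate_chains(modules):
    """Enumerate every process chain: one module option per stage."""
    stages = list(modules)
    for combo in product(*(modules[s] for s in stages)):
        yield dict(zip(stages, combo))

def chain_score(chain, modules):
    """Toy overall efficiency: product of the per-stage efficiencies."""
    score = 1.0
    for stage, option in chain.items():
        score *= modules[stage][option]
    return score

chains = list(generate_chains(MODULES))
best = max(chains, key=lambda c: chain_score(c, MODULES))
```

A real evaluation would replace the single efficiency score with the thesis's multi-criteria assessment (cost, environment, socio-economics), but the enumerate-then-compare structure is the same.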

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers’ perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as competitiveness, consumer searching, consumer behaviour, brand image, brand relevance, and consumer perceived value), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for ‘new brand’ propositions. Thirdly, it will also influence firms’ ability to stimulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE. This is because the first issue raised in the preceding paragraph deals with the content of CBE. The second issue addresses the problem of how to develop a reliable and valid measuring instrument for CBE. The third issue examines the structural and statistical relationships between the factors of CBE and the consequences of CBE on consumer perceived value (CPV).
Using LISREL-SIMPLIS 8.30, the study finds direct and significant influential links between consumer brand equity and consumer value perception.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In 1974 Dr D M Bramwell published his research work at the University of Aston, part of which was the establishment of an elemental work study database covering drainage construction. The Transport and Road Research Laboratory decided to extend that work as part of their continuing research programme into the design and construction of buried pipelines, by placing a research contract with Bryant Construction. This research may be considered under two broad categories. In the first, site studies were undertaken to validate and extend the database. The studies showed good agreement with the existing data, with the exception of the excavation, trench shoring and pipelaying data, which was amended to incorporate new construction plant and methods. An interactive on-line computer system for drainage estimating was developed. This system stores the elemental data, synthesizes the standard time of each drainage operation, and is used to determine the required resources and construction method of the total drainage activity. The remainder of the research was into the general topic of construction efficiency. An on-line, command-driven computer system was produced. This system uses a stochastic simulation technique, based on distributions of site efficiency measurements, to evaluate the effects of varying performance levels. The analysis of this performance data quantifies the variability inherent in construction and demonstrates how some of this variability can be reconciled by considering the characteristics of a contract. A long-term trend of decreasing efficiency with contract duration was also identified. The results obtained from the simulation suite were compared to site records collected from current contracts. This showed that the approach gives comparable answers, but that these are greatly affected by the site performance parameters.
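The stochastic simulation idea described above, drawing site-efficiency factors from an observed distribution and scaling a work-study standard time, can be sketched in a few lines. The efficiency measurements and standard time below are invented for illustration; the thesis's actual distributions and simulation suite are not reproduced here:

```python
import random
import statistics

def simulate_activity(standard_time_hours, efficiency_samples,
                      n_runs=10_000, seed=42):
    """Monte Carlo sketch: per run, draw a site-efficiency factor from
    the empirical sample and scale the work-study standard time, then
    summarise the resulting duration distribution."""
    rng = random.Random(seed)
    durations = []
    for _ in range(n_runs):
        eff = rng.choice(efficiency_samples)  # resample observed efficiencies
        durations.append(standard_time_hours / eff)
    return statistics.mean(durations), statistics.stdev(durations)

# Hypothetical site-efficiency measurements (1.0 = standard performance)
observed = [0.7, 0.8, 0.85, 0.9, 0.95, 1.0, 1.05, 1.1]
mean_dur, sd_dur = simulate_activity(100.0, observed)
```

Note that the mean simulated duration exceeds the standard time divided by the mean efficiency, since duration is a convex function of efficiency; this is exactly the kind of variability effect a simulation exposes and a single-point estimate hides.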

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model which incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, each of which serves to fulfil different duties and is associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs (Twitter) and community question answering (cQA; Yahoo! Answers), where social roles on Twitter include "originators" and "propagators", and roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of our proposed method, we have conducted extensive experiments on two Twitter datasets and two cQA datasets. Furthermore, we also consider multi-role modeling for scientific papers, where an author's research expertise area is treated as a social role. A novel application of detecting users' research interests through topical keyword labeling, based on the results of our multi-role model, is presented. The evaluation results have shown the feasibility and effectiveness of our model.
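The core modelling assumption, that a user's topic proportions arise from mixing role-level topic distributions by the user's role weights, can be illustrated with a toy sketch. The roles match the paper's Twitter instantiation, but the topics, probabilities and weights are invented, and the paper's regularized inference is not shown:

```python
# Toy role-driven topic mixing. Each social role owns a distribution
# over topics; a user's topic distribution is the role-weighted mixture.
# All numbers below are illustrative assumptions, not learned parameters.
ROLE_TOPIC_DIST = {
    "originator": {"breaking_news": 0.6, "opinion": 0.3, "chat": 0.1},
    "propagator": {"breaking_news": 0.2, "opinion": 0.2, "chat": 0.6},
}

def user_topic_distribution(role_weights):
    """Mix the role-level topic distributions by the user's role weights
    (weights are assumed to sum to 1, as are each role's topic probs)."""
    topics = {}
    for role, weight in role_weights.items():
        for topic, prob in ROLE_TOPIC_DIST[role].items():
            topics[topic] = topics.get(topic, 0.0) + weight * prob
    return topics

# A user who mostly originates content but sometimes propagates it
mixed = user_topic_distribution({"originator": 0.7, "propagator": 0.3})
```

In the full model these distributions would be latent and estimated jointly, with the user-user interaction regularizers pulling connected users' role assignments together; the mixture step above is only the generative intuition.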

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to disaster (e.g. hurricane Sandy) to those related to violence (e.g. the Egyptian revolution). Being informed about such events as they occur could be extremely important to authorities and emergency professionals, allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. The accurate TC of Microposts, however, is a challenging task, since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Micropost's content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a more fine-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatic estimation of the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between the KS documents and Microposts at a conceptual level, considering the enriched representation of these documents.
Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches, which use a single KS without linked data or Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves results comparable to previously used semantic graphs. Furthermore, our results also indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches that consider content-based similarity measures. © 2014 Elsevier B.V. All rights reserved.
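A concept-level topic similarity of the kind described above can be illustrated by representing both a Micropost and a KS document as bags of annotated concepts and comparing them with cosine similarity. This is a generic sketch, not the paper's actual measures; the concept labels are invented and a real pipeline would obtain them from the linked KS annotations:

```python
import math
from collections import Counter

def concept_cosine(concepts_a, concepts_b):
    """Cosine similarity between two documents represented as bags of
    KS concepts (counts as vector components)."""
    a, b = Counter(concepts_a), Counter(concepts_b)
    dot = sum(a[c] * b[c] for c in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented concept annotations for a Micropost and a KS document
micropost = ["hurricane", "evacuation", "storm_surge"]
ks_doc = ["hurricane", "storm_surge", "flooding", "emergency"]
sim = concept_cosine(micropost, ks_doc)
```

Because the comparison happens over concepts rather than surface tokens, two texts with no words in common can still score as similar once both are enriched with the same KS concepts, which is the intuition behind predicting classifier accuracy loss on unseen Microposts.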