42 results for fuzzy based evaluation method

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

In England, publicly supported advice to small firms is organized primarily through the Business Link (BL) network. Using the programme theory underlying this business support, we develop four propositions and test these empirically using data from a new survey of over 3,000 English SMEs. We find strong support for the value to BL operators of maintaining a high profile to boost take-up. We also find support for the BL's market segmentation, which targets intensive assistance at younger firms and those with limited liability. Allowing for sample selection, we find no significant effects on growth from 'other' assistance, but we do find a significant employment boost from intensive assistance. This partially supports the programme-theory assertion that BL improves business growth and strongly supports the proposition that there are differential outcomes from intensive and other assistance. It also suggests an improvement in the BL network compared with earlier studies, notably Roper et al. (2001) and Roper and Hart (2005).

Relevance:

100.00%

Publisher:

Abstract:

Mobile technology has not yet achieved widespread acceptance in the Architectural, Engineering, and Construction (AEC) industry. This paper presents work that is part of an ongoing research project focusing on the development of multimodal mobile applications for use in the AEC industry. This paper focuses specifically on a context-relevant lab-based evaluation of two input modalities – stylus and soft-keyboard v. speech-based input – for use with a mobile data collection application for concrete test technicians. The manner in which the evaluation was conducted as well as the results obtained are discussed in detail.

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to develop a holistic approach that maximizes the customer service level while minimizing the logistics cost, using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, the proposed integrated approach considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments.

Design/methodology/approach – The paper proposes a fuzzy-based integer linear programming model, grounded in the existing literature and validated with an example case. The model integrates a fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem.

Findings – The paper provides several novel insights into how a company can move from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two ways: optimizing cost and providing the best service simultaneously.

Research limitations/implications – The research used one illustrative industry case to exemplify the developed method. Given the complexity of transshipment service networks, more cases across multiple industries are needed to further validate the generality of the findings.

Practical implications – The paper has implications for the evaluation and selection of transshipment service suppliers, and for the construction and management of an optimal transshipment network.

Originality/value – The major advantages of this generic approach are that quantitative and qualitative factors are considered simultaneously under a fuzzy environment, and that the viewpoints of both service deliverers and customers are addressed. The approach is therefore believed to be useful and applicable for transshipment service network design.
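
As a rough illustration of the FAHP ingredient of such a model, the sketch below derives crisp criteria weights from triangular fuzzy pairwise comparisons using Buckley's geometric-mean method. The criteria, comparison values and centroid defuzzification rule are illustrative assumptions, not taken from the paper; in a full model the resulting weights would feed the objective of the integer linear programme.

```python
# Minimal sketch of a fuzzy-AHP weighting step of the kind integrated into the
# transshipment model. Criteria names, pairwise comparison values and the
# centroid defuzzification rule are illustrative assumptions.
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical
# criteria: cost, delivery time, service quality.
comparisons = np.array([
    [[1, 1, 1],       [2, 3, 4],       [1/2, 1, 3/2]],   # cost vs others
    [[1/4, 1/3, 1/2], [1, 1, 1],       [1/3, 1/2, 1]],   # time vs others
    [[2/3, 1, 2],     [1, 2, 3],       [1, 1, 1]],       # quality vs others
])

# Fuzzy geometric mean of each row (Buckley's method), component-wise in (l, m, u).
geo_mean = np.prod(comparisons, axis=1) ** (1 / comparisons.shape[1])

# Defuzzify each triangular number by its centroid (l + m + u) / 3,
# then normalise so the crisp weights sum to one.
crisp = geo_mean.mean(axis=1)
weights = crisp / crisp.sum()

for name, w in zip(["cost", "time", "quality"], weights):
    print(f"{name}: {w:.3f}")
```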

Relevance:

100.00%

Publisher:

Abstract:

Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory and rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria, and composite performance values based on rough numbers are calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity across the decision-making process.
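
To make the two building blocks concrete, the sketch below shows one common way of forming a rough number (interval) from a set of crisp expert ratings and of deriving entropy-based criterion weights. The ratings, criteria and scoring matrix are illustrative assumptions rather than data from the case study.

```python
# Sketch of a rough-number aggregation of expert ratings and Shannon-entropy
# criterion weighting, in the spirit of the combination described above.
import math

def rough_number(ratings):
    """Rough interval [lower, upper] of a set of crisp expert ratings."""
    lowers = [sum(r for r in ratings if r <= x) / len([r for r in ratings if r <= x])
              for x in ratings]
    uppers = [sum(r for r in ratings if r >= x) / len([r for r in ratings if r >= x])
              for x in ratings]
    return sum(lowers) / len(lowers), sum(uppers) / len(uppers)

def entropy_weights(matrix):
    """Entropy-based weights for criteria given an (alternatives x criteria) matrix."""
    n, m = len(matrix), len(matrix[0])
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1 - e)            # larger divergence -> larger weight
    s = sum(weights)
    return [w / s for w in weights]

# Four experts rate one design concept against one criterion on a 1-9 scale.
print(rough_number([4, 5, 5, 7]))        # roughly (4.6, 5.9)

# Three candidate concepts scored against two criteria (interval midpoints).
print(entropy_weights([[5.2, 6.0], [4.8, 7.5], [6.1, 5.0]]))
```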

Relevance:

100.00%

Publisher:

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method was applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
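
The sketch below illustrates the additive-method core of this approach: peptide positions are one-hot encoded as amino-acid indicator variables, a PLS regression is fitted, and leave-one-out q2 and SEP statistics are computed. The synthetic data, peptide length and number of PLS components are assumptions for illustration; the full ISC procedure additionally iterates over candidate binding cores of longer peptides until the selected subsequences converge.

```python
# Sketch of the PLS additive method with leave-one-out cross-validation.
# Data are random, so q2 will be near (or below) zero; real binding data are needed
# for the predictive statistics quoted in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
amino_acids = list("ACDEFGHIKLMNPQRSTVWY")
n_peptides, core_len = 40, 9

# Random 9-mer binding cores and synthetic log-affinities.
peptides = rng.choice(len(amino_acids), size=(n_peptides, core_len))
X = np.zeros((n_peptides, core_len * len(amino_acids)))
for i, pep in enumerate(peptides):            # one-hot encode position/residue pairs
    for pos, aa in enumerate(pep):
        X[i, pos * len(amino_acids) + aa] = 1.0
y = rng.normal(6.0, 1.0, size=n_peptides)     # e.g. -log IC50 values

pls = PLSRegression(n_components=3)
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
sep = np.sqrt(np.mean((y - y_loo) ** 2))
print(f"LOO q2 = {q2:.3f}, SEP = {sep:.3f}")
```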

Relevance:

100.00%

Publisher:

Abstract:

A domain-independent ICA-based watermarking method is introduced and studied by numerical simulations. This approach can be used on images, music or video to convey a hidden message. It relies on embedding the information in a set of statistically independent sources (the independent components) used as the feature space. For the experiments, the medium was arbitrarily chosen to be digital images.
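
A minimal sketch of the general idea follows: decompose image data into independent components, perturb selected components to carry message bits, and reconstruct the watermarked image. The way the image is split into observations, the embedding strength and the use of scikit-learn's FastICA are assumptions for illustration, not the authors' exact embedding scheme.

```python
# Sketch of ICA-based watermark embedding on a stand-in grey-scale image.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
image = rng.random((64, 64))                  # stand-in for a grey-scale image

# Treat each row of the image as one observation and extract 16 sources.
ica = FastICA(n_components=16, random_state=0)
sources = ica.fit_transform(image)            # shape (64, 16)

# Embed a 16-bit message: nudge each component up or down by a small amount.
message = rng.integers(0, 2, size=16)
strength = 0.05
watermarked_sources = sources + strength * (2 * message - 1)

# Reconstruct the watermarked image from the modified sources.
watermarked = ica.inverse_transform(watermarked_sources)
print("mean absolute change:", np.abs(watermarked - image).mean())
```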

Relevance:

100.00%

Publisher:

Abstract:

Objective: To measure changes in dispensing activity in a UK repeat dispensing pilot study and to estimate any associated cost savings.

Method: Patients were provided with two successive three-monthly repeat prescriptions containing all of the items on their "repeat medicines list" and valid at a study pharmacy. Pharmacists consulted with patients at the time of supply and completed a patient-monitoring form. Prescriptions with pricing data were returned by the UK Prescription Pricing Authority. These data were used to calculate dispensing activity, the cost of dispensed items and an estimate of cost savings on non-dispensed items. A retrospective identification of items prescribed during the six months prior to the project provided a comparison with those dispensed during the project and thus a more realistic estimate of changes.

Setting: 350 patients from two medical practices, with inner-city and suburban locations, in a large English city, served by seven pharmacies.

Key findings: There were methodological challenges in establishing a robust framework for calculating changes. Of all the items that patients could have obtained from their repeat list, 23.8% were not dispensed during the intervention period. After correcting for usage in the six months prior to the study, there was an estimated 11.3% saving in drug costs compared with the pre-intervention period. There were marked differences in the changes between the two practices, the pharmacies and individual patients. The capitation-based remuneration method was acceptable to all but one of the community pharmacists.

Conclusion: The repeat dispensing system reduced dispensing volume in comparison with the control period. A repeat dispensing system with a focus on patients' needs and their use of medicines might be cost neutral.

Relevance:

100.00%

Publisher:

Abstract:

The specific objective of the research was to evaluate proprietary audit systems. Proprietary audit systems comprise question sets containing approximately 500 questions dealing with selected aspects of health and safety management. Each question is allotted a number of points, and an organisation judges its health and safety performance by the overall score achieved in the audit. Initially it was considered that the evaluation method might involve comparing the proprietary audit scores with other methods of measuring safety performance. However, what appeared to be missing in the first instance was information that organisations could use to compare and contrast question set content against their own needs. A technique was therefore developed using the computer database FileMaker Pro. This enables the questions in an audit to be sorted into categories by searching for key words; questions that are not categorised by word searching can be identified and sorted manually. The process can be completed in 2-3 hours, considerably faster than manual categorisation of questions, which typically takes about 10 days. The technique was used to compare and contrast three proprietary audits: ISRS, CHASE and QSA. Differences and similarities between these audits were successfully identified. It was concluded that, in general, proprietary audits need to focus to a greater extent on identifying strengths and weaknesses in occupational health and safety management systems. To do this requires the inclusion of more probing questions that consider whether risk control measures are likely to be successful.
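
The original technique was implemented in FileMaker Pro; the sketch below reproduces the same keyword-search idea in Python, sorting audit questions into categories and flagging the remainder for manual categorisation. The categories, key words and sample questions are illustrative assumptions.

```python
# Keyword-based categorisation of audit questions, with uncategorised
# questions set aside for manual sorting.
CATEGORIES = {
    "risk assessment": ["risk", "hazard", "assessment"],
    "training":        ["training", "competence", "induction"],
    "emergency":       ["emergency", "evacuation", "first aid"],
}

questions = [
    "Are workplace hazards identified and recorded?",
    "Do new employees receive a documented safety induction?",
    "Is the emergency evacuation plan tested annually?",
    "Are audit findings reported to the board?",
]

categorised = {name: [] for name in CATEGORIES}
uncategorised = []
for q in questions:
    hits = [name for name, words in CATEGORIES.items()
            if any(w in q.lower() for w in words)]
    for name in hits:
        categorised[name].append(q)
    if not hits:
        uncategorised.append(q)          # left for manual categorisation

print(categorised)
print("needs manual review:", uncategorised)
```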

Relevance:

100.00%

Publisher:

Abstract:

In April 2009, Google Images added a filter for narrowing search results by colour. Several other systems for searching image databases by colour were also released around this time. These colour-based image retrieval systems enable users to search image databases either by selecting colours from a graphical palette (i.e., query-by-colour), by drawing a representation of the colour layout sought (i.e., query-by-sketch), or both. It was comments left by readers of online articles describing these colour-based image retrieval systems that provided us with the inspiration for this research. We were surprised to learn that the underlying query-based technology used in colour-based image retrieval systems today remains remarkably similar to that of systems developed nearly two decades ago. Discovering this ageing retrieval approach, as well as uncovering a large user demographic requiring image search by colour, made us eager to research more effective approaches for colour-based image retrieval. In this thesis, we detail two user studies designed to compare the effectiveness of systems adopting similarity-based visualisations, query-based approaches, or a combination of both, for colour-based image retrieval. In contrast to query-based approaches, similarity-based visualisations display and arrange database images so that images with similar content are located closer together on screen than images with dissimilar content. This removes the need for queries, as users can instead visually explore the database using interactive navigation tools to retrieve images from the database. As we found existing evaluation approaches to be unreliable, we describe how we assessed and compared systems adopting similarity-based visualisations, query-based approaches, or both, meaningfully and systematically using our Mosaic Test - a user-based evaluation approach in which evaluation study participants complete an image mosaic of a predetermined target image using the colour-based image retrieval system under evaluation.

Relevance:

100.00%

Publisher:

Abstract:

A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content, i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for television and film productions, is not possible, as their efficacy cannot be reliably measured and compared with existing technologies. As a solution to this problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants.
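
For readers unfamiliar with the underlying retrieval step, the sketch below shows the kind of query-by-colour matching such systems perform: coarse RGB histograms are built for database images and ranked by histogram intersection against a query colour. The bin count, random stand-in images and pure-red query are illustrative assumptions, not the systems evaluated with the Mosaic Test.

```python
# Query-by-colour sketch: rank database images by colour-histogram similarity.
import numpy as np

def rgb_histogram(image, bins=4):
    """Normalised joint RGB histogram of an (H, W, 3) image with values in [0, 1]."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    return hist / hist.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(2)
database = {f"img_{i}": rng.random((32, 32, 3)) for i in range(5)}

# Query: a swatch of a single colour picked from a palette (pure red here).
query = np.tile(np.array([1.0, 0.0, 0.0]), (8, 8, 1))
q_hist = rgb_histogram(query)

ranking = sorted(database,
                 key=lambda k: intersection(q_hist, rgb_histogram(database[k])),
                 reverse=True)
print(ranking)
```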

Relevance:

100.00%

Publisher:

Abstract:

The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to this annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated.

Relevance:

100.00%

Publisher:

Abstract:

Increasingly, lab evaluations of mobile applications are incorporating mobility. The inclusion of mobility alone, however, is insufficient to generate a realistic evaluation context, since real-life users will typically be required to monitor their environment while moving through it. While field evaluations represent a more realistic evaluation context, such evaluations pose difficulties, including data capture and environmental control, which mean that a lab-based evaluation is often a more practical choice. This paper describes a novel evaluation technique that mimics a realistic mobile usage context in a lab setting. The technique requires that participants monitor their environment and change the route they are walking to avoid dynamically changing hazards (much as real-life users would be required to do). Two studies that employed this technique are described, and the results (which indicate the technique is useful) are discussed.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a new edge-based matching kernel for graphs using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation in which each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph, whose state space is the vertex set of the line graph, i.e., the edges of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (derived from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at that vertex, i.e., we compute the depth-based representations for the edges of the original graphs through their directed line graphs. Based on these new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is then computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
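
The sketch below outlines the pipeline structure: each graph is converted to its directed line graph, a depth-based signature is attached to every line-graph vertex (i.e., to every edge of the original graph), and the kernel counts signature matches between the two graphs. For brevity the signature here is a classical entropy over h-layer neighbourhoods rather than the paper's discrete-time quantum walk entropy; that substitution and the matching tolerance are assumptions for illustration only.

```python
# Edge-based matching via directed line graphs with a simplified signature.
import math
import networkx as nx

def line_digraph(g):
    """Directed line graph: one vertex per arc of the directed version of g."""
    return nx.line_graph(g.to_directed())

def signature(ldg, node, h=2):
    """Entropies of the out-degree distributions of the 1..h layer neighbourhoods."""
    sig = []
    for k in range(1, h + 1):
        sub = nx.ego_graph(ldg, node, radius=k)
        degs = [d for _, d in sub.out_degree()]
        total = sum(degs) or 1
        p = [d / total for d in degs if d > 0]
        sig.append(-sum(pi * math.log(pi) for pi in p))
    return sig

def matching_kernel(g1, g2, h=2, tol=1e-6):
    """Count line-graph vertices of g1 whose signatures match some vertex of g2."""
    l1, l2 = line_digraph(g1), line_digraph(g2)
    s1 = {v: signature(l1, v, h) for v in l1}
    s2 = {v: signature(l2, v, h) for v in l2}
    matched = 0
    for sv in s1.values():
        if any(all(abs(a - b) < tol for a, b in zip(sv, sw)) for sw in s2.values()):
            matched += 1
    return matched

print(matching_kernel(nx.cycle_graph(4), nx.cycle_graph(4)))   # all edges match
print(matching_kernel(nx.cycle_graph(4), nx.path_graph(4)))    # fewer matches
```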