900 results for "Algorithm design and analysis"
Abstract:
Purpose - The purpose of this paper is to show how QFD can be used as part of a structured planning and analysis framework for micro-sized enterprises to build up their e-business capabilities. Design/methodology/approach - This case study has been produced using a new framework which integrates the balanced scorecard, value chain and quality function deployment techniques into the E-Business Planning and Analysis Framework (E-PAF). It has been produced using an action research approach. Findings - A new framework with a supporting case study is provided. The case study demonstrates that the framework can be applied successfully to micro-sized enterprises (those with fewer than ten employees) to plan new strategic and technical developments, enhancing the online service that the company is able to provide. Research limitations/implications - This paper presents a single case study. The technical recommendations are currently being implemented. Originality/value - Such analytical techniques are most commonly associated with large organisations and are not specifically associated with e-business planning. This paper provides a new framework that will be of general applicability to other similarly sized enterprises that are looking to improve their e-business capabilities. © Emerald Group Publishing Limited.
Abstract:
Since the original Data Envelopment Analysis (DEA) study by Charnes et al. [Measuring the efficiency of decision-making units. European Journal of Operational Research 1978;2(6):429–44], there has been rapid and continuous growth in the field. As a result, a considerable amount of published research has appeared, with a significant portion focused on DEA applications of efficiency and productivity in both public and private sector activities. While several bibliographic collections have been reported, a comprehensive listing and analysis of DEA research covering its first 30 years of history is not available. This paper thus presents an extensive, if not nearly complete, listing of DEA research covering theoretical developments as well as “real-world” applications from inception to the year 2007. A listing of the most utilized/relevant journals, a keyword analysis, and selected statistics are presented.
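For reference, the fractional (ratio) form of the original CCR model from the Charnes et al. (1978) paper cited above can be written as follows (standard textbook notation, not taken from any specific entry in this listing):

```latex
\max_{u,v}\; h_0 \;=\; \frac{\sum_{r=1}^{s} u_r\, y_{r0}}{\sum_{i=1}^{m} v_i\, x_{i0}}
\quad \text{subject to} \quad
\frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \;\le\; 1,
\qquad j = 1, \dots, n, \qquad u_r,\, v_i \;\ge\; 0,
```

where \(x_{ij}\) and \(y_{rj}\) are the \(m\) inputs and \(s\) outputs of decision-making unit \(j\), and unit 0 is the unit under evaluation; a unit is rated efficient when \(h_0 = 1\). In practice the model is solved after the Charnes-Cooper transformation to an equivalent linear program.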
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment:
1. In a single-factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor.
2. In more complex designs with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified.
3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can reduce the error variation sufficiently to counter the loss of DF compared with a fully randomised design.
4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered.
5. Combining different factors to make a more efficient experiment, and to measure possible factor interactions, should always be considered.
6. The effect of 'internal replication' should be taken into account in a factorial design when deciding the number of replicates to be used. Where possible, each error term of the ANOVA should have at least 15 DF.
7. Consider carefully whether a particular factorial design should be treated as a split-plot or a repeated-measures design. If such a design is appropriate, consider how to continue the analysis, bearing in mind the problem of using post hoc tests in this situation.
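The DF bookkeeping behind point 6 is concrete for the simplest case: in a single-factor fixed-effects design with k groups and N observations in total, the error term has N - k DF. A minimal sketch of the F-statistic calculation (the function name and example data are illustrative, not from any entry above):

```python
def one_way_anova(groups):
    """Fixed-effects one-way ANOVA.

    `groups` is a list of lists of observations, one inner list per
    factor level. Returns (F, df_between, df_within).
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-groups SS: variation of group means about the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups (error) SS: variation of observations about their
    # own group mean; this is the error term whose DF should be >= 15.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    df_between = k - 1
    df_within = n_total - k  # error DF = N - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

For example, two groups of three replicates give only N - k = 4 error DF, far short of the 15 recommended above, which is exactly the kind of shortfall point 6 warns about.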
Abstract:
For optimum utilization of satellite-borne instrumentation, it is necessary to know precisely the orbital position of the spacecraft. The aim of this thesis is therefore two-fold: first, to derive precise orbits, with particular emphasis placed on the altimetric satellite SEASAT; and second, to use those precise orbits to improve atmospheric density determinations for satellite drag modelling purposes. Part one of the thesis, on precise orbit determination, is particularly concerned with the tracking data - satellite laser ranging, altimetry and crossover height differences - and how these data can be used to analyse errors in the orbit, the geoid and sea-surface topography. The outcome of this analysis is the determination of a low degree and order model for sea-surface topography. Part two, on the other hand, concentrates mainly on using the laser data to analyse and improve upon current atmospheric density models. In particular, the modelling of density changes associated with geomagnetic disturbances comes under scrutiny in this section. By introducing persistence modelling of a geomagnetic event and solving for certain geomagnetic parameters, a new density model is derived which performs significantly better than the state-of-the-art models over periods of severe geomagnetic storms at SEASAT heights. This is independently verified by applying the derived model to STARLETTE orbit determinations.
Abstract:
This dissertation studies the process of operations systems design within the context of the manufacturing organization. Using the DRAMA (Design Routine for Adopting Modular Assembly) model as developed by a team from the IDOM Research Unit at Aston University as a starting point, the research employed empirically based fieldwork and a survey to investigate the process of production systems design and implementation within four UK manufacturing industries: electronics assembly, electrical engineering, mechanical engineering and carpet manufacturing. The intention was to validate the basic DRAMA model as a framework for research enquiry within the electronics industry, where the initial IDOM work was conducted, and then to test its generic applicability, further developing the model where appropriate, within the other industries selected. The thesis contains a review of production systems design theory and practice prior to presenting thirteen industrial case studies of production systems design from the four industry sectors. The results and analysis of the postal survey into production systems design are then presented. The strategic decisions of manufacturing and their relationship to production systems design, and the detailed process of production systems design and operation are then discussed. These analyses are used to develop the generic model of production systems design entitled DRAMA II (Decision Rules for Analysing Manufacturing Activities). The model contains three main constituent parts: the basic DRAMA model, the extended DRAMA II model showing the imperatives and relationships within the design process, and a benchmark generic approach for the design and analysis of each component in the design process. DRAMA II is primarily intended for use by researchers as an analytical framework of enquiry, but is also seen as having application for manufacturing practitioners.
Abstract:
The work reported in this thesis is concerned with the improvement and expansion of the assistance given to the designer by the computer in the design of cold-formed sections. The main contributions have been in four areas, which have consequently led to a fifth: the development of a methodology to optimise designs. This methodology can be considered an 'Expert Design System' for cold-formed sections. A different method of determining the section properties of profiles was introduced, using the properties of line and circular elements. Graphics were introduced to show the outline of the profile on screen. The analysis of beam loading has been expanded to loading conditions in which the number of supports, point loads and uniformly distributed loads can be specified by the designer. The profile can then be checked for suitability for the specified type of loading. Artificial Intelligence concepts have been introduced to give the designer decision support from the computer, in combination with the computer-aided design facilities. The more complex decision support was implemented through the use of production rules. All the support was based on the British Standards. A method has been introduced by which the appropriate use of stiffeners can be determined, and the stiffeners consequently designed by the designer. Finally, a methodology was developed by which the designer is given assistance from the computer without being constrained by it. This methodology gives advice to the designer on possible methods of improving the design, but allows the designer to reject that option and analyse the profile accordingly. The methodology enables optimisation to be achieved by the designer, who designs a variety of profiles for a particular loading and determines which one is best suited.
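The line-element method mentioned in this abstract idealises each flat part of a thin-walled profile as a straight line of given thickness; area and centroid then follow directly from element lengths and midpoints. A minimal sketch under that idealisation (the function name and the example angle profile are illustrative, not from the thesis):

```python
import math

def section_properties(segments, thickness):
    """Area and centroid of a thin-walled profile idealised as straight
    line elements, each given as ((x1, y1), (x2, y2))."""
    area = 0.0
    moment_x = 0.0  # first moment about the y-axis: sum of a_i * x_i
    moment_y = 0.0  # first moment about the x-axis: sum of a_i * y_i
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        a = length * thickness           # element area = length * thickness
        area += a
        moment_x += a * (x1 + x2) / 2.0  # element centroid at its midpoint
        moment_y += a * (y1 + y2) / 2.0
    return area, moment_x / area, moment_y / area
```

For an L-shaped (angle) profile with a 10-unit web from (0, 0) to (0, 10) and a 5-unit flange from (0, 10) to (5, 10) at unit thickness, this gives an area of 15 with the centroid pulled toward the longer web. Circular (corner) elements would be handled analogously with arc length and arc centroid formulae.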
Abstract:
The potential for nonlinear optical processes in nematic-liquid-crystal cells is great due to the large phase changes resulting from reorientation of the nematic-liquid-crystal director. Here the combination of diffraction and self-diffraction effects is studied simultaneously by the use of a pair of focused laser beams which are coincident on a homeotropically aligned liquid-crystal cell. The result is a complicated diffraction pattern in the far field. This is analyzed in terms of the continuum theory for liquid crystals, using a one-elastic-constant approximation to solve for the reorientation profile. Very good agreement between theory and experiment is obtained. An interesting transient grating, existing due to the viscosity of the liquid-crystal material, is observed in theory and practice for large cell-tilt angles.
Abstract:
This research sets out to compare the values in British and German political discourse, especially the discourse of social policy, and to analyse their relationship to political culture through an analysis of the values of health care reform. The work proceeds from the hypothesis that the known differences in political culture between the two countries will be reflected in the values of political discourse, and takes a comparison of two major recent legislative debates on health care reform as a case study. The starting point in the first chapter is a brief comparative survey of the post-war political cultures of the two countries, including a brief account of the historical background to their development and an overview of explanatory theoretical models. From this are developed the expected contrasts in values in accordance with the hypothesis. The second chapter explains the basis for selecting the corpus texts and the contextual information which needs to be recorded to make a comparative analysis, including the context and content of the reform proposals which comprise the case study. It examines any contextual factors which may need to be taken into account in the analysis. The third and fourth chapters explain the analytical method, which is centred on the use of definition-based taxonomies of value items and value appeal methods to identify, on a sentence-by-sentence basis, the value items in the corpus texts and the methods used to make appeals to those value items. The third chapter is concerned with the classification and analysis of values, the fourth with the classification and analysis of value appeal methods. The fifth chapter will present and explain the results of the analysis, and the sixth will summarize the conclusions and make suggestions for further research.
Abstract:
Purpose – Research on the relationship between customer satisfaction and customer loyalty has advanced to a stage that requires a more thorough examination of moderator variables. Limited research shows how moderators influence the relationship between customer satisfaction and customer loyalty in a service context; this article aims to present empirical evidence of the conditions in which the satisfaction-loyalty relationship becomes stronger or weaker. Design/methodology/approach – Using a sample of more than 700 customers of DIY retailers and multi-group structural equation modelling, the authors examine moderating effects of several firm-related variables, variables that result from firm/employee-customer interactions and individual-level variables (i.e. loyalty cards, critical incidents, customer age, gender, income, expertise). Findings – The empirical results suggest that not all of the moderators considered influence the satisfaction-loyalty link. Specifically, critical incidents and income are important moderators of the relationship between customer satisfaction and customer loyalty. Practical implications – Several of the moderator variables considered in this study are manageable variables. Originality/value – This study should prove valuable to academic researchers as well as service and retailing managers. It systematically analyses the moderating effect of firm-related and individual-level variables on the relationship between customer satisfaction and loyalty. It shows the differential effect of different types of moderator variables on the satisfaction-loyalty link.
Abstract:
With few exceptions (e.g. Fincham & Clark, 2002; Lounsbury, 2002, 2007; Montgomery & Oliver, 2007), we know little about how emerging professions, such as management consulting, professionalize and establish their services as a taken-for-granted element of social life. This is surprising given that professionals have long been recognized as “institutional agents” (DiMaggio & Powell, 1983; Scott, 2008) (see Chapter 17) and professionalization projects have been closely associated with institutionalization (DiMaggio, 1991). Therefore, in this chapter we take a closer look at a specific type of entrepreneurship in PSFs; drawing on the concept of “institutional entrepreneurship” (DiMaggio, 1988; Garud, Hardy, & Maguire, 2007; Hardy & Maguire, 2008) we describe some generic strategies by which proto-professions can enhance their “institutional capital” (Oliver, 1997), that is, their capacity to extract institutionally contingent resources such as legitimacy, reputation, or client relationships from their environment.
Abstract:
4-Hydroxy-2-nonenal (HNE) is one of the most studied products of phospholipid peroxidation, owing to its reactivity and cytotoxicity. It can be formed by several radical-dependent oxidative routes involving the formation of hydroperoxides, alkoxyl radicals, epoxides, and fatty acyl cross-linking reactions. Cleavage of the oxidized fatty acyl chain results in formation of HNE from the methyl end, and 9-oxo-nonanoic acid from the carboxylate or esterified end of the chain, although many other products are also possible. HNE can be metabolized in tissues by a variety of pathways, leading to detoxification and excretion. HNE adducts to proteins have been detected in inflammatory situations such as atherosclerotic lesions using polyclonal and monoclonal antibodies, which have also been applied in ELISAs and Western blotting. However, in order to identify the proteins modified and the exact sites and nature of the modifications, mass spectrometry approaches are required. Combinations of enrichment strategies with targeted mass spectrometry routines such as neutral loss scanning are now facilitating detection of HNE-modified proteins in complex biological samples. This is important for characterizing the interactions of HNE with redox-sensitive cell signalling proteins and understanding how it may modulate their activities either physiologically or in disease. © 2013 The Author.