13 results for sub-solutions and super-solutions
in Aston University Research Archive
Abstract:
The use of the multiple indicators, multiple causes model to operationalize formative variables (the formative MIMIC model) is advocated in the methodological literature. Yet, contrary to popular belief, the formative MIMIC model does not provide a valid method of integrating formative variables into empirical studies, and we recommend discarding it. Our arguments rest on the following observations. First, much of the formative variable literature appears to conceptualize a causal structure between the formative variable and its indicators which can be tested or estimated. We demonstrate that this assumption is illogical, that a formative variable is simply a researcher-defined composite of sub-dimensions, and that such tests and estimates are therefore unnecessary. Second, despite this, researchers often use the formative MIMIC model as a means to include formative variables in their models and to estimate the magnitude of linkages between formative variables and their indicators. However, the formative MIMIC model cannot provide this information, since it is simply a model in which a common factor is predicted by some exogenous variables; the model does not actually incorporate a formative variable. Empirical results from such studies need reassessing, since their interpretation may lead to inaccurate theoretical insights and the development of untested recommendations to managers. Finally, the use of the formative MIMIC model can foster fuzzy conceptualizations of variables, particularly since it can erroneously encourage the view that a single focal variable is measured with both formative and reflective indicators. We explain these interlinked arguments in more detail and provide a set of recommendations for researchers to consider when dealing with formative variables.
Abstract:
Editorial: The contributions to this special issue of the International Journal of Technology Management are all based on selected papers presented at the European Conference on Management of Technology, held at Aston University, Birmingham, UK in June 1995. The conference was held on behalf of the International Association for Management of Technology (IAMOT) and was the first of the association's major conferences to be held outside North America. Its overall theme was 'Technological Innovation and Global Challenges'. Altogether, more than 130 papers were presented within four sub-themes and twenty-seven topic sessions. This special issue draws on papers from five different topic sessions: 'Small firm linkages'; 'The global company'; 'New technology based firms'; 'Financing innovation'; and 'Technology and development'. Together they cover a wide range of issues around the common question of accessing resources for innovation in small and medium-sized enterprises. They present a global perspective on this important subject, with authors from the Netherlands, Canada, the USA, Ireland, France, Finland, Brazil and the UK. A wide range of subjects is covered, including the move away from public support for innovation, the role of alliances and networks, linkages to larger enterprises, and the social implications of small-enterprise innovation in developing countries.
Abstract:
The subject of this thesis is the n-tuple network (RAMnet). The major advantages of RAMnets are their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance, which gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification, based on memorisation of random features, can be a powerful alternative to slower, cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases in which they do so are relatively easy to determine with the analytical tools we provide.
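As a concrete illustration of the binary n-tuple scheme discussed in this abstract, the following is a minimal toy sketch (the class name, the random tuple mapping and all parameters are illustrative assumptions, not the thesis implementation): each class owns a bank of 2^n-cell RAMs addressed by randomly chosen tuples of input bits; training memorises addresses, and classification counts matching cells.

```python
import numpy as np

class BinaryRAMnet:
    """Toy binary n-tuple (RAMnet) classifier: one bank of 2**n-cell RAMs per
    class, addressed by randomly chosen tuples of input bits."""

    def __init__(self, n_bits, tuple_size, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n_bits)
        # Random partition of the input bits into tuples of size `tuple_size`.
        n_tuples = n_bits // tuple_size
        self.tuples = perm[: n_tuples * tuple_size].reshape(n_tuples, tuple_size)
        self.weights = 2 ** np.arange(tuple_size)  # turn tuple bits into a RAM address
        self.rams = np.zeros((n_classes, n_tuples, 2 ** tuple_size), dtype=np.uint8)

    def _addresses(self, x):
        # One RAM address per tuple, computed from the sampled input bits.
        return (x[self.tuples] * self.weights).sum(axis=1)

    def fit(self, X, y):
        # Training is pure memorisation: set the addressed cell in each RAM.
        for x, c in zip(X, y):
            self.rams[c, np.arange(len(self.tuples)), self._addresses(x)] = 1

    def predict(self, x):
        # Class score = number of RAMs whose addressed cell was set in training.
        scores = self.rams[:, np.arange(len(self.tuples)), self._addresses(x)].sum(axis=1)
        return int(scores.argmax())
```

Because training is memorisation rather than cost minimisation, the model is fast but, as the thesis argues, potentially sub-optimal for data where the random feature mapping discards discriminative structure.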
Abstract:
The aim of this study is to determine whether nonlinearities have affected purchasing power parity (PPP) since 1885. Using recent advances in the econometrics of structural change, we segment the sample according to the identified breaks and examine whether the PPP condition holds in each sub-sample and whether this involves linear or non-linear adjustment. Our results suggest that PPP holds during some sub-periods, although whether it holds, and whether the adjustment is linear or non-linear, depends primarily on the type of exchange rate regime in operation at any point in time.
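As a toy illustration of the sub-sample logic (not the paper's econometric procedure, which relies on formal structural-break and nonlinearity tests), one can split a real exchange rate series at candidate break dates and estimate an AR(1) persistence coefficient within each segment; values well below one suggest the mean reversion consistent with PPP, while values near one suggest it fails.

```python
import numpy as np

def ar1_coef(y):
    """OLS slope of y[t] on y[t-1] (intercept included): persistence of
    deviations. Well below 1 = mean reversion; near 1 = unit root."""
    x, z = y[:-1] - y[:-1].mean(), y[1:] - y[1:].mean()
    return float((x * z).sum() / (x * x).sum())

def subsample_persistence(series, breaks):
    """Split the series at the given break indices and report the AR(1)
    persistence coefficient for each resulting segment."""
    edges = [0, *breaks, len(series)]
    return [ar1_coef(series[a:b]) for a, b in zip(edges, edges[1:])]
```

On simulated data whose first regime is mean-reverting and whose second is a random walk, the two segment coefficients separate clearly, mimicking regime-dependent PPP behaviour.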
Abstract:
Magnesian limestone is a key construction component of many historic buildings that is under constant attack from environmental pollutants, notably oxides of sulfur delivered via acid rain, particulate sulfate and gaseous SO2 emissions. Hydrophobic surface coatings offer a potential route to protect existing stonework in cultural heritage sites; however, many available coatings act by blocking the stone microstructure, preventing it from 'breathing' and promoting mould growth and salt efflorescence. Here we report a conformal surface modification method using self-assembled monolayers of naturally sourced free fatty acids combined with sub-monolayer fluorinated alkyl silanes to generate hydrophobic (HP) and superhydrophobic (SHP) coatings on calcite. We demonstrate the efficacy of these HP and SHP surface coatings in increasing limestone resistance to sulfation, and thus in retarding gypsum formation, under SO2/H2O and model acid rain environments. SHP treatment of 19th-century stone from York Minster suppresses sulfuric acid permeation.
Abstract:
The thesis describes the work carried out to develop a prototype knowledge-based system, 'KBS-SETUPP', to generate process plans for the manufacture of seamless tubes. The work relates specifically to a plant in which hollows are made from solid billets using a rotary piercing process and then reduced to the required size and finished properties using the fixed-plug cold drawing process. The thesis first discusses various methods of tube production in order to give a general background to tube manufacture. A review of the automation of the process planning function is then presented in terms of its basic sub-tasks and techniques, and the suitability of a knowledge-based system is established. In the light of this review and a case study, the process planning problem is formulated in the domain of seamless tube manufacture, its basic sub-tasks are identified, and the capabilities and constraints of the available equipment in the specific plant are established. The task of collecting and collating process planning knowledge in seamless tube manufacture is discussed; this knowledge was mostly obtained from domain experts, analysis of existing manufacturing records specific to the plant, textbooks and applicable Standards. For the cold drawing mill, tube-drawing schedules have been rationalised to correspond with practice. The validation of such schedules has been achieved by computing the process parameters and then comparing these with the drawbench capacity to avoid over-loading. Since the existing models cannot be simulated in the computer program as such, a mathematical model has been proposed which estimates process parameters in close agreement with experimental values established by other researchers. To implement these concepts, the knowledge-based system 'KBS-SETUPP' has been developed on a personal computer using Turbo-Prolog. The system is capable of generating process plans and production schedules, and offers some additional capabilities to supplement process planning. The system-generated process plans have been compared with the actual plans of the company, and the results show that they are satisfactory and encouraging and that the system's capabilities are useful.
Abstract:
We propose a novel recursive-algorithm-based maximum a posteriori probability (MAP) detector for spectrally efficient coherent wavelength division multiplexing (CoWDM) systems, and investigate its performance in a 1-bit/s/Hz on-off keyed (OOK) system limited by optical signal-to-noise ratio. The proposed method decodes each sub-channel using the signal levels not only of that sub-channel but also of its adjacent sub-channels, and can therefore effectively compensate for deterministic inter-sub-channel crosstalk as well as inter-symbol interference arising from narrow-band filtering and chromatic dispersion (CD). Numerical simulation of a five-channel OOK-based CoWDM system at 10 Gbit/s per channel, using either direct or coherent detection, shows that the MAP decoder can eliminate the need for phase control of each optical carrier (which is otherwise required in a conventional CoWDM system) and greatly relaxes the spectral design of the demultiplexing filter at the receiver. It also significantly improves the back-to-back sensitivity and CD tolerance of the system.
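A heavily simplified sketch of the neighbour-aware decision rule reads as follows. This is not the paper's recursive MAP algorithm: it assumes a hypothetical linear crosstalk model in which each sub-channel's received level is its own bit plus a known fraction `a` of its two neighbours' bits, and decodes each bit by exhaustively scoring a three-bit window against the three adjacent observations.

```python
import numpy as np
from itertools import product

def map_detect(y, a=0.3):
    """Decode each sub-channel bit from its own level and its two neighbours',
    assuming the toy crosstalk model y[k] = b[k] + a*(b[k-1] + b[k+1]) + noise
    (bits outside the array are taken to be 0)."""
    n = len(y)
    decoded = []
    for k in range(n):
        best_w, best_err = None, np.inf
        # Hypothesise a 3-bit window w = (b[k-1], b[k], b[k+1]); bits outside
        # the window are approximated as 0.
        for w in product((0, 1), repeat=3):
            err = 0.0
            for j in (-1, 0, 1):  # score against the three nearby observations
                if not 0 <= k + j < n:
                    continue
                left = w[j] if j >= 0 else 0            # neighbour b[k+j-1], if in window
                right = w[j + 2] if j + 2 <= 2 else 0   # neighbour b[k+j+1], if in window
                err += (y[k + j] - (w[j + 1] + a * (left + right))) ** 2
            if err < best_err:
                best_w, best_err = w, err
        decoded.append(best_w[1])
    return decoded
```

With noise-free levels generated from this model, the decoder recovers the transmitted bits exactly; the actual system replaces the exhaustive window search with the recursive algorithm and a proper noise model.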
Abstract:
The potential benefits of implementing Component-Based Development (CBD) methodologies in a globally distributed environment are many. Lessons from the aeronautics, automotive, electronics and computer hardware industries, in which Component-Based (CB) architectures have been successfully employed for setting up globally distributed design and production activities, have consistently shown that firms have managed to increase the rate of reused components and sub-assemblies, and to speed up the design and production process of new products.
Abstract:
The purpose of this paper is to delineate a green supply chain (GSC) performance measurement framework using an intra-organisational collaborative decision-making (CDM) approach. A fuzzy analytic network process (ANP)-based green balanced scorecard (GrBSc) has been used within the CDM approach to assist in arriving at a consistent, accurate and timely data flow across all cross-functional areas of a business. A green causal relationship is established and linked to the fuzzy ANP approach. The causal relationship involves organisational commitment, eco-design, GSC process, social performance and sustainable performance constructs. Sub-constructs and sub-sub-constructs are also identified and linked to the causal relationship to form a network. The fuzzy ANP approach suitably handles the vagueness of the linguistic information in the CDM approach. The CDM approach is implemented in a UK-based carpet-manufacturing firm. The performance measurement approach, in addition to traditional financial performance and accounting measures, aids firms' decision-making with regard to overall organisational goals. The implemented approach assists the firm in identifying further requirements for collaborative data across the supply chain and for information about customers and markets. Overall, the CDM-based GrBSc approach assists managers in deciding whether suppliers' performances meet industry and environmental standards with effective use of human resources. © 2013 Taylor & Francis.
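One elementary building block of the (fuzzy) ANP machinery is the derivation of local priorities from a pairwise comparison matrix. The sketch below shows that step only, with crisp judgements, and omits the fuzzification and supermatrix stages the paper's method uses.

```python
import numpy as np

def priorities(pairwise):
    """Local priorities of a positive reciprocal pairwise-comparison matrix,
    via its principal eigenvector (the classic eigenvector method)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, vals.real.argmax()].real)  # eigenvector of largest eigenvalue
    return v / v.sum()                            # normalise to sum to 1

def consistency_index(pairwise):
    """CI = (lambda_max - n) / (n - 1); zero for a perfectly consistent matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = np.linalg.eigvals(A).real.max()
    return (lam - n) / (n - 1)
```

For example, judging one criterion three times as important as another yields priorities of 0.75 and 0.25, and a consistency index near zero indicates coherent judgements.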
Abstract:
The anulus fibrosus (AF) of the intervertebral disc consists of concentric sheets of collagenous matrix that are synthesised during embryogenesis by aligned disc cells. This highly organised structure may be severely disrupted during disc degeneration and/or herniation. Cell scaffolds that incorporate topographical cues as contact guidance have been used successfully to promote the healing of injured tendons. We have therefore investigated the effects of topography on disc cell growth. We show that disc cells from the AF and nucleus pulposus (NP) behaved differently in monolayer culture on micro-grooved membranes of polycaprolactone (PCL). Both cell types aligned to and migrated along the membrane's micro-grooves and ridges, but AF cells were smaller (or less spread), more bipolar and better aligned to the micro-grooves than NP cells. In addition, AF cells were markedly more immunopositive for type I collagen, but less immunopositive for chondroitin-6-sulphated proteoglycans, than NP cells. There was no evidence of extracellular matrix (ECM) deposition. Disc cells cultured on non-grooved PCL did not show any preferential alignment at sub-confluence and did not differ in their pattern of immunopositivity from cells on grooved PCL. We conclude that substratum topography is effective in aligning disc cell growth and may be useful in tissue engineering for the AF. However, there is a need to optimise cell sources and/or environmental conditions (e.g. mechanical influences) to promote the synthesis of an aligned ECM.
Abstract:
The effects of temperature on hydrogen-assisted fatigue crack propagation are investigated in three steels in the low-to-medium strength range: a low-alloy structural steel, a super duplex stainless steel, and a super ferritic stainless steel. Significant enhancement of crack growth rates is observed in hydrogen gas at atmospheric pressure in all three materials. Failure occurs via a mechanism of time-independent, transgranular, cyclic cleavage over a frequency range of 0.1-5 Hz. Increasing the temperature in hydrogen up to 80°C markedly reduces the degree of embrittlement in the structural and super ferritic steels. No such effect is observed in the duplex stainless steel until the temperature exceeds 120°C. The temperature response may be understood by considering the interaction between absorbed hydrogen and microstructural traps, which are generated in the zone of intense plastic deformation ahead of the fatigue crack tip. © 1992.
Abstract:
When designing a practical swarm robotics system, self-organized task allocation is key to making the best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm is established. It is shown empirically that an algorithm which meets these conditions can, by causing emergent cooperation between robots, achieve consistently high performance under a wide range of settings without the need for communication. © 2013 IEEE.
Abstract:
Liposomes not only offer the ability to enhance drug delivery, but can also effectively act as vaccine delivery systems and adjuvants. Their flexibility in size, charge, bilayer rigidity and composition allows for targeted antigen delivery via a range of administration routes. In the development of liposomal adjuvants, the type of immune response promoted has been linked to their physico-chemical characteristics, with the size and charge of the liposomal particles impacting on liposome biodistribution, exposure in the lymph nodes and recruitment of the innate immune system. The addition of immunostimulatory agents can further potentiate their immunogenic properties. Here, we outline the attributes that should be considered in the design and manufacture of liposomal adjuvants for the delivery of sub-unit and nucleic acid-based vaccines.