33 results for Chemo- And Multi-enzymatic Processes
in Aston University Research Archive
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Relationships between clustering, description length, and regularisation are pointed out, motivating the introduction of a cost function with a description length interpretation and the unusual and useful property of having its minimum approximated by the densest mode of a distribution. A simple inverse kinematics example is used to demonstrate that this property can be used to select and learn one branch of a multi-valued mapping. This property is also used to develop a method for setting regularisation parameters according to the scale on which structure is exhibited in the training data. The regularisation technique is demonstrated on two real data sets, a classification problem and a regression problem.
Abstract:
The authors investigate channel incentives as extra-contractual governance processes that maintain and extend marketing channel relationships. More specifically, instrumental incentives are monetary-based payments made by a manufacturer in a unilateral channel arrangement to motivate distributor compliance, while equity incentives are bilateral expectations of fair treatment that motivate both parties to continue to cooperate with one another. A model of the antecedents and performance consequences of channel incentives is conceptualized and tested on 314 marketing channel relationships using a structural equation modeling methodology. The findings support the conceptual model and suggest that unique facets of the channel relationship explain the type of incentive mechanism in use.
Abstract:
Single- and multi-core passive and active germanate and tellurite glass fibers represent a new class of fiber host for in-fiber photonic devices and applications in the mid-IR wavelength range, which are in increasing demand. Fiber Bragg grating (FBG) structures have proven to be among the most functional in-fiber devices and have been mass-produced in silicate fibers by UV inscription for almost countless laser and sensor applications. However, because of the strong UV absorption in germanate and tellurite fibers, FBG structures cannot be produced in them by UV inscription. In recent years femtosecond (fs) lasers have been developed for laser machining and microstructuring in a variety of glass fibers and planar substrates. A number of papers have reported the fabrication of FBGs and long-period gratings in optical fibers, and the photosensitivity mechanism, using 800 nm fs lasers. In this paper, we demonstrate for the first time the fabrication of FBG structures in passive and active single- and three-core germanate and tellurite glass fibers using 800 nm fs inscription and the phase mask technique. With an fs peak power intensity on the order of 10¹¹ W/cm², FBG spectra with 2nd- and 3rd-order resonances at 1540 nm and 1033 nm were observed in a single-core germanate glass fiber, and 2nd-order resonances between ~1694 nm and ~1677 nm with strengths up to 14 dB were observed in all three cores of three-core passive and active tellurite fibers. Thermal and strain properties of the FBGs made in these mid-IR glass fibers were characterized, showing an average temperature responsivity of ~20 pm/°C and a strain sensitivity of 1.219±0.003 pm/µε.
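As a rough illustration of the Bragg condition behind these resonances, the sketch below computes higher-order resonance wavelengths from an assumed effective index and grating period (both hypothetical values, not taken from the abstract) and applies the reported ~20 pm/°C temperature responsivity:

```python
def bragg_wavelength(n_eff, pitch_nm, order):
    """Bragg condition: lambda_B = 2 * n_eff * pitch / order."""
    return 2.0 * n_eff * pitch_nm / order

# Hypothetical values chosen so the 2nd-order resonance lands near 1540 nm;
# the actual fiber parameters are not given in the abstract.
n_eff = 1.90       # assumed effective index of the germanate fiber core
pitch = 810.5      # nm, assumed grating period

lam2 = bragg_wavelength(n_eff, pitch, 2)   # 2nd-order resonance, ~1540 nm
lam3 = bragg_wavelength(n_eff, pitch, 3)   # 3rd-order resonance, ~1027 nm

def temperature_shifted(lam_nm, delta_t_c, responsivity_pm_per_c=20.0):
    """Shift a resonance using the reported ~20 pm/degC responsivity."""
    return lam_nm + delta_t_c * responsivity_pm_per_c * 1e-3  # pm -> nm
```

Under this simple model the 3rd-order wavelength is exactly two thirds of the 2nd-order one; the small discrepancy from the measured values (1033 nm vs. ~1027 nm here) would come from the wavelength dependence of n_eff.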
Abstract:
Benchmarking techniques have evolved over the years since Xerox's pioneering visits to Japan in the late 1970s, and the focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact, the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fitted by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience, and is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to identifying clusters in the data. As a consequence, the minimum of this cost function approximates the densest mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data, obtained by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
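The mean-versus-mode distinction can be illustrated with a toy target set containing one dense cluster and an outlying branch. The density-based cost below is only a stand-in for the Naive Description Length cost (a negated Gaussian-kernel density score, not the thesis's actual formulation), but it shows the key property: its minimum sits at the densest mode rather than at the mean.

```python
import math

# Toy targets: a dense cluster near 1.0 and a single outlying branch at -1.0.
targets = [0.9, 1.0, 1.05, 1.1, -1.0]

def sum_of_squares(y):
    """Conventional cost: minimised at the mean of the targets."""
    return sum((y - t) ** 2 for t in targets)

def density_cost(y, sigma=0.2):
    """Mode-seeking stand-in cost (negated Gaussian-kernel density):
    minimised where the targets are densest, not at their mean."""
    return -sum(math.exp(-((y - t) ** 2) / (2.0 * sigma ** 2)) for t in targets)

grid = [i / 1000.0 for i in range(-2000, 2001)]
mean_fit = min(grid, key=sum_of_squares)   # lands near the mean, ~0.61
mode_fit = min(grid, key=density_cost)     # lands in the dense cluster, ~1.0
```

In a multi-valued mapping (such as inverse kinematics, where several joint configurations reach the same endpoint), the mean of the branches is usually not itself a valid solution, which is why a mode-seeking minimum can select one branch where a sum-of-squares minimum cannot.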
Abstract:
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log-log) slopes from -2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels.
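The spectral-slope parameterisation used here is straightforward to compute: fit a straight line to log amplitude versus log spatial frequency. The sketch below (illustrative, using synthetic power-law spectra rather than real images) recovers slope -1 for a focused 1/f spectrum and -2 for a strongly blurred one:

```python
import math

def loglog_slope(freqs, amps):
    """Least-squares slope of log(amplitude) vs log(frequency)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(a) for a in amps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

freqs = [2.0 ** k for k in range(1, 9)]     # spatial frequencies in octaves
focused = [1.0 / f for f in freqs]          # 1/f spectrum: slope -1 (focused)
blurred = [1.0 / f ** 2 for f in freqs]     # slope -2 (strongly blurred)
```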
Abstract:
In 1974 Dr D M Bramwell published his research work at the University of Aston, a part of which was the establishment of an elemental work study data base covering drainage construction. The Transport and Road Research Laboratory decided to extend that work as part of their continuing research programme into the design and construction of buried pipelines by placing a research contract with Bryant Construction. This research may be considered under two broad categories. In the first, site studies were undertaken to validate and extend the data base. The studies showed good agreement with the existing data, with the exception of the excavation, trench shoring and pipelaying data, which was amended to incorporate new construction plant and methods. An interactive on-line computer system for drainage estimating was developed. This system stores the elemental data, synthesizes the standard time of each drainage operation, and is used to determine the required resources and construction method of the total drainage activity. The remainder of the research was into the general topic of construction efficiency. An on-line command-driven computer system was produced. This system uses a stochastic simulation technique, based on distributions of site efficiency measurements, to evaluate the effects of varying performance levels. The analysis of this performance data quantifies the variability inherent in construction and demonstrates how some of this variability can be reconciled by considering the characteristics of a contract. A long-term trend of decreasing efficiency with contract duration was also identified. The results obtained from the simulation suite were compared to site records collected from current contracts. This showed that the approach gives comparable answers, although these are greatly affected by the site performance parameters.
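A stochastic simulation of this kind can be sketched by resampling measured site efficiency factors and converting standard times into likely actual durations. The numbers below are illustrative, not taken from the thesis data base:

```python
import random

def simulate_mean_duration(standard_hours, efficiency_samples, runs=10000, seed=1):
    """Stochastic simulation sketch: actual duration = standard time / efficiency,
    with efficiency resampled from observed site measurements."""
    rng = random.Random(seed)
    draws = [standard_hours / rng.choice(efficiency_samples) for _ in range(runs)]
    return sum(draws) / runs

# Illustrative site efficiency measurements (1.0 = standard performance):
site_efficiencies = [0.7, 0.8, 0.85, 0.9, 1.0, 1.1]
expected_hours = simulate_mean_duration(100.0, site_efficiencies)  # ~115 hours
```

Because duration is the reciprocal of efficiency, the expected duration exceeds the duration at mean efficiency, which is one way such a simulation exposes the cost of variability on site.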
Abstract:
Social groups form an important part of our daily lives. Within these groups pressures exist which encourage the individual to comply with the group's viewpoint. This influence, which creates social conformity, is known as 'majority influence' and is the dominant process of social control. However, there also exists a 'minority influence', which emerges from a small subsection of the group and is a dynamic force for social change. Minority Influence and Innovation seeks to identify the conditions under which minority influence can prevail, to change established norms, stimulate original thinking and help us to see the world in new ways. With chapters written by a range of expert contributors, areas of discussion include: • processes and theoretical issues • the factors which affect majority and minority influence • interactions between majority and minority group members. This book offers a thorough evaluation of the most important current developments within this field and presents consideration of the issues that will be at the forefront of future research. As such it will be of interest to theorists and practitioners working in social psychology.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Guest editorial. Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, and data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial board of several international journals and is a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals, including Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector. This special issue focuses on holistic, applied research on performance measurement in energy sector management, publishing relevant applied research to bridge the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as the outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means of expressing their preferences in verbal, numerical, and visual form. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomics, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisition, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in a firm's efficiency following an acquisition, and only weak evidence for efficiency improvements caused by the new shareholder. The author also discovers that parent companies appear not to influence a subsidiary's efficiency positively, and the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
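In general, the DEA scores discussed throughout this issue come from solving a linear program per decision-making unit; in the special single-input, single-output case the score reduces to each unit's output/input ratio relative to the best ratio in the sample. A minimal sketch with made-up generation data:

```python
# Illustrative single-input, single-output data for three generating units
# (made-up numbers, not from any paper in this issue): (input, output).
units = {
    "A": (10.0, 8.0),
    "B": (12.0, 12.0),
    "C": (20.0, 14.0),
}

# With one input and one output, the DEA (CCR) efficiency score reduces to
# each unit's output/input ratio divided by the best ratio in the sample.
ratios = {name: out / inp for name, (inp, out) in units.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}
# Unit B defines the frontier (efficiency 1.0); A and C are inefficient.
```

Multi-input, multi-output models such as those in the papers above require the full linear-programming formulation, which chooses the input and output weights most favourable to each unit.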
Abstract:
A new bridge technique for the measurement of the dielectric absorption of liquids and solutions at microwave frequencies has been described and its accuracy assessed. The dielectric data of the systems studied are discussed in terms of the relaxation processes contributing to the dielectric absorption and the apparent dipole moments. Pyridine, thiophen and furan in solution have a distribution of relaxation times, which may be attributed to the small size of the solute molecules relative to the solvent. Larger rigid molecules in solution were characterized by a single relaxation time, as would be anticipated from theory. The dielectric data of toluene, ethyl-, isopropyl- and t-butylbenzene as pure liquids and in solution were described by two relaxation times, one identified with molecular re-orientation and a shorter relaxation time. The subsequent work investigated possible explanations of this short relaxation process. Comparable short relaxation times were obtained from the analysis of the dielectric data of solutions of p-chloro- and p-bromotoluene below 40°C, o- and m-xylene at 25°C, and 1-methyl- and 2-methylnaphthalene at 50°C. Rigid molecules of similar shapes and sizes were characterized by a single relaxation time identified with molecular re-orientation. Contributions from a long relaxation process attributed to dipolar origins were reported for solutions of nitrobenzene, benzonitrile and p-nitrotoluene. A short relaxation process of possible dipolar origin contributed to the dielectric absorption of 4-methyl- and 4-t-butylpyridine in cyclohexane at 25°C. It was concluded that the most plausible explanation of the short relaxation process of the alkyl-aryl hydrocarbons studied is intramolecular relaxation about the alkyl-aryl bond.
Finally, the mean relaxation times of some phenyl-substituted compounds were investigated to evaluate any shortening due to contributions from the process of relaxation about the phenyl-central atom bond. The relaxation times of triphenylsilane and phenyltrimethylsilane were found to be significantly short.
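The single-relaxation-time behaviour described above follows the standard Debye model, in which the dielectric loss peaks at the angular frequency where ωτ = 1. A minimal sketch (with assumed permittivity values, not those measured in the thesis):

```python
def debye_loss(omega, tau, eps_static=2.4, eps_inf=2.0):
    """Debye dielectric loss: eps'' = (eps_s - eps_inf) * wt / (1 + wt^2),
    where wt = omega * tau. Permittivity values here are assumed."""
    wt = omega * tau
    return (eps_static - eps_inf) * wt / (1.0 + wt * wt)

# The loss maximum occurs at omega = 1/tau and equals (eps_s - eps_inf) / 2:
peak = debye_loss(1.0, 1.0)   # omega * tau = 1  ->  0.2 for these values
```

A distribution of relaxation times, as found for the small solute molecules above, broadens and lowers this loss peak relative to the single-time Debye curve.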