45 results for design-based inference
in the Aston University Research Archive
Abstract:
Inference algorithms based on evolving interactions between replicated solutions are introduced and analyzed on a prototypical NP-hard problem: the capacity of the binary Ising perceptron. The efficiency of the algorithm is examined numerically against that of the parallel tempering algorithm, showing improved performance in terms of the results obtained, computing requirements and simplicity of implementation. © 2013 American Physical Society.
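The underlying benchmark can be stated compactly: find binary weights w in {-1, +1}^N that satisfy P random classification constraints, with the capacity given by the largest ratio P/N for which solutions typically exist. The sketch below is only a hedged illustration of that constraint-satisfaction problem using a generic simulated-annealing search; it is not the replica-interaction algorithm introduced in the paper, and all parameter values are illustrative.

```python
import numpy as np

def count_errors(w, patterns, labels):
    """Number of patterns violating sign(w . x) == label."""
    return int(np.sum(np.sign(patterns @ w) != labels))

def anneal_binary_perceptron(N=101, alpha=0.7, steps=20000, seed=0):
    """Toy simulated-annealing search for binary weights satisfying
    P = alpha * N random +/-1 classification constraints."""
    rng = np.random.default_rng(seed)
    P = int(alpha * N)
    patterns = rng.choice([-1.0, 1.0], size=(P, N))
    labels = rng.choice([-1.0, 1.0], size=P)
    w = rng.choice([-1.0, 1.0], size=N)
    errors = count_errors(w, patterns, labels)
    for t in range(steps):
        beta = 0.1 + 5.0 * t / steps            # simple annealing schedule
        i = rng.integers(N)
        w[i] *= -1.0                            # propose a single spin flip
        new_errors = count_errors(w, patterns, labels)
        if new_errors <= errors or rng.random() < np.exp(-beta * (new_errors - errors)):
            errors = new_errors                 # accept the flip
        else:
            w[i] *= -1.0                        # reject: undo the flip
        if errors == 0:
            break
    return w, errors

if __name__ == "__main__":
    _, remaining = anneal_binary_perceptron()
    print("unsatisfied constraints:", remaining)
```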
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in the design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while treating the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration, in LRFD format, for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was performed using a number of field pavement sections with well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for a range of target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. They also make clear that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
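As a hedged illustration of the second component only: for independent normal variables and a linear limit state g = R - S, the first-order reliability method reduces to the closed-form Hasofer-Lind index. The limit state and moments below are hypothetical and are not the paper's calibrated fatigue-cracking model.

```python
import math

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability for the linear limit state g = R - S with
    independent normal resistance R and load effect S (closed-form case)."""
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)  # Hasofer-Lind index
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))          # Phi(-beta)
    return beta, pf

# Hypothetical moments, for illustration only.
beta, pf = form_linear(mu_r=12.0, sigma_r=2.0, mu_s=8.0, sigma_s=1.5)
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.2e}")
```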
Abstract:
This thesis deals with the problem of Information Systems design for Corporate Management. It shows that the results of applying current approaches to Management Information Systems and Corporate Modelling fully justify a fresh look at the problem. The thesis develops an approach to design based on Cybernetic principles and theories. It looks at Management as an informational process and discusses the relevance of regulation theory to its practice. The work proceeds around the concept of change and its effects on the organization's stability and survival. The idea of looking at organizations as viable systems is discussed, and a design to enhance survival capacity is developed. Taking Ashby's theory of adaptation and developments in ultra-stability as a theoretical framework, and considering the conditions for learning and foresight, it deduces that a design should include three basic components: a dynamic model of the organization-environment relationships; a method to spot significant changes in the values of the essential variables and in a certain set of parameters; and a Controller able to conceive and change the other two elements and to make choices among alternative policies. Further consideration of the conditions for rapid adaptation in organisms composed of many parts, together with the law of Requisite Variety, establishes that successful adaptive behaviour requires a certain functional organization. Beer's model of viable organizations is related to Ashby's theory of adaptation and regulation. The use of the ultra-stable system as an abstract unit of analysis permits the development of a rigorous taxonomy of change: it starts by distinguishing between change within behaviour and change of behaviour, and completes the classification with organizational change. These changes are related to the logical categories of learning, connecting the topic of Information Systems design with that of organizational learning.
Abstract:
A novel method of fiber Bragg grating design based on tailored group delay is presented. The method leads to designs that are superior to the previously reported results. © OSA 2012.
Abstract:
Very large spatially referenced datasets, for example those derived from satellite-based sensors that sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood estimation is used. Although the storage requirements for the data scale only linearly with the number of observations, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. Recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the dual-core processors found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic datasets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake dataset.
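A hedged sketch of the kind of likelihood approximation being parallelised: in a Vecchia-style factorisation the joint Gaussian log-likelihood is replaced by a product of conditionals, each conditioning only on a small set of previously ordered neighbours, so every term is a cheap low-dimensional Gaussian conditional and the terms can be evaluated in parallel. The kernel, ordering, and neighbour scheme below are illustrative choices, not those of the paper.

```python
import numpy as np

def rbf(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def vecchia_loglik(x, y, m=10, noise=0.1, **kern):
    """Vecchia-style approximation: each observation is conditioned only on
    its m predecessors in the ordering, so the loop parallelises trivially."""
    total = 0.0
    for i in range(len(y)):
        lo = max(0, i - m)
        xi, xc, yc = x[i:i + 1], x[lo:i], y[lo:i]
        var = rbf(xi, xi, **kern)[0, 0] + noise
        mean = 0.0
        if len(xc) > 0:
            K_cc = rbf(xc, xc, **kern) + noise * np.eye(len(xc))
            k_ic = rbf(xi, xc, **kern)[0]          # cross-covariances
            sol = np.linalg.solve(K_cc, k_ic)      # K_cc^{-1} k_ci
            mean = sol @ yc
            var -= k_ic @ sol
        total += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return total

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)
print("approximate log-likelihood:", round(vecchia_loglik(x, y), 2))
```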
Abstract:
This thesis presents a detailed, experiment-based study of the generation of ultrashort optical pulses from diode lasers. Simple and cost-effective techniques were used to generate high-power, high-quality short optical pulses in various wavelength windows. The major achievements presented in the thesis are summarised as follows. High-power pulse generation is one of the major topics discussed in the thesis. Although gain switching is the simplest way to generate ultrashort pulses, it proves quite effective for delivering high-energy pulses, provided that pumping pulses with an extremely fast rise time and sufficiently high amplitude are applied to specially designed pulse generators. In the experiment on a grating-coupled surface emitting laser (GCSEL), peak power as high as 1 W was achieved even when the spectral bandwidth was controlled to within 0.2 nm. In another experiment, violet picosecond pulses with peak power as high as 7 W were achieved when intense electrical pulses were applied on top of an optimised DC bias to pump an InGaN violet diode laser. The physical mechanism of this phenomenon, as we consider it, may be attributed to the self-organised quantum-dot structure in the laser. Control of pulse quality, including spectral quality and temporal profile, is an important issue for high-power pulse generation. The ways of controlling pulse quality described in the thesis are also based on simple and effective techniques. For instance, the GCSEL used in our experiments has a specially designed air-grating structure for out-coupling of optical signals; a tiny flat aluminium mirror placed close to the grating section resulted in a wavelength tuning range of over 100 nm and a best side-band suppression ratio of 40 dB. Self-seeding, an effective technique for spectral control of pulsed lasers, was demonstrated for the first time in a violet diode laser. In addition, control of the temporal profile of the pulse is demonstrated in an overdriven DFB laser, where wavelength-tuneable fibre Bragg gratings were used to tailor the large energy tail of the high-power pulse. The whole system was compact and robust. The ultimate purpose of our study is to design a new family of compact ultrafast diode lasers, and some practical ideas for laser design based on gain-switched and Q-switched devices are provided at the end.
Abstract:
Notwithstanding the high demand for metal powder in automotive and high-tech applications, there are still many unclear aspects of the production process. Only recently has supercomputer performance made numerical investigation of such phenomena possible. This thesis focuses on the modelling aspects of primary and secondary atomization. Initially, a two-dimensional analysis is carried out to investigate the influence of flow parameters (principally reservoir pressure and gas temperature) and nozzle geometry on the final powder yield. Among the different types, close-coupled atomizers have the best performance in terms of cost and narrow size distribution. An isentropic contoured nozzle is introduced to minimize the gas flow losses through shock cells: the results demonstrate that it outperformed the standard converging-diverging slit nozzle. Furthermore, the use of hot gas gave a promising outcome: the powder size distribution is narrowed and gas consumption reduced. In the second part of the thesis, the interaction of liquid metal and high-speed gas near the feeding-tube exit was studied. Both axisymmetric and non-axisymmetric geometries were simulated using a 3D approach. The filming mechanism was detected only for very small metal flow rates (typically obtained in laboratory-scale atomizers). When the melt flow increased, the liquid core overcame the adverse gas flow and entered the high-speed wake directly: in this case the disruption is driven by sinusoidal surface waves. The process is characterized by fluctuating volumes of liquid entering the domain, monitored only as a time-averaged rate: it is far from the industrial concepts of robustness and capability. The non-axisymmetric geometry promoted the splitting of the initial stream into four cores, smaller in diameter and easier to atomize. Finally, a new atomization design based on the lessons learned from the previous simulations is presented.
Abstract:
This study is primarily concerned with the problem of brake squeal in disc brakes using moulded organic disc pads. Moulded organic friction materials are complex composites and, because of this complexity, it was thought that they are unlikely to be of uniform composition. Variation in composition would, under certain conditions of the braking system, cause slight changes in its vibrational characteristics, thus causing resonance in the high audio-frequency range. Dynamic mechanical properties appear to be the parameters most likely to be related to a given composition's tendency to promote squeal. Since it was necessary to test under service conditions, a review was made of all the available commercial test instruments, but as none were suitable a new instrument had to be designed and developed. The final instrument design, based on longitudinal resonance, enabled modulus and damping to be determined over a wide range of temperatures and frequencies. This apparatus has commercial value since it is not restricted to friction-material testing. Both used and unused pads were tested and, although the cause of brake squeal was not definitively established, the results enabled the formulation of a tentative theory of the possible conditions for brake squeal. The presence of a temperature of minimum damping was indicated, which may be of use to brake design engineers. Some auxiliary testing was also performed to establish the effects of water, oil and brake fluid, and to determine the effect of the various components of friction materials.
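For context, the standard textbook relations for a free-free bar in longitudinal resonance give the modulus from the fundamental frequency, E = rho * (2*L*f1)^2, and a loss factor from the half-power bandwidth, eta ~ delta_f / f1. The sketch below applies these relations to hypothetical measurements; the thesis's actual instrument, specimen geometry, and calibration may differ.

```python
def longitudinal_resonance_properties(length_m, density_kg_m3, f1_hz, bandwidth_hz):
    """Estimate Young's modulus and loss factor of a free-free bar from its
    fundamental longitudinal resonance and the half-power bandwidth."""
    modulus_pa = density_kg_m3 * (2.0 * length_m * f1_hz) ** 2
    loss_factor = bandwidth_hz / f1_hz
    return modulus_pa, loss_factor

# Hypothetical friction-material specimen, for illustration only.
E, eta = longitudinal_resonance_properties(
    length_m=0.10, density_kg_m3=2500.0, f1_hz=8000.0, bandwidth_hz=240.0)
print(f"E = {E / 1e9:.2f} GPa, loss factor = {eta:.3f}")
```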
Abstract:
For micro gas turbines (MGTs) of around 1 kW or less, a commercially suitable recuperator must be used to produce a thermal efficiency suitable for UK Domestic Combined Heat and Power (DCHP). This paper uses computational fluid dynamics (CFD) to investigate a recuperator design based on a helically coiled pipe-in-pipe heat exchanger which utilises industry-standard stock materials and manufacturing techniques. A suitable mesh strategy was established by geometrically modelling separate boundary-layer volumes to satisfy y+ near-wall conditions; a higher mesh density was then used to resolve the core flow. A coiled pipe-in-pipe recuperator solution for a 1 kW MGT DCHP unit was established within a volume envelope suitable for a domestic wall-hung boiler. Using a low MGT pressure ratio (necessitated by the use of a turbocharger platform with oil-cooled journal bearings) meant the unit size was larger than anticipated; raising the MGT pressure ratio from 2.15 to 2.5 could significantly reduce the recuperator volume. Dimensional reasoning confirmed the existence of optimum pipe-diameter combinations for minimum pressure drop. Maximum heat exchanger effectiveness was achieved using an optimum, minimum-pressure-drop pipe combination with a long pipe length, rather than a high-pressure-drop pipe combination with a shorter pipe length. © 2011 Elsevier Ltd. All rights reserved.
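The reported trade-off, higher effectiveness from a longer, low-pressure-drop pipe combination, follows the familiar effectiveness-NTU behaviour of a counter-flow exchanger, where effectiveness rises monotonically with NTU = UA / C_min. The sketch below uses made-up capacity rates and UA values purely to illustrate that relation; it is not the paper's CFD model.

```python
import math

def counterflow_effectiveness(ua_w_per_k, c_hot_w_per_k, c_cold_w_per_k):
    """Effectiveness-NTU relation for a counter-flow heat exchanger."""
    c_min, c_max = sorted((c_hot_w_per_k, c_cold_w_per_k))
    ntu = ua_w_per_k / c_min
    cr = c_min / c_max
    if abs(cr - 1.0) < 1e-9:                    # balanced special case
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Longer pipe -> larger area -> larger UA -> higher effectiveness (illustrative numbers).
for ua in (15.0, 30.0, 60.0):
    eff = counterflow_effectiveness(ua, 22.0, 20.0)
    print(f"UA = {ua:4.0f} W/K -> effectiveness = {eff:.3f}")
```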
Abstract:
Heterogeneous datasets arise naturally in most applications owing to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen so that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
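A hedged sketch of the sequential, reduced-rank idea: represent the latent field through m projection (inducing) points, so each new observation updates an m-dimensional Gaussian posterior over basis weights with a rank-one formula and no high-dimensional integral. This is a generic subset-of-regressors-style illustration; the gptk library's actual API, and the paper's exact projected-process formulation and sensor-model handling, are not reproduced here.

```python
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

class SequentialProjectedGP:
    """Reduced-rank GP f(x) ~ k_m(x)^T w with prior w ~ N(0, K_mm^{-1}),
    updated one observation at a time via rank-one Bayesian regression."""

    def __init__(self, inducing, noise=0.05, **kern):
        self.z, self.noise, self.kern = inducing, noise, kern
        K_mm = rbf(inducing, inducing, **kern) + 1e-8 * np.eye(len(inducing))
        self.S = np.linalg.inv(K_mm)        # prior covariance of the weights
        self.mu = np.zeros(len(inducing))   # prior mean of the weights

    def _phi(self, x):
        return rbf(np.atleast_1d(x), self.z, **self.kern)[0]

    def update(self, x, y):
        phi = self._phi(x)
        s_phi = self.S @ phi
        denom = self.noise + phi @ s_phi
        self.mu = self.mu + s_phi * (y - phi @ self.mu) / denom
        self.S = self.S - np.outer(s_phi, s_phi) / denom

    def predict(self, x):
        phi = self._phi(x)
        return phi @ self.mu, phi @ self.S @ phi + self.noise

rng = np.random.default_rng(0)
gp = SequentialProjectedGP(inducing=np.linspace(0.0, 10.0, 15), noise=0.05)
for x in rng.uniform(0.0, 10.0, 300):       # stream observations one at a time
    gp.update(x, np.sin(x) + 0.2 * rng.standard_normal())
print("prediction at x=3:", tuple(round(float(v), 3) for v in gp.predict(3.0)))
```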
Abstract:
The purpose of this research is to propose a procurement system that works across disciplines and shares retrieved information with the relevant parties, so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how to analyze data with an agent-based procurement system (APS) in order to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers, and evaluating the performance of suppliers against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollar on the purchase of raw materials and components. Efficient, highly accurate data collection is one of the key success factors for quality procurement, which means purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study on a manufacturing small and medium-sized enterprise (SME) was conducted. The APS supports data and information analysis techniques to facilitate decision making, so that the agents can improve the efficiency of negotiation and supplier evaluation, saving time and cost.
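The supplier-evaluation step lends itself to a simple multi-criteria weighted-score model; the criteria, weights, and scores below are entirely hypothetical, since the paper's specific mathematical model is not detailed in the abstract.

```python
def score_suppliers(suppliers, weights):
    """Rank suppliers by a weighted sum of normalised criterion scores (0-1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    ranked = sorted(
        ((sum(weights[c] * s[c] for c in weights), name) for name, s in suppliers.items()),
        reverse=True,
    )
    return [(name, round(total, 3)) for total, name in ranked]

# Hypothetical criteria and scores, for illustration only.
weights = {"price": 0.4, "quality": 0.3, "delivery": 0.2, "service": 0.1}
suppliers = {
    "Supplier A": {"price": 0.8, "quality": 0.7, "delivery": 0.9, "service": 0.6},
    "Supplier B": {"price": 0.6, "quality": 0.9, "delivery": 0.7, "service": 0.8},
}
print(score_suppliers(suppliers, weights))
```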
Abstract:
To meet the changing needs of customers and to survive in an increasingly globalised and competitive environment, companies need to equip themselves with intelligent tools, thereby enabling managerial levels to make better tactical decisions. However, the implementation of an intelligent system is always a challenge in Small and Medium-sized Enterprises (SMEs). Therefore, a new and simple approach with a 'process rethinking' ability is proposed to generate ongoing process improvements over time. In this paper, a roadmap for the development of an agent-based information system is described. A case example is also provided to show how the system can assist non-specialists, for example managers and engineers, to make the right decisions for continual process improvement. Copyright © 2006 Inderscience Enterprises Ltd.
Abstract:
The role of beneficiaries in the humanitarian supply chain is highlighted in the imperative to meet their needs but disputed in terms of their actual decision-making and purchasing power. This paper discusses the use of a beneficiary-focused, community-based approach in the case of a post-crisis housing reconstruction programme. In the community-based approach, beneficiaries become active members of the humanitarian supply chain. Implications of this community-based approach are discussed in the light of supply chain design and aid effectiveness. © 2010 Taylor & Francis.