12 results for Practical problems

in Aston University Research Archive


Relevance: 60.00%

Abstract:

Case studies in copper-alloy rolling mill companies showed that existing planning systems suffer from numerous shortcomings. Where computerised systems are in use, they tend simply to emulate older manual systems and still rely heavily on modification by experienced planners on the shopfloor. As the size and number of orders increase, the planners' task of optimising the manufacturing objectives while keeping within the production constraints becomes extremely complicated, because of the number of options for mixing or splitting the orders into batches. This thesis develops a modular approach to computerisation of the production management and planning functions. The full functional specification of each module is discussed, together with the practical problems associated with their phased implementation. By adapting the Distributed Bill of Material concept from Material Requirements Planning (MRP) philosophy, the production routes generated by the planning system are broken down to identify the rolling stages required. Then, to optimise the use of material at each rolling stage, the system generates an optimal cutting pattern using a new algorithm that produces practical solutions to the cutting stock problem. It is shown that the proposed system can be accommodated on a micro-computer, which brings it within the reach of typical companies in the copper-alloy rolling industry, where profit margins are traditionally low and the cost of widespread use of mainframe computers would be prohibitive.
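The abstract does not reproduce the cutting-pattern algorithm itself. Purely as a point of reference, the sketch below shows a classic first-fit decreasing heuristic for the one-dimensional cutting stock problem that the thesis addresses; the widths, coil size and function names are hypothetical, and the thesis's own algorithm is a different, more refined method.

# Illustrative sketch only: a first-fit decreasing heuristic for the
# one-dimensional cutting stock problem. The thesis's algorithm is not
# reproduced in the abstract; all names and figures here are hypothetical.

def first_fit_decreasing(order_widths, coil_width):
    """Assign ordered strip widths to coils of a fixed usable width."""
    patterns = []  # each pattern lists the widths cut from one coil
    for width in sorted(order_widths, reverse=True):
        for pattern in patterns:
            if sum(pattern) + width <= coil_width:
                pattern.append(width)
                break
        else:
            patterns.append([width])  # no coil has room: open a new one
    return patterns

if __name__ == "__main__":
    orders = [310, 230, 180, 120, 95, 95, 60]  # strip widths in mm (hypothetical)
    patterns = first_fit_decreasing(orders, coil_width=600)
    waste = sum(600 - sum(p) for p in patterns)
    print(f"{len(patterns)} coils used, total trim loss {waste} mm")

Heuristics of this kind give feasible rather than provably optimal patterns, which is the sense in which the thesis emphasises 'practical solutions' to the cutting stock problem.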

Relevance: 60.00%

Abstract:

This research concerns information systems and information systems development. The thesis describes an approach to information systems development called Multiview, a methodology which seeks to combine the strengths of a number of different, existing approaches in a coherent manner. Many of these approaches differ radically in their concepts, philosophy, assumptions, methods, techniques and tools. Three case studies are described, presenting Multiview 'in action'. The first is used mainly to expose the strengths and weaknesses of an early version of the approach; tools and techniques that aim to strengthen the approach are then described. Two further case studies illustrate the use of this second version of Multiview. Multiview is not put forward as an 'ideal methodology', and the case studies expose some of the difficulties and practical problems of information systems work and of using the methodology. A more contingency-based approach to information systems development is advocated, using Multiview as a framework rather than a prescriptive tool. Each information systems project, and each use of the framework, is unique, contingent on the particular problem situation. The skills of different analysts, the backgrounds of users and the situations in which they are constrained to work always have to be taken into account in any project. The realities of the situation will cause departure from the 'ideal methodology' in order to allow for the exigencies of the real world. Multiview can therefore be described as an approach used to explore the application area in order to develop an information system.

Relevance: 60.00%

Abstract:

This research sets out to assess whether the primary health care (PHC) system in rural Nigeria is effective, by testing the research hypothesis: 'PHC can be effective if and only if the Health Care Delivery System matches the attitudes and expectations of the Community'. The field surveys to accomplish this task were carried out in Ibo, Yoruba and Hausa rural communities. A variety of techniques were used as the research methodology, including questionnaires, interviews and personal observation of events in the rural communities. The thesis comprises three main parts. Part I traces the socio-cultural aspects of PHC in rural Nigeria and describes PHC management activities in Nigeria and the practical problems inherent in the system. Part II describes the various theoretical and practical research techniques used for the study and concentrates on the field work programme, the data analysis and the testing of the research hypothesis. Part III focuses on general strategies to improve the PHC system in Nigeria and make it more effective; the research contributions to knowledge and a summary of the main conclusions of the study are also highlighted in this part. Testing the research hypothesis leads to the conclusion that PHC in rural Nigeria is ineffective, as revealed in people's low opinion of the system and their dissatisfaction with PHC services. Many people expressed the view that they could not obtain health care services in time, at a cost they could afford and in a manner acceptable to them. Following these conclusions, some alternative ways of implementing PHC programmes in rural Nigeria are put forward to make the Nigerian PHC system more effective.

Relevance: 60.00%

Abstract:

The aim of this project was to develop the education work of an environmental pressure group. The research devised and implemented a project to produce multi-media teaching packs on the urban environment. Whilst this involved understanding environmental education, it was necessary to extend the research beyond this to include the various structural and dynamic constraints on change in the field. This presented a number of methodological difficulties, from the resolution of which a model of the research process involved in this project has been developed. It is argued that research oriented towards practical change requires the insights of an experienced practitioner to be combined with the rigours of controlled, systematic enquiry. Together these function as a model-building process encompassing intuition, induction and deduction. Model testing is carried out through repeated intervention in the field; an interplay between researcher and client thus ensues, such that the project develops in a mutually acceptable direction. In practice, this development will be both unpredictable and erratic. Although the conclusions reached here are based on a single case study, they address general methodological issues likely to be encountered in different field settings concerned with different practical problems.

Relevance: 60.00%

Abstract:

In order to study the structure and function of a protein, it is generally required that the protein in question is purified away from all others. For soluble proteins, this process is greatly aided by the lack of any restriction on the free and independent diffusion of individual protein particles in three dimensions. This is not the case for membrane proteins, as the membrane itself forms a continuum that joins the proteins within the membrane with one another. It is therefore essential that the membrane is disrupted in order to allow separation and hence purification of membrane proteins. In the present review, we examine recent advances in the methods employed to separate membrane proteins before purification. These approaches move away from solubilization methods based on the use of small surfactants, which have been shown to suffer from significant practical problems. Instead, the present review focuses on methods that stem from the field of nanotechnology and use a range of reagents that fragment the membrane into nanometre-scale particles containing the protein complete with the local membrane environment. In particular, we examine a method employing the amphipathic polymer poly(styrene-co-maleic acid), which is able to reversibly encapsulate the membrane protein in a 10 nm disc-like structure ideally suited to purification and further biochemical study.

Relevance: 60.00%

Abstract:

Pavement analysis and design for fatigue cracking involves a number of practical problems, such as material assessment/screening and performance prediction. A mechanics-aided method can answer these questions with satisfactory accuracy in a convenient way when it is appropriately implemented. This paper presents two techniques for implementing the pseudo J-integral based Paris' law to evaluate and predict fatigue cracking in asphalt mixtures and pavements. The first technique, quasi-elastic simulation, provides a rational and appropriate reference modulus for the pseudo analysis (i.e., viscoelastic to elastic conversion) by making use of a widely used material property: the dynamic modulus. The physical significance of the quasi-elastic simulation is clarified. Introduction of this technique facilitates the implementation of fracture mechanics models, as well as continuum damage mechanics models, to characterize fatigue cracking in asphalt pavements. The second technique, modeling the fracture coefficients of the pseudo J-integral based Paris' law, simplifies the prediction of fatigue cracking without the need to perform fatigue tests. The developed prediction models for the fracture coefficients rely on readily available mixture design properties that directly affect fatigue performance, including the relaxation modulus, air void content, asphalt binder content and aggregate gradation. Sufficient data were collected to develop these prediction models, and their R² values are around 0.9. The presented case studies serve as examples to illustrate how the pseudo J-integral based Paris' law predicts the fatigue resistance of asphalt mixtures and assesses the fatigue performance of asphalt pavements. Future applications include the estimation of the fatigue life of asphalt mixtures and pavements through a distinct criterion that defines fatigue failure by its physical significance.
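The models themselves are not reproduced in the abstract. As a hedged illustration, a Paris'-law type relation of the form da/dN = A(ΔJ_R)^n, with ΔJ_R the pseudo J-integral range, can be integrated cycle by cycle to estimate fatigue life; the sketch below does this with hypothetical coefficients and a hypothetical dependence of the driving force on crack length, and is not the paper's implementation.

# Minimal sketch, not the paper's method: cycle-by-cycle integration of a
# Paris'-law type model, da/dN = A * dJ**n, where dJ stands in for the pseudo
# J-integral range. A, n and the dJ(a) relation below are hypothetical.

def fatigue_life(a0, a_failure, A, n, dJ_of_a, max_cycles=10_000_000):
    """Count load cycles until the crack grows from length a0 to a_failure."""
    a, cycles = a0, 0
    while a < a_failure and cycles < max_cycles:
        a += A * dJ_of_a(a) ** n   # crack growth increment for one cycle
        cycles += 1
    return cycles

if __name__ == "__main__":
    # hypothetical: the driving force increases with crack length
    life = fatigue_life(a0=1.0, a_failure=25.0, A=2.0e-6, n=1.8,
                        dJ_of_a=lambda a: 100.0 + 10.0 * a)
    print(f"Predicted fatigue life: {life} cycles")

In the paper's framework the fracture coefficients (A and n here) are themselves predicted from mixture design properties such as the relaxation modulus, air void content, binder content and gradation, so that no fatigue test is needed to apply the model.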

Relevance: 60.00%

Abstract:

The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of its assumptions whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. It can also efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared with K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
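The MAP-DP algorithm itself is given in the paper, not in this abstract. To convey the key difference from K-means, namely that the number of clusters is inferred rather than fixed, the sketch below implements a much simpler DP-means-style hard-assignment pass that opens a new cluster whenever no existing centre is close enough. It is not the authors' MAP-DP, and the penalty lam is a hypothetical tuning parameter.

import numpy as np

# DP-means-style illustration (not MAP-DP): clusters are created on demand
# instead of fixing K in advance. `lam` penalises opening a new cluster.

def dp_means(X, lam, n_iters=25):
    centres = [X.mean(axis=0)]
    for _ in range(n_iters):
        labels = []
        for x in X:
            d2 = [float(np.sum((x - c) ** 2)) for c in centres]
            if min(d2) > lam:                  # no centre close enough: new cluster
                centres.append(x.copy())
                labels.append(len(centres) - 1)
            else:
                labels.append(int(np.argmin(d2)))
        labels = np.array(labels)
        # recompute centres, dropping any cluster that lost all of its points
        centres = [X[labels == k].mean(axis=0)
                   for k in range(len(centres)) if np.any(labels == k)]
    # final assignment against the converged centres
    labels = np.array([int(np.argmin([np.sum((x - c) ** 2) for c in centres]))
                       for x in X])
    return labels, centres

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(6.0, 1.0, (50, 2))])
    labels, centres = dp_means(X, lam=9.0)
    print(f"inferred {len(centres)} clusters")

Here lam controls how readily new clusters are created, in place of the fixed K of K-means; the full MAP-DP replaces this geometric penalty with a principled Dirichlet process prior and a probabilistic model of the data.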

Relevance: 30.00%

Abstract:

The 'moving targets' algorithm for training recurrent networks is reviewed and applied to a task which demonstrates the ability of this algorithm to use distant contextual information. Some practical difficulties are discussed, especially with regard to the minimization process. Results on performance and computational requirements of several different 2nd-order minimization algorithms are presented for moving target problems.

Relevance: 30.00%

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with a known VC dimension, much is known about the order of growth of the problem's sample-size requirement as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension and, with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters) and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
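For background, the standard PAC sample-complexity bounds in terms of a finite VC dimension d (textbook results, not the algorithm-dependent bounds developed in this paper) are

m \;=\; O\!\left(\frac{1}{\epsilon}\left(d\,\log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right) \qquad \text{(upper bound, any consistent learner)},

m \;=\; \Omega\!\left(\frac{1}{\epsilon}\left(d + \log\frac{1}{\delta}\right)\right) \qquad \text{(lower bound)},

where \epsilon is the accuracy parameter and \delta the confidence parameter. The gap between these two forms reflects the algorithm-dependent slack that motivates the paper's exact treatment of the d = 1 case.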

Relevance: 30.00%

Abstract:

New postgraduate students typically embark on their research journey with little or no experience of doing research. Supervisors and other more experienced student researchers can help them find their feet during the first few weeks by sharing their own experience of how they solved similar problems in their research. In this way each novice researcher can learn and benefit from other researchers' ways of resolving problems. This paper discusses the real concerns that researchers reflected upon during a two-day research workshop, in which researchers share problems, exchange ideas for overcoming them and learn from each other's experiences of conducting research. The output from the workshop takes the form of hints and tips that can guide novice researchers when they face initial problems. The paper can also be used by a department to induct a novice researcher into its environment.

Relevance: 30.00%

Abstract:

This thesis is concerned with various aspects of air pollution due to smell, the impact it has on the communities exposed to it, the means by which it may be controlled, and the manner in which a local authority may investigate the problems it causes. The approach is a practical one, drawing on examples occurring within a local authority's experience; for that reason the research is anecdotal and is not a comprehensive treatise on the full range of options available. Odour pollution is not yet a well-organised discipline and might be considered esoteric, as it is necessary to incorporate elements of both science and the humanities. It has been necessary to range widely across a number of aspects of the subject, so that discussion is often restricted, but many references have been included to enable a reader to pursue a particular point in greater depth. In a 'fuzzy' subject there is often a yawning gap separating theory from practice; case studies have therefore been used to illustrate the interplay of various disciplines in the resolution of a problem. The essence of any science is observation and measurement. Observations have been made of the spread of odour pollution through a community, and of the relevant meteorological data, so that a mathematical model could be constructed and its predictions checked. The model has been used to explore the results of some options for odour control. Measurements of odour perception and human behaviour seldom have the precision and accuracy of the physical sciences; however, methods of social research enabled individual perception of odour pollution to be quantified and an insight to be gained into the reaction of a community exposed to it. Odours have four attributes that can be measured and that together provide a complete description of their perception. No objective techniques of measurement have yet been developed, but in this thesis simple, structured procedures of subjective assessment have been improvised, and their use enabled the functioning of the components of an odour control system to be assessed. Such data enabled the action of the system to be communicated in terms that are understood by a non-specialist audience.
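The abstract does not say which dispersion model was constructed; for context only, odour dispersion from a point source is conventionally modelled with the Gaussian plume equation, a textbook form that is assumed here rather than taken from the thesis:

C(x, y, z) \;=\; \frac{Q}{2\pi\, u\, \sigma_y \sigma_z} \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right) \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]

where C is the concentration at a receptor, Q the emission rate, u the wind speed, H the effective release height, and \sigma_y, \sigma_z the lateral and vertical dispersion coefficients, which depend on downwind distance x and atmospheric stability. Meteorological observations of the kind described above supply the wind and stability data against which such a model's predictions can be checked.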

Relevance: 30.00%

Abstract:

In this paper, we discuss some practical implications of implementing adaptable network algorithms applied to non-stationary time series problems. Two real-world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource-allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model-order selection. The results of our experiments show that there are advantages to be gained in tracking real-world non-stationary data through the use of more complex adaptive models.
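The paper's RAN implementation and Kalman-filter training are not reproduced in the abstract. The sketch below is a heavily simplified, hypothetical version of the resource-allocating network's growth rule, shown only to illustrate the novelty criterion whose parameters the model order is reported to be sensitive to; the thresholds, basis-function width and plain gradient update are assumptions, not the paper's settings.

import numpy as np

# Simplified sketch of an RAN-style growth rule (illustration only). A new
# Gaussian basis function is allocated when the input is far from all existing
# centres AND the prediction error is large; otherwise the output weights are
# adapted. The paper itself trains the network with an extended Kalman filter.

class SimpleRAN:
    def __init__(self, dist_thresh, err_thresh, width=1.0, lr=0.05):
        self.centres, self.weights = [], []
        self.dist_thresh, self.err_thresh = dist_thresh, err_thresh
        self.width, self.lr = width, lr

    def _phi(self, x):
        return np.exp(-np.sum((np.array(self.centres) - x) ** 2, axis=1)
                      / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(np.dot(self.weights, self._phi(x))) if self.centres else 0.0

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        dist = (min(np.linalg.norm(c - x) for c in self.centres)
                if self.centres else np.inf)
        if abs(err) > self.err_thresh and dist > self.dist_thresh:
            # novelty criterion met: allocate a new basis function centred at x
            self.centres.append(x.copy())
            self.weights.append(err)
        elif self.centres:
            # otherwise adapt the output weights with a gradient step
            self.weights = list(np.array(self.weights)
                                + self.lr * err * self._phi(x))

On a stream of (x, y) observations, repeated calls to update grow the basis set only where the novelty criterion fires, which is why the choice of its thresholds governs the final model order, the sensitivity examined in the paper.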