37 results for drawbacks

in Aston University Research Archive


Relevance:

10.00%

Publisher:

Abstract:

In emerging markets, the amount of mobile communication and the number of occasions on which mobile phones are used are increasing. More and more settings, appropriate or not for mobile phone usage, are being exposed. Although prohibited by many governments, there is evidence that the use of mobile devices while driving is becoming everyday practice, hence legitimizing usage for many users. In the absence of an enforced legal framework, this dangerous behavior has become routine for many m-users. This chapter adopts a qualitative case study approach (20 cases) to examine public transport drivers' motives, logic and legitimacy processes. The question these issues raise in the light of advancing m-technologies is: how, in the context of an emerging market, can the enactment of undesired emerging routines be reflected upon and voluntarily disregarded so as to maximize the benefits of m-technologies while minimizing their drawbacks? The findings point to multiple motives for usage: externally, social pressure through the ubiquitous 24/7 use of m-technology, the lack of an alternative communication protocol and the need for real-time action; internally, boredom, lack of danger awareness, blurring of the boundaries between personal and business life and lack of job fulfillment are uncovered as key factors. Secondary dynamic factors such as education, drivers' work histories, impunity and the lack of strong consumer opposition also appear central in shaping the development of the routines. © 2011, IGI Global.

Relevance:

10.00%

Publisher:

Abstract:

Serial and parallel interconnection of photonic devices is integral to the construction of any all-optical data processing system. This thesis presents results from a series of experiments centering on the use of the nonlinear-optical loop mirror (NOLM) switch in architectures for the manipulation and generation of ultrashort pulses. Detailed analysis of soliton switching in a single NOLM and in a cascade of two NOLMs is performed, centering on the primary limitations to device operation, the effect of cascading on the amplitude response, and the impact of switching on the characteristics of incident pulses. By using relatively long input pulses, device failure due to stimulated Raman generation is postponed to demonstrate multiple-peaked switching for the first time. It is found that while cascading leads to a sharpening of the overall switching characteristic, pulse spectral and temporal integrity is not significantly degraded, and emerging pulses retain their essential soliton character. In addition, by including an asymmetrically placed in-fibre Bragg reflector as a wavelength-selective loss element in the basic NOLM configuration, both soliton self-switching and dual-wavelength control-pulse switching are spectrally quantised. Results are presented from a novel dual-wavelength laser configuration generating pulse trains with an ultra-low rms inter-pulse-stream timing jitter level of 630 fs, enabling application in ultrafast switching environments at data rates as high as 130 Gbit/s. In addition, the fibre NOLM is included in architectures for all-optical memory, demonstrating storage and logical inversion of a 0.5 kByte random data sequence, and ultrafast phase-locking of a gain-switched distributed feedback laser at 1.062 GHz, the fourteenth harmonic of the system baseband frequency. The stringent requirements for environmental robustness of these architectures highlight the primary weaknesses of the NOLM in its fibre form, and recommendations to overcome its inherent drawbacks are presented.
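
For orientation, the amplitude response referred to above is, in the standard textbook analysis of a NOLM, a raised-cosine function of input power. The short Python sketch below plots that generic transfer function with invented parameter values; it illustrates the device concept only and is not the model or the parameters used in the thesis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Textbook (Doran-Wood) form of the NOLM transfer function: the transmitted
# fraction depends on the coupler split ratio alpha and the power-dependent
# phase difference accumulated around the loop. Values below are invented.
alpha = 0.4        # coupler split ratio (0.5 would give a fully reflecting mirror)
gamma_L = 1.0      # nonlinear coefficient x loop length, rad/W (assumed)

P_in = np.linspace(0, 4 * np.pi / (gamma_L * abs(1 - 2 * alpha)), 500)
dphi = (1 - 2 * alpha) * gamma_L * P_in
T = 1 - 2 * alpha * (1 - alpha) * (1 + np.cos(dphi))   # transmitted fraction

plt.plot(P_in, T)
plt.xlabel("input peak power (W)")
plt.ylabel("transmitted fraction")
plt.title("NOLM switching characteristic (illustrative)")
plt.show()
```

Cascading two such stages composes their power-dependent responses, which is consistent with the sharper overall switching characteristic reported above.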

Relevance:

10.00%

Publisher:

Abstract:

Pyrolysis is one of several thermochemical technologies that convert solid biomass into more useful and valuable bio-fuels. Pyrolysis is thermal degradation in the complete or partial absence of oxygen. Under carefully controlled conditions, solid biomass can be converted to a liquid known as bio-oil in 75% yield on dry feed. Bio-oil can be used as a fuel but has the drawback of a high oxygen content, due to the presence of a complex mixture of molecular fragments of cellulose, hemicellulose and lignin polymers. Bio-oil also presents a number of problems in use, including high initial viscosity, instability resulting in increased viscosity or phase separation, and high solids content. Much effort has been spent on upgrading bio-oil into a more usable liquid fuel, either by modifying the liquid or by major chemical and catalytic conversion to hydrocarbons. The overall primary objective of this work was to improve oil stability, and three approaches were explored. The first was to determine the effect of feed moisture content on bio-oil stability. The second was to improve bio-oil stability by partially oxygenated pyrolysis. The third was to improve stability by co-pyrolysis with methanol. The project was carried out on an existing laboratory pyrolysis reactor system, which suited this work without the need for extensive redesign or modification. During the finishing stages of the project, it was found that the temperature of the condenser in the product collection system had a marked impact on pyrolysis liquid stability; this is discussed in this work and further recommendations are given. The quantity of water coming from the feedstock and the pyrolysis reaction is important to liquid stability. In the present work the feedstock moisture content was varied and pyrolysis experiments were carried out over a range of temperatures. The quality of the bio-oil produced was measured as water content, initial viscosity and stability. The results showed that moderate feedstock moisture content (7.3-12.8%) led to more stable bio-oil. One of the drawbacks of bio-oil is its instability, due to the unstable oxygenated chemicals it contains. Catalytic hydrotreatment of the oil and zeolite cracking of pyrolysis vapour have been discussed by many researchers; these processes are intended to eliminate oxygen from the bio-oil. In this work an alternative approach, partially oxygenated pyrolysis, was introduced to reduce oil instability by oxidising the unstable oxygenated chemicals in the bio-oil. The results showed that liquid stability was improved by oxygen addition during the pyrolysis of beech wood at an optimum air factor of about 0.09-0.15. Methanol as a post-production additive to bio-oil has been studied by many researchers, and the most effective result came from adding methanol to the oil just after production. Co-pyrolysis of spruce wood with methanol was undertaken in the present work, and it was found that methanol improved liquid stability as a co-pyrolysis solvent but was no more effective than when used as a post-production additive.
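
For context, the air factor quoted for the partially oxygenated runs is conventionally the ratio of the air actually supplied to the air required for stoichiometric (complete) combustion of the feed; a minimal statement of that standard definition, assuming the thesis follows the usual convention, is:

```latex
\lambda = \frac{m_{\mathrm{air,\,supplied}}}{m_{\mathrm{air,\,stoichiometric}}}
% An air factor of 0.09-0.15 therefore corresponds to admitting roughly 9-15%
% of the air needed for complete combustion, i.e. mildly oxidative pyrolysis.
```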

Relevance:

10.00%

Publisher:

Abstract:

This review suggests an evidence-based algorithm for sequential testing in infective endocarditis. It discusses blood culture and the merits and drawbacks of serology in making the diagnosis. Newer techniques are briefly reviewed. The proposed algorithm will complement the Duke criteria in clinical practice. © 2003 The British Infection Society. Published by Elsevier Science Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
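
As a rough illustration of the kind of sparsity the article describes — retaining a small set of basis vectors rather than working with the full covariance matrix — the following NumPy sketch uses a subset-of-regressors style predictive mean; the kernel, the data and the simple thinning rule for choosing basis vectors are illustrative assumptions, not the authors' sequential, relative-entropy-based algorithm.

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 400))              # a "large" set of sampled locations
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)

# Keep only a small subset of locations as basis vectors (here chosen by simple
# thinning; the article's sequential algorithm selects them by relative entropy).
basis = x[::20]                                   # 20 basis vectors instead of 400
noise_var = 0.1 ** 2

K_bb = rbf(basis, basis)                          # m x m
K_bx = rbf(basis, x)                              # m x n
x_star = np.linspace(0, 5, 200)                   # prediction locations
K_sb = rbf(x_star, basis)                         # n* x m

# Subset-of-regressors predictive mean: only an m x m system is solved, so the
# cost scales with the number of basis vectors rather than with all n samples.
A = noise_var * K_bb + K_bx @ K_bx.T
posterior_mean = K_sb @ np.linalg.solve(A, K_bx @ y)
print(posterior_mean.shape)                       # (200,)
```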

Relevance:

10.00%

Publisher:

Abstract:

This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although allowing databases to be explored and conclusions drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
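
For readers unfamiliar with the idea, a matrix of 2-D density plots can be sketched in a few lines of Python; this only approximates the MADEN concept of scanning a whole table for dependencies in one window, and the field names and data are invented.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Invented stand-in for a tabular customer database with four numeric fields.
data = {
    "age":     rng.normal(40, 12, 5000),
    "income":  rng.lognormal(10, 0.4, 5000),
    "balance": rng.normal(2000, 800, 5000),
    "visits":  rng.poisson(6, 5000).astype(float),
}
fields = list(data)
n = len(fields)

# One 2-D histogram ("density plot") per pair of fields, arranged as a matrix,
# so the whole table can be scanned for dependencies in a single window.
fig, axes = plt.subplots(n, n, figsize=(8, 8))
for i, fi in enumerate(fields):
    for j, fj in enumerate(fields):
        axes[i, j].hist2d(data[fj], data[fi], bins=40, cmap="viridis")
        axes[i, j].set_xticks([])
        axes[i, j].set_yticks([])
        if i == n - 1:
            axes[i, j].set_xlabel(fj, fontsize=7)
        if j == 0:
            axes[i, j].set_ylabel(fi, fontsize=7)
plt.tight_layout()
plt.show()
```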

Relevance:

10.00%

Publisher:

Abstract:

The increasing cost of developing complex software systems has created a need for tools which aid software construction. One area in which significant progress has been made is with the so-called Compiler Writing Tools (CWTs); these aim at automated generation of various components of a compiler and hence at expediting the construction of complete programming language translators. A number of CWTs are already in quite general use, but investigation reveals significant drawbacks with current CWTs, such as lex and yacc. The effective use of a CWT typically requires a detailed technical understanding of its operation and involves tedious and error-prone input preparation. Moreover, CWTs such as lex and yacc address only a limited aspect of the compilation process; for example, actions necessary to perform lexical symbol valuation and abstract syntax tree construction must be explicitly coded by the user. This thesis presents a new CWT called CORGI (COmpiler-compiler from Reference Grammar Input) which deals with the entire 'front-end' component of a compiler; this includes the provision of necessary data structures and routines to manipulate them, both generated from a single input specification. Compared with earlier CWTs, CORGI has a higher-level and hence more convenient user interface, operating on a specification derived directly from a 'reference manual' grammar for the source language. Rather than developing a compiler-compiler from first principles, CORGI has been implemented by building a further shell around two existing compiler construction tools, namely lex and yacc. CORGI has been demonstrated to perform efficiently in realistic tests, both in terms of speed and the effectiveness of its user interface and error-recovery mechanisms.
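
To make concrete what the abstract means by lexical symbol valuation and abstract syntax tree construction, the toy front end below hand-codes, in Python, the kind of tokenizer and AST-building parser that tools such as lex and yacc (and, by extension, CORGI) generate automatically; the grammar and the code are purely illustrative and are not taken from CORGI.

```python
import re

# A toy expression front end: a tokenizer ("lexical symbol valuation") plus a
# recursive-descent parser that builds an abstract syntax tree for "1 + 2 * 3".
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    for number, op in TOKEN_RE.findall(src):
        yield ("NUM", int(number)) if number else ("OP", op)
    yield ("EOF", None)

def parse(tokens):
    tokens = list(tokens)
    pos = 0

    def peek():
        return tokens[pos]

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():                     # factor := NUM
        kind, value = eat()
        assert kind == "NUM", "expected a number"
        return ("num", value)

    def term():                       # term := factor ('*' factor)*
        node = factor()
        while peek() == ("OP", "*"):
            eat()
            node = ("mul", node, factor())
        return node

    def expr():                       # expr := term ('+' term)*
        node = term()
        while peek() == ("OP", "+"):
            eat()
            node = ("add", node, term())
        return node

    return expr()

print(parse(tokenize("1 + 2 * 3")))
# ('add', ('num', 1), ('mul', ('num', 2), ('num', 3)))
```

The point of a compiler writing tool is that boilerplate of this kind is derived automatically from a grammar specification instead of being written and maintained by hand.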

Relevance:

10.00%

Publisher:

Abstract:

Glioblastoma multiforme (GBM) is a malignant brain tumour for which there is currently no effective treatment regime. It is thought to develop due to the overexpression of a number of genes, including that for the epidermal growth factor receptor (EGFR), which is found in over 40% of GBMs. Novel forms of treatment such as antisense therapy may allow the specific inhibition of aberrant genes and are thus promising therapies for the future treatment of GBM. Oligodeoxynucleotides (ODNs) are small pieces of DNA, often modified to increase their stability to nucleases, that can be targeted to an aberrant gene in order to inhibit it and thus prevent its expression as protein. By specifically binding to mRNA in an antisense manner, they can bring about its degradation by a variety of mechanisms, including the activation of RNase H, and thus have great potential as therapeutic agents. One of the main drawbacks to the utilisation of this therapy so far is the lack of techniques that can successfully predict accessible regions on the target mRNA to which the ODNs can bind. DNA chip technology has been utilised here to predict target sequences on the EGFR mRNA, and the resulting ODNs (AS1 and AS2) have been tested in vitro for their stability, their uptake into cells and their efficacy against cellular growth, EGFR protein and mRNA. Studies showed that phosphorothioate and 2'-O-methyl ODNs were significantly more stable than phosphodiester ODNs in both serum and serum-free conditions, and that the mechanism of uptake into A431 cells was temperature dependent and more efficient with the use of optimised Lipofectin. Efficacy results show that the AS1 and AS2 phosphorothioate antisense ODNs were capable of inhibiting cell proliferation by 69% ± 4% and 65% ± 4.5% respectively at 500 nM, in conjunction with a non-toxic dose of Lipofectin™ used to enhance cellular delivery. Furthermore, control ODN sequences, 2'-O-methyl derivatives and a third ODN sequence, which DNA chip technology indicated was not capable of binding efficiently to the EGFR mRNA, showed no significant effect on cell proliferation. AS1 almost completely inhibited EGFR protein levels within 48 hours with two doses of 500 nM, with no effect on other EGFR family member proteins and no effect from control sequences. RNA analysis showed a decrease in mRNA levels of 32.4% ± 0.8%, but the techniques require further optimisation to confirm this. As there are variations between human glioblastomas in situ and those developed as xenografts, the effects of AS1 and AS2 were also analysed in primary tumour cell lines derived from glioma patients. ODN treatment showed a specific knockdown of cell growth compared with any of the controls used. Furthermore, combination therapies were tested on A431 cell growth to determine the advantage of combining different antisense approaches with each other and with conventional drugs. Results varied between the combination treatments, but indicated that, with optimisation of treatment regimes and delivery techniques, combination therapies utilising antisense agents would be plausible.

Relevance:

10.00%

Publisher:

Abstract:

A thorough investigation of the recommended colorimetric method for the determination of malathion (an organophosphorus pesticide) has led to the identification of the major cause of all the problems from which the method suffers. The method, which involves the extraction of the copper(II) complex of the hydrolysis product of malathion from aqueous solution into immiscible organic solvents, has many drawbacks. For example, the colour of the organic extract fades very quickly, and a slight increase in the contact time of the hydrolysis product and the copper reagent within the aqueous solution results in a decrease in the absolute absorbance. Also, the presence of any reducing agents can be a significant source of error. In the present work, it has been shown that the basic cause of all these problems is the ability of the copper(II) ion to be reduced to the copper(I) ion. It has further been shown that these problems can be resolved by replacing copper(II) with bismuth(III). This has led to the development of a modified colorimetric method for the determination of malathion, which has distinct advantages over all other existing methods in terms of the reagents required, ease of application, avoidance of interferences and stability of colour for extended periods of time. The modified colorimetric method described above has been further improved by making use of a ligand exchange reaction involving dithizone. The resulting final organic extract in this case is bright orange in colour, the absorbance of which can be measured even with simple photometers. The usefulness of the modified colorimetric method has been demonstrated by determining malathion in technical products, and in aqueous solutions containing the compound down to sub-ppm levels. The scope and applicability of atomic absorption spectrophotometry have been extended by demonstrating for the first time that the technique can be used for the indirect determination of malathion. Almost all of the work described above has been accepted for publication by international journals, and considerable interest in the work has been shown by chemists working in the field of pesticide analysis and research.

Relevance:

10.00%

Publisher:

Abstract:

This thesis reviews the existing manufacturing control techniques and identifies their practical drawbacks when applied in a high-variety, low-to-medium-volume environment. It argues that the significant drawbacks inherent in such systems could impair their application in such a manufacturing environment. The key weaknesses identified were: the capacity-insensitive nature of Material Requirements Planning (MRP); the centralised approach to planning and control applied in Manufacturing Resources Planning (MRP II); the fact that Kanban can only be used in repetitive environments; and Optimised Production Technology's (OPT) inability to deal with transient bottlenecks. On the other hand, cellular systems offer advantages in simplifying the control problems of manufacturing, and the thesis reviews systems designed for cellular manufacturing, including Distributed Manufacturing Resources Planning (DMRP) and Flexible Manufacturing System (FMS) controllers. It advocates that a newly developed cellular manufacturing control methodology, which is fully automatic, capacity sensitive and responsive, has the potential to resolve the core manufacturing control problems discussed above. Its development is envisaged within the framework of a DMRP environment, in which each cell is provided with its own MRP II system and decision-making capability. It is a cellular, closed-loop control system, which revolves around a single-level Bill-Of-Materials (BOM) structure and hence provides better linkage between shop-level scheduling activities and the relevant entries in the Master Production Schedule (MPS). This provides a better prospect of rapid response to changes in the status of manufacturing resources and incoming enquiries. Moreover, it also permits automatic evaluation of capacity and due-date constraints and hence facilitates the automation of the MPS within such a system. A prototype cellular manufacturing control model was developed to demonstrate the underlying principles and operational logic of the cellular manufacturing control methodology, based on the above concept. This was shown to offer significant advantages from the perspective of operational planning and control. Results of relevant tests proved that the model is capable of producing reasonable due dates and of automating the MPS. The overall performance of the model proved satisfactory and acceptable.
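
To anchor the MRP terminology for readers outside manufacturing planning, a minimal single-level netting calculation of the kind an MRP engine performs is sketched below; the period structure, figures and lot-for-lot rule are invented for illustration and are not drawn from the thesis's model.

```python
# Minimal single-level MRP netting sketch: for each period, net requirements
# are whatever gross requirements exceed projected on-hand stock plus scheduled
# receipts; lot-for-lot ordering then releases an order offset by the lead time
# (releases that would fall before period 0 are clamped to period 0).
def mrp_net(gross, on_hand, scheduled_receipts, lead_time):
    planned_releases = [0] * len(gross)
    stock = on_hand
    for t, demand in enumerate(gross):
        stock += scheduled_receipts[t]
        net = max(0, demand - stock)
        if net > 0:
            release_period = max(0, t - lead_time)
            planned_releases[release_period] += net
            stock += net                      # planned order covers the shortfall
        stock -= demand
    return planned_releases

# Example: demand over 6 periods, 40 units on hand, a 2-period lead time.
print(mrp_net(gross=[10, 30, 0, 50, 20, 60],
              on_hand=40,
              scheduled_receipts=[0, 0, 20, 0, 0, 0],
              lead_time=2))
# -> [0, 30, 20, 60, 0, 0]
```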

Relevance:

10.00%

Publisher:

Abstract:

The manufacture of copper alloy flat-rolled metals involves hot and cold rolling operations, together with annealing and other secondary processes, to transform castings (mainly slabs and cakes) into such shapes as strip, plate, sheet, etc. Production is mainly to customer orders in a wide range of specifications for dimensions and properties. However, order quantities are often small, and so process planning plays an important role in this industry. Much research work has been done in the past on the technology of flat rolling and the details of the operations; however, there is little or no evidence of research into the planning of processes for this type of manufacture. Practical observation in a number of rolling mills has established the type of manual process planning traditionally used in this industry. This manual approach, however, has inherent drawbacks, being particularly dependent on individual planners who gain their knowledge over a long span of practical experience. The introduction of the retrieval CAPP approach to this industry was a first step towards reducing these problems. But this could not provide a long-term answer, because of the need for an experienced planner to supervise the generation of any plan. It also fails to take account of the dynamic nature of the parameters involved in the planning, such as the availability of resources, operating conditions and variations in costs. The other alternative is the use of a generative approach to planning in the rolling mill context. In this thesis, generative methods are developed for the selection of optimal routes for single orders and then for batches of orders, bearing in mind equipment restrictions, production costs and material yield. The batch-order process planning involves the use of a special cluster analysis algorithm for optimal grouping of the orders. This research concentrates on cold-rolling operations. A prototype model of the proposed CAPP system, including both single-order and batch-order planning options, has been developed and tested on real order data in the industry. The results were satisfactory and compared very favourably with the existing manual and retrieval methods.
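
As a rough sketch of the batch-order grouping step — clustering orders with similar specifications so that they can share a rolling route — the fragment below applies standard hierarchical clustering to invented order attributes; it stands in for, and does not reproduce, the special cluster analysis algorithm developed in the thesis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented order attributes: [target thickness (mm), width (mm), alloy hardness index].
orders = np.array([
    [0.5, 300, 2], [0.6, 310, 2], [0.55, 295, 2],   # thin, narrow, soft alloy
    [2.0, 600, 5], [2.1, 620, 5],                    # thick, wide, hard alloy
    [1.0, 450, 3], [1.1, 440, 3],
])

# Normalise each attribute so no single dimension dominates the distance metric,
# then group the orders hierarchically; each group is a candidate batch that can
# share a cold-rolling route.
z = (orders - orders.mean(axis=0)) / orders.std(axis=0)
groups = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")
print(groups)   # e.g. three candidate batches of similar orders
```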

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings; however, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
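
As a concrete example of the dynamical-systems treatment of a single unaveraged channel, a time-delay embedding reconstructs a state-space trajectory from one observable; the sketch below uses a synthetic signal and assumed embedding parameters, and illustrates the general technique rather than the thesis's analysis pipeline.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack time-delayed copies of a 1-D signal into state-space vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic stand-in for a single unaveraged MEG channel: a nonlinear
# oscillation plus dynamic noise.
t = np.arange(0, 60, 0.01)
rng = np.random.default_rng(2)
signal = np.sin(t) * np.cos(0.7 * t) + 0.05 * rng.standard_normal(t.size)

# Each row is a point on the reconstructed trajectory; nonlinear measures
# (correlation dimension, Lyapunov exponents, etc.) are computed on these.
states = delay_embed(signal, dim=3, tau=15)
print(states.shape)      # (number of state vectors, embedding dimension)
```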

Relevance:

10.00%

Publisher:

Abstract:

Decomposition of domestic wastes in an anaerobic environment results in the production of landfill gas. Public concern about landfill disposal, and particularly about the production of landfill gas, has heightened over the past decade. This has been due in large part to the increased quantities of gas being generated as a result of modern disposal techniques, and also to their increasing effect on modern urban developments. In order to avert disasters, effective means of preventing gas migration are required. This, in turn, requires accurate detection and monitoring of gas in the subsurface. Point sampling techniques have many drawbacks, and accurate measurement of gas is difficult. Some of the disadvantages of these techniques could be overcome by assessing the impact of gas on biological systems. This research explores the effects of landfill gas on plants, and hence on the spectral response of vegetation canopies. Examination of the landfill gas/vegetation relationship is covered, both by review of the literature and by statistical analysis of field data. The work showed that, although vegetation health was related to landfill gas, it was not possible to define a simple correlation. In the landfill environment, contributions from other variables, such as soil characteristics, frequently confused the relationship. Two sites are investigated in detail, the sites contrasting in terms of the data available, site conditions, and the degree of damage to vegetation. Gas migration at the Panshanger site was dominantly upwards, affecting crops being grown on the landfill cap. The injury was expressed as an overall decline in plant health. Discriminant analysis was used to account for the variations in plant health, and hence the differences in spectral response of the crop canopy, using a combination of soil and gas variables. Damage to both woodland and crops at the Ware site was severe, and could be easily related to the presence of gas. Air photographs, aerial video, and airborne thematic mapper data were used to identify damage to vegetation and relate this to soil type. The utility of different sensors for this type of application is assessed, and possible improvements that could lead to more widespread use are identified. The situations in which remote sensing data could be combined with ground survey are identified. In addition, a possible methodology for integrating the two approaches is suggested.
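
To illustrate the discriminant-analysis step described above — relating plant-health classes to a combination of soil and gas variables — a minimal sketch with invented survey data follows; the variables, classes and figures are assumptions for demonstration only, not the thesis's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Invented stand-in for the field survey: soil/gas variables per sample plot
# (methane %, CO2 %, soil moisture %, soil depth cm) and a crude plant-health
# class (0 = healthy, 1 = stressed, 2 = dead).
n = 150
methane  = rng.uniform(0, 40, n)
co2      = rng.uniform(0, 25, n)
moisture = rng.uniform(5, 45, n)
depth    = rng.uniform(10, 120, n)
X = np.column_stack([methane, co2, moisture, depth])
health = np.digitize(methane + 0.5 * co2 - 0.1 * depth + rng.normal(0, 5, n),
                     bins=[10, 30])

# Discriminant analysis finds the linear combinations of soil/gas variables
# that best separate the health classes; the coefficients indicate which
# variables dominate the separation.
lda = LinearDiscriminantAnalysis().fit(X, health)
print(lda.score(X, health))        # within-sample classification accuracy
print(lda.coef_)                   # variable weights per class discriminant
```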

Relevance:

10.00%

Publisher:

Abstract:

In many areas of northern India, salinity renders groundwater unsuitable for drinking and even for irrigation. Though membrane treatment can be used to remove the salt, there are some drawbacks to this approach, e.g. (1) depletion of the groundwater due to over-abstraction, (2) saline contamination of surface water and soil caused by concentrate disposal and (3) high electricity usage. To address these issues, a system is proposed in which a photovoltaic-powered reverse osmosis (RO) system is used to irrigate a greenhouse (GH) in a stand-alone arrangement. The concentrate from the RO is supplied to an evaporative cooling system, thus reducing the volume of the concentrate so that it can finally be evaporated to solid in a pond for safe disposal. Based on typical meteorological data for Delhi, calculations based on mass and energy balance are presented to assess the sizing and cost of the system. It is shown that solar radiation, freshwater output and evapotranspiration demand are readily matched due to the approximately linear relation among these variables. The demand for concentrate varies independently, however, thus favouring the use of a variable recovery arrangement. Though enough water may be harvested from the GH roof to provide year-round irrigation, this would require considerable storage. Some practical options for storage tanks are discussed. An alternative use of rainwater is in misting to reduce peak temperatures in the summer. An example optimised design provides internal temperatures below 30°C (monthly average daily maxima) for 8 months of the year and costs about €36,000 for the whole system with a GH floor area of 1000 m². Further work is needed to assess technical risks relating to scale deposition in the membrane and evaporative pads, and to develop a business model that will allow such a project to succeed in the Indian rural context.
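
A back-of-envelope version of the mass balance underlying the proposed system, with invented flows and recovery ratio, might look like the following; the numbers are illustrative assumptions, not the paper's results.

```python
# Simple steady-state mass balance for the RO / evaporative-cooling chain:
# feed water splits into permeate (irrigation) and concentrate, and the
# concentrate is further reduced by evaporation in the cooling pads before
# the remainder goes to the evaporation pond.
def ro_balance(feed_m3_day, recovery, pad_evaporation_m3_day):
    permeate = recovery * feed_m3_day               # fresh water for irrigation
    concentrate = feed_m3_day - permeate            # brine sent to cooling pads
    to_pond = max(0.0, concentrate - pad_evaporation_m3_day)
    return permeate, concentrate, to_pond

# Illustrative numbers only (not from the paper): 10 m3/day of groundwater,
# 60% recovery, 3 m3/day evaporated in the cooling pads.
permeate, concentrate, to_pond = ro_balance(10.0, 0.60, 3.0)
print(permeate, concentrate, to_pond)   # 6.0, 4.0, 1.0 m3/day to the pond
```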

Relevance:

10.00%

Publisher:

Abstract:

Evaluation and benchmarking in content-based image retrieval has always been a somewhat neglected research area, making it difficult to judge the efficacy of many presented approaches. In this paper we investigate the issue of benchmarking for colour-based image retrieval systems, which enable users to retrieve images from a database based on low-level colour content alone. We argue that current image retrieval evaluation methods are not suited to benchmarking colour-based image retrieval systems, due mainly to their not allowing users to reflect upon the suitability of retrieved images within the context of a creative project, and to their reliance on highly subjective ground truths. As a solution to these issues, the research presented here introduces the Mosaic Test for evaluating colour-based image retrieval systems, in which test users are asked to create an image mosaic of a predetermined target image using the colour-based image retrieval system that is being evaluated. We report on our findings from a user study which suggests that the Mosaic Test overcomes the major drawbacks associated with existing image retrieval evaluation methods, by enabling users to reflect upon image selections and by automatically measuring image relevance in a way that correlates with the perception of many human assessors. We therefore propose that the Mosaic Test be adopted as a standardised benchmark for evaluating and comparing colour-based image retrieval systems.
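
To give a sense of the retrieval step a Mosaic Test exercises, the sketch below ranks a small invented image database by colour-histogram intersection against a query tile; the similarity measure and the data are assumptions for illustration, not those of the systems evaluated in the paper.

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Normalised joint RGB histogram of an image given as an (H, W, 3) array."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def rank_by_colour(query, database, bins=8):
    """Return database indices ordered by histogram-intersection similarity."""
    q = colour_histogram(query, bins)
    sims = [np.minimum(q, colour_histogram(img, bins)).sum() for img in database]
    return np.argsort(sims)[::-1]

# Invented data: a reddish query tile and a small database of random images,
# plus one near-duplicate of the query appended at the end.
rng = np.random.default_rng(4)
query = np.zeros((16, 16, 3), dtype=np.uint8)
query[...] = (200, 30, 30)
database = [rng.integers(0, 256, (16, 16, 3), dtype=np.uint8) for _ in range(20)]
near_match = np.zeros((16, 16, 3), dtype=np.uint8)
near_match[...] = (201, 25, 28)
database.append(near_match)

print(rank_by_colour(query, database)[0])   # best match is the appended near-duplicate
```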