35 results for Combined system

in Aston University Research Archive


Relevance:

70.00%

Publisher:

Abstract:

Partial information leakage in deterministic public-key cryptosystems refers to a problem that arises when information about either the plaintext or the key is leaked in subtle ways. Quite a common case is where there are a small number of possible messages that may be sent; an attacker may be able to crack the scheme simply by enumerating all the possible ciphertexts. Two methods are proposed for addressing the partial information leakage problem in RSA, both of which incorporate a random element into the encrypted message to increase the number of possible ciphertexts. The resulting scheme is, effectively, an RSA-like cryptosystem which exhibits probabilistic encryption. The first method involves encrypting several similar messages with RSA and then using the Quadratic Residuosity Problem (QRP) to mark the intended one. In this way, an adversary who has correctly guessed two or more of the ciphertexts is still in doubt about which message is the intended one. The cryptographic strength of the combined system is equal to the computational difficulty of factorising a large integer; ideally, this should be infeasible. The second scheme uses error-correcting codes to accommodate the random component. The plaintext is processed with an error-correcting code and deliberately corrupted before encryption. The introduced corruption lies within the error-correcting ability of the code, so as to enable the recovery of the original message. The random corruption offers a vast number of possible ciphertexts corresponding to a given plaintext; hence an attacker cannot deduce any useful information from it. The proposed systems are compared to other cryptosystems sharing similar characteristics, in terms of execution time and ciphertext size, so as to determine their practical utility. Finally, the parameters which determine the characteristics of the proposed schemes are examined.
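The core idea of randomising a deterministic cryptosystem can be sketched in a few lines. This is not the thesis's QRP-marking or error-correcting-code scheme, just a minimal toy illustration, with deliberately tiny primes and a random nonce appended to the message so that identical plaintexts no longer map to identical ciphertexts:

```python
# Toy sketch (illustrative only): defeating ciphertext enumeration by
# packing a random nonce alongside the plaintext before deterministic
# ("textbook") RSA encryption. Real RSA uses ~1024-bit or larger primes.
import random

p, q = 61, 53                        # toy primes
n, e = p * q, 17                     # public key (n = 3233)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def encrypt(m, r_bits=4):
    """Append a random nonce so equal messages yield differing ciphertexts."""
    r = random.getrandbits(r_bits)
    padded = (m << r_bits) | r       # message in high bits, nonce in low bits
    assert padded < n                # toy bound check
    return pow(padded, e, n)

def decrypt(c, r_bits=4):
    padded = pow(c, d, n)
    return padded >> r_bits          # discard the random component

c1, c2 = encrypt(5), encrypt(5)
print(decrypt(c1), decrypt(c2))      # → 5 5 (both recover the plaintext)
```

With a 4-bit nonce each plaintext has 16 possible ciphertexts; the schemes in the abstract achieve the same effect far more robustly, via quadratic residuosity marking or deliberate correctable corruption.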

Relevance:

60.00%

Publisher:

Abstract:

The advent of the harmonic neutralised shunt Converter Compensator as a practical means of reactive power compensation in power transmission systems has cleared the ground for wider application of this type of equipment. An experimental 24-pulse voltage sourced converter has been successfully applied in controlling the terminal power factor of a 1.5 kW, 240 V three phase cage rotor induction motor, whose winding has been used in place of the usual phase shifting transformers. To achieve this, modifications have been made to the conventional stator winding of the induction machine. These include an unconventional phase spread and the facilitation of compensator connections to selected tapping points between stator coils, giving a three phase winding with a twelve phase connection to the twenty four pulse converter. Theoretical and experimental assessments of the impact of these modifications and the attachment of the compensator have shown that there is a slight reduction in the torque developed at a given slip and in the combined system efficiency. There is also an increase in the noise level, again a consequence of the harmonics. The stator leakage inductance gave inadequate coupling reactance between the converter and the effective voltage source, necessitating the use of external inductors in each of the twelve phases. The terminal power factor is fully controllable when the induction machine is used either as a motor or as a generator.

Relevance:

60.00%

Publisher:

Abstract:

Agriculture accounts for ~70% of freshwater usage worldwide. Seawater desalination alone cannot meet the growing needs for irrigation and food production, particularly in hot, desert environments. Greenhouse cultivation of high-value crops uses just a fraction of freshwater per unit of food produced when compared with open field cultivation. However, desert greenhouse producers face three main challenges: freshwater supply, plant nutrient supply, and cooling of the greenhouse. The common practice of evaporative cooling for greenhouses consumes large amounts of fresh water. In Saudi Arabia, the most common greenhouse cooling schemes are fresh water-based evaporative cooling, often using fossil groundwater or energy-intensive desalinated water, and traditional refrigeration-based direct expansion cooling, largely powered by the burning of fossil fuels. The coastal deserts have ambient conditions that are seasonally too humid to support adequate evaporative cooling, necessitating additional energy consumption in the dehumidification process of refrigeration-based cooling. This project evaluates the use of a combined-system liquid desiccant dehumidifier and membrane distillation unit that can meet the dual needs of cooling and freshwater supply for a greenhouse in a hot and humid environment.

Relevance:

40.00%

Publisher:

Abstract:

The objective of this work has been to investigate the principle of combined bioreaction and separation in a simulated counter-current chromatographic bioreactor-separator (SCCR-S) system. The SCCR-S system consisted of twelve 5.4 cm i.d. × 75 cm long columns packed with calcium-charged cross-linked polystyrene resin. Three bioreactions were investigated, namely the saccharification of modified starch to maltose and dextrin using the enzyme maltogenase, the hydrolysis of lactose to galactose and glucose in the presence of the enzyme lactase, and the biosynthesis of dextran from sucrose using the enzyme dextransucrase. Combined bioreaction and separation has been successfully carried out in the SCCR-S system for the saccharification of modified starch to maltose and dextrin. The effects of the operating parameters (switch time, eluent flowrate, feed concentration and enzyme activity) on the performance of the SCCR-S system were investigated. By using an eluent of dilute enzyme solution, starch conversions of up to 60% were achieved using lower amounts of enzyme than the theoretical amount required by a conventional bioreactor to produce the same amount of maltose over the same time period. Comparing the SCCR-S system to a continuous annular chromatograph (CRAC) for the saccharification of modified starch showed that the SCCR-S system required only 34.6-47.3% of the amount of enzyme required by the CRAC. The SCCR-S system was operated in the batch and continuous modes as a bioreactor-separator for the hydrolysis of lactose to galactose and glucose. By operating the system in the continuous mode, the operating parameters were further investigated. During these experiments the eluent was deionised water and the enzyme was introduced into the system through the same port as the feed. The galactose produced was retarded and moved with the stationary phase to be purged as the galactose rich product (GalRP), while the glucose moved with the mobile phase and was collected as the glucose rich product (GRP). By operating at up to 30% w/v lactose feed concentrations, complete conversions were achieved using only 48% of the theoretical amount of enzyme required by a conventional bioreactor to hydrolyse the same amount of lactose over the same time period. The main operating parameters affecting the performance of the SCCR-S system operating in the batch mode were investigated and the results compared to those of the continuous operation. During the biosynthesis of dextran in the SCCR-S system, a method of on-line regeneration of the resin was required to operate the system continuously. Complete conversion was achieved at sucrose feed concentrations of 5% w/v, with fructose rich products (FRP) of up to 100% obtained. The dextran rich products were contaminated by small amounts of glucose and levan formed during the bioreaction. Mathematical modelling and computer simulation of the SCCR-S system operating in the continuous mode for the hydrolysis of lactose has been carried out.

Relevance:

40.00%

Publisher:

Abstract:

In many areas of northern India, salinity renders groundwater unsuitable for drinking and even for irrigation. Though membrane treatment can be used to remove the salt, there are some drawbacks to this approach, e.g. (1) depletion of the groundwater due to over-abstraction, (2) saline contamination of surface water and soil caused by concentrate disposal, and (3) high electricity usage. To address these issues, a system is proposed in which a photovoltaic-powered reverse osmosis (RO) system is used to irrigate a greenhouse (GH) in a stand-alone arrangement. The concentrate from the RO is supplied to an evaporative cooling system, thus reducing the volume of the concentrate so that finally it can be evaporated in a pond to solid for safe disposal. Based on typical meteorological data for Delhi, calculations based on mass and energy balance are presented to assess the sizing and cost of the system. It is shown that solar radiation, freshwater output and evapotranspiration demand are readily matched due to the approximately linear relation among these variables. The demand for concentrate varies independently, however, thus favouring the use of a variable recovery arrangement. Though enough water may be harvested from the GH roof to provide year-round irrigation, this would require considerable storage. Some practical options for storage tanks are discussed. An alternative use of rainwater is in misting to reduce peak temperatures in the summer. An example optimised design provides internal temperatures below 30 °C (monthly average daily maxima) for 8 months of the year and costs about €36,000 for the whole system with a GH floor area of 1000 m². Further work is needed to assess technical risks relating to scale deposition in the membrane and evaporative pads, and to develop a business model that will allow such a project to succeed in the Indian rural context.
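The mass-balance logic behind the variable-recovery argument can be sketched in a few lines. The figures below are assumed placeholders, not values from the study; the point is only that, for a fixed irrigation (permeate) demand, lowering the RO recovery ratio increases the concentrate flow available to the evaporative cooling pads:

```python
# Illustrative RO water balance for the GH arrangement described above.
# All numeric inputs are hypothetical, for demonstration only.

def ro_balance(irrigation_demand_m3, recovery):
    """Split RO feed into permeate (irrigation) and concentrate
    (evaporative cooling, then evaporation pond) at a given recovery ratio."""
    feed = irrigation_demand_m3 / recovery       # feed water required
    concentrate = feed - irrigation_demand_m3    # balance goes to cooling/pond
    return feed, concentrate

# e.g. 5 m³/day of irrigation water at 70% recovery:
feed, conc = ro_balance(5.0, 0.70)
print(round(feed, 2), round(conc, 2))  # → 7.14 2.14

# Dropping recovery to 50% doubles the concentrate for the same irrigation:
_, conc_low = ro_balance(5.0, 0.50)
print(round(conc_low, 2))              # → 5.0
```

This is why a variable recovery arrangement is attractive: cooling (concentrate) demand and irrigation (permeate) demand vary independently through the year.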

Relevance:

30.00%

Publisher:

Abstract:

Improving healthcare quality is a growing need of any society. Although various quality improvement projects are routinely deployed by healthcare professionals, they are characterised by a fragmented approach, i.e. they are not linked with the strategic intent of the organisation. This study introduces a framework which integrates all quality improvement projects with the strategic intent of the organisation. It first derives the strengths, weaknesses, opportunities and threats (SWOT) matrix of the system with the involvement of the concerned stakeholders (clinical professionals), which helps identify a few projects whose implementation ensures achievement of the desired quality. The projects are then prioritised using the analytic hierarchy process, again with the involvement of the concerned stakeholders, and implemented in order to improve system performance. The effectiveness of the method has been demonstrated using a case study in the intensive care unit of Queen Elizabeth Hospital in Bridgetown, Barbados.
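The prioritisation step can be illustrated with a minimal analytic hierarchy process (AHP) calculation. The sketch below uses the common geometric-mean approximation of the principal eigenvector; the pairwise judgements are hypothetical, not those elicited from the hospital stakeholders:

```python
# Minimal AHP prioritisation sketch (geometric-mean approximation).
# The pairwise comparison values are invented for illustration.
from math import prod

def ahp_priorities(pairwise):
    """Return normalised priority weights from a pairwise comparison matrix."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Three hypothetical quality-improvement projects compared pairwise
# on Saaty's 1-9 intensity scale (A strongly preferred to C, etc.).
matrix = [
    [1,     3,   5],   # project A vs A, B, C
    [1 / 3, 1,   2],   # project B
    [1 / 5, 1 / 2, 1], # project C
]
weights = ahp_priorities(matrix)
print([round(w, 3) for w in weights])  # highest weight first → implement A first
```

Projects are then implemented in descending order of weight, tying each back to the SWOT-derived strategic intent.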

Relevance:

30.00%

Publisher:

Abstract:

It is shown, through numerical simulations, that by using a combination of dispersion management and periodic saturable absorption it is possible to transmit solitonlike pulses with greatly increased energy near to the zero net dispersion wavelength. This system is shown to support the stable propagation of solitons over transoceanic distances for a wide range of input powers.
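Such soliton transmission studies are typically carried out with a split-step Fourier solver for the nonlinear Schrödinger equation. The sketch below propagates a fundamental soliton in the normalised, lossless equation and checks that pulse energy is conserved; it is a generic baseline, and does not model the paper's dispersion management or periodic saturable absorption:

```python
# Split-step Fourier sketch for the normalised NLSE i·u_z = -u_tt/2 - |u|²u,
# launched with the fundamental soliton u(0,t) = sech(t). Illustrative only.
import numpy as np

nt, tmax = 1024, 20.0
t = np.linspace(-tmax, tmax, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequencies

u = 1 / np.cosh(t)           # fundamental (N = 1) soliton
dz, steps = 0.01, 1000       # step size and number of propagation steps

for _ in range(steps):
    # symmetric split-step: linear half step, nonlinear full step, linear half step
    u = np.fft.ifft(np.exp(-0.25j * dz * w ** 2) * np.fft.fft(u))
    u *= np.exp(1j * dz * np.abs(u) ** 2)
    u = np.fft.ifft(np.exp(-0.25j * dz * w ** 2) * np.fft.fft(u))

dt = t[1] - t[0]
energy_in = np.sum(1 / np.cosh(t) ** 2) * dt   # input pulse energy (≈ 2)
energy_out = np.sum(np.abs(u) ** 2) * dt
print(abs(energy_out - energy_in) < 1e-6)      # → True: energy is conserved
```

In a study like the one above, the constant dispersion term would be replaced by a periodic dispersion map and a saturable-absorber transfer function applied every amplifier span.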

Relevance:

30.00%

Publisher:

Abstract:

A novel wavelength-division-multiplexed in-fibre Bragg grating sensor system combined with high resolution drift-compensated interferometric wavelength-shift detection is described. This crosstalk-free system is based on the use of an interferometric wavelength scanner and a low resolution spectrometer. A four element system is demonstrated for temperature measurement, and a resolution of ±0.1°C has been achieved.
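The reported ±0.1 °C resolution can be put in perspective with a back-of-envelope calculation. The temperature sensitivity used below is a typical textbook value for a Bragg grating near 1550 nm, not a figure from the paper:

```python
# Rough link between wavelength-shift resolution and temperature resolution
# for a fibre Bragg grating sensor. Sensitivity is an assumed typical value.

temp_sensitivity_pm_per_C = 10.0   # pm/°C, typical FBG response near 1550 nm

def temperature_resolution(wavelength_resolution_pm):
    """Temperature resolution implied by a given wavelength-shift resolution."""
    return wavelength_resolution_pm / temp_sensitivity_pm_per_C

# A ±0.1 °C resolution, as demonstrated, implies resolving shifts of ~±1 pm,
# far below what a low-resolution spectrometer alone can do — hence the
# interferometric wavelength scanner.
print(temperature_resolution(1.0))  # → 0.1
```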

Relevance:

30.00%

Publisher:

Abstract:

In this thesis various mathematical methods of studying the transient and dynamic stability of practical power systems are presented. Certain long established methods are reviewed and refinements of some proposed. New methods are presented which remove some of the difficulties encountered in applying the powerful stability theories based on the concepts of Liapunov. Chapter 1 is concerned with numerical solution of the transient stability problem. Following a review and comparison of synchronous machine models, the superiority of a particular model from the point of view of combined computing time and accuracy is demonstrated. A digital computer program incorporating all the synchronous machine models discussed, and an induction machine model, is described, and results of a practical multi-machine transient stability study are presented. Chapter 2 reviews certain concepts and theorems due to Liapunov. In Chapter 3 transient stability regions of single, two and multi-machine systems are investigated through the use of energy type Liapunov functions. The treatment removes several mathematical difficulties encountered in earlier applications of the method. In Chapter 4 a simple criterion for the steady state stability of a multi-machine system is developed and compared with established criteria and a state space approach. In Chapters 5, 6 and 7 dynamic stability and small signal dynamic response are studied through a state space representation of the system. In Chapter 5 the state space equations are derived for single machine systems. An example is provided in which the dynamic stability limit curves are plotted for various synchronous machine representations. In Chapter 6 the state space approach is extended to multi-machine systems. To draw conclusions concerning dynamic stability or dynamic response the system eigenvalues must be properly interpreted, and a discussion concerning correct interpretation is included. Chapter 7 presents a discussion of the optimisation of power system small signal performance through the use of Liapunov functions.
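The eigenvalue-based small-signal analysis described in Chapters 5 and 6 can be illustrated with the simplest case: the linearised single-machine swing equation M·Δδ'' + D·Δδ' + K·Δδ = 0, written in state-space form x' = A·x with x = [Δδ, Δω]. The parameter values below are hypothetical:

```python
# Small-signal stability check for a single-machine swing equation,
# A = [[0, 1], [-K/M, -D/M]], using the closed-form 2x2 eigenvalues.
# M = inertia constant, D = damping, K = synchronising coefficient (assumed values).
import cmath

def swing_eigenvalues(M, D, K):
    """Roots of the characteristic equation λ² + (D/M)·λ + (K/M) = 0."""
    b, c = D / M, K / M
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / 2, (-b - disc) / 2)

def is_small_signal_stable(M, D, K):
    """Stable iff every eigenvalue of A has a negative real part."""
    return all(lam.real < 0 for lam in swing_eigenvalues(M, D, K))

print(is_small_signal_stable(M=0.1, D=0.05, K=1.5))   # positive damping → True
print(is_small_signal_stable(M=0.1, D=-0.05, K=1.5))  # negative damping → False
```

For multi-machine systems the principle is identical, but A grows with the number of machine states and the eigenvalues must be computed numerically and interpreted mode by mode, as the thesis discusses.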

Relevance:

30.00%

Publisher:

Abstract:

Various micro-radial compressor configurations were investigated using one-dimensional meanline and computational fluid dynamics (CFD) techniques for use in a micro gas turbine (MGT) domestic combined heat and power (DCHP) application. Blade backsweep, shaft speed, and blade height were varied at a constant pressure ratio. Shaft speeds were limited to 220 000 r/min, to enable the use of a turbocharger bearing platform. Off-design compressor performance was established and used to determine the MGT performance envelope; this in turn was used to assess potential cost and environmental savings in a heat-led DCHP operating scenario within the target market of a detached family home. A low target-stage pressure ratio provided an opportunity to reduce diffusion within the impeller. Critically for DCHP, this produced very regular flow, which improved impeller performance for a wider operating envelope. The best performing impeller was a low-speed, 170 000 r/min, low-backsweep, 15° configuration producing 71.76 per cent stage efficiency at a pressure ratio of 2.20. This produced an MGT design point system efficiency of 14.85 per cent at 993 W, matching prime movers in the latest commercial DCHP units. Cost and CO2 savings were 10.7 per cent and 6.3 per cent, respectively, for annual power demands of 17.4 MWht and 6.1 MWhe compared to a standard condensing boiler (with grid) installation. The maximum cost saving (on design point) was 14.2 per cent for annual power demands of 22.62 MWht and 6.1 MWhe corresponding to an 8.1 per cent CO2 saving. When sizing, maximum savings were found with larger heat demands. When sized, maximum savings could be made by encouraging more electricity export either by reducing household electricity consumption or by increasing machine efficiency.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal, and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered for the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large scale collection unit. The liquids collection system was highly efficient: modelling determined a liquid collection efficiency of above 99% on a mass basis, and this was validated experimentally, since a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit enabled mass measurement of the condensable product exiting the unit, confirming a collection efficiency in excess of 99% on a mass basis.
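The scale-up arithmetic implied by the measured ablation rate is straightforward: the solid throughput an ablative surface can sustain is roughly the ablation rate times the ablating contact area times the wood density. The ablation rate below is the figure from the abstract; the wood density is an assumed typical value for dry pine, not a number from the thesis:

```python
# Rough ablative-reactor sizing sketch. Density is an assumed placeholder.

ablation_rate = 0.63e-3   # m/s, measured for pine at 525 °C, 12.1 m/s relative velocity
wood_density = 500.0      # kg/m³, assumed typical value for dry pine

def contact_area_for(throughput_kg_per_hr):
    """Ablating contact area (m²) needed to sustain a target throughput."""
    mass_rate = throughput_kg_per_hr / 3600.0            # kg/s
    return mass_rate / (ablation_rate * wood_density)

# Area implied by the 10 kg/hr nominal design throughput:
print(round(contact_area_for(10.0) * 1e4, 1), "cm²")     # → 88.2 cm²
```

Under these assumptions only a modest ablating area is needed, which is consistent with the modelled headroom of ~30 kg/hr for the reactor itself.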

Relevance:

30.00%

Publisher:

Abstract:

The objective of the research carried out in this report was to observe the first ever in-situ sonochemical reaction in the NMR spectrometer in the megahertz region of ultrasound. Several reactions were investigated as potential systems for a sonochemical reaction followed by NMR spectroscopy. The primary problem to resolve when applying ultrasound to a chemical reaction is that of heating. Ultrasound causes the liquid to move and produces 'hot spots', resulting in an increase in sample temperature. The problem was confronted by producing a device that would counteract this effect and so remove the need to account for heating. However, the design of the device limited the length of time during which it would function, and longer reaction times were required to enable observations to be carried out in the NMR spectrometer. The first and most obvious reactions attempted were those of the well-known ultrasonic dosimeter. Such a reaction would, theoretically, enable the author to simultaneously observe a reaction and determine the exact power entering the system for direct comparison of results. Unfortunately, in order to monitor the reactions in the NMR spectrometer the reactant concentrations had to be significantly increased, which resulted in a notable increase in reaction time, making the experiment too lengthy to follow in the time allocated. The Diels-Alder reaction is probably one of the most highly investigated reaction systems in the field of chemistry, and it was to this that the author turned her attention. Previous authors have carried out ultrasonic investigations, with considerable success, for the reaction of anthracene with maleic anhydride, and it was this reaction in particular that was next attempted. The first ever sonochemically enhanced reaction using a frequency of ultrasound in the megahertz (MHz) region was successfully carried out as bench experiments.
Due to the complexity of the component reactants the product would precipitate from the solution, and because the reaction could only be monitored by its formation, it was not possible to observe the reaction in the NMR spectrometer. The solvolysis of 2-chloro-2-methylpropane was examined in various solvent systems, the most suitable of which was determined to be aqueous 2-methylpropan-2-ol. The experiment was successfully enhanced by the application of ultrasound and monitored in-situ in the NMR spectrometer. Product formation under ultrasound exceeded that of the corresponding thermal reaction, with a 1.4 to 2.9 fold improvement noted, dependent upon the reaction conditions investigated. An investigation into the effect of sonication upon a large biological molecule, in this case aqueous lysozyme, was carried out. An easily observed effect upon the sample was noted, but no explanation for the observed effects could be established.

Relevance:

30.00%

Publisher:

Abstract:

The IRDS standard is an international standard produced by the International Organisation for Standardisation (ISO). In this work the process for producing standards in formal standards organisations, for example the ISO, and in more informal bodies, for example the Object Management Group (OMG), is examined. This thesis examines previous models and classifications of standards, which are then combined to produce a new classification. The IRDS standard is then placed in a class in the new model as a reference anticipatory standard. Anticipatory standards are standards which are developed ahead of the technology in order to attempt to guide the market. The diffusion of the IRDS is traced over a period of eleven years. The economic conditions which affect the diffusion of standards are examined, particularly those which prevail in compatibility markets such as the IT and ICT markets. Additionally, the consequences of introducing gateway or converter devices into a market where a standard has not yet been established are examined. The IRDS standard did not have an installed base, and this hindered its diffusion. The thesis concludes that the IRDS standard was overtaken by new developments such as object oriented technologies and middleware. This was partly because of the slow process of developing standards in traditional organisations, which operate on a consensus basis, and partly because the IRDS standard did not have an installed base. Also, the rise and proliferation of middleware products resulted in exchange mechanisms becoming dominant rather than repository solutions. The research method used in this work is a longitudinal study of the development and diffusion of the ISO/IEC IRDS standard. The research is regarded as a single case study and follows the interpretative epistemological point of view.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, beta, delta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal to noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
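One standard first step in the nonlinear-dynamics analysis of a single-channel recording is time-delay (Takens) embedding, which reconstructs a state space from a scalar time series. The sketch below embeds a synthetic signal standing in for unaveraged single-channel MEG; the embedding dimension and lag are arbitrary illustrative choices:

```python
# Minimal time-delay (Takens) embedding sketch for a scalar time series.
# The sine signal is a synthetic stand-in for a single-channel recording.
import math

def delay_embed(series, dim, tau):
    """Map a scalar series into dim-dimensional delay vectors with lag tau:
    x_i = (s_i, s_{i+tau}, ..., s_{i+(dim-1)·tau})."""
    n = len(series) - (dim - 1) * tau
    return [[series[i + j * tau] for j in range(dim)] for i in range(n)]

signal = [math.sin(0.1 * k) for k in range(1000)]
vectors = delay_embed(signal, dim=3, tau=15)
print(len(vectors), len(vectors[0]))   # → 970 3
```

The reconstructed vectors can then be fed to the kinds of tools the thesis applies: dimension and Lyapunov estimates, latent variable models, or neural network predictors.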

Relevance:

30.00%

Publisher:

Abstract:

We experimentally demonstrated a highly sensitive twist sensor system based on a 45° and an 81° tilted fibre grating (TFG). The 81°-TFG has a set of dual peaks caused by the birefringence induced by its extremely tilted structure. When the 81°-TFG is subjected to twist, the coupling to the two peaks interchanges, providing a mechanism to measure and monitor the twist. We have investigated the performance of the sensor system with three interrogation methods (spectral, power-measurement and voltage-measurement). The experimental results clearly show that the 81°-TFG and the 45°-TFG can be combined to form a full fibre twist sensor system capable of not just measuring the magnitude but also recognising the direction of the applied twist.