16 results for variable rate application
in Aston University Research Archive
Abstract:
Mapping and sediment sampling in reefs of the Pulau Seribu group (southwest Java Sea) shows the existence of ten physiographic zones and subzones represented by seven lithofacies. Reefs in the northern part of the archipelago are smaller, more closely spaced and morphologically simpler than those in the south. This pattern is attributed to differences in subsidence rate. A three-dimensional model is proposed for the evolution of these reefs, but borehole data are required to test this model. Miocene limestones are described in detail from hydrocarbon reservoirs in the Batu Raja Formation of the same area. Brief comparisons are made with surface outcrops of approximately coeval carbonate developments. The lithofacies developed within these limestones reflect variations in hydrodynamic regime and basement topography. Eleven diagenetic processes affected the Batu Raja limestones, and their distribution is primarily related to sea-level fluctuations. Early diagenesis was marine and characterised by micritisation and precipitation of fibrous and bladed cements. Dolomitisation occurred in the mixed-water zone and its variable intensity is attributed to the configuration of the carbonate body relative to this zone. Subsequently the limestones were subjected to freshwater phreatic zone diagenesis, resulting in dissolution and cementation, and at a late stage underwent burial compaction. Secondary porosity, which largely determines the suitability of these limestones as hydrocarbon reservoirs, is a function of the variable intensity of dissolution and cementation, burial compaction, dolomitisation and possibly micrite neomorphism. The sedimentary processes that generated the Batu Raja buildups are inferred from comparisons with the Pulau Seribu and other Recent analogues. The contrasting pinnacle form of the Pulau Seribu patch reefs compared with the low relief of the Batu Raja buildups results from differences in the initial substrate topography and subsequent subsidence rate.
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computing resources to or from the engine. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects and break them out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2–10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size of the search engine. When combined, these algorithms give an adaptive algorithm that is able to adjust the search engine size under a variable workload.
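The thesis's CNP algorithm is not reproduced in this abstract, but a minimal sketch of a deterministic controller in that spirit might look as follows; the function name, the per-processor capacity estimate and the hysteresis rule are illustrative assumptions, not the author's method.

# Hypothetical sketch of a CNP-style controller: given a measured query
# arrival rate and an assumed per-processor service capacity, pick the
# number of active processors, with hysteresis to avoid oscillation.
import math

def choose_processor_count(queries_per_sec, capacity_per_proc,
                           current_p, headroom=0.7, min_p=1, max_p=64):
    """Return the number of processors to keep active.

    queries_per_sec   -- observed query arrival rate
    capacity_per_proc -- assumed sustainable throughput of one processor
    headroom          -- target utilisation (0-1); lower means more slack
    """
    needed = math.ceil(queries_per_sec / (capacity_per_proc * headroom))
    needed = max(min_p, min(max_p, needed))
    # Hysteresis: only scale down by at least two machines, so small load
    # fluctuations do not trigger constant regrouping.
    if needed < current_p and current_p - needed < 2:
        return current_p
    return needed

if __name__ == "__main__":
    print(choose_processor_count(1200, 150, current_p=6))  # -> 12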
Abstract:
The procedure for successful scale-up of batchwise emulsion polymerisation has been studied. The relevant literature on liquid-liquid dispersion, on scale-up and on emulsion polymerisation has been critically reviewed. Batchwise emulsion polymerisation of styrene in a specially built 3 litre unbaffled reactor confirmed that impeller speed had a direct effect on the latex particle size and on the reaction rate. This was noted to be more significant at low soap concentrations, and the phenomenon was related to the depletion of micelle-forming soap by soap adsorption onto the monomer emulsion surface. The scale-up procedure necessary to maintain constant monomer emulsion surface area in an unbaffled batch reactor was therefore investigated. Three geometrically similar vessels of 152, 229 and 305 mm internal diameter, and a range of impeller speeds (190 to 960 r.p.m.), were employed. The droplet sizes were measured either through photomicroscopy or via a Coulter Counter. The power input to the impeller was also measured. A scale-up procedure was proposed based on the governing relationship between droplet diameter, impeller speed and impeller diameter. The relationships between impeller speed, soap concentration, latex particle size and reaction rate were investigated in a series of polymerisations employing an amended commercial recipe for polystyrene. The particle size was determined via a light transmission technique. Two computer models, based on the Smith and Ewart approach but taking into account the adsorption/desorption of soap at the monomer surface, were successful in predicting the particle size and the progress of the reaction up to the end of stage II, i.e. to the end of the period of constant reaction rate.
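As a hedged illustration of the kind of governing relationship referred to (the classic Weber-number correlation for stirred liquid-liquid dispersions, not the thesis's own fitted relation), a droplet-size estimate can be sketched as follows; the constant C and the fluid properties are placeholders.

# Classic correlation for the Sauter mean drop diameter in a stirred
# liquid-liquid dispersion: d32/D = C * We**-0.6, with impeller Weber
# number We = rho * N**2 * D**3 / sigma.
def sauter_mean_diameter(N, D, rho=1000.0, sigma=0.03, C=0.06):
    """N: impeller speed (rev/s), D: impeller diameter (m),
    rho: continuous-phase density (kg/m^3), sigma: interfacial tension (N/m)."""
    We = rho * N ** 2 * D ** 3 / sigma
    return C * D * We ** -0.6

# Hypothetical numbers for two vessel sizes at different impeller speeds:
print(sauter_mean_diameter(N=8.0, D=0.076))
print(sauter_mean_diameter(N=4.0, D=0.152))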
Abstract:
Bove, Pervan, Beatty, and Shiu [Bove, LL, Pervan, SJ, Beatty, SE, Shiu, E. Service worker role in encouraging customer organizational citizenship behaviors. J Bus Res 2009;62(7):698–705.] develop and test a latent variable model of the role of service workers in encouraging customers' organizational citizenship behaviors. However, Bove et al. claim support for hypothesized relationships between constructs that, owing to insufficient discriminant validity regarding certain constructs, may be inaccurate. This research comment discusses what discriminant validity represents, outlines procedures for establishing discriminant validity, and presents an example of inaccurate discriminant validity assessment based on the work of Bove et al. Solutions to discriminant validity problems and a five-step procedure for assessing discriminant validity conclude the paper. This comment aims to motivate a review of discriminant validity issues and to offer assistance to future researchers conducting latent variable analysis.
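By way of illustration only (this is the widely used Fornell-Larcker check, not necessarily the five-step procedure the comment proposes), one common discriminant validity test can be sketched as follows; the loadings and the inter-construct correlation are hypothetical.

# Fornell-Larcker check: each construct's average variance extracted (AVE)
# should exceed its squared correlation with every other construct.
def ave(loadings):
    """Average variance extracted from standardised indicator loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def fornell_larcker_ok(loadings_a, loadings_b, corr_ab):
    """Discriminant validity holds if both AVEs exceed the squared
    inter-construct correlation."""
    return min(ave(loadings_a), ave(loadings_b)) > corr_ab ** 2

# Hypothetical numbers: strong loadings but a very high correlation fail.
print(fornell_larcker_ok([0.8, 0.75, 0.82], [0.78, 0.81, 0.76], 0.93))  # False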
Abstract:
The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression, the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the techniques of source coding differ substantially: one technique involves zonal coding, the other involves threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence and then in the presence of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each particular combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use for a given channel error rate.
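A minimal sketch of the zonal-coding idea, assuming 8x8 blocks and a square low-frequency retention zone (both assumptions, as the thesis's exact parameters are not given here); threshold coding would instead keep the largest-magnitude coefficients.

# Zonal coding: keep only DCT coefficients inside a fixed low-frequency
# zone of each 8x8 block and discard the rest.
import numpy as np
from scipy.fft import dctn, idctn

def zonal_code_block(block, zone=4):
    """Transform an 8x8 block, zero coefficients outside the top-left
    zone x zone region, and inverse-transform."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:zone, :zone] = 1.0
    return idctn(coeffs * mask, norm="ortho")

# Hypothetical example: a smooth gradient block survives zonal coding well.
block = np.linspace(0, 255, 64).reshape(8, 8)
print(np.abs(block - zonal_code_block(block)).max())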
Abstract:
Hydroxyl-terminated polybutadiene (HTPB) has been used as a rocket propellant binder which is required to be stored for at least twenty years. It is found that the excellent stress-strain characteristics of this propellant can be totally lost during this long storage, owing to the deterioration of the polybutadiene chains. As a result, the propellant cannot withstand the service loads, which may lead to a catastrophe. The study of HTPB binder degradation below 80°C has been carried out by investigating the environmental factors and the changes which occur along the macromolecular chains. Results have shown that oxygen is the main factor causing the crosslinking and chain scission reactions. The former is the predominant reaction and proceeds rapidly in an oxygen-sufficient environment. The unsaturation of the polymer chain, which provides the desired physical properties of the binder, was lost with the increase in crosslink density. At the same time, hydroperoxides were found to form and decompose along the polymer chains. Therefore, the deterioration of the binder results from the oxidation of the polymer chains. Since the oxidation reaction occurred at a higher rate than the oxygen diffusion rate, and the oxygen diffusion rate is inversely proportional to the crosslink density, the binder below the surface layer in a thick-section container could be naturally protected under an oxygen-deficient condition for a long time. Investigation of the effectiveness of antioxidants in the HTPB binder has shown that the efficiency of an antioxidant depends on its ability to scavenge radicals. Generally, aromatic amines are the most effective binder antioxidants, but when a peroxide decomposer is combined with an aromatic amine at the appropriate ratio, a synergistic effect is obtained, which gives the lowest rate of increase in binder gel content.
Abstract:
The use of antibiotics was investigated in twelve acute hospitals in England. Data were collected electronically and by questionnaire for the financial years 2001/2, 2002/3 and 2003/4. Hospitals were selected on the basis of their Medicines Management Self-Assessment Scores (MMAS) and included a cohort of three hospitals with integrated electronic prescribing systems. The total sample size was 6.65% of English NHS activity for 2001/2 based on Finished Consultant Episode (FCE) numbers. Data collected included all antibiotics dispensed (ATC category J01), hospital activity (FCEs and bed-days), Medicines Management Self-Assessment scores, Antibiotic Medicines Management scores (AMS), the Primary Care Trust (PCT) of origin of referral populations, PCT antibiotic prescribing rates, and the Index of Multiple Deprivation for each PCT. The DDD/FCE (Defined Daily Dose/FCE) was found to correlate with the DDD/100 bed-days (r = 0.74 p
Abstract:
Under ideal conditions ion plating produces finely grained dense coatings with excellent adhesion. The ion-bombardment-induced damage initiates a large number of small nuclei. Simultaneous coating and sputtering stimulates high rates of diffusion and forms an interfacial region of graded composition responsible for good adhesion. To obtain such coatings on components for industrial applications, the design and construction of an ion plater with a 24" (0.6 m) diameter chamber were investigated and modifications of the electron beam gun were proposed. A 12" (0.3 m) diameter ion plater was designed and constructed. The equipment was used to develop surfaces for solar energy applications. The conditions required to produce extended surfaces by sputter etching were studied. Austenitic stainless steel was sputter etched at 20 and 30 mTorr working pressure and at 3, 4 and 5 kV. Uniform etching was achieved by redesigning the specimen holder to give a uniform electrostatic field over the surfaces of the specimens. Surface protrusions were observed after sputter etching. They were caused by the sputter process and were independent of grain boundaries, surface contaminants and inclusions. The sputtering rate of stainless steel was highly dependent on the background pressure, which should be kept below 10⁻⁵ Torr. Sputter etching improved the performance of stainless steel used as a solar selective surface. A twofold improvement was achieved on sputter etching bright annealed stainless steel. However, there was only slight improvement after sputter etching stainless steel which had been mechanically polished to a mirror finish. Cooling curves were used to measure the thermal emittance of specimens. The deposition rate of copper was measured at different levels of power input and was found to be a maximum at 9.5 kW. The diameter of the copper feed rod was found to be critical for the maintenance of a uniform evaporation rate.
Abstract:
Fluidized bed spray granulators (FBMG) are widely used in the process industry for particle size growth, a desirable feature in many products such as granulated food and medical tablets. In this paper, the first in a series of four discussing the rates of various microscopic events occurring in FBMG, theoretical analysis coupled with CFD simulations has been used to predict granule–granule and droplet–granule collision time scales. The granule–granule collision time scale was derived from principles of the kinetic theory of granular flow (KTGF). For the droplet–granule collisions, two limiting models were derived: one for the case of fast droplet velocity, where the granule velocity is considerably lower than that of the droplet (ballistic model), and another for the case where the droplet is travelling with a velocity similar to that of the granules. The hydrodynamic parameters used in the solution of the above models were obtained from the CFD predictions for a typical spray fluidized bed system. The granule–granule collision time scale within an identified spray zone was found to fall approximately within the range of 10⁻²–10⁻³ s, while the droplet–granule collision was found to be much faster, although slowing rapidly (exponentially) when moving away from the spray nozzle tip. Such information, together with the time scale analysis of droplet solidification and spreading discussed in parts II and III of this study, is useful for probability analysis of the various events occurring during a granulation process, which then leads to better qualitative and, in part IV, quantitative prediction of the aggregation rate.
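For orientation, one standard KTGF estimate of the granule–granule collision time scale can be sketched as below; the radial distribution function form and the spray-zone values are assumptions and need not match the expressions or conditions used in the paper.

# Collision frequency per particle (one common KTGF form):
#   nu = 4*sqrt(pi) * n * d**2 * g0 * sqrt(theta)
# with n the number density, d the granule diameter, g0 the radial
# distribution function and theta the granular temperature; the collision
# time scale is then 1/nu.
import math

def collision_time_scale(alpha, d, theta, alpha_max=0.63):
    """alpha: solids volume fraction, d: granule diameter (m),
    theta: granular temperature (m^2/s^2)."""
    n = 6.0 * alpha / (math.pi * d ** 3)                      # number density
    g0 = 1.0 / (1.0 - (alpha / alpha_max) ** (1.0 / 3.0))     # Lun et al. form
    nu = 4.0 * math.sqrt(math.pi) * n * d ** 2 * g0 * math.sqrt(theta)
    return 1.0 / nu

# Hypothetical spray-zone values: dilute flow of 1 mm granules.
print(collision_time_scale(alpha=0.05, d=1e-3, theta=1e-3))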
Abstract:
We report the impact of the longitudinal signal power profile on the transmission performance of a coherently detected 112 Gb/s m-ary polarization-multiplexed quadrature amplitude modulation system after compensation of deterministic nonlinear fibre impairments. Performance improvements of up to 0.6 dB (Q(eff)) are reported for a non-uniform transmission link power profile. Further investigation reveals that the evolution of the transmission performance with power profile management is fully consistent with the parametric amplification of the amplified spontaneous emission by the signal through four-wave mixing. In particular, for a non-dispersion-managed system, a single-step increment of 4 dB in the amplifier gain, with respect to a uniform gain profile, at approximately two-thirds of the total reach considerably improves the transmission performance for all the formats studied. By contrast, a negative-step profile, emulating a failure (gain decrease or loss increase), significantly degrades the bit error rate.
Abstract:
We propose an all-fiber method for the generation of ultrafast shaped pulse train bursts from a single pulse based on Fourier Series Developments (FSDs). The implementation of the FSD-based filter only requires the use of a very simple non-apodized Superimposed Fiber Bragg Grating (S-FBG) for the generation of the Shaped Output Pulse Train Burst (SOPTB). In this approach, the shape, the period and the temporal length of the generated SOPTB have no dependence on the input pulse rate.
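The S-FBG design itself is not described in this abstract, but the Fourier-series idea underlying the filter can be illustrated numerically as follows; the target burst shape and the number of harmonics are hypothetical, and an FSD-based filter would realise such coefficients as the relative strengths of its spectral lines.

# Decompose one period of a target waveform into complex Fourier
# coefficients and reconstruct it from a finite number of harmonics.
import numpy as np

def fourier_series_coeffs(target, n_harmonics):
    """Complex Fourier coefficients c_-N..c_N of one period of `target`."""
    N = len(target)
    t = np.arange(N) / N
    return {n: np.mean(target * np.exp(-2j * np.pi * n * t))
            for n in range(-n_harmonics, n_harmonics + 1)}

def reconstruct(coeffs, N):
    t = np.arange(N) / N
    return sum(c * np.exp(2j * np.pi * n * t) for n, c in coeffs.items()).real

# Hypothetical target: a flat-top pulse occupying 20% of the period.
target = np.where(np.arange(256) < 51, 1.0, 0.0)
approx = reconstruct(fourier_series_coeffs(target, 15), 256)
print(np.abs(target - approx).max())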
Abstract:
Purpose: To evaluate distance and near image quality after hybrid bi-aspheric multifocal central presbyLASIK treatments. Design: Consecutive case series. Methods: Sixty-four eyes of 32 patients consecutively treated with central presbyLASIK were assessed. The mean age of the patients was 51 ± 3 years, with a mean spherical equivalent refraction of -1.08 ± 2.62 diopters (D) and mean astigmatism of 0.52 ± 0.42 D. Monocular corrected distance visual acuity (CDVA), corrected near visual acuity (CNVA), and distance corrected near visual acuity (DCNVA) of nondominant eyes; binocular uncorrected distance visual acuity (UDVA); uncorrected intermediate visual acuity (UIVA); distance corrected intermediate visual acuity (DCIVA); and uncorrected near visual acuity (UNVA) were assessed pre- and postoperatively. Subjective quality of vision and near vision were assessed using the 10-item Rasch-scaled Quality of Vision and the Near Activity Visual Questionnaires, respectively. Results: At 1 year postoperatively, 93% of patients achieved 20/20 or better binocular UDVA; 90% and 97% of patients had J2 or better UNVA and UIVA, respectively; 7% lost 2 Snellen lines of CDVA; the Strehl ratio reduced by approximately 4% ± 14%. Defocus curves revealed a loss of half a Snellen line at best focus, with no change for intermediate vergence (-1.25 D) and a mean gain of 2 lines for near vergence (-3 D). Conclusions: Presbyopic treatment using a hybrid bi-aspheric micro-monovision ablation profile is safe and efficacious. The postoperative outcomes indicate improvements in binocular vision at far, intermediate, and near distances with improved contrast sensitivity. A 19% retreatment rate should be considered to increase satisfaction levels, along with a 3% reversal rate.
Abstract:
A study of Napier grass leaf (NGL), stem (NGS), and combined leaf and stem (NGT) was carried out. Proximate, ultimate and structural analyses were performed, and the functional groups and crystalline components in the biomass were examined. The pyrolysis study was conducted in a thermogravimetric analyzer under a nitrogen flow of 20 mL/min at a constant heating rate of 10 K/min. The results reveal that Napier grass biomass has high volatile matter, a high heating value and high carbon content, together with low ash, nitrogen and sulfur contents. Structural analysis shows that the biomass has considerable cellulose and lignin contents, which make it a good candidate for good-quality bio-oil production. From the pyrolysis study, degradation of extractives, hemicellulose, cellulose and lignin occurred at temperatures around 478, 543, 600 and above 600 K, respectively. The kinetics of the process were evaluated using a reaction order model. New equations describing the process were developed using the kinetic parameters, and the results were compared with the experimental data. The models fit the experimental data well. The proposed models may be a reliable means of describing the thermal decomposition of lignocellulosic biomass under a nitrogen atmosphere at a constant heating rate.
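As an illustration of an nth-order reaction model under a constant heating rate (with placeholder kinetic parameters rather than the values fitted in this study), the conversion curve can be integrated numerically as follows.

# nth-order model at constant heating rate beta:
#   d(alpha)/dT = (A/beta) * exp(-E/(R*T)) * (1 - alpha)**n
import math

R = 8.314  # J/(mol K)

def conversion_curve(A, E, n, beta, T_start=400.0, T_end=900.0, dT=0.5):
    """Integrate conversion alpha(T) with a simple Euler step."""
    alpha, T, curve = 0.0, T_start, []
    while T < T_end and alpha < 0.999:
        dalpha = (A / beta) * math.exp(-E / (R * T)) * (1.0 - alpha) ** n * dT
        alpha = min(alpha + dalpha, 0.999)
        curve.append((T, alpha))
        T += dT
    return curve

# beta = 10 K/min = 1/6 K/s; A, E and n are placeholder values.
curve = conversion_curve(A=1e10, E=1.5e5, n=1.2, beta=10.0 / 60.0)
print(curve[-1])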
Abstract:
The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
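In the same spirit, though not on the paper's actual benchmark problems, a small comparison of off-the-shelf minimizers on a toy neural-network-style loss might look like this.

# Compare a few scipy optimisers (including conjugate gradient and a
# quasi-Newton Hessian approximation) on a tiny tanh regression loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))

def loss(w):
    return np.mean((np.tanh(X @ w) - y) ** 2)

for method in ("CG", "BFGS", "Nelder-Mead"):
    res = minimize(loss, x0=np.zeros(3), method=method)
    print(f"{method:12s} loss={res.fun:.2e}  evals={res.nfev}")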
Abstract:
With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored, before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that the irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal, to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth, but introducing a time delay.
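A conceptual sketch of the level-crossing front end described above, with hypothetical level placement and a sample-count representation of the intervals (the thesis's actual circuit parameters are not given here).

# Detect the sample instants at which the input crosses fixed amplitude
# levels and record the time intervals between successive crossings.
import numpy as np

def level_crossing_intervals(signal, levels):
    """Return (level, interval_in_samples) pairs for successive crossings."""
    events = []
    for lv in levels:
        above = signal >= lv
        idx = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
        events.extend((i, lv) for i in idx)
    events.sort()
    out, prev = [], 0
    for i, lv in events:
        out.append((lv, i - prev))   # interval since the previous crossing
        prev = i
    return out

fs = 8000
t = np.arange(0, 0.01, 1 / fs)
sig = np.sin(2 * np.pi * 300 * t)
print(level_crossing_intervals(sig, levels=[-0.5, 0.0, 0.5])[:5])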