12 results for NON-IDEAL POWER SOURCES
in Aston University Research Archive
Abstract:
This paper focuses on the move from buyer dominance toward interdependence between buyers and suppliers in a distribution channel. The paper introduces a case study collected through in-depth interviews and participative observation. It examines the relationships between a timber supplier and its customers in the builders' merchants sector. We stress the relevance of considering actions intended to change the power balance, rather than focusing only on trust. The power balance in a dyadic relationship is dynamic, and power positions need to be constantly re-evaluated. An important power resource is information asymmetry, manifested in the supplier's knowledge of products, of regional and local demand, and of how the products are used. For practitioners, we highlight the possibility of exerting a non-coercive power resource, such as information asymmetry, to increase relative power. Furthermore, being open about the power position between a buyer and a seller can foster more efficient collaboration.
Abstract:
John Bowlby's use of evolutionary theory as a cornerstone of his attachment theory was innovative in its day and remains useful. Del Giudice's target article extends Belsky et al.'s and Chisholm's efforts to integrate attachment theory with more current thinking about evolution, ecology, and neuroscience. His analysis would be strengthened by (1) using computer simulation to clarify the predicted effects of early environmental stress, (2) incorporating information about non-stress-related sources of individual differences, (3) considering the possibility of adaptive behavior without specific evolutionary adaptations, and (4) considering whether the attachment construct is critical to his analysis.
Abstract:
A study was made of the effect of blending practice upon selected physical properties of crude oils, and of various base oils and petroleum products, using a range of binary mixtures. The crudes comprised light, medium and heavy Kuwait crude oils. The properties included kinematic viscosity, pour point, boiling point and Reid vapour pressure. The literature related to the prediction of these properties, and the changes reported to occur on blending, was critically reviewed as a preliminary to the study. The kinematic viscosity of petroleum oils in general exhibited non-ideal behaviour upon blending. A mechanism was proposed for this behaviour which took into account the effect of asphaltene content. A correlation was developed, as a modification of Grunberg's equation, to predict the viscosities of binary mixtures of petroleum oils. A correlation was also developed to predict the viscosities of ternary mixtures. This correlation showed better agreement with experimental data (< 6% deviation for crude oils and 2.0% for base oils) than currently-used methods, i.e. the ASTM and Refutas methods. An investigation was made of the effect of temperature on the viscosities of crude oils and petroleum products at atmospheric pressure. The effect of pressure on the viscosity of crude oil was also studied. A correlation was developed to predict the viscosity at high pressures (up to 8000 psi), which gave significantly better agreement with the experimental data than the current method due to Kouzel (5.2% and 6.0% deviation for the binary and ternary mixtures respectively). Eyring's theory of viscous flow was critically investigated, and a modification was proposed which extends its application to petroleum oils. The effect of blending on the pour points of selected petroleum oils was studied together with the effect of wax formation and asphaltene content. Depression of the pour point was always obtained with crude oil binary mixtures. A mechanism was proposed to explain the pour point behaviour of the different binary mixtures. The effects of blending on the boiling point ranges and Reid vapour pressures of binary mixtures of petroleum oils were investigated. The boiling point range exhibited ideal behaviour, but the Reid vapour pressure showed negative deviations from ideality in all cases. Molecular weights of these mixtures were ideal, but the densities and molar volumes were not. The stability of the various crude oil binary mixtures, in terms of viscosity, was studied over a temperature range of 1°C to 30°C for up to 12 weeks. Good stability was found in most cases.
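The abstract does not reproduce the modified correlation itself, but the base form it extends, the Grunberg (Grunberg-Nissan) mixing rule, is standard. Below is a minimal Python sketch of that base rule; the viscosities and the interaction parameter G12 are illustrative values, not numbers from the thesis.

```python
import math

def grunberg_nissan(nu1, nu2, x1, G12):
    """Kinematic viscosity of a binary blend via the Grunberg-Nissan
    mixing rule: ln(nu_mix) = x1*ln(nu1) + x2*ln(nu2) + x1*x2*G12.
    G12 is an empirical interaction parameter fitted to blend data;
    G12 = 0 recovers ideal (log-linear) mixing."""
    x2 = 1.0 - x1
    return math.exp(x1 * math.log(nu1) + x2 * math.log(nu2) + x1 * x2 * G12)

# Illustrative values only (not from the thesis): a light and a heavy cut.
nu_light, nu_heavy = 2.5, 120.0   # cSt at the same temperature
for x_light in (0.0, 0.25, 0.5, 0.75, 1.0):
    ideal = grunberg_nissan(nu_light, nu_heavy, x_light, 0.0)
    nonideal = grunberg_nissan(nu_light, nu_heavy, x_light, -0.8)
    print(f"x_light={x_light:.2f}  ideal={ideal:7.2f} cSt  "
          f"G12=-0.8: {nonideal:7.2f} cSt")
```

A negative G12, as in the loop above, depresses the blend viscosity below the log-linear ideal value, the kind of non-ideal behaviour the thesis attributes in part to asphaltene content.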
Abstract:
The theory of vapour-liquid equilibria is reviewed, as is the present status of prediction methods in this field. After discussion of the experimental methods available, development of a recirculating equilibrium still based on a previously successful design (the modified Raal, Code and Best still of O'Donnell and Jenkins) is described. This novel still is designed to work at pressures up to 35 bar and for the measurement of both isothermal and isobaric vapour-liquid equilibrium data. The equilibrium still was first commissioned by measuring the saturated vapour pressures of pure ethanol and cyclohexane in the temperature ranges 77-124°C and 80-142°C respectively. The data obtained were compared with available literature experimental values and with values derived from an extended form of the Antoine equation for which parameters were given in the literature. Commissioning continued with the study of the phase behaviour of mixtures of the two pure components, as such mixtures are strongly non-ideal and show azeotropic behaviour. No data existed in the literature above atmospheric pressure. Isothermal measurements were made at 83.29°C and 106.54°C, whilst isobaric measurements were made at pressures of 1 bar, 3 bar and 5 bar. The experimental vapour-liquid equilibrium data obtained were assessed by a standard literature method incorporating a thermodynamic consistency test that minimises the errors in all the measured variables. This assessment showed that reasonable x-P-T data-sets had been measured, from which y-values could be deduced, but the experimental y-values indicated the need for improvements in the design of the still. The final discussion sets out the improvements required and outlines how they might be attained.
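For reference, a minimal sketch of the basic Antoine equation that the vapour-pressure comparison relies on; the thesis used an extended form with literature parameters. The ethanol constants below are commonly quoted values (P in mmHg, T in °C) and should be treated as illustrative rather than as the parameters used in the work.

```python
def antoine_pressure(T_celsius, A, B, C):
    """Saturated vapour pressure from the basic Antoine equation:
    log10(P) = A - B / (T + C), with P in mmHg and T in deg C.
    Extended forms add further temperature terms to cover wider
    ranges, as needed for the 77-124 deg C span studied here."""
    return 10 ** (A - B / (T_celsius + C))

# Commonly quoted Antoine constants for ethanol (mmHg, deg C);
# illustrative values, not the extended-form parameters of the thesis.
A, B, C = 8.20417, 1642.89, 230.300
for T in (77.0, 78.3, 100.0):
    print(f"T = {T:6.1f} C  P_sat ~ {antoine_pressure(T, A, B, C):7.1f} mmHg")
```

The 78.3°C case reproduces ethanol's normal boiling point (about 760 mmHg), a quick sanity check of the constants.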
Abstract:
We have optically simulated the performance of various apertures used in coded aperture imaging. Coded pictures of extended and continuous-tone planar objects formed with the Annulus, Twin Annulus, Fresnel Zone Plate and Uniformly Redundant Array have been decoded using a noncoherent correlation process. We have compared the tomographic capabilities of the Twin Annulus with those of Uniformly Redundant Arrays based on quadratic residues and m-sequences. We discuss ways of reducing the 'd.c.' background of the various apertures used. The non-ideal system point spread function inherent in a noncoherent optical correlation process produces artifacts in the reconstruction. Artifacts are also introduced by unwanted cross-correlation terms from out-of-focus planes. We find that the URA based on m-sequences exhibits good spatial resolution and out-of-focus behaviour when imaging extended objects.
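Correlation decoding works because URA-type patterns have a near-delta cyclic autocorrelation. A one-dimensional quadratic-residue (Legendre) sequence, the building block of quadratic-residue URAs, demonstrates the property in a few lines; this is a generic illustration, not the exact two-dimensional arrays used in the study.

```python
import numpy as np

def legendre_sequence(p):
    """+-1 sequence of prime length p from quadratic residues:
    s[i] = +1 if i is a nonzero square mod p, else -1 (s[0] = -1 here).
    Such sequences underlie URAs built from quadratic residues."""
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1 if i in residues else -1 for i in range(p)])

p = 31                      # any prime p = 3 (mod 4); 31 keeps output short
s = legendre_sequence(p)

# Cyclic autocorrelation: a sharp peak at zero shift and a flat
# pedestal elsewhere is what lets a simple correlation with the
# decoding pattern recover the object with minimal artifacts.
acf = np.array([np.dot(s, np.roll(s, k)) for k in range(p)])
print("zero-shift peak:", acf[0])                      # = p
print("off-peak values:", set(int(v) for v in acf[1:]))  # = {-1}
```

The constant -1 pedestal off-peak is the 'd.c.' background the abstract refers to; the decoding step subtracts or balances it out.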
Abstract:
We introduce a continuum model describing data losses in a single node of a packet-switched network (like the Internet) that nevertheless preserves the discrete nature of the data loss process. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that such a model exhibits strong fluctuations in the loss rate at the critical point and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process. The continuum model allows for rather general incoming data packet distributions and can be naturally generalized to consider buffer-server idleness statistics.
Abstract:
Horizontal Subsurface Flow Treatment Wetlands (HSSF TWs) are used by Severn Trent Water as a low-cost tertiary wastewater treatment for rural locations. Experience has shown that clogging is a major operational problem that reduces HSSF TW lifetime. Clogging is caused by an accumulation of secondary wastewater solids from upstream processes and decomposing leaf litter, and occurs as a sludge layer where wastewater is loaded onto the surface of the bed at the inlet. Severn Trent systems receive relatively high hydraulic loading rates, which cause overland flow and reduce the ability to mineralise surface sludge accumulations. A novel apparatus and method, the Aston Permeameter, was created to measure hydraulic conductivity in situ. Accuracy is ±30%, which was considered adequate given that conductivity in clogged systems varies by several orders of magnitude. The Aston Permeameter was used to perform 20 separate tests on 13 different HSSF TWs in the UK and the US. The minimum conductivity measured was 0.03 m/d at Fenny Compton (compared with a clean conductivity of 5,000 m/d), caused by an accumulation of construction fines in one part of the bed. Most systems displayed a 2 to 3 order of magnitude variation in conductivity in each dimension. Statistically significant transverse variations in conductivity were found in 70% of the systems. Clogging at the inlet and outlet was generally highest where flow enters the influent distribution system and exits the effluent collection system, respectively. Surface conductivity was lower in systems with dense vegetation because plant canopies reduce surface evapotranspiration and decelerate sludge mineralisation. An equation was derived to describe how the water table profile is influenced by overland flow, spatial variations in conductivity, and clogging. The equation is calibrated using a single parameter, the Clog Factor (CF), which represents the equivalent loss of porosity that would reproduce the measured conductivity according to the Kozeny-Carman equation. The CF varies from 0 for ideal conditions to 1 for completely clogged conditions. The minimum CF was 0.54, for a system that had recently been refurbished; this represents the deviation from ideal conditions due to characteristics of non-ideal media such as particle size distribution and morphology. The maximum CF was 0.90, for a 15-year-old system that exhibited sludge accumulation and overland flow across the majority of the bed. A Finite Element Model of a 15 m long HSSF TW was used to indicate how hydraulics and hydrodynamics vary as CF increases. As CF increases from 0.55 to 0.65, the subsurface wetted area increases, which raises the mean hydraulic residence time from 0.16 days to 0.18 days. As CF increases from 0.65 to 0.90, the extent of overland flow grows from 1.8 m to 13.1 m, which reduces hydraulic efficiency from 37% to 12% and the mean residence time to 0.08 days.
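A minimal sketch of the Kozeny-Carman porosity dependence underlying the Clog Factor, under the assumed reading that CF scales the clean porosity as eps = eps_clean * (1 - CF); the clean porosity value is a typical figure for gravel media, not a number from the thesis.

```python
def kozeny_carman_ratio(eps, eps_clean):
    """Relative hydraulic conductivity K/K_clean from the Kozeny-Carman
    porosity dependence K ~ eps^3 / (1 - eps)^2, grain size held fixed."""
    kc = lambda e: e**3 / (1.0 - e)**2
    return kc(eps) / kc(eps_clean)

# Assumed reading of the calibration: CF reduces the effective porosity
# to eps_clean * (1 - CF), and the conductivity drop follows from
# Kozeny-Carman. eps_clean = 0.4 is a typical clean-gravel porosity,
# not a value quoted in the abstract.
eps_clean = 0.4
for cf in (0.0, 0.54, 0.65, 0.90):
    eps = eps_clean * (1.0 - cf)
    ratio = kozeny_carman_ratio(eps, eps_clean)
    print(f"CF={cf:.2f}  eps={eps:.3f}  K/K_clean={ratio:.2e}")
```

Even modest CF values collapse the conductivity by orders of magnitude, consistent with the 0.03 m/d versus 5,000 m/d contrast measured in the field.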
Abstract:
A multistage distillation column in which mass transfer and a reversible chemical reaction occur simultaneously has been investigated, to formulate a technique by which this process can be analysed or predicted. A transesterification reaction between ethyl alcohol and butyl acetate, catalysed by concentrated sulphuric acid, was selected for the investigation, and all components were analysed on a gas-liquid chromatograph. The transesterification reaction kinetics were studied in a batch reactor for catalyst concentrations of 0.1-1.0 weight percent and temperatures between 21.4 and 85.0°C. The reaction was found to be second order and, at a given temperature, dependent on the catalyst concentration. Vapour-liquid equilibrium data for six binary, four ternary and one quaternary system were measured at atmospheric pressure using a modified Cathala dynamic equilibrium still. The systems, with the exception of ethyl alcohol - butyl alcohol mixtures, were found to be non-ideal. Multicomponent vapour-liquid equilibrium compositions were predicted by a computer programme which utilised the Van Laar constants obtained from the binary data sets. Good agreement was obtained between the predicted and experimental quaternary equilibrium vapour compositions. Continuous transesterification experiments were carried out in a six-stage sieve-plate distillation column. The column was 3" in internal diameter and of unit construction in glass. The plates were 8" apart and had a free area of 7.7%. Both the liquid and vapour streams were analysed. The component conversion was dependent on the boil-up rate and the reflux ratio. Because of the reaction, the concentration of one of the lighter components increased below the feed plate; in the same region a highly developed foam formed due to the presence of the catalyst. The experimental results were analysed by solving a series of simultaneous enthalpy and mass-balance equations, and good agreement was obtained between the experimental and calculated results.
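The Van Laar model named above has a standard binary form, sketched below with illustrative constants (not the fitted values from the thesis); the multicomponent prediction combines such binary constants.

```python
import math

def van_laar(x1, A12, A21):
    """Binary Van Laar activity coefficients:
    ln g1 = A12 * (A21*x2 / (A12*x1 + A21*x2))**2
    ln g2 = A21 * (A12*x1 / (A12*x1 + A21*x2))**2
    The constants A12, A21 are fitted to binary VLE data."""
    x2 = 1.0 - x1
    d = A12 * x1 + A21 * x2
    g1 = math.exp(A12 * (A21 * x2 / d) ** 2)
    g2 = math.exp(A21 * (A12 * x1 / d) ** 2)
    return g1, g2

# Illustrative constants only. Modified Raoult's law then gives the
# vapour composition: y_i * P = x_i * gamma_i * P_sat_i.
A12, A21 = 1.6, 1.1
for x1 in (0.1, 0.5, 0.9):
    g1, g2 = van_laar(x1, A12, A21)
    print(f"x1={x1:.1f}  gamma1={g1:.3f}  gamma2={g2:.3f}")
```

Activity coefficients above 1, as here, mark the positive deviations from ideality reported for most of the measured systems.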
Abstract:
We suggest a model for data losses in a single node (memory buffer) of a packet-switched network (like the Internet) which reduces to one-dimensional discrete random walks with unusual boundary conditions. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that for a finite-capacity buffer at the critical point the loss rate exhibits strong fluctuations and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process.
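A toy simulation conveys the transition described here: the buffer occupancy performs a discrete random walk reflected at zero and truncated at the buffer capacity, and losses switch from negligible to finite as the arrival rate crosses the service rate. This is a generic finite-buffer caricature assuming Poisson arrivals and unit service; the paper's specific boundary conditions are not given in the abstract.

```python
import numpy as np

def loss_fraction(arrival_rate, capacity, steps=200_000, seed=1):
    """Toy finite-buffer queue: each time step, Poisson(arrival_rate)
    packets arrive and one packet is served; arrivals that overflow
    the buffer are lost. The occupancy is then a discrete random walk
    reflected at 0 and truncated at `capacity`. A caricature of the
    loss transition, not the authors' exact boundary-condition model."""
    rng = np.random.default_rng(seed)
    arrivals = rng.poisson(arrival_rate, steps)
    q = lost = 0
    for a in arrivals:
        q += a
        if q > capacity:          # overflow: excess packets are dropped
            lost += q - capacity
            q = capacity
        q = max(q - 1, 0)         # serve one packet per step
    return lost / arrivals.sum()

# Losses stay exponentially small below the critical rate (one packet
# per service time) and become finite above it.
for rate in (0.8, 0.95, 1.0, 1.05, 1.2):
    print(f"rate={rate:.2f}  loss fraction={loss_fraction(rate, 100):.4f}")
```

Near rate = 1 the loss fraction fluctuates strongly from run to run (vary the seed to see this), echoing the critical-point fluctuations the paper analyses.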
Abstract:
We report on recent progress in the generation of non-diffracting (Bessel) beams from semiconductor light sources, including both edge-emitting and surface-emitting semiconductor lasers as well as light-emitting diodes (LEDs). Bessel beams at watt-level powers with central lobe diameters of a few to tens of micrometers were achieved from compact and highly efficient lasers. The practicality of reducing the central lobe size of Bessel beams generated with high-power broad-stripe semiconductor lasers and LEDs to a level unachievable by traditional focusing has been demonstrated. We also discuss an approach to exceeding the limit of power density for the focusing of radiation with a high beam propagation parameter M². Finally, we consider the potential of semiconductor lasers for applications in optical trapping/tweezing and the prospects for replacing their gas and solid-state counterparts in a range of optical manipulation implementations, towards lab-on-chip configurations.
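For orientation, the central-lobe diameter of an ideal J0 Bessel beam follows from the radial wavenumber set by the beam-shaping optics (e.g. an axicon). The sketch below evaluates it for an assumed 980 nm diode wavelength and a few deflection angles; these numbers are illustrative, not the paper's.

```python
import math

def bessel_lobe_diameter(wavelength_um, deflection_deg):
    """Central-lobe diameter of an ideal J0 Bessel beam. The radial
    wavenumber is k_r = (2*pi/lambda)*sin(gamma); the lobe radius is
    the first zero of J0 (x ~ 2.4048) divided by k_r."""
    k_r = (2 * math.pi / wavelength_um) * math.sin(
        math.radians(deflection_deg))
    return 2 * 2.4048 / k_r

# Illustrative parameters: a 980 nm diode beam, assumed here.
for gamma in (1.0, 5.0, 10.0):
    d = bessel_lobe_diameter(0.98, gamma)
    print(f"deflection {gamma:4.1f} deg -> central lobe ~ {d:5.2f} um")
```

Steeper deflection angles shrink the lobe from tens of micrometers down to a few, the range the abstract quotes, without the aperture-limited spot size of conventional focusing.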
Abstract:
An important field of application of lasers is biomedical optics, where they offer great utility for diagnosis, therapy and surgery. Developing novel methods of laser-based biomedical diagnostics requires careful study of light propagation in biological tissues, to enhance our understanding of the optical measurements undertaken and to increase research and development capacity and the diagnostic reliability of optical technologies. Ultimately, fulfilling these requirements will increase the uptake of laser-based diagnostics and therapeutics in clinical applications. To address these challenges, informative biomarkers relevant to the biological and physiological function or disease state of the organism must be selected. These indicators result from the analysis of tissues and cells, such as blood. For non-invasive diagnostics, peripheral blood, cells and tissue can potentially provide comprehensive information on the condition of the human organism. A detailed study of the light scattering and absorption characteristics can quickly detect physiological and morphological changes in cells due to thermal, chemical or antibiotic treatments [1-5]. The selection of a laser source to study the structure of biological particles also benefits from the fact that gross pathological changes are not induced, and that diagnostics make effective use of the monochromatic, directional and coherent properties of laser radiation.
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address fast-growing mobile data traffic. It applies superposition coding at the transmitter, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users, while successive interference cancellation at the receivers removes intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search, whose heavy computational cost can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address the problem. The novel idea is to form user pairs around the users with the highest PF metrics, with pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for a scenario with 20 users), with negligible throughput loss.
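A short sketch of one plausible reading of the proposed pairing rule: rank users by the proportional-fairness metric, anchor pairs on the highest-metric users, and apply a pre-configured fixed power split. The pairing heuristic and power values below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def pf_pair_users(inst_rate, avg_thru, power_strong=0.2, power_weak=0.8):
    """Sketch of PF-scheduling-based NOMA user pairing: rank users by
    the PF metric (instantaneous rate / long-term average throughput),
    then pair each top-metric user with the unpaired user whose rate
    differs most, using a pre-configured fixed power split. A plausible
    reading of the abstract, not the paper's exact rule."""
    pf = inst_rate / avg_thru
    order = list(np.argsort(pf)[::-1])        # best PF metric first
    pairs = []
    while len(order) >= 2:
        anchor = order.pop(0)
        # Partner = remaining user with the largest rate gap, so SIC at
        # the strong user can separate the superposed signals.
        partner = max(order, key=lambda u: abs(inst_rate[u] - inst_rate[anchor]))
        order.remove(partner)
        strong, weak = sorted((anchor, partner), key=lambda u: -inst_rate[u])
        pairs.append((strong, weak, power_strong, power_weak))
    return pairs

rng = np.random.default_rng(0)
rates = rng.uniform(1.0, 10.0, 6)             # instantaneous achievable rates
avgs = rng.uniform(1.0, 5.0, 6)               # long-term average throughputs
for s, w, ps, pw in pf_pair_users(rates, avgs):
    print(f"pair: strong user {s} (power {ps}) + weak user {w} (power {pw})")
```

The point of such a greedy pass is the speed-up reported in the abstract: one ranking and one partner scan per pair, instead of evaluating every candidate pairing exhaustively.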