922 results for expense


Relevance: 10.00%

Publisher:

Abstract:

B-ISDN is a universal network that supports diverse mixes of services, applications and traffic. ATM has been accepted worldwide as the transport technique for future B-ISDN. Being a simple, packet-oriented transfer technique, ATM provides a flexible means of supporting a continuum of transport rates and is efficient because network resources can be shared statistically among multiple users. To fully exploit this potential statistical gain while supporting diverse service and traffic mixes, efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment because of high link speeds, wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures based mainly on preventive measures for a private ATM-based network are proposed and their performance evaluated. The traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at both call level and cell level, and are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides whether to accept or reject a connection request and allocates bandwidth to the new connection according to one of three schemes: peak bit rate, statistical rate or average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. Allocating an average bit rate to data traffic, at the expense of delay, clearly improves network bandwidth utilisation.
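As an illustration of the admission decision described above, here is a minimal Python sketch of a CAC test under the three allocation schemes. The connection descriptors, the Gaussian approximation used for the bufferless fluid-flow statistical rate and the loss target `eps` are assumptions for illustration, not the thesis's exact formulation.

```python
import math

def admit(new_conn, existing, link_capacity, scheme="statistical", eps=1e-9):
    """Toy connection admission control (CAC) decision.

    Each connection is a dict with 'peak', 'mean' and 'var' bit rates (bit/s).
    'scheme' selects how bandwidth is reserved for the aggregate:
      - 'peak':        sum of peak rates (no statistical gain)
      - 'statistical': Gaussian approximation to a bufferless fluid-flow
                       multiplexer: mean + alpha * std must fit the link,
                       capped at the peak sum (alpha set by the loss target)
      - 'average':     sum of mean rates (delay absorbed by buffering)
    """
    conns = existing + [new_conn]
    if scheme == "peak":
        required = sum(c["peak"] for c in conns)
    elif scheme == "statistical":
        mean = sum(c["mean"] for c in conns)
        std = math.sqrt(sum(c["var"] for c in conns))
        alpha = math.sqrt(-2.0 * math.log(eps))   # Gaussian tail factor for loss target eps
        required = min(mean + alpha * std, sum(c["peak"] for c in conns))
    else:  # 'average'
        required = sum(c["mean"] for c in conns)
    return required <= link_capacity

# Example: a 150 Mbit/s link with two bursty on/off sources already admitted.
# For an on/off source, variance = mean * (peak - mean).
src = {"peak": 20e6, "mean": 8e6, "var": 8e6 * (20e6 - 8e6)}
existing = [dict(src), dict(src)]
for scheme in ("peak", "statistical", "average"):
    print(scheme, admit(dict(src), existing, 150e6, scheme))
```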

Relevance: 10.00%

Publisher:

Abstract:

This thesis presents experimental and theoretical work on the use of dark optical solitons as data carriers in communications systems. The background chapters provide an introduction to nonlinear optics and to dark solitons, described as intensity dips in a bright background with an asymmetrical phase profile. The motivation for the work is explained, considering both the superior stability of dark solitons and the need for a soliton solution suited to the normal, rather than the anomalous (bright soliton), dispersion regime. The first chapters present two generation techniques: producing packets of dark solitons via bright pulse interaction, and generating continuous trains of dark pulses using a fibre laser. The latter were not dark solitons, but their extreme stability made them suitable for imposition of the required phase shift. The later chapters focus on the propagation and control of dark solitons. Their response to periodic loss and gain is shown to result in the exponential growth of spectral sidebands, which may be suppressed by reducing the periodicity of the loss/gain cycle or by periodic filtering. A general study of the response of dark solitons to spectral filtering is undertaken, showing dramatic differences in the behaviour of black and 99.9% grey solitons. The importance of this result is highlighted by simulations of propagation in noisy systems, where the timing jitter resulting from random noise is actually enhanced by filtering. The results of using sinusoidal phase modulation to control pulse position are presented, showing that this control comes at the expense of serious modulation of the bright background. It is concluded that in almost every case dark and bright solitons have very different properties, and that continuing to draw comparisons between them would be less productive than developing a deeper understanding of the interactions between the dark soliton and its bright background.
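For background (an addition here, not taken from the thesis), in one common normalisation the field envelope u in the normal-dispersion regime obeys the defocusing nonlinear Schrödinger equation, whose dark-soliton solutions on a background of amplitude η are parametrised by a blackness parameter B:

```latex
% Defocusing NLS (normal dispersion), one common normalisation:
% \xi is normalised distance, \tau normalised time, u the field envelope.
i\,\frac{\partial u}{\partial \xi} \;=\; \frac{1}{2}\frac{\partial^{2} u}{\partial \tau^{2}} \;-\; |u|^{2} u ,
\qquad
u(\xi,\tau) \;=\; \eta\!\left[\,B\tanh\!\big(\eta B\,(\tau-\eta\sqrt{1-B^{2}}\,\xi)\big)\;-\;i\sqrt{1-B^{2}}\,\right] e^{\,i\eta^{2}\xi}.
```

Here |B| = 1 gives the black soliton, a full intensity dip with a π phase jump, while |B| < 1 gives grey solitons, whose dip depth is η²B² and whose phase profile across the dip is correspondingly shallower.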

Relevance: 10.00%

Publisher:

Abstract:

This paper investigates four reference fuels and three low-lignin Lolium Festuca grasses which were subjected to pyrolysis to produce pyrolysis oils. The oils were analysed to determine their quality and stability, enabling the identification of feedstock traits which affect oil stability. Two washed feedstocks were also subjected to pyrolysis to investigate whether washing can enhance pyrolysis oil quality. It was found that mineral matter had the dominant effect on pyrolysis, compared with lignin content, in terms of the yields of organics, char and gases. However, the higher molecular weight compounds present in the pyrolysis oil are due to lignin-derived compounds, as determined by the results of gel permeation chromatography (GPC) and liquid GC/MS. The yield of the light organic fraction also increased, while its water content decreased, as the metal content increased at the expense of the lignin content. It was found that the fresh and aged oils had different compound intensities/concentrations, owing to the large number of reactions that occur as the oil ages from day to day. These findings agree with previous reports which suggest that a large amount of re-polymerisation occurs as levoglucosan yields increase during the aging process, while hydroxyacetaldehyde decreases. In summary, the paper reports a window for producing a more stable pyrolysis oil through the use of energy crops, and also shows that washing of biomass can improve oil quality and stability for high-ash feedstocks, but less so for the energy crops.

Relevance: 10.00%

Publisher:

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantages of RAMnets are their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function; hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as associative memories, Kanerva's Sparse Distributed Memory, radial basis functions, general regression networks and Bayesian classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The frequency-weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in frequency-weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used maximum likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance, which gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification, based on memorisation of random features, can be a powerful alternative to slower, cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases when they do so are relatively easy to determine with the analytical tools we provide.
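To make the model concrete, the following is a minimal Python sketch of a binary n-tuple classifier of the kind benchmarked here; the tuple size, number of tuples and toy data are illustrative assumptions, not the StatLog settings.

```python
import random

class BinaryRAMnet:
    """Minimal sketch of a binary n-tuple (RAMnet) classifier.

    Each of n_tuples random tuples of n bit positions addresses one RAM per
    class; training sets the addressed location to 1, and classification
    counts how many tuples have already seen the test pattern's address.
    """

    def __init__(self, input_bits, n=8, n_tuples=64, n_classes=2, seed=0):
        rng = random.Random(seed)
        self.tuples = [rng.sample(range(input_bits), n) for _ in range(n_tuples)]
        # One RAM (here, a set of seen addresses) per tuple per class.
        self.rams = [[set() for _ in range(n_tuples)] for _ in range(n_classes)]

    def _address(self, x, positions):
        return tuple(x[p] for p in positions)

    def train(self, x, label):
        for t, positions in enumerate(self.tuples):
            self.rams[label][t].add(self._address(x, positions))

    def score(self, x):
        # Score per class: number of tuples whose addressed RAM bit is set.
        return [sum(self._address(x, pos) in ram[t]
                    for t, pos in enumerate(self.tuples))
                for ram in self.rams]

    def classify(self, x):
        scores = self.score(x)
        return scores.index(max(scores))

# Toy usage on two random 64-bit prototype patterns.
net = BinaryRAMnet(input_bits=64, n=4, n_tuples=32)
rng = random.Random(1)
proto = [[rng.randint(0, 1) for _ in range(64)] for _ in range(2)]
for label, p in enumerate(proto):
    net.train(p, label)
print(net.classify(proto[0]), net.classify(proto[1]))
```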

Relevance: 10.00%

Publisher:

Abstract:

The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscattered microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snapshots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes as the prediction the maximum a posteriori (MAP) wind field retrieved from the posterior distribution. For the third technique, Markov chain Monte Carlo (MCMC) methods were employed to estimate the mass associated with significant modes of the posterior distribution and to make predictions based on the mode with the greatest associated mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem; the general methods proved unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73.
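As a sketch of the mixture-density-network idea (the exact architecture and parametrisation used in the thesis are not reproduced), the conditional density of a wind vector can be evaluated from raw network outputs as a mixture of Gaussians; the layout of the output vector below is an assumption for illustration.

```python
import numpy as np

def mdn_density(params, y, n_components=2):
    """Conditional density p(y | x) from mixture density network outputs.

    'params' is the raw network output for one input x, laid out as
    [logits, means, log_sigmas] for a mixture of isotropic 2-D Gaussians over
    the wind vector y = (u, v). Two components reflect the predominantly
    bi-modal direction ambiguity; the layout itself is an assumption.
    """
    d = 2                                   # wind vector dimension (u, v)
    logits = params[:n_components]
    means = params[n_components:n_components + n_components * d].reshape(n_components, d)
    sigmas = np.exp(params[n_components + n_components * d:])  # one sigma per component

    pis = np.exp(logits - logits.max())
    pis /= pis.sum()                        # softmax mixing coefficients

    diff2 = ((y - means) ** 2).sum(axis=1)  # squared distance to each mean
    norm = (2 * np.pi * sigmas ** 2) ** (d / 2)
    return float((pis * np.exp(-0.5 * diff2 / sigmas ** 2) / norm).sum())

# Toy usage: two modes of roughly opposite direction, as in the 180-degree ambiguity.
raw = np.array([0.1, -0.1,  5.0, 2.0, -5.0, -2.0,  np.log(1.5), np.log(1.5)])
print(mdn_density(raw, np.array([5.0, 2.0])))
```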

Relevance: 10.00%

Publisher:

Abstract:

The problems associated with x-ray-transparent denture base materials are defined and conventional approaches to their solution assessed. Consideration of elemental absorption parameters leads to the postulation that atoms such as zinc and bromine may be effective radiopacifiers over at least part of the clinical x-ray spectrum; these elements had hitherto been considered too light to be effective. Investigation of copolymers of methyl methacrylate and p-bromostyrene revealed no deleterious effects arising from the aromatically brominated monomer (aliphatic bromination caused UV destabilisation). For effective x-ray absorption a higher level of bromination would be necessary, but the expense of suitable compounds made further study unjustifiable. Incorporation of zinc atoms into the polymer was accomplished by copolymerisation of zinc acrylate with methyl methacrylate in solution. At high zinc levels this produced a powder copolymer convenient for addition to dental polymers in the dough moulding process. The resulting mouldings showed increasing brittleness at high loadings of copolymer. Fracture was shown to pass through the powder particles rather than around them, indicating that the source of weakness lay in the internal structure of the copolymer. The copolymer was expected to be cross-linked through divalent zinc ions, and its insolubility and infusibility supported this. Cleavage of the ionic cross-links with formic acid produced a zinc-free linear copolymer of high molecular weight. Addition of low concentrations of acrylic acid to the dough moulding monomer appeared to 'labilise' the cross-links, producing a more homogeneous moulding with adequate wet strength. Toxicologically the zinc-containing materials are satisfactory, and though zinc is extracted at a measurable rate in an aqueous system, the rate is very small and should be acceptable over the life of a denture. In other respects the composite is quite satisfactory, and though a marketable product is not claimed, the system is considered worthy of further study.

Relevance: 10.00%

Publisher:

Abstract:

The effect of cancer cachexia on host metabolism has been studied in mice transplanted with either the MAC16 adenocarcinoma, which induces profound loss of host body weight and depletion of lipid stores, or the MAC13 adenocarcinoma, which is of the same histological type but grows without affecting host body weight. Oxidation of D-[U-14C]glucose was elevated in both tumour-bearing states, irrespective of cachexia, when compared with non-tumour-bearing controls. Both the MAC16 and MAC13 tumours in vivo utilised glucose at the expense of the brain, where its use was partially replaced by 3-hydroxybutyrate, a ketone body. Oxidation of both [U-14C]palmitic acid and [1-14C]triolein was significantly increased in MAC16 tumour-bearing animals and decreased in MAC13 tumour-bearing animals when compared with non-tumour-bearing controls, suggesting that in cachectic tumour-bearing animals the mobilisation of body lipids is accompanied by increased utilisation by the host. Weight loss in MAC16 tumour-bearing animals is associated with the production of a lipolytic factor. Injection of this partially purified lipolytic factor induced weight loss in recipient animals, which could be maintained over time in tumour-bearing animals; this suggests that the tumour acts as a sink for the free fatty acids liberated by the mobilisation of adipose stores. Lipids are important as an energy source in cachectic animals because of their high calorific value and because glucose is diverted away from host tissues to support tumour growth. Their importance is further demonstrated by the evidence of a MAC16 tumour-associated lipolytic factor. This lipolytic factor is key to understanding the alterations in host metabolism that occur in tumour-induced cachexia, and may provide future alternatives for the reversal of cachexia and the treatment of cancer itself.

Relevance: 10.00%

Publisher:

Abstract:

Aerial photography was used to determine land use in a test area of the Nigerian savanna in 1950 and 1972. Changes in land use were determined and correlated with accessibility, appropriate low-technology methods being used so that the investigation could be extended to other areas without great expense. A test area of 750 sq km was chosen, located in Kaduna State, Nigeria. The geography of the area is summarised, together with the local knowledge that is essential for accurate photo-interpretation. A land use classification was devised and tested for use with medium-scale aerial photography of the savanna. The two sets of aerial photography, at 1:25 000 scale, were sampled using systematic dot grids. A dot density of 8.5 dots per sq km was calculated to give an acceptable estimate of land use. Problems of interpretation included gradation between categories, sample position uncertainty and personal bias. The results showed that in 22 years the amount of cultivated land in the test area had doubled, while there had been a corresponding decrease in the amount of uncultivated land, particularly woodland. The intensity of land use had generally increased. The distribution of land use changes was analysed and correlated with accessibility; highly significant correlations were found for 1972 which had not existed in 1950. Changes in land use could also be correlated with accessibility. It was concluded that in the 22-year test period there had been intensification of land use, movement of human activity towards the main road, and a decrease in natural vegetation, particularly close to the road. The classification of land use and the dot grid method of survey were shown to be applicable to a savanna test area.
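A minimal sketch of the dot-grid estimate follows, assuming independent dots for the standard error (a simplification of systematic sampling); the dot counts in the example are hypothetical.

```python
import math

def dot_grid_estimate(category_dots, total_dots, area_km2):
    """Estimate the area of a land-use category from a systematic dot grid.

    The proportion of dots falling in the category estimates its areal share;
    the standard error below treats the dots as independent (a simplifying
    binomial assumption that a systematic grid only approximates).
    """
    p = category_dots / total_dots
    se = math.sqrt(p * (1 - p) / total_dots)
    return p * area_km2, se * area_km2

# Hypothetical example: 750 sq km at 8.5 dots per sq km, about 6375 dots in total.
area, se = dot_grid_estimate(category_dots=2550, total_dots=6375, area_km2=750.0)
print(f"cultivated land ~ {area:.0f} +/- {se:.0f} sq km")
```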

Relevance: 10.00%

Publisher:

Abstract:

A hot filtration unit downstream of a 1 kg/h fluidised bed fast pyrolysis reactor was designed and built. The filter unit operates at 450 °C and consists of a single exchangeable filter candle with a reverse-pulse cleaning system. Hot filtration experiments of up to 7 hours were performed with beech wood as feedstock. It was possible to produce fast pyrolysis oils with a solids content below 0.01 wt%. The additional residence time of the pyrolysis vapours and secondary vapour cracking on the filter cake caused an increase in non-condensable gases at the expense of organic liquid yield. The oils produced with hot filtration showed superior viscosity behaviour compared with standard pyrolysis oils. The oils were analysed by rotational viscometry and gel permeation chromatography before and after accelerated aging. During filtration the separated particulates accumulate on the candle surface and build up the filter cake. The filter cake leads to an increase in pressure drop between the raw-gas and clean-gas sides of the filter candle. At a certain pressure drop the filter cake has to be removed by reverse-pulse cleaning to restore the pressure drop. The experiments showed that successful pressure drop recovery was possible during the initial filtration cycles; thereafter, further cycles showed only minor pressure drop recovery and therefore a steady increase in differential pressure. Filtration with a pre-coated candle, forming an additional layer between the filter candle and the cake, resulted in complete removal of the dust cake.
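The abstract does not give the pressure-drop relation explicitly; for orientation, the standard incompressible cake-filtration model (a background assumption, not a result of this work) writes the differential pressure across medium and cake as

```latex
% Gas viscosity \mu, face velocity u_g, filter-medium resistance R_m,
% specific cake resistance \alpha, areal cake mass w (deposited mass per unit area):
\Delta P \;=\; \mu\, u_g \,\big( R_m + \alpha\, w \big).
```

so ΔP rises as the cake mass w accumulates, and reverse-pulse cleaning (or pre-coating the candle) acts by removing or restructuring the α w contribution.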

Relevance: 10.00%

Publisher:

Abstract:

This thesis presents research within empirical financial economics, with a focus on liquidity and portfolio optimisation in the stock market. The discussion of liquidity is focused on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors; the discussion of portfolio optimisation centres on full-scale optimisation (FSO). Furthermore, a framework for treating the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed to apply TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity co-variances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, compared with static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and a moving-window PCA as the systematic liquidity derivation technique. Systematic factors in this setting also have a strong ability to explain cross-sectional liquidity variation. Portfolio optimisation in the FSO framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by application of differential evolution.
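The following Python sketch illustrates the moving-window principal component idea for a systematic liquidity factor; the window length, standardisation and toy data are assumptions for illustration, not the thesis's exact specification.

```python
import numpy as np

def moving_window_systematic_liquidity(liq, window=60, n_factors=1):
    """Leading principal component(s) of stock liquidity over a moving window.

    'liq' is a (T x N) array of a liquidity measure for N stocks over T days.
    For each day t >= window, the PCA is re-estimated on the trailing window,
    allowing liquidity co-variances to vary over time.
    """
    T, N = liq.shape
    factors = np.full((T, n_factors), np.nan)
    for t in range(window, T):
        X = liq[t - window:t]                      # trailing estimation window
        mu, sd = X.mean(axis=0), X.std(axis=0)
        Z = (X - mu) / sd                          # standardise each stock
        eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
        loadings = eigvec[:, ::-1][:, :n_factors]  # leading components (descending order)
        factors[t] = ((liq[t] - mu) / sd) @ loadings
    return factors

# Toy usage: 250 days of a spread-like liquidity measure for 20 stocks
# driven by one common component plus idiosyncratic noise.
rng = np.random.default_rng(0)
common = rng.normal(size=(250, 1))
liq = 0.8 * common + 0.5 * rng.normal(size=(250, 20))
f = moving_window_systematic_liquidity(liq, window=60)
# The sign of a principal component is arbitrary, so the correlation may be negative.
print(np.corrcoef(f[60:, 0], common[60:, 0])[0, 1])
```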

Relevance: 10.00%

Publisher:

Abstract:

T cell epitopes lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide- or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations on them. The system allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD, and has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
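As an illustration of the kind of configuration file involved, the sketch below writes a minimal NAMD configuration for an MHC:peptide complex. The keywords are standard NAMD options, but the file names, parameter set and values are placeholders; this is not the output of MHCsim itself.

```python
def write_namd_config(psf="mhc_peptide.psf", pdb="mhc_peptide.pdb",
                      params="par_all27_prot_na.prm", steps=50000,
                      out="mhc_peptide_md"):
    """Write a minimal NAMD configuration for an MHC:peptide complex.

    The keywords are standard NAMD options; the file names, parameter set and
    run length are placeholders, not the files MHCsim itself generates.
    """
    lines = [
        f"structure        {psf}",      # PSF topology of the complex
        f"coordinates      {pdb}",      # starting coordinates
        "paraTypeCharmm   on",          # CHARMM-format parameter file
        f"parameters       {params}",
        "temperature      310",         # initial temperature (K)
        "timestep         2.0",         # fs (assumes constrained bonds elsewhere)
        "cutoff           12.0",        # non-bonded cutoff (Angstrom)
        f"outputName       {out}",
        f"run              {steps}",    # number of MD steps
    ]
    with open(f"{out}.conf", "w") as f:
        f.write("\n".join(lines) + "\n")

write_namd_config()
```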

Relevance: 10.00%

Publisher:

Abstract:

This thesis examined solar thermal collectors for use in alternative hybrid solar-biomass power plant applications in Gujarat, India. Following a preliminary review, the cost-effective selection and design of the solar thermal field were identified as critical factors underlying the success of hybrid plants. Consequently, existing solar thermal technologies were reviewed and ranked for use in India by means of a multi-criteria decision-making method, the Analytical Hierarchy Process (AHP). Informed by the outcome of the AHP, the thesis went on to pursue the Linear Fresnel Reflector (LFR), the design of which was optimised with the help of ray-tracing. To further enhance collector performance, LFR concepts incorporating novel mirror spacing and drive mechanisms were evaluated. Subsequently, a new variant, termed the Elevation Linear Fresnel Reflector (ELFR), was designed, constructed and tested at Aston University, UK, thereby allowing theoretical models of the performance of a solar thermal field to be verified. Based on the resulting characteristics of the LFR, and on data gathered for the other hybrid system components, models of hybrid LFR- and ELFR-biomass power plants were developed and analysed in TRNSYS®. The techno-economic and environmental consequences of varying the size of the solar field in relation to the total plant capacity were modelled for a series of case studies evaluating different applications: tri-generation (electricity, ice and heat), electricity-only generation, and process heat. The case studies also encompassed varying site locations, capacities, operational conditions and financial situations. In the case of a hybrid tri-generation plant in Gujarat, it was recommended to use an LFR solar thermal field of 14,000 m2 aperture with a 3 tonne biomass boiler, generating 815 MWh per annum of electricity for nearby villages and 12,450 tonnes of ice per annum for local fisheries and food industries. However, at the expense of a 0.3 ¢/kWh increase in levelised energy costs, the ELFR increased the savings of biomass (100 t/a) and land (9 ha/a). For solar thermal applications in areas with high land costs, the ELFR reduced levelised energy costs. It was determined that off-grid hybrid plants for tri-generation were the most feasible application in India. Although biomass-only plants were found to be more economically viable, it was concluded that hybrid systems will soon become cost-competitive and can considerably improve current energy security and biomass supply chain issues in India.
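For reference, the ranking step can be illustrated with a small AHP sketch: priority weights from the principal eigenvector of a reciprocal pairwise-comparison matrix, together with Saaty's consistency ratio. The criteria and judgements in the example are hypothetical, not those used in the thesis.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..5 (standard values).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio for an AHP pairwise matrix.

    'pairwise' is a reciprocal matrix of Saaty-scale judgements (a_ij says how
    strongly criterion i is preferred to criterion j).
    """
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigval, eigvec = np.linalg.eig(A)
    k = np.argmax(eigval.real)                  # principal (Perron) eigenpair
    w = np.abs(eigvec[:, k].real)
    w /= w.sum()                                # normalised priority weights
    ci = (eigval[k].real - n) / (n - 1)         # consistency index
    cr = ci / RANDOM_INDEX[n] if n > 2 else 0.0 # consistency ratio (< 0.10 is acceptable)
    return w, cr

# Hypothetical comparison of three collector criteria (cost, efficiency, maturity).
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```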

Relevance: 10.00%

Publisher:

Abstract:

This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architecture: pull architectures and push architectures. Little performance data exists for CDC architectures in a real-time environment, yet such data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesise the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition of capture latency, and of how to measure it, does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
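The capture latency definition lends itself to a very small sketch; the timing hooks below are placeholders for the commit point and the downstream capture point instrumented in the extended TPC-C harness, which is not reproduced here.

```python
import time

def capture_latency(change_applied_at, change_captured_at):
    """Capture latency: the time from a change being committed to the OLTP
    database to the CDC mechanism capturing it (seconds). The definition
    follows the abstract; the timestamps would come from benchmark
    instrumentation rather than the sleeps used here for illustration."""
    return change_captured_at - change_applied_at

# Illustrative measurement around a hypothetical commit and capture hook.
applied = time.monotonic()        # taken when the OLTP transaction commits
time.sleep(0.005)                 # stand-in for the CDC mechanism detecting the change
captured = time.monotonic()       # taken when the change appears downstream
print(f"capture latency: {capture_latency(applied, captured) * 1e3:.3f} ms")
```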

Relevance: 10.00%

Publisher:

Abstract:

It is desirable that energy performance improvements are not realised at the expense of other network performance parameters. This paper investigates the trade-off between energy efficiency, spectral efficiency and user QoS performance for a multi-cell, multi-user radio access network. Specifically, the energy consumption ratio (ECR) and the spectral efficiency of several common frequency-domain packet schedulers in a cellular E-UTRAN downlink are compared for both the SISO transmission mode and the 2x2 Alamouti Space Frequency Block Code (SFBC) MIMO transmission mode. It is well known that the 2x2 SFBC MIMO transmission mode is more spectrally efficient than the SISO transmission mode; however, the relationship between energy efficiency and spectral efficiency has been less clear. It is shown that, for the E-UTRAN downlink with fixed transmission power, spectral efficiency improvement results in energy efficiency improvement. The effect of SFBC MIMO versus SISO on user QoS performance is also studied. © 2011 IEEE.
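For reference (standard definitions added here, not quoted from the paper), the energy consumption ratio is the energy spent per delivered bit and spectral efficiency is throughput per unit bandwidth:

```latex
% Transmit power P held fixed over an interval T delivering M bits;
% throughput R over system bandwidth W.
\mathrm{ECR} \;=\; \frac{E}{M} \;=\; \frac{P\,T}{M}\ \ \left[\mathrm{J/bit}\right],
\qquad
\eta \;=\; \frac{R}{W}\ \ \left[\mathrm{bit/s/Hz}\right].
```

With P and W fixed, M = R T gives ECR = P/R, so a scheduler or transmission mode that raises throughput (and hence spectral efficiency) also lowers the energy per bit, consistent with the finding reported above.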

Relevance: 10.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to examine the quality of evidence collected during interviews. Current UK national guidance on the interviewing of victims and witnesses recommends a phased approach, allowing the interviewee to deliver their free report before any questioning takes place, and stipulating that during this free report the interviewee should not be interrupted. Interviewers, therefore, often find it necessary during questioning to reactivate parts of the interviewee's free report for further elaboration. Design/methodology/approach: The first section of this paper draws on a collection of police interviews with women reporting rape and discusses one method by which this reactivation is achieved, the indirect quotation of the interviewee by the interviewer, exploring the potential implications for the quality of evidence collected during this type of interview. The second section draws on the same data set and concerns itself with a particular method by which information provided by an interviewee has its meaning "fixed" by the interviewer. Findings: It is found that "formulating" is a recurrent practice arising from the need to clarify elements of the account for the benefit of what is termed the "overhearing audience": in this context, the police scribe, the CPS and, potentially, the Court. Since the means by which this "fixing" is achieved necessarily involves foregrounding elements of the account deemed particularly salient at the expense of other elements, which may be deleted entirely, formulations are rarely entirely neutral. Their production, therefore, has the potential to exert undue interviewer influence over the negotiated "final version" of interviewees' accounts. Originality/value: The paper highlights the fact that accurate re-presentations of interviewees' accounts are a crucial tool in ensuring the smooth progression of interviews, and that re-stated speech and formulation often have implications for the quality of evidence collected during significant witness interviews. © Emerald Group Publishing Limited.