22 results for General-method

in Aston University Research Archive


Relevance:

60.00%

Publisher:

Abstract:

To reduce global biodiversity loss, there is an urgent need to determine the most efficient allocation of conservation resources. Recently, there has been a growing trend for many governments to supplement public ownership and management of reserves with incentive programs for conservation on private land. This raises important questions, such as the extent to which private land conservation can improve conservation outcomes, and how it should be mixed with more traditional public land conservation. We address these questions, using a general framework for modelling environmental policies and a case study examining the conservation of endangered native grasslands to the west of Melbourne, Australia. Specifically, we examine three policies that involve i) spending all resources on creating public conservation areas; ii) spending all resources on an ongoing incentive program where private landholders are paid to manage vegetation on their property with 5-year contracts; and iii) splitting resources between these two approaches. The performance of each strategy is quantified with a vegetation condition change model that predicts future changes in grassland quality. Of the policies tested, no one policy was always best and policy performance depended on the objectives of those enacting the policy. Although policies to promote conservation on private land are proposed and implemented in many areas, they are rarely evaluated in terms of their ecological consequences. This work demonstrates a general method for evaluating environmental policies and highlights the utility of a model which combines ecological and socioeconomic processes.
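Purely as an illustration of how such a policy comparison can be framed computationally (all parameter values, per-parcel costs and the condition-change rule below are hypothetical, not those of the Melbourne grassland case study):

import numpy as np

def mean_condition(frac_public, budget=100.0, n_parcels=200, years=20, seed=1):
    # toy comparison of spending a budget on permanent public reserves versus
    # 5-year private incentive contracts; costs and dynamics are invented for illustration
    rng = np.random.default_rng(seed)
    condition = rng.uniform(0.3, 0.7, n_parcels)      # initial vegetation condition (0-1)
    cost_public, cost_private = 5.0, 1.0              # hypothetical per-parcel costs
    n_public = int(budget * frac_public / cost_public)
    n_private = int(budget * (1.0 - frac_public) / cost_private)
    managed = np.zeros(n_parcels, dtype=bool)
    managed[:n_public + n_private] = True             # parcels under some form of management
    for _ in range(years):
        condition += np.where(managed, 0.01, -0.02)   # managed parcels improve, others decline
        condition = condition.clip(0.0, 1.0)
    return condition.mean()

for frac in (1.0, 0.0, 0.5):                          # public only, private only, 50/50 split
    print(f"fraction to public reserves = {frac}: mean condition = {mean_condition(frac):.2f}")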

Relevance:

60.00%

Publisher:

Abstract:

This thesis is concerned with investigations of the effects of molecular encounters on nuclear magnetic resonance spin-lattice relaxation times, with particular reference to mesitylene in mixtures with cyclohexane and TMS. The purpose of the work was to establish the best theoretical description of T1 and to assess whether a recently identified mechanism (buffeting), which influences n.m.r. chemical shifts, also governs T1. A set of experimental conditions is presented that allows reliable measurements of T1 and the NOE for 1H and 13C using both C.W. and F.T. n.m.r. spectroscopy. Literature data for benzene, cyclohexane and chlorobenzene diluted by CCl4 and CS2 are used to show that the Hill theory affords the best estimation of their correlation times but appears to be mass dependent. Evaluation of the T1 of the mesitylene protons indicates that a combined Hill-Bloembergen-Purcell-Pound model gives an accurate estimation of T1; this was subsequently shown to be due to a cancellation of errors in the calculated intra- and intermolecular components. Three experimental methods for the separation of the intra- and intermolecular relaxation times are described. The relaxation times of the 13C proton satellite of neat benzene, 1,4-dioxane and mesitylene were measured. Theoretical analyses of the data allow the calculation of T1intra. Studies of intermolecular NOEs were found to afford a general method of separating observed T1 values into their intra- and intermolecular components. The aryl 1H and corresponding 13C T1 values and the NOE for the ring carbon of mesitylene in CCl4 and C6H12-TMS have been used in combination to determine T1intra and T1inter. The Hill and B.P.P. models are shown to predict similarly inaccurate values for T1inter. A buffeting contribution to T1inter is proposed which, when applied to the BPP model and to the Gutowsky-Woessner expression for T1inter, gives an inaccuracy of 12% and 6%, respectively, with respect to the experimentally based T1inter.
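Any such separation rests on the additivity of relaxation rates; a minimal hedged statement is given below, where the fraction f of the observed rate that is intermolecular in origin is an assumed, illustrative quantity (obtained, for example, from an intermolecular NOE measurement):

\begin{equation}
  \frac{1}{T_1^{\mathrm{obs}}}
  = \frac{1}{T_1^{\mathrm{intra}}} + \frac{1}{T_1^{\mathrm{inter}}},
  \qquad
  T_1^{\mathrm{inter}} = \frac{T_1^{\mathrm{obs}}}{f},
  \qquad
  T_1^{\mathrm{intra}} = \frac{T_1^{\mathrm{obs}}}{1-f}.
\end{equation}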

Relevance:

60.00%

Publisher:

Abstract:

Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
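As an illustration of case (ii), the sketch below fits a Lorentzian to simulated noisy Stokes-intensity samples taken at fixed frequencies; the frequency grid, linewidth and noise level are assumed illustrative values, not the paper's experimental settings.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f0, fwhm):
    # resonance curve model: peak amplitude, extremum position f0, full width at half maximum
    return amp / (1.0 + ((f - f0) / (fwhm / 2.0)) ** 2)

rng = np.random.default_rng(0)
freqs = np.linspace(10.6e9, 11.0e9, 49)                  # predetermined fixed frequencies (Hz)
truth = (1.0, 10.8e9, 35e6)                              # illustrative "true" spectrum parameters
stokes = lorentzian(freqs, *truth) + 0.02 * rng.standard_normal(freqs.size)

popt, pcov = curve_fit(lorentzian, freqs, stokes, p0=(0.8, 10.75e9, 50e6))
f0_hat, f0_sigma = popt[1], np.sqrt(pcov[1, 1])          # extremum position and its standard error
# the measurand (strain or temperature) then follows from the assumed known f0-measurand relation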

Relevance:

60.00%

Publisher:

Abstract:

Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
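For case (i), a simple non-parametric moment estimate of the extremum position can be sketched as below; the baseline handling is a crude illustrative choice, and the estimator coincides with the extremum only for a symmetric resonance sampled symmetrically about its peak.

import numpy as np

def extremum_by_moments(freqs, stokes):
    # treat the (baseline-subtracted) spectrum as a weight function and return its mean frequency
    w = np.clip(stokes - stokes.min(), 0.0, None)   # crude baseline removal (illustrative)
    return np.sum(freqs * w) / np.sum(w)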

Relevance:

60.00%

Publisher:

Abstract:

Using the integrable nonlinear Schrödinger equation (NLSE) as a channel model, we describe the application of nonlinear spectral management for the effective mitigation of all nonlinear distortions induced by the fiber Kerr effect. Our approach is a modification and substantial development of the so-called eigenvalue communication idea first presented in A. Hasegawa, T. Nyu, J. Lightwave Technol. 11, 395 (1993). The key feature of the nonlinear Fourier transform (inverse scattering transform) method is that, for the NLSE, any input signal can be decomposed into the so-called scattering data (nonlinear spectrum), which evolve in a trivial manner, similar to the evolution of Fourier components in linear equations. We consider here a practically important weakly nonlinear transmission regime and propose a general method for the effective encoding/modulation of the nonlinear spectrum: the machinery of our approach is based on the recursive Fourier-type integration of the input profile and thus can be considered for electronic or all-optical implementations. We also present a novel concept of nonlinear spectral pre-compensation, or, in other terms, an effective nonlinear spectral pre-equalization. The proposed general technique is then illustrated through particular analytical results available for the transmission of a segment of the orthogonal frequency division multiplexing (OFDM) formatted pattern, and through a WDM input based on Gaussian pulses. Finally, the robustness of the method against amplifier spontaneous emission is demonstrated, and the general numerical complexity of the nonlinear spectrum usage is discussed. © 2013 Optical Society of America.
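For reference, one common normalization of the channel model and the resulting trivial evolution of the continuous part of the nonlinear spectrum (the reflection coefficient r of the associated Zakharov-Shabat problem) can be written as below; the sign and scaling of the phase factor depend on the chosen convention, so this is an illustrative form rather than the paper's exact notation.

\begin{align}
  i\,\partial_z q + \tfrac{1}{2}\,\partial_t^2 q + |q|^2 q &= 0, \\
  r(\xi, z) &= r(\xi, 0)\, e^{\,2 i \xi^2 z}, \qquad \text{with the discrete eigenvalues } \xi_k \text{ constant in } z.
\end{align}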

Relevance:

60.00%

Publisher:

Abstract:

A theory of an optimal distribution of the gain of in-line amplifiers in dispersion-managed transmission systems is developed. As an example of the application of the general method, a design of the line with periodically imbalanced in-line amplification is proposed.

Relevance:

60.00%

Publisher:

Abstract:

We develop a theory of an optimal distribution of the gain of in-line amplifiers in dispersion-managed transmission systems. As an example of the application of the general method we propose a design of the line with periodically imbalanced in-line amplification.

Relevance:

60.00%

Publisher:

Abstract:

Background: A natural glycoprotein usually exists as a spectrum of glycosylated forms, where each protein molecule may be associated with an array of oligosaccharide structures. The overall range of glycoforms can have a variety of different biophysical and biochemical properties, although details of structure–function relationships are poorly understood because of the microheterogeneity of biological samples. Hence, there is clearly a need for synthetic methods that give access to natural and unnatural homogeneously glycosylated proteins. The synthesis of novel glycoproteins through the selective reaction of glycosyl iodoacetamides with the thiol groups of cysteine residues, placed by site-directed mutagenesis at desired glycosylation sites, has been developed. This provides a general method for the synthesis of homogeneously glycosylated proteins that carry saccharide side chains at natural or unnatural glycosylation sites. Here, we have shown that the approach can be applied to the glycoprotein hormone erythropoietin, an important therapeutic glycoprotein with three sites of N-glycosylation that are essential for in vivo biological activity. Results: Wild-type recombinant erythropoietin and three mutants in which glycosylation-site asparagine residues had been changed to cysteines (His10-WThEPO, His10-Asn24Cys, His10-Asn38Cys, His10-Asn83CyshEPO) were overexpressed and purified in yields of 13 mg l−1 from Escherichia coli. Chemical glycosylation with glycosyl-β-N-iodoacetamides could be monitored by electrospray MS. In both the wild-type and the mutant proteins, the potential side reaction of the other four cysteine residues (all involved in disulfide bonds) was not observed. The yield of glycosylation was generally about 50%, and purification of glycosylated protein from non-glycosylated protein was readily carried out using lectin affinity chromatography. Dynamic light scattering analysis of the purified glycoproteins suggested that the glycoforms produced were monomeric and folded identically to the wild-type protein. Conclusions: Erythropoietin expressed in E. coli bearing specific Asn→Cys mutations at natural glycosylation sites can be glycosylated using β-N-glycosyl iodoacetamides even in the presence of two disulfide bonds. The findings provide the basis for further elaboration of the glycan structures and development of this general methodology for the synthesis of semi-synthetic glycoproteins.

Relevance:

60.00%

Publisher:

Abstract:

This theoretical study shows the technical feasibility of self-powered geothermal desalination of groundwater sources at <100 °C. A general method and framework are developed and then applied to specific case studies. First, the analysis considers an ideal limit to performance based on exergy analysis using generalised idealised assumptions. This thermodynamic limit applies to any type of process technology. Then, the analysis focuses specifically on the Organic Rankine Cycle (ORC) driving Reverse Osmosis (RO), as these are among the most mature and efficient applicable technologies. Important dimensionless parameters are calculated for the ideal case of the self-powered arrangement and semi-ideal case where only essential losses dependent on the RO system configuration are considered. These parameters are used to compare the performance of desalination systems using ORC-RO under ideal, semi-ideal and real assumptions for four case studies relating to geothermal sources located in India, Saudi Arabia, Tunisia and Turkey. The overall system recovery ratio (the key performance measure for the self-powered process) depends strongly on the geothermal source temperature. It can be as high as 91.5% for a hot spring emerging at 96 °C with a salinity of 1830 mg/kg.
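As a simplified illustration of the exergy-analysis step (not a reproduction of the paper's case studies), the sketch below estimates the flow exergy of hot groundwater relative to an ambient dead state and a van 't Hoff estimate of the ideal minimum work of separation for a dilute source; all numerical values are generic assumptions.

import numpy as np

cp = 4.18e3            # J/(kg K), liquid water heat capacity (assumed constant)
T0 = 298.15            # K, ambient dead-state temperature (assumed)
Ts = 96.0 + 273.15     # K, hot-spring temperature

# thermal (flow) exergy per kg of source water cooled to the dead state
e_thermal = cp * ((Ts - T0) - T0 * np.log(Ts / T0))

# van 't Hoff osmotic pressure for a dilute NaCl-like solution of ~1830 mg/kg, and the
# corresponding minimum separation work per kg of permeate (dilute, low-recovery limit)
R, M_NaCl = 8.314, 0.0585          # J/(mol K), kg/mol
c = 1.830 / M_NaCl                 # mol of salt per m^3 of solution (illustrative)
pi_osm = 2.0 * c * R * T0          # Pa (factor 2 for full dissociation)
w_sep_min = pi_osm / 1000.0        # J per kg of permeate (1 m^3 ~ 1000 kg)

print(f"thermal exergy ~ {e_thermal / 1e3:.1f} kJ/kg feed, "
      f"ideal separation work ~ {w_sep_min / 1e3:.3f} kJ/kg permeate")
# the thermal exergy far exceeds the ideal separation work, which is why high overall
# recovery ratios are thermodynamically possible for hot, low-salinity sources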

Relevance:

60.00%

Publisher:

Abstract:

We study a Luttinger liquid (LL) coupled to a generic environment consisting of bosonic modes with arbitrary density-density and current-current interactions. The LL can be either in the conducting phase and perturbed by a weak scatterer or in the insulating phase and perturbed by a weak link. The environment modes can also be scattered by the imperfection in the system, with arbitrary transmission and reflection amplitudes. We present a general method of calculating correlation functions in the presence of the environment and prove the duality of the exponents describing the scaling of the weak scatterer and of the weak link. This duality holds true for a broad class of models and is sensitive neither to the details of the interaction nor to those of the environmental modes; it therefore shows up as a universal property. It ensures that the environment cannot generate new stable fixed points of the renormalization group flow. Thus, the LL always flows toward either the conducting or the insulating phase. The phases are separated by a sharp boundary, which is shifted by the influence of the environment. Our results are relevant, for example, for low-energy transport in (i) an interacting quantum wire or a carbon nanotube where the electrons are coupled to acoustic phonons scattered by a lattice defect; (ii) a mixture of interacting fermionic and bosonic cold atoms where the bosonic modes are scattered due to an abrupt local change of the interaction; (iii) mesoscopic electric circuits.
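For orientation, in the standard environment-free spinless LL (the Kane-Fisher benchmark, not the dressed exponents derived in the paper) the weak-scatterer and weak-link perturbations have the scaling dimensions below, whose product expresses the duality that the abstract generalizes:

\begin{equation}
  \Delta_{\mathrm{WS}} = K, \qquad \Delta_{\mathrm{WL}} = \frac{1}{K}, \qquad
  \Delta_{\mathrm{WS}}\,\Delta_{\mathrm{WL}} = 1,
\end{equation}

where $K$ is the Luttinger parameter and a boundary perturbation is relevant when its scaling dimension is smaller than one.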

Relevance:

30.00%

Publisher:

Abstract:

A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to locate accurately the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
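A minimal sketch of the envelope-GLM step on synthetic data is given below; the beamformer output, band-pass filtering, design matrix and contrast are illustrative stand-ins rather than the paper's pipeline.

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_sources, n_times = 100, 600
task = (np.arange(n_times) % 200) < 100                    # hypothetical boxcar task regressor
source_tc = rng.standard_normal((n_sources, n_times))      # beamformer virtual-electrode time courses
source_tc[0] *= 1.0 + 0.5 * task                           # one source modulated by the task (synthetic)

envelope = np.abs(hilbert(source_tc, axis=1))              # Hilbert envelope of oscillatory activity

X = np.column_stack([np.ones(n_times), task.astype(float)])    # GLM design: intercept + task
beta, _, _, _ = np.linalg.lstsq(X, envelope.T, rcond=None)     # fit independently at every source
resid = envelope.T - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (n_times - X.shape[1])
c = np.array([0.0, 1.0])                                   # contrast on the task regressor
t_map = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))   # volumetric T-statistic image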

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of calculating risk were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), with a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%), and they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
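The two agreement measures used here can be sketched generically as below (an illustrative implementation of Cohen's kappa and of sensitivity/specificity for a binary high-risk classification, not the study's analysis code; category labels and thresholds are assumptions).

import numpy as np

def cohen_kappa(rater_a, rater_b, n_categories):
    # chance-corrected agreement between two sets of categorical risk estimates
    conf = np.zeros((n_categories, n_categories))
    for i, j in zip(rater_a, rater_b):
        conf[i, j] += 1
    n = conf.sum()
    p_obs = np.trace(conf) / n
    p_exp = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

def sensitivity_specificity(pred_high, ref_high):
    # pred_high, ref_high: boolean arrays flagging patients classified as high risk
    pred_high, ref_high = np.asarray(pred_high), np.asarray(ref_high)
    tp = np.sum(pred_high & ref_high)
    tn = np.sum(~pred_high & ~ref_high)
    return tp / np.sum(ref_high), tn / np.sum(~ref_high)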

Relevance:

30.00%

Publisher:

Abstract:

The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework is used to make the development of autonomic solutions cost effective, and to establish autonomic computing as a major approach to managing the complexity of today’s large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
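A minimal sketch of a policy-driven autonomic control loop in the spirit described above is given below; the policy representation, sensor and effector are hypothetical placeholders, not the chapter's reconfigurable policy engine.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]

@dataclass
class Policy:
    condition: Callable[[State], bool]   # when the policy fires
    action: Callable[[State], None]      # the reconfiguration it requests

def autonomic_loop(monitor: Callable[[], State], policies: List[Policy],
                   execute: Callable[[Callable[[State], None], State], None],
                   steps: int = 5) -> None:
    for _ in range(steps):
        state = monitor()                      # monitor the managed resources
        for policy in policies:                # analyse/plan by evaluating the policy set
            if policy.condition(state):
                execute(policy.action, state)  # execute the selected adaptation

# illustrative use: scale a hypothetical worker pool when utilisation is high
system: State = {"utilisation": 0.9, "workers": 2}
policies = [Policy(lambda s: s["utilisation"] > 0.8,
                   lambda s: s.update(workers=s["workers"] + 1))]
autonomic_loop(lambda: system, policies, lambda action, s: action(s))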

Relevance:

30.00%

Publisher:

Abstract:

Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means for analyzing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack tip mixed-mode problems involving partial crack closure. The crack tip core element was refined and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in a segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems involving fine element detail to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
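As a generic illustration of the large sets of simultaneous equations such a package must assemble and solve (a toy one-dimensional assembly using SciPy's sparse solver, not the thesis's segmented storage scheme or its crack-tip elements):

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n_nodes = 5
K = lil_matrix((n_nodes, n_nodes))                # global stiffness matrix
f = np.zeros(n_nodes)                             # global load vector
elements = [(0, 1), (1, 2), (2, 3), (3, 4)]       # two-node elements (illustrative chain)
k_e = np.array([[1.0, -1.0], [-1.0, 1.0]])        # unit element stiffness matrix

for a, b in elements:                             # assemble element contributions
    dofs = [a, b]
    for i in range(2):
        for j in range(2):
            K[dofs[i], dofs[j]] += k_e[i, j]

f[-1] = 1.0                                       # end load
K_red = K.tocsr()[1:, 1:]                         # essential boundary condition u_0 = 0
u = np.concatenate(([0.0], spsolve(K_red, f[1:])))   # nodal displacements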

Relevance:

30.00%

Publisher:

Abstract:

A Cauchy problem for general elliptic second-order linear partial differential equations, in which the Dirichlet data in H½(Γ1 ∪ Γ3) is assumed available on a larger part of the boundary Γ of the bounded domain Ω than the boundary portion Γ1 on which the Neumann data is prescribed, is investigated using a conjugate gradient method. We obtain an approximation to the solution of the Cauchy problem by minimizing a certain discrete functional and interpolating using the finite difference or boundary element method. The minimization involves solving equations obtained by discretising mixed boundary value problems for the same operator and its adjoint. It is proved that the solution of the discretised optimization problem converges to the continuous one as the mesh size tends to zero. Numerical results are presented and discussed.
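A minimal sketch of the conjugate gradient iteration for minimizing a quadratic functional, given only the action of a symmetric positive-definite operator, is shown below; in the setting above, each application of the operator would itself require solving the direct and adjoint mixed boundary value problems, whereas the operator and right-hand side used here are illustrative placeholders.

import numpy as np

def conjugate_gradient(apply_A, b, x0, tol=1e-10, max_iter=500):
    # minimises J(x) = 0.5 x^T A x - b^T x for symmetric positive-definite A,
    # accessed only through the matrix-vector action x -> A x
    x = x0.copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])            # illustrative SPD operator
x = conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0]), np.zeros(2))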