60 results for Application methods
Abstract:
The calculation of accurate and reliable vibrational potential functions and normal co-ordinates is discussed for such simple polyatomic molecules as may be feasible. Such calculations should be corrected for the effects of anharmonicity and of resonance interactions between the vibrational states, and should be fitted to all the available information on all isotopic species: particularly the vibrational frequencies, Coriolis zeta constants and centrifugal distortion constants. The difficulties of making these corrections, and of making use of the observed data, are reviewed. A programme for the Ferranti Mercury Computer is described by means of which harmonic vibration frequencies and normal co-ordinate vectors, zeta factors and centrifugal distortion constants can be calculated from a given force field and from given G-matrix elements, etc. The programme has been used on secular equations of up to 5 × 5, for which a single calculation and output of results takes approximately 1 min; it can readily be extended to larger determinants. The best methods of using such a programme, and the possibility of reversing the direction of calculation, are discussed. The methods are applied to calculating the best possible vibrational potential function for the methane molecule, making use of all the observed data.
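The harmonic-frequency calculation described above is, in modern terms, the Wilson GF-matrix secular equation |GF − λI| = 0. A minimal NumPy sketch of that eigenvalue step follows; it is not the Ferranti Mercury programme, and the CO force constant used as a one-dimensional check is an approximate literature value chosen for illustration:

```python
import numpy as np

AMU = 1.66054e-27       # atomic mass unit in kg
C_CM = 2.99792458e10    # speed of light in cm/s

def harmonic_wavenumbers(F, G):
    """Solve the GF secular equation |GF - lambda I| = 0.

    F: force-constant matrix in N/m; G: inverse-mass matrix in 1/kg.
    The eigenvalues are squared angular frequencies; convert to cm^-1.
    """
    lam = np.linalg.eigvals(G @ F)
    lam = np.sort(np.real(lam))[::-1]
    return np.sqrt(np.clip(lam, 0, None)) / (2 * np.pi * C_CM)

# 1x1 check: CO stretch with an illustrative k ~ 1902 N/m and the
# 12C-16O reduced mass; the same call handles an n x n secular equation.
mu = (12.000 * 15.995) / (12.000 + 15.995) * AMU
F = np.array([[1902.0]])
G = np.array([[1.0 / mu]])
wn = harmonic_wavenumbers(F, G)
print(wn)   # close to CO's harmonic wavenumber, ~2170 cm^-1
```

For a polyatomic case the same routine takes the full F and G matrices in an internal- or symmetry-coordinate basis; only the matrix dimensions change.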
Abstract:
This review article addresses recent advances in the analysis of foods and food components by capillary electrophoresis (CE). CE has found application to a number of important areas of food analysis, including quantitative chemical analysis of food additives, biochemical analysis of protein composition, and others. The speed, resolution and simplicity of CE, combined with low operating costs, make the technique an attractive option for the development of improved methods of food analysis for the new millennium.
Abstract:
Answering many of the critical questions in conservation, development and environmental management requires integrating the social and natural sciences. However, understanding the array of available quantitative methods and their associated terminology presents a major barrier to successful collaboration. We provide an overview of quantitative socio-economic methods that distils their complexity into a simple taxonomy. We outline how each has been used in conjunction with ecological models to address questions relating to the management of socio-ecological systems. We review the application of social and ecological quantitative concepts to agro-ecology and classify the approaches used to integrate the two disciplines. Our review included all published integrated models from 2003 to 2008 in 27 journals that publish agricultural modelling research. Although our focus is on agro-ecology, many of the results are broadly applicable to other fields involving an interaction between human activities and ecology. We found 36 papers that integrated social and ecological concepts in a quantitative model. Four different approaches to integration were used, depending on the scale at which human welfare was quantified. Most models viewed humans as pure profit maximizers, both when calculating welfare and predicting behaviour. Synthesis and applications. We reached two main conclusions based on our taxonomy and review. The first is that quantitative methods that extend predictions of behaviour and measurements of welfare beyond a simple market value basis are underutilized by integrated models. The second is that the accuracy of prediction for integrated models remains largely unquantified. Addressing both problems requires researchers to reach a common understanding of modelling goals and data requirements during the early stages of a project.
Abstract:
Fixed transaction costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data, and the construction of robust methods for mitigating that bias, have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero and, when this is not the case, imparts a bias to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating it using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands.
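The censoring bias discussed above can be illustrated with a small simulation: when observations below a (nonzero) censoring point are recorded at that point, naive least squares understates the true slope, whether the censored observations are kept or dropped. All numbers below are invented for illustration and are not from the Ethiopian milk-market data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)
y_star = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)   # latent supply, true slope 2
c = 0.5                                            # true (nonzero) censoring point
y = np.maximum(y_star, c)                          # observed, censored from below

X = np.column_stack([np.ones(n), x])

# naive OLS on the censored sample: slope attenuated toward zero
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# OLS on the uncensored observations only: still biased (truncation)
m = y_star > c
beta_trunc = np.linalg.lstsq(X[m], y[m], rcond=None)[0]

print(beta_ols[1], beta_trunc[1])   # both well below the true slope of 2.0
```

A censored-regression (Tobit-type) estimator that models the censoring point explicitly is what removes this bias; the sketch only demonstrates why the correction matters.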
Abstract:
The horticultural industry was instrumental in the early development and exploitation of genetic techniques over a century ago. This review will describe recent advances in a range of in vitro methods and their application to plant breeding, with special emphasis on horticultural crops. These methods include improvements in the efficiency of haploid breeding techniques in many fruit and vegetable species using either microspore-derived or ovule-derived plants. Significant molecular information is now available to supplement these essentially empirical approaches, and this may enable the more predictable application of these technologies in previously intransigent crops. Similarly, there are now improved techniques for the isolation of somatic hybrids, by application of either in vitro fertilisation or the culture of excised ovules from interspecific crosses. In addition to examples taken from the traditional scientific literature, emphasis will also be given to the use of patent databases as a valuable source of information on recent novel technologies developed in the commercial world.
Abstract:
Physical, cultural and biological methods for weed control have developed largely independently and are often concerned with weed control in different systems: physical and cultural control in annual crops, and biocontrol in extensive grasslands. We discuss the strengths and limitations of four physical and cultural methods for weed control (mechanical, thermal, cutting, and intercropping) and the advantages and disadvantages of combining biological control with them. These physical and cultural control methods may increase soil nitrogen levels and alter microclimate at soil level; this may benefit biocontrol agents, although physical disturbance to the soil and plant damage may be detrimental. Some weeds escape control by these methods; we suggest that these weeds may be controlled by biocontrol agents. It will be easiest to combine biological control with fire and cutting in grasslands; within arable systems it would be most promising to combine biological control (especially using seed predators and foliar pathogens) with cover-cropping, and mechanical weeding with foliar bacterial and possibly foliar fungal pathogens. We stress the need to consider the timing of application of combined control methods, in order to cause least damage to the biocontrol agent along with maximum damage to the weed, and to consider the wider implications of these different weed control methods.
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models in an experimental case study. Deconstruction planning is under pressure to reduce operation costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need for a formal procedure by which contractors can select the most appropriate deconstruction plan. Because a number of factors influence the selection of deconstruction techniques, engineers need effective tools to support the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted to support deconstruction technique selection in previous research, where it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also note that whether the ANP model provides a more accurate result should be examined in further research.
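The AHP step that both models above build on, deriving priority weights from a pairwise-comparison matrix via its principal eigenvector, can be sketched as follows. The comparison matrix is hypothetical and is not taken from the paper's case study:

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise-comparison matrix A (Saaty's AHP).

    Returns the principal eigenvector normalised to sum to 1, plus the
    consistency ratio CR (CR < 0.1 is the usual acceptability threshold).
    """
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
    return w, ci / ri

# Hypothetical comparison of three criteria (e.g. cost, impact, duration):
# criterion 1 is moderately preferred to 2 and strongly preferred to 3.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w, cr = ahp_priorities(A)
print(w, cr)   # largest weight on the first criterion; CR well below 0.1
```

An ANP model replaces this single hierarchy with a supermatrix that also encodes dependence among criteria, which is the extra relationship-measuring capability the abstract refers to.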
Abstract:
In order to establish firm evidence for the health effects of dietary polyphenol consumption, it is essential to have quantitative information regarding dietary intake. The usefulness of the current methods, which rely mainly on the assessment of polyphenol intake using food records and food composition tables, is limited, as they fail to assess total intake accurately. This review highlights the problems associated with such methods with regard to polyphenol-intake predictions. We suggest that the development of biological biomarkers, measured in both blood and urine, is essential for making accurate estimates of polyphenol intake. However, the relationship between dietary intake and nutritional biomarkers is often highly complex. This review identifies the criteria that must be considered in the development of such biomarkers. In addition, we provide an assessment of the limited number of potential biomarkers of polyphenol intake currently available.
Abstract:
This review focuses on methodological approaches used to study the composition of human faecal microbiota. Gene sequencing is the most accurate tool for revealing the phylogenetic relationships between bacteria. The main application of fluorescence in situ hybridization (FISH) in both microscopy and flow cytometry is to enumerate faecal bacteria. While flow cytometry is a very fast method, FISH microscopy still has a considerably lower detection limit.
Abstract:
Proteomic tools, in particular mass spectrometry (MS), have advanced significantly in recent years, and the identification of proteins within complex mixtures is now a routine procedure. Quantitative methods of analysis are less well advanced and continue to develop. These include the use of stable isotope ratio approaches, isotopically labeled peptide standards, and nonlabeling methods. This paper summarizes the use of MS as a proteomics tool to identify and semiquantify proteins and their modified forms by using examples of relevance to the Maillard reaction. Finally, some challenges for the future are presented.
Abstract:
When competing strategies for development programs, clinical trial designs, or data analysis methods exist, the alternatives need to be evaluated in a systematic way to facilitate informed decision making. Here we describe a refinement of the recently proposed clinical scenario evaluation framework for the assessment of competing strategies. The refinement is achieved by subdividing key elements previously proposed into new categories, distinguishing between quantities that can be estimated from preexisting data and those that cannot, and between aspects under the control of the decision maker and those determined by external constraints. The refined framework is illustrated by an application to a design project for an adaptive seamless design for a clinical trial in progressive multiple sclerosis.
Abstract:
Gaussian multi-scale representation is a mathematical framework that makes it possible to analyse images at different scales in a consistent manner, and to handle derivatives in a way deeply connected to scale. This paper uses Gaussian multi-scale representation to investigate several aspects of the derivation of atmospheric motion vectors (AMVs) from water vapour imagery. The contribution of different spatial frequencies to the tracking is studied for a range of tracer sizes, and a number of tracer selection methods are presented and compared, using WV 6.2 images from the geostationary satellite MSG-2.
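The core operation behind a Gaussian multi-scale representation is smoothing with Gaussian kernels of increasing standard deviation, so that coarse scales suppress high spatial frequencies. A one-dimensional sketch (illustrative only, unrelated to the MSG-2 imagery):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1-D Gaussian kernel, truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def scale_space(signal, sigmas):
    """Stack of the signal smoothed at increasing scales.

    Gaussian smoothing forms a semigroup: smoothing at sigma1 then
    sigma2 equals one pass at sqrt(sigma1**2 + sigma2**2), which is what
    makes analysis across scales consistent.
    """
    return np.array([np.convolve(signal, gaussian_kernel(s), mode='same')
                     for s in sigmas])

sig = np.zeros(101)
sig[50] = 1.0                               # an impulse standing in for a tracer
levels = scale_space(sig, [1.0, 2.0, 4.0])
peaks = [lvl.max() for lvl in levels]
print(peaks)   # the tracer spreads out: peak height falls as sigma grows
```

For images the same idea applies with 2-D kernels (or two separable 1-D passes), and scale-normalised derivatives of these smoothed levels are what feature selection and tracking operate on.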
Abstract:
In financial decision-making processes, the weights adopted for the objective functions have significant impacts on the final decision outcome. However, conventional rating and weighting methods exhibit difficulty in deriving appropriate weights for complex decision-making problems with imprecise information. Entropy is a quantitative measure of uncertainty and has been useful in exploring weights of attributes in decision making. A fuzzy and entropy-based mathematical approach is employed to solve the weighting problem of the objective functions in an overall cash-flow model. A multiproject undertaken by a medium-size construction firm in Hong Kong was used as a real case study to demonstrate the application of entropy to multiproject cash-flow situations. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified to be two working months earlier than under the non-weighted system.
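The entropy weighting idea mentioned above, that attributes whose scores vary more across alternatives carry more information and should receive larger weights, can be sketched as follows. The score matrix is hypothetical, not the Hong Kong case data, and the fuzzy layer of the paper's approach is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based attribute weights for a scores matrix X
    (rows = alternatives, columns = attributes, entries > 0).

    Attributes with near-uniform scores have entropy near 1 and get
    weight near 0; attributes that discriminate get larger weights.
    """
    P = X / X.sum(axis=0)                      # normalise each column
    n = X.shape[0]
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)         # entropy per attribute, in [0, 1]
    d = np.clip(1.0 - e, 0.0, None)            # degree of diversification
    return d / d.sum()

# Hypothetical scores of four projects on three cash-flow criteria;
# the second criterion is identical across projects.
X = np.array([[0.9, 0.5, 0.30],
              [0.8, 0.5, 0.25],
              [0.7, 0.5, 0.35],
              [0.6, 0.5, 0.28]])
w = entropy_weights(X)
print(w)   # the constant second criterion receives (near) zero weight
```

These objective weights can then replace, or be blended with, subjectively assigned weights in the overall cash-flow objective function.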
Abstract:
In this paper, Prony's method is applied to time-domain waveform data modelling in the presence of noise. Three problems encountered in this work are studied: (1) determination of the order of the waveform; (2) determination of the number of multiple roots; (3) determination of the residues. Methods for solving these problems are given and simulated on a computer. Finally, an output pulse of a model PG-10N signal generator, and the distorted waveform obtained by transmitting that pulse through a piece of coaxial cable, are modelled, and satisfactory results are obtained. The effectiveness of Prony's method for waveform data modelling in the presence of noise is thus confirmed.
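Classical Prony modelling fits a sum of exponential modes x[n] ≈ Σ c_k z_k^n in three linear-algebra steps: linear prediction for the characteristic polynomial, root-finding for the modes, and a Vandermonde least-squares solve for the residues. A noiseless sketch with an invented two-mode signal (not the PG-10N data; the noise- and order-handling refinements the abstract studies are omitted):

```python
import numpy as np

def prony(x, p):
    """Fit x[n] ~ sum_k c_k * z_k**n with p exponential modes.

    Step 1: least-squares linear prediction gives the coefficients of
            the characteristic polynomial.
    Step 2: the polynomial's roots are the modes z_k.
    Step 3: a Vandermonde least-squares solve gives the residues c_k.
    """
    N = len(x)
    # row i predicts x[p+i] from the p preceding samples
    T = np.column_stack([x[p - 1 - k: N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(T, -x[p:], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], a)))           # modes
    V = np.vander(z, N, increasing=True).T             # V[n, k] = z_k**n
    c = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, c

# Noiseless check: two real decaying modes, x[n] = 2*(0.9)^n - 1*(0.7)^n
n = np.arange(40)
x = 2.0 * 0.9**n - 1.0 * 0.7**n
z, c = prony(x, 2)
print(np.sort(z.real))   # recovers the modes 0.7 and 0.9
```

With noisy data the same steps are kept, but the prediction order must be chosen (problem 1 in the abstract) and the least-squares solves become genuinely overdetermined, which is where the method's noise sensitivity enters.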
Abstract:
BACKGROUND AND AIM: The atherogenic potential of dietary derived lipids, chylomicrons (CM) and their remnants (CMr), is now becoming more widely recognised. To investigate factors affecting levels of CM and CMr and their importance in coronary heart disease risk, it is essential to use a specific method of quantification. Two studies were carried out to investigate: (i) effects of increased daily intake of long-chain n-3 polyunsaturated fatty acid (LC n-3 PUFA), and (ii) effects of increasing meal monounsaturated fatty acid (MUFA) content on the postprandial response of intestinally-derived lipoproteins. The contribution of the intestinally-derived lipoproteins to total lipaemia was assessed by triacylglycerol-rich lipoprotein (TRL) apolipoprotein B-48 (apo B-48) and retinyl ester (RE) concentrations. METHODS AND RESULTS: In a randomised controlled crossover trial (placebo vs LC n-3 PUFA), a mean daily intake of 1.4 g/day of LC n-3 PUFA failed to reduce the fasting and postprandial triacylglycerol (TAG) response in 9 healthy male volunteers. Although the pattern and nature of the apo B-48 response was consistent with the TAG response following the two diets, the postprandial RE response differed on the LC n-3 PUFA diet, with a lower early RE response and a delayed and more marked increase in RE in the late postprandial period compared with the control diet, but the differences did not reach statistical significance. In the meal study there was no effect of MUFA/SFA content on the total lipaemic response to the meals, nor on the contribution of intestinally derived lipoproteins evaluated as TAG, apo B-48 and RE responses in the TRL fraction. In both studies, the RE and apo B-48 measurements provided broadly similar information with respect to the lack of effects of dietary or meal fatty acid composition and the presence of single or multiple peak responses.
However, the apo B-48 and RE measurements differed with respect to the timing of their peak responses, with a delayed RE peak, relative to apo B-48, of approximately 2-3 hours for the LC n-3 PUFA diet study (p = 0.002) and 1-1.5 hours for the meal MUFA/SFA study. CONCLUSIONS: It was concluded that there are limitations to using RE as a specific CM marker; apo B-48 quantitation was found to be a more appropriate method for CM and CMr quantitation. However, it was still considered of value to measure RE, as it provided additional information regarding the incorporation of other constituents into the CM particle.