60 results for Application methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
Organic fertilizers based on seaweed extract potentially have beneficial effects on many crop plants. Here we investigate the impact of organic fertilizer on Rosmarinus officinalis, measured by both yield and oil quality. Plants grown in a temperature-controlled greenhouse with a natural photoperiod and a controlled irrigation system were treated with seaweed fertilizer and with an inorganic fertilizer of matching mineral composition but with no organic content. Treatments were applied either by spraying onto the foliage or by watering directly into the compost. The essential oil was extracted by hydro-distillation with a Clevenger apparatus and analysed by gas chromatography–mass spectrometry (GC–MS) and NMR. The chemical compositions of the plants were compared, and qualitative differences were found between fertilizer treatments and application methods. Thus sprayed seaweed fertilizer showed a significantly higher percentage of α-pinene, β-phellandrene, γ-terpinene (monoterpenes) and 3-methylenecycloheptene than the other treatments. Italicene, α-bisabolol (sesquiterpenes), α-thujene, and E-isocitral (monoterpenes) occurred in significantly higher percentages for plants watered with the seaweed extract. Each was significantly different from the inorganic fertilizer and from the controls. The seaweed treatments caused a significant increase in oil amount and leaf area compared with both inorganic treatments and the control, regardless of application method.
Abstract:
This paper discusses the numerical modelling of NDT methods based on potential drop and on the disturbance of current lines, in order to describe the nature, importance and application of such modelling. The first part is devoted to applications in eddy-current inspection.
Abstract:
Inspection by DC potential drop. Inspection by AC potential drop.
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
Increasingly, the microbiological scientific community is relying on molecular biology to define the complexity of the gut flora and to distinguish one organism from the next. This is particularly pertinent in the field of probiotics, and probiotic therapy, where identifying probiotics from the commensal flora is often warranted. Current techniques, including genetic fingerprinting, gene sequencing, oligonucleotide probes and specific primer selection, discriminate closely related bacteria with varying degrees of success. Additional molecular methods being employed to determine the constituents of complex microbiota in this area of research are community analysis, denaturing gradient gel electrophoresis (DGGE)/temperature gradient gel electrophoresis (TGGE), fluorescent in situ hybridisation (FISH) and probe grids. Certain approaches enable specific aetiological agents to be monitored, whereas others allow the effects of dietary intervention on bacterial populations to be studied. Other approaches demonstrate diversity, but may not always enable quantification of the population. At the heart of current molecular methods is sequence information gathered from culturable organisms. However, the diversity and novelty identified when applying these methods to the gut microflora demonstrates how little is known about this ecosystem. Of greater concern is the inherent bias associated with some molecular methods. As we understand more of the complexity and dynamics of this diverse microbiota we will be in a position to develop more robust molecular-based technologies to examine it. In addition to identification of the microbiota and discrimination of probiotic strains from commensal organisms, the future of molecular biology in the field of probiotics and the gut flora will, no doubt, stretch to investigations of functionality and activity of the microflora, and/or specific fractions. The quest will be to demonstrate the roles of probiotic strains in vivo and not simply their presence or absence.
Abstract:
This paper presents findings of our study on peer-reviewed papers published in the International Conference on Persuasive Technology from 2006 to 2010. The study indicated that, of the 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful, and 19 did not specify whether or not they were successful. A total of 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in design. Hence we propose the need for research to establish unambiguous definitions of techniques and methods in the field.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example will be shown of results obtained from this method using data obtained from a run of the Universities Global Atmospheric Modelling Project GCM.
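The linking step borrowed from dynamic scene analysis, joining the feature points identified at successive time levels into trajectories, can be illustrated with a minimal greedy nearest-neighbour scheme. This is only a sketch under simple assumptions (one match per trajectory, a fixed distance threshold `max_dist`, and invented names throughout), not the method actually used with the UGAMP GCM data:

```python
import numpy as np

def link_trajectories(frames, max_dist):
    """Greedily link feature points across time levels.

    frames   : list of (N_i, 2) arrays of feature-point coordinates,
               one array per time level.
    max_dist : largest displacement allowed between consecutive levels.
    Returns a list of trajectories, each a list of points.
    """
    trajectories = [[p] for p in frames[0]]
    active = list(range(len(trajectories)))
    for pts in frames[1:]:
        unused = list(range(len(pts)))
        still_active = []
        for ti in active:
            if not unused:
                continue
            last = trajectories[ti][-1]
            d = [np.linalg.norm(pts[j] - last) for j in unused]
            k = int(np.argmin(d))
            if d[k] <= max_dist:
                j = unused.pop(k)          # claim the nearest point
                trajectories[ti].append(pts[j])
                still_active.append(ti)
        for j in unused:                   # unmatched points seed new tracks
            trajectories.append([pts[j]])
            still_active.append(len(trajectories) - 1)
        active = still_active
    return trajectories
```

A real implementation would also allow for missed detections and use predicted (rather than last) positions, but the trajectory-forming idea is the same.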
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
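The iteration described above, a sequence of linear least-squares approximations with no second-order derivatives, can be sketched in a few lines. This is an illustrative implementation, not operational data-assimilation code: the function and variable names are invented for the sketch, and the inner problem is solved exactly here (a "truncated" variant would solve it only approximately with an inner iterative method).

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration for min_x 0.5 * ||r(x)||^2.

    r : callable returning the residual vector at x
    J : callable returning the Jacobian of r at x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # inner linear least-squares problem: min_s ||J(x) s + r(x)||
        s, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x

# Toy example: fit y = a * exp(b * t) to noise-free synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)])
x_fit = gauss_newton(r, J, np.array([1.0, 0.0]))  # converges to (2, 0.5)
```

In variational data assimilation the residual and Jacobian would involve the forecast model and its tangent linear, which is precisely why the exact method is too expensive and the truncated and perturbed approximations studied in the paper matter.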
Abstract:
A new method of clear-air turbulence (CAT) forecasting based on the Lighthill–Ford theory of spontaneous imbalance and emission of inertia–gravity waves has been derived and applied on episodic and seasonal time scales. A scale analysis of this shallow-water theory for midlatitude synoptic-scale flows identifies advection of relative vorticity as the leading-order source term. Examination of leading- and second-order terms elucidates previous, more empirically inspired CAT forecast diagnostics. Application of the Lighthill–Ford theory to the Upper Mississippi and Ohio Valleys CAT outbreak of 9 March 2006 results in good agreement with pilot reports of turbulence. Application of Lighthill–Ford theory to CAT forecasting for the 3 November 2005–26 March 2006 period, using 1-h forecasts from the 1500 UTC run of the Rapid Update Cycle 2 (RUC-2) model, leads to superior forecasts compared to the current operational version of the Graphical Turbulence Guidance (GTG1) algorithm, the most skillful operational CAT forecasting method in existence. The results suggest that major improvements in CAT forecasting could result if the methods presented herein become operational.
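The leading-order source term identified by the scale analysis, the advection of relative vorticity, is straightforward to evaluate on a regular grid. The sketch below uses centred finite differences via `np.gradient`; it illustrates the term -u·∇ζ only, under invented names, and is not the operational RUC/GTG diagnostic:

```python
import numpy as np

def vorticity_advection(u, v, dx, dy):
    """-(u * dzeta/dx + v * dzeta/dy) with zeta = dv/dx - du/dy,
    using centred finite differences on a regular grid.
    Arrays are indexed [y, x]: axis 1 is x, axis 0 is y."""
    zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    return -(u * np.gradient(zeta, dx, axis=1)
             + v * np.gradient(zeta, dy, axis=0))

# Sanity check: solid-body rotation (u, v) = (-y, x) has uniform
# relative vorticity zeta = 2, so its advection vanishes everywhere.
x = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(x, x)
adv = vorticity_advection(-Y, X, dx=x[1] - x[0], dy=x[1] - x[0])
```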
Abstract:
The accurate prediction of storms is vital to the oil and gas sector for the management of its operations. An overview of research exploring the prediction of storms by ensemble prediction systems is presented and its application to the oil and gas sector is discussed. The analysis method used requires larger amounts of data storage and computer processing time than other, more conventional analysis methods. To overcome these difficulties, eScience techniques have been utilised. These techniques potentially have applications in the oil and gas sector, helping to incorporate environmental data into its information systems.
Integrating methods for developing sustainability indicators that can facilitate learning and action
Abstract:
Bossel's (2001) systems-based approach for deriving comprehensive indicator sets provides one of the most holistic frameworks for developing sustainability indicators. It ensures that indicators cover all important aspects of system viability, performance, and sustainability, and recognizes that a system cannot be assessed in isolation from the systems upon which it depends and which in turn depend upon it. In this reply, we show how Bossel's approach is part of a wider convergence toward integrating participatory and reductionist approaches to measure progress toward sustainable development. However, we also show that further integration of these approaches may be able to improve the accuracy and reliability of indicators to better stimulate community learning and action. Only through active community involvement can indicators facilitate progress toward sustainable development goals. To engage communities effectively in the application of indicators, these communities must be actively involved in developing, and even in proposing, indicators. The accuracy, reliability, and sensitivity of the indicators derived from local communities can be ensured through an iterative process of empirical and community evaluation. Communities are unlikely to invest in measuring sustainability indicators unless monitoring provides immediate and clear benefits. However, in the context of goals, targets, and/or baselines, sustainability indicators can more effectively contribute to a process of development that matches local priorities and engages the interests of local people.
Abstract:
Research in construction management is diverse in content and in quality. There is much to be learned from more fundamental disciplines. Construction is a sub-set of human experience rather than a completely separate phenomenon. Therefore, it is likely that there are few problems in construction requiring the invention of a completely new theory. If construction researchers base their work only on that of other construction researchers, our academic community will become less relevant to the world at large. The theories that we develop or test must be of wider applicability to be of any real interest. In undertaking research, researchers learn a lot about themselves. Perhaps the only difference between research and education is that if we are learning about something which no-one else knows, then it is research, otherwise it is education. Self-awareness of this will help to reduce the chances of publishing work which only reveals a researcher’s own learning curve. Scientific method is not as simplistic as non-scientists claim and is the only real way of overcoming methodological weaknesses in our work. The reporting of research may convey the false impression that it is undertaken in the sequence in which it is written. Construction is not so unique and special as to require a completely different set of methods from other fields of enquiry. Until our research is reported in mainstream journals and conferences, there is little chance that we will influence the wider academic community and a concomitant danger that it will become irrelevant. The most useful insights will come from research which challenges the current orthodoxy rather than research which merely reports it.