170 results for amplify and forward
Abstract:
Processes in the climate system that can either amplify or dampen the climate response to an external perturbation are referred to as climate feedbacks. Climate sensitivity estimates depend critically on radiative feedbacks associated with water vapor, lapse rate, clouds, snow, and sea ice, and global estimates of these feedbacks differ among general circulation models. By reviewing recent observational, numerical, and theoretical studies, this paper shows that there has been progress since the Third Assessment Report of the Intergovernmental Panel on Climate Change in (i) the understanding of the physical mechanisms involved in these feedbacks, (ii) the interpretation of intermodel differences in global estimates of these feedbacks, and (iii) the development of methodologies of evaluation of these feedbacks (or of some components) using observations. This suggests that continuing developments in climate feedback research will progressively help make it possible to constrain the GCMs’ range of climate feedbacks and climate sensitivity through an ensemble of diagnostics based on physical understanding and observations.
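The amplifying or dampening role of feedbacks can be illustrated with the textbook feedback-gain relation (a generic sketch, not a result or formula taken from this paper): the equilibrium response scales the no-feedback response by 1/(1 - sum of gains).

```python
# Textbook feedback-gain relation (illustrative, not from the paper):
# the equilibrium response dT amplifies the no-feedback response dT0 by
# 1 / (1 - sum g_i); positive gains amplify, negative gains dampen.
def feedback_response(dt0, gains):
    g = sum(gains)
    assert g < 1.0, "total gain must stay below 1 for a stable response"
    return dt0 / (1.0 - g)

amplified = feedback_response(1.2, [0.4, 0.1])  # net positive feedback
damped = feedback_response(1.2, [-0.2])         # net negative feedback
```

With a total gain of 0.5 the response doubles (2.4); with a gain of -0.2 it is reduced to 1.0.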
Abstract:
Prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures, is essential for space weather forecasting. To achieve forecast lead times of a day or more, such predictions must be made on the basis of remote solar observations. A number of empirical prediction schemes have been proposed to forecast the transit time and speed of coronal mass ejections (CMEs) at 1 AU. However, the current lack of magnetic field measurements in the corona severely limits our ability to forecast the 1 AU magnetic field strengths resulting from interplanetary CMEs (ICMEs). In this study we investigate the relation between the characteristic magnetic field strengths and speeds of both magnetic cloud and noncloud ICMEs at 1 AU. Correlation between field and speed is found to be significant only in the sheath region ahead of magnetic clouds, not within the clouds themselves. The lack of such a relation in the sheaths ahead of noncloud ICMEs is consistent with such ICMEs being skimming encounters of magnetic clouds, though other explanations are also put forward. Linear fits to the radial speed profiles of ejecta reveal that faster-traveling ICMEs are also expanding more at 1 AU. We combine these empirical relations to form a prediction scheme for the magnetic field strength in the sheaths ahead of magnetic clouds and also suggest a method for predicting the radial speed profile through an ICME on the basis of upstream measurements.
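The kind of empirical field-speed relation described above can be sketched as a least-squares linear fit. The numbers below are invented for illustration; the study's actual data and fit coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical (ICME speed, sheath field strength) pairs in km/s and nT;
# purely illustrative, not the coefficients derived in the study.
v = np.array([400.0, 500.0, 600.0, 700.0, 800.0])
b = np.array([8.0, 11.0, 13.5, 16.0, 19.0])

# np.polyfit returns coefficients highest degree first: slope, intercept.
slope, intercept = np.polyfit(v, b, 1)

def predict_sheath_field(speed_kms):
    """Linear prediction of sheath field strength (nT) from ICME speed."""
    return slope * speed_kms + intercept
```

A least-squares line always passes through the mean point of the data, so the prediction at the mean speed recovers the mean field strength.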
Abstract:
The development of genetically modified (GM) crops has led the European Union (EU) to put forward the concept of 'coexistence' to give farmers the freedom to plant both conventional and GM varieties. Should a premium for non-GM varieties emerge in the market, 'contamination' by GM pollen would generate a negative externality for conventional growers. It is therefore important to assess the effect of different 'policy variables' on the magnitude of the externality in order to identify suitable policies to manage coexistence. In this paper, taking GM herbicide-tolerant oilseed rape as a model crop, we start from the model developed in Ceddia et al. [Ceddia, M.G., Bartlett, M., Perrings, C., 2007. Landscape gene flow, coexistence and threshold effect: the case of genetically modified herbicide tolerant oilseed rape (Brassica napus). Ecol. Modell. 205, pp. 169-180], use a Monte Carlo experiment to generate data, and then estimate the effect of the number of GM and conventional fields, the width of buffer areas and the degree of spatial aggregation (i.e. the 'policy variables') on the magnitude of the externality at the landscape level. To represent realistic conditions in agricultural production, we assume that detection of GM material in conventional produce may occur at the field level (no grain mixing occurs) or at the silo level (where grain from different fields in the landscape is mixed). In the former case, the magnitude of the externality depends on the number of conventional fields with average transgenic presence above a certain threshold. In the latter case, it depends on whether the average transgenic presence across all conventional fields exceeds the threshold. In order to quantify the effect of the relevant 'policy variables', we compute the marginal effects and the elasticities.
Our results show that when relying on marginal effects to assess the impact of the different 'policy variables', spatial aggregation is far more important when transgenic material is detected at the field level, corroborating previous research. However, when elasticity is used, the effectiveness of spatial aggregation in reducing the externality is almost identical whether detection occurs at the field level or at the silo level. Our results also show that the area planted with GM is the most important 'policy variable' affecting the externality to conventional growers, and that buffer areas on conventional fields are more effective than those on GM fields. The implications of the results for coexistence policies in the EU are discussed. (C) 2008 Elsevier B.V. All rights reserved.
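The distinction drawn above between marginal effects and elasticities, and why the two can rank the same 'policy variables' differently, can be made concrete with a toy linear response (all numbers hypothetical, not from the study):

```python
# Toy linear response y = b0 + b1*x1 + b2*x2. The marginal effect of
# x_k is the raw slope b_k; the elasticity rescales it by the point of
# evaluation: e_k = b_k * x_k / y.
b0, b1, b2 = 3.0, 2.0, 0.5
x1, x2 = 1.0, 10.0
y = b0 + b1 * x1 + b2 * x2  # = 10.0

me1, me2 = b1, b2                  # marginal effects
e1, e2 = b1 * x1 / y, b2 * x2 / y  # elasticities

# Rankings can reverse: x1 dominates by marginal effect (2.0 vs 0.5),
# while x2 dominates by elasticity (0.5 vs 0.2).
```

This is exactly the pattern reported in the abstract: a variable that looks decisive on a marginal-effect basis need not be decisive once effects are expressed in proportional (elasticity) terms.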
Abstract:
The aim was to determine the fate of transgenic and endogenous plant DNA fragments in the blood, tissues, and digesta of broilers. Male broiler chicks (n = 24) were allocated at 1 day old to each of four treatment diets designated T1-T4. T1 and T2 contained the near isogenic nongenetically modified (GM) maize grain, whereas T3 and T4 contained GM maize grain [cry1a(b) gene]; T1 and T3 also contained the near isogenic non-GM soybean meal, whereas T2 and T4 contained GM soybean meal (cp4epsps gene). Four days prior to slaughter at 39-42 days old, 50% of the broilers on T2-T4 had the source(s) of GM ingredients replaced by their non-GM counterparts. Detection of specific DNA sequences in feed, tissue, and digesta samples was completed by polymerase chain reaction analysis. Seven primer pairs were used to amplify fragments (~200 bp) from single copy genes (maize high mobility protein, soya lectin, and transgenes in the GM feeds) and multicopy genes (poultry mitochondrial cytochrome b, maize, and soya rubisco). There was no effect of treatment on the measured growth performance parameters. Except for a single detection of lectin (nontransgenic single copy gene; unsubstantiated) in the extracted DNA from one bursa tissue sample, there was no positive detection of any endogenous or transgenic single copy genes in either blood or tissue DNA samples. However, the multicopy rubisco gene was detected in a proportion of samples from all tissue types (23% of total across all tissues studied) and in low numbers in blood. Feed-derived DNA was found to survive complete degradation up to the large intestine. Transgenic DNA was detected in gizzard digesta but not in intestinal digesta 96 h after the last feeding of treatment diets containing a source of GM maize and/or soybean meal.
Abstract:
The objective was to determine the presence or absence of transgenic and endogenous plant DNA in ruminal fluid, duodenal digesta, milk, blood, and feces, and if found, to determine fragment size. Six multiparous lactating Holstein cows fitted with ruminal and duodenal cannulas received a total mixed ration. There were two treatments (T). In T1, the concentrate contained genetically modified (GM) soybean meal (cp4epsps gene) and GM corn grain (cry1a[b] gene), whereas T2 contained the near isogenic non-GM counterparts. Polymerase chain reaction analysis was used to determine the presence or absence of DNA sequences. Primers were selected to amplify small fragments from single-copy genes (soy lectin and corn high-mobility protein and cp4epsps and cry1a[b] genes from the GM crops) and multicopy genes (bovine mitochondrial cytochrome b and rubisco). Single-copy genes were only detected in the solid phase of rumen and duodenal digesta. In contrast, fragments of the rubisco gene were detected in the majority of samples analyzed in both the liquid and solid phases of ruminal and duodenal digesta, milk, and feces, but rarely in blood. The size of the rubisco gene fragments detected decreased from 1176 bp in ruminal and duodenal digesta to 351 bp in fecal samples.
Abstract:
The area of soil disturbed by a single tine is well documented. However, modern strip tillage implements using a tine-and-disc design have not been assessed in the UK or in mainland Europe. Using a strip tillage implement has potential benefits for European agriculture, where economic returns and sustainability are key issues. In a strip tillage system a narrow zone is cultivated, leaving most of the straw residue on the soil surface. Small field plot experiments were undertaken on three soil types, and the operating parameters of forward speed, tine depth and tine design were investigated together with measurements of seedbed tilth and crop emergence. The type of tine used was found to be the primary factor in achieving the required volume of disturbance within a narrow zone whilst maintaining an area of undisturbed soil with straw residue on the surface. The winged tine produced greater disturbance at a given depth compared with the knife tine. Increasing forward speed did not consistently increase the volume of disturbance. In a sandy clay loam the tilth created and the emergence of sugar beet under strip tillage and ploughing were similar, but on a sandy loam the strip tillage treatments generally gave a finer tilth but poorer emergence, particularly at greater working depths.
Abstract:
The tridentate Schiff base ligand 7-amino-4-methyl-5-aza-3-hepten-2-one (HAMAH), prepared by the mono-condensation of 1,2-diaminoethane and acetylacetone, reacts with Cu(BF4)2·6H2O to produce initially a dinuclear Cu(II) complex, [{Cu(AMAH)}2(μ-4,4'-bipy)](BF4)2 (1), which undergoes hydrolysis in the reaction mixture and finally produces a linear polymeric chain compound, [Cu(acac)2(μ-4,4'-bipy)]n (2). The geometry around the copper atom in compound 1 is distorted square planar, while that in compound 2 is essentially an elongated octahedron. On the other hand, the ligand HAMAH reacts with Cu(ClO4)2·6H2O to yield a polymeric zigzag chain, [{Cu(acac)(CH3OH)(μ-4,4'-bipy)}(ClO4)]n (3). The geometry of the copper atom in 3 is square pyramidal, with the two bipyridine molecules in the cis equatorial positions. All three complexes have been characterized by elemental analysis, IR and UV-Vis spectroscopy, and single-crystal X-ray diffraction studies. A probable explanation for the different size and shape of the reported polynuclear complexes formed by copper(II) and 4,4'-bipyridine has been put forward by taking into account the denticity and crystal field strength of the blocking ligand as well as the Jahn-Teller effect in copper(II). (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
An exploratory model for cutting is presented which incorporates fracture toughness as well as the commonly considered effects of plasticity and friction. The periodic load fluctuations seen in cutting force dynamometer tests are predicted, and considerations of chatter and surface finish follow. A non-dimensional group is put forward to classify different regimes of material response to machining. It leads to tentative explanations for the difficulties of cutting materials such as ceramics and brittle polymers, and also relates to the formation of discontinuous chips. Experiments on a range of solids with widely varying toughness/strength ratios generally agree with the analysis.
Abstract:
Firms form consortia in order to win contracts. Once a project has been awarded to a consortium, each member then concentrates on its own contract with the client. Therefore, consortia are marketing devices, which present the impression of teamworking, but the production process is just as fragmented as under conventional procurement methods. In this way, the consortium forms a barrier between the client and the actual construction production process. Firms form consortia, not as a simple development of normal ways of working, but because the circumstances of specific projects make them a necessary vehicle. These circumstances include projects that are too large or too complex to undertake alone, or projects that require ongoing services which cannot be provided by the individual firms in-house. It is not a preferred way of working, because participants carry extra risk in the form of liability for the actions of their partners in the consortium. The behaviour of members of consortia is determined by their relative power, based on several factors, including financial commitment and ease of replacement. The level of supply chain visibility to the public sector client and to the industry is reduced by the existence of a consortium because the consortium forms an additional obstacle between the client and the firms undertaking the actual construction work. Supply chain visibility matters to the client, who otherwise loses control over the process of construction or service provision while remaining accountable for cost overruns. To overcome this separation there is a convincing argument in favour of adopting the approach put forward in the Project Partnering Contract 2000 (PPC2000) Agreement. Members of consortia do not necessarily go on to work in the same consortia again, because members need to respond flexibly to opportunities as and when they arise. Decision-making processes within consortia tend to be on an ad hoc basis.
Construction risk is taken by the contractor and the construction supply chain, but the reputational risk is carried by all the firms associated with a consortium. There is wide variation in the manner in which consortia are formed, determined by the individual circumstances of each project: its requirements, size and complexity, and the attitude of individual project leaders. However, there are a number of close working relationships based on generic models of consortium-like arrangements for the purpose of building production, such as the Housing Corporation Guidance Notes and the PPC2000.
Abstract:
Building energy consumption (BEC) accounting and assessment is fundamental work for building energy efficiency (BEE) development. In the existing Chinese statistical yearbook there is no specific item for BEC accounting, and the relevant data are separated and mixed with other industry consumption, although approximate BEC data can be derived from the existing energy statistical yearbook. For BEC assessment, the caloric values of different energy carriers are conventionally adopted in the energy accounting and assessment field. This methodology has yielded many useful conclusions for energy efficiency development, but it considers only the quantity of energy and omits the issue of energy quality. An exergy methodology is put forward here to assess BEC, so that both the quantity and the quality of energy are taken into account. To illustrate BEC accounting and exergy assessment, the case of Chongqing in 2004 is presented. Based on the exergy analysis, BEC of Chongqing in 2004 accounts for 17.3% of total energy consumption, a result quite close to that of the traditional methodology. As far as energy supply efficiency is concerned, however, the difference is marked: 0.417 under the exergy methodology versus 0.645 under the traditional methodology.
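The quantity/quality distinction behind the exergy methodology can be sketched with the standard Carnot-factor weighting of heat. This is a generic textbook illustration with made-up numbers, not the paper's actual accounting for Chongqing:

```python
def exergy_of_heat(q, t_supply, t0=298.15):
    """Exergy content of heat q delivered at temperature t_supply (K),
    relative to a reference environment at t0 (K): q * (1 - T0/T)."""
    return q * (1.0 - t0 / t_supply)

# Electricity is essentially pure exergy (quality factor ~1), whereas
# 100 MJ of ~60 C space heating carries only ~10 MJ of exergy: equal
# caloric values, very different qualities.
q_heat = 100.0
ex_heat = exergy_of_heat(q_heat, t_supply=333.15)
```

This weighting is what drives the gap the abstract reports between the exergy-based supply efficiency (0.417) and the purely caloric one (0.645).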
Abstract:
Purpose – The purpose of this paper is to propose a process model for knowledge transfer, using theories relating to knowledge communication and knowledge translation. Design/methodology/approach – Most of what is put forward in this paper is based on a research project titled “Procurement for innovation and knowledge transfer (ProFIK)”. The project is funded by a UK government research council, the Engineering and Physical Sciences Research Council (EPSRC). The discussions are mainly grounded in a thorough review of literature accomplished as part of the research project. Findings – The process model developed in this paper builds upon the theory of knowledge transfer and the theory of communication. Knowledge transfer, per se, is not a mere handover of knowledge; it involves different stages of knowledge transformation. Depending on the context of knowledge transfer, it can also be influenced by many factors, some positive and some negative. The developed model of knowledge transfer attempts to encapsulate all these issues in order to create a holistic framework. Originality/value – An attempt has been made in the paper to combine some of the significant theories and findings relating to knowledge transfer, making the paper an original and valuable one.
Abstract:
The 'irrelevant sound effect' in short-term memory is commonly believed to entail a number of direct consequences for cognitive performance in the office and other workplaces (e.g. S. P. Banbury, S. Tremblay, W. J. Macken, & D. M. Jones, 2001). It may also help to identify what types of sound are most suitable as auditory warning signals. However, the conclusions drawn are based primarily upon evidence from a single task (serial recall) and a single population (young adults). This evidence is reconsidered from the standpoint of different worker populations confronted with common workplace tasks and auditory environments. Recommendations are put forward for factors to be considered when assessing the impact of auditory distraction in the workplace. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward constrained regression manner. The leave-one-out (LOO) test score is used for kernel selection. The jackknife parameter estimator subject to positivity constraint check is used for the parameter estimation of a single parameter at each forward step. As such the proposed approach is simple to implement and the associated computational cost is very low. An illustrative example is employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to that of the classical Parzen window estimate.
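The classical Parzen window target used above can be sketched as follows. This is a generic Gaussian-kernel version for one-dimensional data; the paper's forward constrained regression, LOO scoring and jackknife steps are not reproduced:

```python
import numpy as np

def parzen_window(samples, h):
    """Classical Parzen window density estimate with Gaussian kernels:
    p(x) = (1/N) * sum_i N(x; x_i, h^2). Returns a vectorised callable."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    norm = 1.0 / (n * h * np.sqrt(2.0 * np.pi))

    def density(x):
        # Pairwise standardised distances between query points and samples.
        u = (np.atleast_1d(x)[:, None] - samples[None, :]) / h
        return norm * np.exp(-0.5 * u * u).sum(axis=1)

    return density

rng = np.random.default_rng(0)
p = parzen_window(rng.normal(size=200), h=0.3)

# A density estimate should carry (close to) unit mass over a wide grid.
grid = np.linspace(-6.0, 6.0, 2001)
mass = p(grid).sum() * (grid[1] - grid[0])
```

A sparse estimator of the kind described in the abstract approximates this full-sample mixture with a small subset of kernels, which is why the full Parzen window estimate serves as the regression target.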
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
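The multiplicative nonnegative quadratic programming step mentioned above can be sketched generically: Sha-Saul-style multiplicative updates for min 0.5 w'Bw - v'w with w >= 0, assuming B and v have nonnegative entries (as with Gaussian kernel Gram matrices and kernel targets). This is an illustration under those assumptions, not the authors' exact implementation:

```python
import numpy as np

def mnqp(B, v, n_iter=500):
    """Multiplicative updates for min 0.5*w'Bw - v'w subject to w >= 0,
    assuming B and v are entrywise nonnegative. At a fixed point,
    (Bw)_i = v_i wherever w_i > 0; other weights are driven to zero,
    which is what shrinks the kernel model."""
    w = np.full_like(v, 1.0 / v.size)
    for _ in range(n_iter):
        w = w * v / np.maximum(B @ w, 1e-12)
    return w

# Diagonal toy problem with known solution w = v / diag(B) = [0.5, 1.0].
w = mnqp(np.diag([2.0, 2.0]), np.array([1.0, 2.0]))
```

Because each update multiplies the current weight by a nonnegative ratio, the iterate never leaves the feasible region, so no explicit projection onto w >= 0 is needed.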