938 results for "Exploratory and confirmatory methods"
Abstract:
Physical, cultural and biological methods for weed control have developed largely independently and are often concerned with weed control in different systems: physical and cultural control in annual crops, and biocontrol in extensive grasslands. We discuss the strengths and limitations of four physical and cultural methods for weed control (mechanical weeding, thermal weeding, cutting, and intercropping) and the advantages and disadvantages of combining biological control with them. These physical and cultural control methods may increase soil nitrogen levels and alter the microclimate at soil level; this may benefit biocontrol agents, although physical disturbance to the soil and plant damage may be detrimental. Some weeds escape control by these methods; we suggest that these weeds may be controlled by biocontrol agents. It will be easiest to combine biological control with fire and cutting in grasslands; within arable systems it would be most promising to combine biological control (especially using seed predators and foliar pathogens) with cover-cropping, and to combine mechanical weeding with foliar bacterial and possibly foliar fungal pathogens. We stress the need to consider the timing of application of combined control methods, in order to cause the least damage to the biocontrol agent along with maximum damage to the weed, and to consider the wider implications of these different weed control methods.
Abstract:
G3B3 and G2MP2 calculations using Gaussian 03 have been carried out to investigate the protonation preferences of phenylboronic acid. All nine heavy atoms have been protonated in turn. With both methodologies, the two lowest protonation energies are obtained with the proton located either at the ipso carbon atom or at a hydroxyl oxygen atom. Within the G3B3 formalism, the lowest-energy configuration, by 4.3 kcal·mol⁻¹, is found when the proton is located at the ipso carbon rather than at the electronegative oxygen atom. In the resulting structure, the phenyl ring has lost a significant amount of aromaticity. By contrast, calculations with G2MP2 show that protonation at the hydroxyl oxygen atom is favored by 7.7 kcal·mol⁻¹. Calculations using the polarizable continuum model (PCM) solvent method also give preference to protonation at the oxygen atom when water is used as the solvent. The preference for protonation at the ipso carbon found by the more accurate G3B3 method is unexpected, and its implications for Suzuki coupling are discussed. (C) 2006 Wiley Periodicals, Inc.
Abstract:
We introduce the notion that the energy of individuals can manifest as a higher-level, collective construct. To this end, we conducted four independent studies to investigate the viability and importance of the collective energy construct as assessed by a new survey instrument, the productive energy measure (PEM). Study 1 (n = 2208) included exploratory and confirmatory factor analyses to explore the underlying factor structure of the PEM. Study 2 (n = 660) cross-validated the same factor structure in an independent sample. In Study 3, we administered the PEM to more than 5000 employees from 145 departments located in five countries. Results from measurement invariance, statistical aggregation, convergent-validity, and discriminant-validity assessments offered additional support for the construct validity of the PEM. In terms of predictive and incremental validity, the PEM was positively associated with three collective attitudes: units' commitment to goals, commitment to the organization, and overall satisfaction. In Study 4, we explored the relationship between the productive energy of firms and their overall performance. Using data from 92 firms (n = 5939 employees), we found a positive relationship between the PEM (aggregated to the firm level) and the performance of those firms. Copyright © 2011 John Wiley & Sons, Ltd.
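The exploratory step in Study 1 can be sketched numerically: an eigendecomposition of the item correlation matrix shows whether one dominant factor underlies a set of items. The data below are synthetic stand-ins; the item count, loadings, and sample size are assumptions, not the PEM's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for item-level survey data: one latent "energy" factor
# driving 6 items (the PEM's real items and loadings are not given here).
n, k = 500, 6
factor = rng.normal(size=(n, 1))
loadings_true = np.linspace(0.5, 0.9, k)
items = factor @ loadings_true[None, :] + 0.6 * rng.normal(size=(n, k))

# Exploratory step: eigendecomposition of the item correlation matrix.
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # descending order

# A dominant first eigenvalue suggests a one-factor structure.
print("eigenvalues:", np.round(eigvals, 2))
print("share of variance on factor 1:", round(eigvals[0] / k, 2))
```

In a real scale-development workflow this exploratory pass would be followed by a confirmatory model fit on an independent sample, as the abstract describes for Study 2.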
Abstract:
This article compares the results obtained from using two different methodological approaches to elicit teachers’ views on their professional role, the key challenges and their aspirations for the future. One approach used a postal/online questionnaire, while the other used telephone interviews, posing a selection of the same questions. The research was carried out on two statistically comparable samples of teachers in England in spring 2004. Significant differences in responses were observed which seem to be attributable to the methods employed. In particular, more ‘definite’ responses were obtained in the interviews than in response to the questionnaire. This article reviews the comparative outcomes in the context of existing research and explores why the separate methods may have produced significantly different responses to the same questions.
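A method comparison of this kind typically tests whether the distribution of responses differs by survey mode. A minimal sketch with scipy, using invented counts (not the study's data) for "definite" versus hedged answers to the same question:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (illustrative only): how many teachers gave a
# "definite" vs. a hedged answer under each data-collection method.
#                definite  hedged
table = [[310,  90],   # telephone interview
         [240, 160]]   # postal/online questionnaire

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

A small p-value here would indicate, as in the article, that responses depend on the method used rather than only on the question asked.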
Abstract:
In this study, we compare two different cyclone-tracking algorithms for detecting North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking, and constraints specific to polar lows. The first method uses digitally bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200–600 km and is especially designed for polar lows. The second method also uses a bandpass filter but is based on the discrete cosine transform (DCT) and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995, produced by dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between the two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter, and the results of this study suggest that it is better suited for the bandpass filtering of MSLP fields. The detection and tracking steps also influence the numbers of tracks, although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between the two methods are still visible, though the major systems are identified by both.
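The DCT-based bandpass idea can be sketched as follows. Only the 200–600 km band is taken from the abstract; the grid spacing, domain size, and synthetic field are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

dx = 25.0                       # grid spacing in km (assumed)
n = 128
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)

# Synthetic MSLP-like field: a large-scale gradient plus a ~300 km anomaly.
field = 0.01 * X + np.exp(-((X - 1600.0)**2 + (Y - 1600.0)**2) / (2 * 150.0**2))

coeffs = dctn(field, norm="ortho")

# DCT mode k on a domain of length L = n*dx has wavelength 2L/k;
# zero every coefficient outside the 200-600 km band.
k = np.arange(n)
kx, ky = np.meshgrid(k, k)
kr = np.maximum(np.hypot(kx, ky), 1e-12)   # guard the (0,0) mode
wavelength = 2 * n * dx / kr
band = (wavelength >= 200.0) & (wavelength <= 600.0)

filtered = idctn(np.where(band, coeffs, 0.0), norm="ortho")
print("filtered range:", float(filtered.min()), float(filtered.max()))
```

Note that the mean and the large-scale gradient are removed (they live outside the band), while the mesoscale anomaly survives; this is the scale separation the abstract refers to.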
Abstract:
Our new molecular understanding of immune priming states that dendritic cell activation is absolutely pivotal for expansion and differentiation of naïve T lymphocytes, and it follows that understanding DC activation is essential to understand and design vaccine adjuvants. This chapter describes how dendritic cells can be used as a core tool to provide detailed quantitative and predictive immunomics information about how adjuvants function. The role of distinct antigen, costimulation, and differentiation signals from activated DC in priming is explained. Four categories of input signals which control DC activation – direct pathogen detection, sensing of injury or cell death, indirect activation via endogenous proinflammatory mediators, and feedback from activated T cells – are compared and contrasted. Practical methods for studying adjuvants using DC are summarised and the importance of DC subset choice, simulating T cell feedback, and use of knockout cells is highlighted. Finally, five case studies are examined that illustrate the benefit of DC activation analysis for understanding vaccine adjuvant function.
Abstract:
The paper considers second kind equations of the form x(s) = y(s) + ∫ k(s − t) z(t) x(t) dt (abbreviated x = y + Kzx), in which the convolution kernel k is integrable and the factor z is bounded but otherwise arbitrary, so that equations of Wiener-Hopf type are included as a special case. Conditions on a set W are obtained such that a generalized Fredholm alternative is valid: if W satisfies these conditions and I − Kz is injective for each z ∈ W, then I − Kz is invertible for each z ∈ W and the operators (I − Kz)−1 are uniformly bounded. As a special case some classical results relating to Wiener-Hopf operators are reproduced. A finite section version of the above equation (with the range of integration reduced to [−a, a]) is considered, as are projection and iterated projection methods for its solution. The operators (I − Kz,a)−1 (where Kz,a denotes the finite section version of Kz) are shown to be uniformly bounded (in z and a) for all a sufficiently large. Uniform stability and convergence results, for the projection and iterated projection methods, are obtained. The argument generalizes an idea in collectively compact operator theory. Some new results in this theory are obtained and applied to the analysis of projection methods for the above equation when z is compactly supported and k(s − t) is replaced by the general kernel k(s, t). A boundary integral equation of the above type, which models outdoor sound propagation over inhomogeneous level terrain, illustrates the application of the theoretical results developed.
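A finite-section discretisation of such an equation reduces it to a linear system on [−a, a]. The sketch below uses an illustrative kernel k and factor z (chosen so that the operator norm is below one and I − Kz is invertible), not the paper's boundary-integral kernel.

```python
import numpy as np

# Finite-section sketch of the second kind equation
#   x(s) = y(s) + int_{-a}^{a} k(s - t) z(t) x(t) dt
# on an m-point grid (Nystrom method with the rectangle rule).
a, m = 4.0, 400
s, h = np.linspace(-a, a, m, retstep=True)

k = lambda u: 0.25 * np.exp(-np.abs(u))   # convolution kernel k(s - t)
z = lambda t: 1.0 / (1.0 + t**2)          # bounded factor z(t)
y = np.cos(s)                             # right-hand side

# Discretise: (I - Kz) x = y with Kz[i, j] = h * k(s_i - s_j) * z(s_j).
Kz = h * k(s[:, None] - s[None, :]) * z(s)[None, :]
x = np.linalg.solve(np.eye(m) - Kz, y)

# Residual of the discrete equation should be at machine precision.
print("max residual:", np.abs(x - y - Kz @ x).max())
```

Increasing a and m in tandem mimics the finite-section limit the abstract analyses; the uniform boundedness results guarantee that the discrete solutions stay stable as a grows.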
Abstract:
Background: Self-based achievement goals use one's own intrapersonal trajectory as a standard of evaluation, and this intrapersonal trajectory may be grounded in one's past (past-based goals) or one's future potential (potential-based goals). Potential-based goals have been overlooked in the literature to date. Aims: The primary aim of the present research is to address this oversight within the context of the 3 × 2 achievement goal framework. Samples: The Study 1 sample was 381 U.S. undergraduates; the Study 2 sample was 310 U.S. undergraduates. Methods: In Study 1, we developed scales to assess potential-approach and potential-avoidance goals, and tested their factorial validity with exploratory and confirmatory factor analyses. In Study 2, we used confirmatory factor analysis to test both the separability of past-based and potential-based goals and their higher-order integration within the self-based category. Results: Study 1 supported the factorial validity of the potential-approach and potential-avoidance goal scales. Study 2 supported the separability of past-based and potential-based goals, as well as their higher-order integration within the self-based category. Conclusions: This research documents the utility of the proposed distinction, and paves the way for subsequent work on the antecedents and consequences of potential-approach and potential-avoidance goals. It highlights the importance of distinguishing potential-based from past-based goals within the self-based category.
Abstract:
Morphing fears (also called transformation obsessions) involve concerns that a person may become contaminated by, and acquire undesirable characteristics of, others. These symptoms are found in patients with OCD and are thought to be related to mental contamination. Given the high levels of distress and interference morphing fears can cause, a reliable and valid assessment measure is needed. This article describes the development and evaluation of the Morphing Fear Questionnaire (MFQ), a 13-item measure designed to assess the presence and severity of morphing fears. A sample of 900 participants took part in the research. Of these, 140 reported having a current diagnosis of OCD (SR-OCD) and 760 reported never having had OCD (N-OCD; of whom 24 reported a diagnosis of an anxiety disorder and 23 reported a diagnosis of depression). Factor structure, reliability, and construct and criterion-related validity were investigated. Exploratory and confirmatory factor analyses supported a one-factor structure replicable across the N-OCD and SR-OCD groups. The MFQ was found to have high internal consistency and good temporal stability, and showed significantly greater associations with convergent measures (assessing obsessive-compulsive symptoms, mental contamination, thought-action fusion, and magical thinking) than with divergent measures (assessing depression and anxiety). Moreover, the MFQ successfully discriminated the SR-OCD sample from the N-OCD group, the anxiety disorder sample, and the depression sample. These findings suggest that the MFQ has sound psychometric properties and can be used to assess morphing fear. Clinical implications are discussed.
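Internal consistency for a one-factor scale like the MFQ is commonly summarised by Cronbach's alpha. A minimal sketch on synthetic 13-item data (the real item responses are, of course, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 13-item questionnaire scores driven by one factor.
n, k = 300, 13
latent = rng.normal(size=(n, 1))
scores = latent + 0.8 * rng.normal(size=(n, k))   # common factor + item noise

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance).
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values above roughly 0.8 are conventionally read as high internal consistency, the property the abstract reports for the MFQ.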
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that, for Gaussian error statistics, the minimum of the weak-constraint inverse equals the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
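The appeal of the ensemble approach is that covariances estimated from the ensemble itself replace adjoint (tangent-linear) integrations. A toy analysis step of this kind is sketched below; the state size, observation operator, and error levels are all illustrative, not the quasigeostrophic setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_ens, n_obs = 10, 50, 3
truth = np.sin(np.linspace(0.0, np.pi, n_state))

# Prior ensemble: a biased background state plus ensemble spread.
background = truth + 0.5 * rng.normal(size=n_state)
ens = background[:, None] + 0.5 * rng.normal(size=(n_state, n_ens))

# Observe three state components with error variance r.
H = np.zeros((n_obs, n_state))
H[0, 1] = H[1, 4] = H[2, 8] = 1.0
r = 0.01
obs = H @ truth + np.sqrt(r) * rng.normal(size=n_obs)

# Ensemble covariance replaces adjoint-based gradient information.
A = ens - ens.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(n_obs))

# Update every member against perturbed observations.
perturbed = obs[:, None] + np.sqrt(r) * rng.normal(size=(n_obs, n_ens))
analysis = ens + K @ (perturbed - H @ ens)

prior_rmse = np.sqrt(((ens.mean(axis=1) - truth) ** 2).mean())
post_rmse = np.sqrt(((analysis.mean(axis=1) - truth) ** 2).mean())
print(f"prior RMSE {prior_rmse:.3f} -> analysis RMSE {post_rmse:.3f}")
```

A smoother, as opposed to the filter sketched here, would apply the same covariance-based update across a whole time window rather than at a single analysis time.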
Abstract:
Geophysics has been shown to be effective in identifying areas contaminated by waste disposal, contributing to greater efficiency in sounding programs and the installation of monitoring wells. In the study area, four trenches were constructed with a total volume of about 25,000 m³. They were kept almost totally filled with re-refined lubricating oil waste for approximately 25 years. No protective liners were used on the bottoms or sides of the disposal trenches. The purpose of this work is to evaluate the potential of the resistivity and ground penetrating radar (GPR) methods for characterizing the contamination of this lubricant oil waste disposal area in Ribeirão Preto, SP, situated on the geological domain of the basalt flows of the Serra Geral Formation and the sandstones of the Botucatu Formation. Geophysical results were shown as 2D profiles. The geophysical methods used enabled the identification of geophysical anomalies that characterize the contamination produced by the trenches filled with lubricant oil waste. Conductive anomalies (resistivity below 185 Ω·m) immediately below the trenches suggest the action of bacteria on the hydrocarbons, as has been observed at several hydrocarbon-contaminated sites previously reported in the literature. It was also possible to define the geometry of the trenches, as evidenced by the GPR method. Direct sampling (chemical analysis of the soil and of the water in the monitoring well) confirmed the contamination. In the soil analysis, low concentrations of several polycyclic aromatic hydrocarbons (PAHs) were found, mainly naphthalene and phenanthrene. In the water samples, the analysis verified contamination of the groundwater by lead (Pb). The geophysical methods used in this investigation provided an excellent tool for environmental characterization of this lubricant oil waste disposal area, and could be applied in the study of similar areas.
Abstract:
The correlation between the microdilution (MD), Etest® (ET), and disk diffusion (DD) methods was determined for amphotericin B, itraconazole, and fluconazole. The minimal inhibitory concentration (MIC) of these antifungal agents was established for a total of 70 Candida spp. isolates from colonization and infection. The species distribution was: Candida albicans (n = 27), C. tropicalis (n = 17), C. glabrata (n = 16), C. parapsilosis (n = 8), and C. lusitaniae (n = 2). Non-Candida albicans Candida species showed higher MICs for the three antifungal agents when compared with C. albicans isolates. The overall concordance (based on the MIC value obtained within two dilutions) between the ET and MD methods was 83% for amphotericin B, 63% for itraconazole, and 64% for fluconazole. Considering the breakpoint, the agreement between the DD and MD methods was 71% for itraconazole and 67% for fluconazole. The DD zone diameters are highly reproducible and correlate well with the MD method, making agar-based methods a viable alternative to MD for susceptibility testing. However, data on agar-based tests for itraconazole and amphotericin B are still scarce. Thus, further research must be carried out to ensure standardization for other antifungal agents. J. Clin. Lab. Anal. 23:324-330, 2009. (C) 2009 Wiley-Liss, Inc.
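The "within two dilutions" agreement criterion can be made concrete: on a twofold dilution scale, two MIC readings agree if they are at most two steps (a factor of four) apart. A sketch with invented MIC pairs, not the study's isolates:

```python
import math

# Hypothetical paired MICs (ug/mL) for the same isolates by two methods.
mic_md = [0.25, 0.5, 1, 2,  4, 8, 16, 0.5]    # microdilution
mic_et = [0.5,  0.5, 4, 2, 32, 8,  8, 0.25]   # Etest

def within_two_dilutions(a, b):
    # Number of twofold dilution steps between the two readings.
    return abs(math.log2(a) - math.log2(b)) <= 2

agree = sum(within_two_dilutions(a, b) for a, b in zip(mic_md, mic_et))
concordance = 100 * agree / len(mic_md)
print(f"concordance: {concordance:.0f}%")   # 7 of 8 pairs agree -> 88%
```

Percentages like the 83%/63%/64% reported in the abstract come from applying exactly this kind of pairwise criterion across all isolates for each drug.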
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments and each compartment is loaded with items from a single class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartment loads are bounded from below and above. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer nonlinear program; in this paper, we reformulate the nonlinear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
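The problem structure can be made concrete with a tiny brute-force sketch. The data, bounds, and costs below are invented; real instances need the integer programming formulations or heuristics the paper develops.

```python
from itertools import product

# Tiny compartmentalised knapsack instance (illustrative). Each used class
# gets one compartment, which costs `fixed_cost` in value and `overhead` in
# capacity, and whose load must lie in [lo, hi].
capacity = 20
fixed_cost, overhead = 3, 1
lo, hi = 2, 12
classes = [                      # items per class: (weight, value)
    [(3, 7), (4, 8), (5, 11)],
    [(2, 4), (6, 9), (4, 6)],
]

def best_plan():
    best = 0
    all_items = [(c, w, v) for c, cls in enumerate(classes) for (w, v) in cls]
    # Enumerate every item subset (feasible only for tiny instances).
    for pick in product([0, 1], repeat=len(all_items)):
        chosen = [it for bit, it in zip(pick, all_items) if bit]
        used = {c for c, _, _ in chosen}
        loads = {c: sum(w for cc, w, _ in chosen if cc == c) for c in used}
        # Compartment load bounds, then total capacity including overheads.
        if any(not (lo <= ld <= hi) for ld in loads.values()):
            continue
        if sum(loads.values()) + overhead * len(used) > capacity:
            continue
        value = sum(v for _, _, v in chosen) - fixed_cost * len(used)
        best = max(best, value)
    return best

print("best net value:", best_plan())   # -> 30 for this instance
```

The fixed compartment cost and capacity overhead are what couple the classes together and make the problem harder than independent per-class knapsacks.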