92 results for Hutchby, Ian: Conversation analysis. Principles, practices and application
in CentAUR: Central Archive University of Reading - UK
Abstract:
Bioturbation at all scales, which tends to replace the primary fabric of a sediment by the ichnofabric (the overall fabric of a sediment that has been bioturbated), is now recognised as playing a major role in facies interpretation. The manner in which the substrate may be colonized, and the physical, chemical and ecological controls (grainsize, sedimentation rate, oxygenation, nutrition, salinity, ethology, community structure and succession), together with the several ways in which the substrate is tiered by bioturbators, are the factors and processes that determine the nature of the ichnofabric. Eleven main styles of substrate tiering are described, ranging from single, pioneer colonization to complex tiering under equilibria, their modification under environmental deterioration and amelioration, and diagenetic enhancement or obscuration. Ichnofabrics may be assessed by four attributes: primary sedimentary factors, Bioturbation Index (BI), burrow size and frequency, and ichnological diversity. Construction of tier and ichnofabric constituent diagrams aids visualization and comparison. The breaks or changes in colonization and style of tiering at key stratal surfaces accentuate the surfaces, and many reflect a major environmental shift of the trace-forming biota, due to change in hydrodynamic regime (leading to non-deposition and/or erosion and/or lithification), change in salinity regime, or subaerial exposure. The succession of gradational or abrupt changes in ichnofabric through genetically related successions, together with changes in colonization and tiering across event beds, may also be interpreted in terms of changes in environmental parameters. It is not the ichnotaxa per se that are important in discriminating between ichnofabrics, but rather the environmental conditions that determine the overall style of colonization. Fabrics composed of different ichnotaxa (and different taphonomies) but similar tier structure and ichnoguild may form in similar environments of different age or different latitude. Appreciation of colonization and tiering styles places ancient ichnofabrics on a sound process-related basis for environmental interpretation. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
We have developed a new method for the analysis of voids in proteins (defined as empty cavities not accessible to solvent). This method combines analysis of individual discrete voids with analysis of packing quality. While these are different aspects of the same effect, they have traditionally been analysed using different approaches. The method has been applied to the calculation of total void volume and maximum void size in a non-redundant set of protein domains and has been used to examine correlations between thermal stability and void size. The tumour-suppressor protein p53 has then been compared with the non-redundant data set to determine whether its low thermal stability results from poor packing. We found that p53 has average packing, but the detrimental effects of some previously unexplained mutations to p53 observed in cancer can be explained by the creation of unusually large voids. (C) 2004 Elsevier Ltd. All rights reserved.
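As an illustration of the kind of computation described above, the following is a minimal, grid-based sketch of internal-void detection in Python; it is not the authors' method. Atoms are voxelised on a regular grid, connected components of empty voxels are identified, and components that touch the grid boundary are treated as solvent-connected space rather than internal voids. All coordinates, radii and grid settings are hypothetical.

import numpy as np
from scipy.ndimage import label

def void_volumes(coords, radii, spacing=0.5, padding=2.0):
    """Volumes (cubic Angstroms) of internal voids, estimated on a voxel grid."""
    lo = coords.min(axis=0) - padding
    hi = coords.max(axis=0) + padding
    shape = tuple(np.ceil((hi - lo) / spacing).astype(int))
    centres = np.indices(shape).reshape(3, -1).T * spacing + lo   # voxel centres
    occupied = np.zeros(len(centres), dtype=bool)
    for c, r in zip(coords, radii):                               # mark atom interiors
        occupied |= np.sum((centres - c) ** 2, axis=1) <= r ** 2
    empty = (~occupied).reshape(shape)
    labels, n = label(empty)                   # connected components of empty space
    # Components that touch the box edge are solvent-connected, not internal voids.
    edge = np.zeros(shape, dtype=bool)
    for axis in range(3):
        edge[(slice(None),) * axis + (0,)] = True
        edge[(slice(None),) * axis + (-1,)] = True
    outside = set(np.unique(labels[edge]))
    return [np.count_nonzero(labels == k) * spacing ** 3
            for k in range(1, n + 1) if k not in outside]

# Toy check: a hollow 7x7x7 lattice of overlapping "atoms" (spacing 1.5, radius 1.4)
# with the central 3x3x3 block removed, which leaves a single sealed internal cavity.
g = np.arange(7) * 1.5
pts = np.array([(x, y, z) for x in g for y in g for z in g])
coords = pts[~np.all((pts >= 3.0) & (pts <= 6.0), axis=1)]
voids = void_volumes(coords, np.full(len(coords), 1.4))
print(len(voids), "internal void(s); total volume", round(sum(voids), 1), "A^3")

A finer grid spacing improves the volume estimate at cubic cost in run time; dilating the atoms by a probe radius before labelling would approximate solvent accessibility more closely.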
Abstract:
The article features a conversation between Rob Cross and Martin Kilduff about organizational network analysis in research and practice. It demonstrates the value of using social network perspectives in HRM. Drawing on the discussion about managing personal networks; managing the networks of others; the impact of social networking sites on perceptions of relationships; and ethical issues in organizational network analysis, we propose specific suggestions to bring social network perspectives closer to HRM researchers and practitioners and rebalance our attention to people and to their relationships.
Abstract:
Second language acquisition researchers often face particular challenges when attempting to generalize study findings to the wider learner population. For example, language learners constitute a heterogeneous group, and it is not always clear how a study’s findings may generalize to other individuals who may differ in terms of language background and proficiency, among many other factors. In this paper, we provide an overview of how mixed-effects models can be used to help overcome these and other issues in the field of second language acquisition. We provide an overview of the benefits of mixed-effects models and a practical example of how mixed-effects analyses can be conducted. Mixed-effects models provide second language researchers with a powerful statistical tool in the analysis of a variety of different types of data.
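As a concrete illustration of the kind of practical example the abstract mentions, the sketch below fits a simple mixed-effects model (a fixed effect of proficiency with random intercepts for learners) to simulated reaction-time data using statsmodels in Python. The variable names and effect sizes are invented and this is not the paper's analysis; crossed random effects for participants and items, common in second language research, would require the variance-components interface or a package such as lme4 in R.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 40, 30
subj = np.repeat(np.arange(n_subj), n_items)
proficiency = np.repeat(rng.normal(0, 1, n_subj), n_items)        # subject-level predictor
subj_intercept = np.repeat(rng.normal(0, 50, n_subj), n_items)    # random intercepts
rt = 700 - 40 * proficiency + subj_intercept + rng.normal(0, 80, n_subj * n_items)

df = pd.DataFrame({"subject": subj, "proficiency": proficiency, "rt": rt})

# Fixed effect of proficiency; random intercept for each learner.
model = smf.mixedlm("rt ~ proficiency", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())    # fixed-effect estimate should recover roughly -40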
Abstract:
There is little consensus on how agriculture will meet future food demands sustainably. Soils and their biota play a crucial role by mediating ecosystem services that support agricultural productivity. However, a multitude of site-specific environmental factors and management practices interact to affect the ability of soil biota to perform vital functions, confounding the interpretation of results from experimental approaches. Insights can be gained through models, which integrate the physiological, biological and ecological mechanisms underpinning soil functions. We present a powerful modelling approach for predicting how agricultural management practices (pesticide applications and tillage) affect soil functioning through earthworm populations. By combining energy budgets and individual-based simulation models, and integrating key behavioural and ecological drivers, we accurately predict population responses to pesticide applications in different climatic conditions. We use the model to analyse the ecological consequences of different weed management practices. Our results demonstrate that an important link between agricultural management (herbicide applications and zero, reduced and conventional tillage) and earthworms is the maintenance of soil organic matter (SOM). We show how zero and reduced tillage practices can increase crop yields while preserving natural ecosystem functions. This demonstrates how management practices which aim to sustain agricultural productivity should account for their effects on earthworm populations, as their proliferation stimulates agricultural productivity. Synthesis and applications. Our results indicate that conventional tillage practices have longer term effects on soil biota than pesticide control, if the pesticide has a short dissipation time. The risk of earthworm populations becoming exposed to toxic pesticides will be reduced under dry soil conditions. Similarly, an increase in soil organic matter could increase the recovery rate of earthworm populations. However, effects are not necessarily additive and the impact of different management practices on earthworms depends on their timing and the prevailing environmental conditions. Our model can be used to determine which combinations of crop management practices and climatic conditions pose least overall risk to earthworm populations. Linking our model mechanistically to crop yield models would aid the optimization of crop management systems by exploring the trade-off between different ecosystem services.
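The following is a deliberately toy sketch, in Python, of an individual-based, energy-budget earthworm model of the general kind described above. It is not the authors' published model: every parameter, the intake function, the tillage effect on SOM and the pesticide response are invented purely to show the model structure (daily intake and maintenance, starvation mortality, reproduction, and exposure that dissipates after application).

import random

class Earthworm:
    def __init__(self, mass=0.5):
        self.mass = mass                     # g fresh weight

    def step(self, som, moisture, pesticide):
        # Assimilation rises with soil organic matter and moisture and is depressed
        # during pesticide exposure; maintenance scales with body mass.
        intake = 0.10 * som * moisture * (1.0 - 0.5 * pesticide)
        net = intake - 0.05 * self.mass
        self.mass += 0.5 * net if net >= 0 else net   # deficits are paid from tissue
        return self.mass > 0.2                        # below this the worm dies

def simulate(days=365, n0=100, tillage=False, spray_day=None):
    som = 0.7 if tillage else 1.0            # tillage assumed to deplete SOM
    worms = [Earthworm() for _ in range(n0)]
    for day in range(days):
        moisture = 0.5 + 0.4 * random.random()                   # crude weather driver
        exposure = 0.0
        if spray_day is not None and day >= spray_day:
            exposure = max(0.0, 1.0 - 0.05 * (day - spray_day))  # pesticide dissipation
        worms = [w for w in worms if w.step(som, moisture, exposure)]
        # Very crude reproduction by large, well-fed adults.
        worms += [Earthworm(0.3) for w in worms
                  if w.mass > 1.2 and random.random() < 0.005]
    return len(worms)

for till in (False, True):
    scenario = "conventional tillage" if till else "zero tillage"
    print(scenario, "+ herbicide on day 120:",
          simulate(tillage=till, spray_day=120), "worms after one year")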
Abstract:
The debate associated with the qualifications of business school faculty has raged since the 1959 release of the Gordon–Howell and Pierson reports, which encouraged business schools in the USA to enhance their legitimacy by increasing their faculties’ doctoral qualifications and scholarly rigor. Today, the legitimacy of specific faculty qualifications remains one of the most discussed topics in management education, attracting the interest of administrators, faculty, and accreditation agencies. Based on new institutional theory and the institutional logics perspective, this paper examines convergence and innovation in business schools through an analysis of faculty hiring criteria. The qualifications examined are academic degree, scholarly publications, teaching experience, and professional experience. Three groups of schools are examined based on type of university, position within a media ranking system, and accreditation by the Association to Advance Collegiate Schools of Business. Data are gathered using a content analysis of 441 faculty postings from business schools based in the USA over two time periods. Contrary to claims of global convergence, we find most qualifications still vary by group, even in the mature US market. Moreover, innovative hiring is more likely to be found in non-elite schools.
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components, starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and the analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
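A minimal sketch of the balanced case described above (assumed, not the paper's Fortran or REML code): variance components are estimated for each stage of a simulated three-stage nested survey from the hierarchical analysis-of-variance mean squares and then accumulated, shortest lag first, into a rough variogram. The stage spacings and simulated component values are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Balanced 3-stage design: 9 stations, 2 substations each, 2 points per substation,
# with assumed separating distances of 10 m, 100 m and 1000 m.
n1, n2, n3 = 9, 2, 2
lags = [10.0, 100.0, 1000.0]                     # shortest to longest stage spacing

# Simulate data with known stage variances (4.0, 2.0, 1.0 from coarsest to finest).
a = rng.normal(0, 2.0, n1)                       # station effects     (sigma1^2 = 4)
b = rng.normal(0, np.sqrt(2.0), (n1, n2))        # substation effects  (sigma2^2 = 2)
e = rng.normal(0, 1.0, (n1, n2, n3))             # within-substation   (sigma3^2 = 1)
y = a[:, None, None] + b[:, :, None] + e

# Mean squares for the balanced nested analysis of variance.
m_ij, m_i, m = y.mean(axis=2), y.mean(axis=(1, 2)), y.mean()
ms3 = ((y - m_ij[:, :, None]) ** 2).sum() / (n1 * n2 * (n3 - 1))
ms2 = n3 * ((m_ij - m_i[:, None]) ** 2).sum() / (n1 * (n2 - 1))
ms1 = n2 * n3 * ((m_i - m) ** 2).sum() / (n1 - 1)

# Variance components from the expected mean squares
# (negative estimates are conventionally truncated at zero).
s3 = ms3
s2 = max((ms2 - ms3) / n3, 0.0)
s1 = max((ms1 - ms2) / (n2 * n3), 0.0)

# Accumulate the components from the shortest lag to obtain a rough variogram.
for lag, gamma in zip(lags, np.cumsum([s3, s2, s1])):
    print(f"lag {lag:7.0f} m : semivariance ~ {gamma:.2f}")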
Abstract:
Sustaining soil fertility is essential to the prosperity of many households in the mid-hills of Nepal, but there are concerns that the breakdown of the traditional linkages between forest, livestock, and cropping systems is adversely affecting fertility. This study used triangulated data from surveys of households, discussion groups, and key informants in 16 wards in eastern and western Nepal to determine the existing practices for soil fertility management, the extent of such practices, and the perception of the direction of changes in soil fertility. The two principal practices for maintaining soil fertility were the application of farmyard manure (FYM) and of chemical fertilizer (mainly urea and diammonium phosphate). Green manuring, in-situ manuring, slicing terrace risers, and burning plant residues are rarely practiced. FYM usage was variable, with more generally applied to khet land (average 6053 kg fresh weight manure ha(-1)) than to bari land (average 4185 kg fresh weight manure ha(-1)), with manure from goats and poultry preferred above that from cows and buffaloes. Almost all households (98%) apply urea to khet land and 87% to bari land, with 45% applying diammonium phosphate to both types of land. Application rates and timings of applications varied considerably both within and between wards suggesting poor knowledge transfer between the research and farming communities. The benefits of chemical fertilizers in terms of ease of application and transportation in comparison with FYM were perceived to outweigh the widely reported detrimental hardening of soil associated with their continued usage. Among key informants, FYM applied in conjunction with chemical fertilizer was the most popular amendment, with FYM alone preferred more than chemical fertilizer alone - probably because of the latter's long-term detrimental effects. Key informant and householder surveys differed in their perception of fertility changes in the last decade probably because of differences in age and site-specific knowledge. All key informants felt that fertility had declined but among households, only about 40% perceived a decline with the remainder about evenly divided between no change and an increase. Householders with small landholdings (< 0.5 ha) were more likely to perceive increasing soil fertility while those with larger landholdings (> 2 ha) were more likely to perceive declining fertility. Perceived changes in soil fertility were not related to food self-sufficiency. The reasons for the slow spread of new technologies within wards and the poor understanding of optimal use of chemical fertilizers in conjunction with improved quality FYM may repay further investigation in terms of sustaining soil fertility in this region.
Abstract:
The calculation of accurate and reliable vibrational potential functions and normal co-ordinates is discussed for those simple polyatomic molecules for which it may be possible. Such calculations should be corrected for the effects of anharmonicity and of resonance interactions between the vibrational states, and should be fitted to all the available information on all isotopic species: particularly the vibrational frequencies, Coriolis zeta constants and centrifugal distortion constants. The difficulties of making these corrections, and of making use of the observed data, are reviewed. A programme for the Ferranti Mercury Computer is described by means of which harmonic vibration frequencies and normal co-ordinate vectors, zeta factors and centrifugal distortion constants can be calculated, from a given force field and from given G-matrix elements, etc. The programme has been used on up to 5 × 5 secular equations for which a single calculation and output of results takes approximately 1 min; it can readily be extended to larger determinants. The best methods of using such a programme and the possibility of reversing the direction of calculation are discussed. The methods are applied to calculating the best possible vibrational potential function for the methane molecule, making use of all the observed data.
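The core step the abstract describes, solving the secular equation for a given force field and G matrix, can be sketched as follows. This is an illustrative reconstruction in Python, not the original Ferranti Mercury programme; the CO force constant used in the check is only approximate.

import numpy as np

def harmonic_wavenumbers(F, G):
    """F in mdyn/Angstrom, G in 1/amu; returns harmonic wavenumbers in cm^-1."""
    lam = np.linalg.eigvals(G @ F).real          # eigenvalues of the GF matrix
    lam = np.sort(lam[lam > 0])[::-1]
    return 1302.79 * np.sqrt(lam)                # unit-conversion factor to cm^-1

# One-dimensional check: the CO stretch, with a harmonic force constant of about
# 19 mdyn/A and reduced mass 12*16/28 amu, should come out near 2170 cm^-1.
mu = 12.0 * 16.0 / 28.0
F = np.array([[19.0]])
G = np.array([[1.0 / mu]])
print(harmonic_wavenumbers(F, G))                # ~ [2169.]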
Abstract:
Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse real-time network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
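A minimal sketch of the nonparametric estimation step described above (an assumed reading, not the paper's code): a correlogram is computed from a spatially complete field and combined with the field's sample variance to give the semivariogram estimate gamma(h) = s^2 * (1 - rho(h)). The synthetic radar-like field and the single-direction lags are purely illustrative.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spatially complete field: smooth, positive, precipitation-like surface
# built by circular convolution of gamma noise with a window.
noise = rng.gamma(2.0, 1.0, size=(200, 200))
kernel = np.outer(np.hanning(15), np.hanning(15))
field = np.real(np.fft.ifft2(np.fft.fft2(noise) *
                             np.fft.fft2(kernel, s=noise.shape)))

def correlogram(z, max_lag):
    """Correlation between the field and itself shifted by h pixels (one direction only)."""
    z = z - z.mean()
    rho = []
    for h in range(1, max_lag + 1):
        a, b = z[:, :-h].ravel(), z[:, h:].ravel()
        rho.append(np.corrcoef(a, b)[0, 1])
    return np.array(rho)

rho = correlogram(field, max_lag=30)
gamma = field.var() * (1.0 - rho)       # semivariogram estimate: variance * (1 - rho(h))
print(np.round(gamma[:5], 3))           # semivariances at lags 1..5 pixels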
Abstract:
A statistical methodology is proposed and tested for the analysis of extreme values of atmospheric wave activity at mid-latitudes. The adopted methods are the classical block-maximum and peak over threshold, respectively based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD). Time-series of the ‘Wave Activity Index’ (WAI) and the ‘Baroclinic Activity Index’ (BAI) are computed from simulations of the General Circulation Model ECHAM4.6, which is run under perpetual January conditions. Both the GEV and the GPD analyses indicate that the extremes of WAI and BAI are Weibull distributed; this corresponds to distributions with an upper bound. However, a remarkably large variability is found in the tails of such distributions; distinct simulations carried out under the same experimental setup provide appreciably different estimates of the 200-yr WAI return level. The consequences of this phenomenon in applications of the methodology to climate change studies are discussed. The atmospheric configurations characteristic of the maxima and minima of WAI and BAI are also examined.
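The two methods named above can be sketched with scipy on a synthetic index series (not the ECHAM4.6 WAI or BAI output): block maxima are fitted with the GEV and a return level is read off as a quantile, while threshold exceedances are fitted with the generalized Pareto. Note that scipy's genextreme shape parameter has the opposite sign to the usual xi, whereas genpareto's matches it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
index = rng.gumbel(loc=0.0, scale=1.0, size=200 * 90)   # 200 "winters" of 90 days

# Block-maximum method: one maximum per 90-day block, fitted with the GEV.
block_max = index.reshape(200, 90).max(axis=1)
c, loc, scale = stats.genextreme.fit(block_max)
rl_200 = stats.genextreme.ppf(1.0 - 1.0 / 200.0, c, loc=loc, scale=scale)
print(f"GEV shape {c:.3f}; 200-block return level {rl_200:.2f}")

# Peak-over-threshold method: exceedances of a high threshold, fitted with the GPD.
u = np.quantile(index, 0.98)
exceed = index[index > u] - u
c_gpd, _, scale_gpd = stats.genpareto.fit(exceed, floc=0.0)
print(f"GPD shape {c_gpd:.3f} (a negative shape implies a bounded upper tail)")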