23 results for techniques of acting
Abstract:
We have applied time series analytical techniques to the flux of lava from an extrusive eruption. Tilt data acting as a proxy for flux are used in a case study of the May–August 1997 period of the eruption at Soufrière Hills Volcano, Montserrat. We justify the use of such a proxy by simple calibratory arguments. Three techniques of time series analysis are employed: spectral, spectrogram and wavelet methods. In addition to the well-known ~9-hour periodicity shown by these data, a previously unknown periodic flux variability is revealed by the wavelet analysis as a 3-day cycle of frequency modulation during June–July 1997, though the physical mechanism responsible is not clear. Such time series analysis has potential for other lava flux proxies at other types of volcanoes.
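A minimal Python sketch (not the authors' code; the 10-minute sampling interval and the synthetic tilt signal are assumptions) of the three analysis techniques named above, using scipy for the spectrum and spectrogram and PyWavelets for the continuous wavelet transform:

```python
# Hedged illustration of spectral, spectrogram and wavelet analysis of a
# tilt-like time series containing a ~9-hour periodicity.
import numpy as np
from scipy import signal
import pywt  # PyWavelets, assumed available

fs = 1.0 / 600.0                          # assumed 10-minute sampling (Hz)
t = np.arange(0, 60 * 86400, 600.0)       # 60 days of synthetic data (s)
tilt = np.sin(2 * np.pi * t / (9 * 3600)) + 0.3 * np.random.randn(t.size)

# 1. Spectrum: the ~9-h line should appear near 3.1e-5 Hz.
f, pxx = signal.periodogram(tilt, fs=fs)

# 2. Spectrogram: how the spectral content evolves through the eruption.
f_sg, t_sg, sxx = signal.spectrogram(tilt, fs=fs, nperseg=1024)

# 3. Continuous wavelet transform: localises frequency modulation in time,
#    the kind of view that revealed the 3-day modulation cycle.
scales = np.arange(1, 256)
coefs, freqs = pywt.cwt(tilt, scales, 'morl', sampling_period=600.0)
print(freqs.min(), freqs.max())
```

The wavelet view is the one that resolves both time and frequency, which is why it can localise slow modulation of a dominant period.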
Abstract:
Three ochre samples (A (orange-red in colour), B (red) and C (purple)) from Clearwell Caves (Gloucestershire, UK) have been examined using an integrated analytical methodology based on the techniques of IR and diffuse reflectance UV-visible-NIR spectroscopy, X-ray diffraction, elemental analysis by ICP-AES and particle size analysis. It is shown that the chromophore in each case is haematite. The differences in colour may be accounted for by (i) different mineralogical and chemical composition in the case of the orange ochre, where higher levels of dolomite and copper are seen and (ii) an unusual particle size distribution in the case of the purple ochre. When the purple ochre was ground to give the same particle size distribution as the red ochre then the colours of the two samples became indistinguishable. An analysis has now been completed of a range of ochre samples with colours from yellow to purple from the important site of Clearwell Caves. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM gas) and maximum gas production rate (R_M gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM gas (P<0.05) and lower (P<0.001) R_M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed from the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher). Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
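For illustration only, a hedged sketch of the curve-fitting step: the sigmoidal form below follows the Groot et al. (1996) parameterisation commonly described as a modified Michaelis-Menten model, but the paper's exact equation and data are not given here, so the function, starting values and synthetic readings are assumptions.

```python
# Hedged sketch of fitting a single-phase gas production profile and deriving
# the time (t_RM) and value (R_M) of the maximum gas production rate.
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, B, C):
    """Cumulative gas (ml/g DM): A = asymptote, B = time to A/2 (h), C = shape."""
    return A / (1.0 + (B / t) ** C)

# Synthetic 144-h incubation readings in place of real MPT/APS data.
t_obs = np.arange(2.0, 146.0, 2.0)
g_obs = gpp(t_obs, A=250.0, B=20.0, C=2.2) + np.random.normal(0, 3, t_obs.size)

(A, B, C), _ = curve_fit(gpp, t_obs, g_obs, p0=[200.0, 15.0, 2.0])

# From dG/dt of this functional form:
t_rm = B * ((C - 1.0) / (C + 1.0)) ** (1.0 / C) if C > 1 else np.nan
r_m = A * C * (B ** C) * t_rm ** (C - 1.0) / (B ** C + t_rm ** C) ** 2
print(A, B, C, t_rm, r_m)
```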
Abstract:
Background: MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results: A large dataset comprising MHC-peptide structural complexes was created by remodelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion: The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
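As a simplified stand-in for the G/PLS step (the genetic variable-selection component is omitted, and the feature layout, peptide length and data below are assumptions, not the study's dataset), a plain partial least squares regression relating per-position energy terms to log BL50 might look like this:

```python
# Hedged sketch: plain PLS regression on partitioned interaction energies.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_peptides, n_positions = 120, 9
# Columns: per-position van der Waals, electrostatic and total non-bonded
# energies for each peptide (synthetic values).
X = rng.normal(size=(n_peptides, n_positions * 3))
log_bl50 = X[:, [3, 4, 25]] @ np.array([0.8, -0.5, 0.3]) \
    + rng.normal(0, 0.2, n_peptides)

pls = PLSRegression(n_components=3).fit(X, log_bl50)
pred = pls.predict(X).ravel()
print(np.corrcoef(pred, log_bl50)[0, 1] ** 2)   # in-sample r^2
```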
Abstract:
A future goal in nuclear fuel reprocessing is the conversion or transmutation of the long-lived radioisotopes of minor actinides, such as americium, into short-lived isotopes by irradiation with neutrons. In order to achieve this transmutation, it is necessary to separate the minor actinides(III), [An(III)], from the lanthanides(III), [Ln(III)], by solvent extraction (partitioning), because the lanthanides absorb neutrons too effectively and hence limit neutron capture by the transmutable actinides. Partitioning using ligands containing only carbon, hydrogen, nitrogen and oxygen atoms is desirable because they are completely incinerable and thus the final volume of waste is minimised [1]. Nitric acid media will be used in the extraction experiments because it is envisaged that the An(III)/Ln(III) separation process could take place after the PUREX process. There is no doubt that the correct design of a molecule that is capable of acting as a ligand or extraction reagent is required for the effective separation of metal ions such as actinides(III) from lanthanides. Recent attention has been directed towards heterocyclic ligands for the preferential separation of the minor actinides. Although such molecules have a rich chemistry, this is only now becoming sufficiently well understood in relation to the partitioning process [2]. The molecules shown in Figures 1 and 2 will be the principal focus of this study. Although the examples chosen here are rather specific, the guidelines can be extended to other areas such as the separation of precious metals [3].
Abstract:
Thirty-one new sodium heterosulfamates, RNHSO3Na, where the R portion contains mainly thiazole, benzothiazole, thiadiazole and pyridine ring structures, have been synthesized and their taste portfolios have been assessed. A database of 132 heterosulfamates (both open-chain and cyclic) has been formed by combining these new compounds with an existing set of 101 heterosulfamates which were previously synthesized and for which taste data are available. Simple descriptors have been obtained using (i) measurements with Corey-Pauling-Koltun (CPK) space-filling models giving x, y and z dimensions and a volume V_CPK, (ii) calculated first-order molecular connectivities (¹χᵛ) and (iii) the calculated Spartan program parameters to obtain HOMO and LUMO energies, the solvation energy E_solv and V_SPARTAN. The techniques of linear (LDA) and quadratic (QDA) discriminant analysis and Tree analysis have then been employed to develop structure-taste relationships (SARs) that classify the sweet (S) and non-sweet (N) compounds into separate categories. In the LDA analysis 70% of the compounds were correctly classified (this compares with 65% when the smaller data set of 101 compounds was used) and in the QDA analysis 68% were correctly classified (compared to 80% previously). The Tree analysis correctly classified 81% (compared to 86% previously). An alternative Tree analysis derived using the Cerius2 program and a set of physicochemical descriptors correctly classified only 54% of the compounds.
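A hedged sketch of the classification step, using scikit-learn's LDA, QDA and decision-tree classifiers on placeholder descriptors named after those in the abstract; the data are synthetic, not the 132-compound set:

```python
# Hedged sketch: LDA, QDA and Tree classification of sweet vs non-sweet
# compounds from simple molecular descriptors.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 132
# Columns stand in for: x, y, z CPK dimensions, V_CPK, first-order
# connectivity, HOMO, LUMO, solvation energy, V_SPARTAN (all synthetic).
X = rng.normal(size=(n, 9))
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = sweet

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("Tree", DecisionTreeClassifier(max_depth=4))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} correctly classified (synthetic data)")
```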
Abstract:
The paper reviews recent models that have applied the techniques of behavioural economics to the analysis of the tax compliance choice of an individual taxpayer. The construction of these models is motivated by the failure of the Yitzhaki version of the Allingham–Sandmo model to predict correctly the proportion of taxpayers who will evade and the effect of an increase in the tax rate upon the chosen level of evasion. Recent approaches have applied non-expected utility theory to the compliance decision and have addressed social interaction. The models we describe are able to match the observed extent of evasion and correctly predict the tax effect but do not have the parsimony or precision of the Yitzhaki model.
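For reference, a small numerical sketch of the Yitzhaki benchmark that these behavioural models are measured against: declared income is chosen to maximise expected utility, with the fine levied on evaded tax. The CRRA utility and the parameter values are assumptions chosen only to keep the optimum interior; they reproduce the model's counterintuitive prediction that a higher tax rate reduces evasion.

```python
# Hedged sketch of the Yitzhaki variant of the Allingham-Sandmo model.
from scipy.optimize import minimize_scalar

def optimal_declared(income=100.0, tax=0.3, p=0.3, fine=2.0, rho=2.0):
    """Declared income X maximising expected CRRA utility (assumed parameters)."""
    def neg_eu(x):
        evaded_tax = tax * (income - x)
        c_ok = income - tax * x                  # consumption if not audited
        c_caught = c_ok - fine * evaded_tax      # audited: pay fine * evaded tax
        u = lambda c: c ** (1 - rho) / (1 - rho)
        return -((1 - p) * u(c_ok) + p * u(c_caught))
    return minimize_scalar(neg_eu, bounds=(0.0, income), method='bounded').x

for t in (0.2, 0.3, 0.4):
    # Declared income rises with the tax rate, i.e. evasion falls.
    print(t, round(optimal_declared(tax=t), 1))
```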
Abstract:
This article describes a case study involving information technology managers and their new programmer recruitment policy, but the primary interest is methodological. The processes of issue generation and selection and model conceptualization are described. Early use of “magnetic hexagons” allowed the generation of a range of issues, most of which would not have emerged if system dynamics elicitation techniques had been employed. With the selection of a specific issue, flow diagramming was used to conceptualize a model, computer implementation and scenario generation following naturally. Observations are made on the processes of system dynamics modeling, particularly on the need to employ general techniques of knowledge elicitation in the early stages of interventions. It is proposed that flexible approaches should be used to generate, select, and study the issues, since these reduce any biasing of the elicitation toward system dynamics problems and also allow the participants to take up the most appropriate problem-structuring approach.
Abstract:
Sea surface temperature has been an important application of remote sensing from space for three decades. This chapter first describes well-established methods that have delivered valuable routine observations of sea surface temperature for meteorology and oceanography. Increasingly demanding requirements, often related to climate science, have highlighted some limitations of these approaches. Practitioners have had to revisit techniques of estimation, of characterising uncertainty, and of validating observations, and even to reconsider the meaning(s) of “sea surface temperature”. The current understanding of these issues is reviewed, drawing attention to ongoing questions. Lastly, the prospect for thermal remote sensing of sea surface temperature over coming years is discussed.
Abstract:
Dominant paradigms of causal explanation for why and how Western liberal-democracies go to war in the post-Cold War era remain versions of the 'liberal peace' or 'democratic peace' thesis. Yet such explanations have been shown to rest upon deeply problematic epistemological and methodological assumptions. Of equal importance, however, is the failure of these dominant paradigms to account for the 'neoliberal revolution' that has gripped Western liberal-democracies since the 1970s. The transition from liberalism to neoliberalism remains neglected in analyses of the contemporary Western security constellation. Arguing that neoliberalism can be understood simultaneously through the Marxian concept of ideology and the Foucauldian concept of governmentality – that is, as a complementary set of 'ways of seeing' and 'ways of being' – the thesis goes on to analyse British security in policy and practice, considering it as an instantiation of a wider neoliberal way of war. In so doing, the thesis draws upon, but also challenges and develops, established critical discourse analytic methods, incorporating within its purview not only the textual data that is usually considered by discourse analysts, but also material practices of security. This analysis finds that contemporary British security policy is predicated on a neoliberal social ontology, morphology and morality – an ideology or 'way of seeing' – focused on the notion of a globalised 'network-market', and is aimed at rendering circulations through this network-market amenable to neoliberal techniques of government. It is further argued that security practices shaped by this ideology imperfectly and unevenly achieve the realisation of neoliberal 'ways of being' – especially modes of governing self and other or the 'conduct of conduct' – and the re-articulation of subjectivities in line with neoliberal principles of individualism, risk, responsibility and flexibility. The policy and practice of contemporary British 'security' is thus recontextualised as a component of a broader 'neoliberal way of war'.
Abstract:
Phytoplankton is at the base of the marine food web. Its carbon fixation, the net primary productivity (NPP), sustains most living marine resources. In regions like the tropical Pacific (30°N–30°S), natural fluctuations of NPP have large impacts on marine ecosystems including fisheries. The capacity to predict these natural variations would provide an important asset to science-based management approaches but remains as yet unexplored. In this paper, we demonstrate that natural variations of NPP in the tropical Pacific can be forecasted several years in advance beyond the physical environment, whereas those of sea surface temperature are limited to 1 y. These results open previously unidentified perspectives for the future development of science-based management techniques of marine ecosystems based on multiyear forecasts of NPP.
Abstract:
The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation, to determine the amount of smoothing objectively, and adaptive smoothing, to vary the smoothing locally, are also considered. Also introduced are techniques for combining seasonal statistical distributions to produce longer-term statistical distributions. Although all calculations are performed globally, only the results for the Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous studies, both for model-based studies and observational studies.
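A hedged sketch of spherical kernel smoothing of feature-track statistics: a simple exponential kernel of great-circle distance stands in for both the traditional kernels and the more efficient ones introduced in the paper, whose exact form is not given in this abstract; the feature positions and the bandwidth (which cross-validation would normally choose) are synthetic.

```python
# Hedged sketch: unnormalised track-density field from feature points on
# the sphere, smoothed with an exponential kernel of great-circle distance.
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Angular separation (radians) between points given in radians."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

def kernel_density(grid_lat, grid_lon, pts_lat, pts_lon, bandwidth=0.2):
    """Unnormalised density at one grid point from all feature points."""
    d = great_circle(grid_lat, grid_lon, pts_lat, pts_lon)
    return np.exp(-d / bandwidth).sum()

# Synthetic cyclone feature positions clustered in a NH storm track.
rng = np.random.default_rng(2)
pts_lat = np.deg2rad(rng.normal(50, 8, 500))
pts_lon = np.deg2rad(rng.uniform(-180, 180, 500))

lats = np.deg2rad(np.arange(-87.5, 90, 5.0))
lons = np.deg2rad(np.arange(-180, 180, 5.0))
density = np.array([[kernel_density(la, lo, pts_lat, pts_lon)
                     for lo in lons] for la in lats])
print(density.shape, density.max())
```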
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point. The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
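For readers unfamiliar with Mp-testing, the standard tool is the Lucas-Lehmer test; the following is a textbook sketch, not any particular project's implementation.

```python
# Hedged sketch: Lucas-Lehmer primality test for Mersenne numbers M_p = 2^p - 1.
def is_mersenne_prime(p: int) -> bool:
    """Return True if M_p = 2**p - 1 is prime, for prime exponent p."""
    if p == 2:
        return True
    m = (1 << p) - 1          # M_p
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Small check: M_7 = 127 is prime, M_11 = 2047 = 23 * 89 is not.
print(is_mersenne_prime(7), is_mersenne_prime(11))
```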
Abstract:
It is common practice to design a survey with a large number of strata. However, in this case the usual techniques for variance estimation can be inaccurate. This paper proposes a variance estimator for estimators of totals. The method proposed can be implemented with standard statistical packages without any specific programming, as it involves simple techniques of estimation, such as regression fitting.
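To illustrate the problem rather than the proposed solution (the paper's regression-based estimator is not specified in this abstract), here is the textbook stratified variance estimator for a total; with many strata and tiny stratum samples it becomes noisy, and it is undefined whenever a stratum contributes only one unit. The population sizes, sample sizes and values below are synthetic.

```python
# Hedged sketch: standard stratified estimator of a total and its variance.
import numpy as np

rng = np.random.default_rng(3)
H = 200                                   # many strata
N_h = rng.integers(50, 200, H)            # stratum population sizes (assumed)
n_h = rng.integers(2, 4, H)               # very small stratum samples

total_hat, var_hat = 0.0, 0.0
for Nh, nh in zip(N_h, n_h):
    y = rng.normal(10, 2, nh)             # sampled values in this stratum
    total_hat += Nh * y.mean()
    var_hat += Nh ** 2 * (1 - nh / Nh) * y.var(ddof=1) / nh

print(total_hat, np.sqrt(var_hat))
```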