Abstract:
An efficient method of combining neutron diffraction data over an extended Q range with detailed atomistic models is presented. A quantitative and qualitative mapping of the organization of the chain conformation in both the glass and the liquid phase has been performed. The proposed structural refinement method is based on exploiting the intrachain features of the diffraction pattern through the use of internal coordinates for bond lengths, valence angles and torsion rotations. Models are built stochastically by assigning these internal coordinates from probability distributions with a limited number of variable parameters. Variation of these parameters is used to construct models that minimize the differences between the observed and calculated structure factors. A series of neutron scattering data for 1,4-polybutadiene over the region 20–320 K is presented. Analysis of the experimental data yields bond lengths for C–C and C=C of 1.54 and 1.35 Å, respectively. Valence angles of the backbone were found to be 112° and 122.8° for C–C–C and C–C=C, respectively. Three torsion angles, corresponding to the double bond and the adjacent α and β bonds, were found to occupy cis and trans; s± and trans; and g± and trans states, respectively. We compare our results with theoretical predictions, computer simulations, RIS models, and previously reported experimental results.
Abstract:
Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
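The "equidistributing" regrading described above is, in modern terms, histogram equalization: each value is passed through the empirical CDF of the data so the output histogram is roughly constant. A minimal numpy sketch, with illustrative function names; the smoothing parameter `t` is just one simple way to interpolate between the equalized regrading and a linear rescale, not necessarily the paper's exact technique:

```python
import numpy as np

def equidistributing_regrade(data, levels=256):
    """Map data through its empirical rank (CDF) so the output
    histogram is approximately constant (histogram equalization)."""
    flat = np.asarray(data, dtype=float).ravel()
    order = flat.argsort()
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)          # rank of each sample
    # Scale ranks onto the target grey-level range [0, levels - 1].
    return (ranks / (flat.size - 1) * (levels - 1)).reshape(np.shape(data))

def smoothed_regrade(data, levels=256, t=1.0):
    """Continuously variable smoothing: t = 1 gives full equalization,
    t = 0 reduces to a plain linear rescale of the data."""
    x = np.asarray(data, dtype=float)
    lin = (x - x.min()) / (x.max() - x.min()) * (levels - 1)
    return t * equidistributing_regrade(x, levels) + (1 - t) * lin
```

Setting `t` between 0 and 1 blends the equidistributed histogram with the original one, which is one way to realize a regrading "dependent upon a continuously variable parameter".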
Abstract:
Synoptic climatology relates the atmospheric circulation to the surface environment. The aim of this study is to examine the variability of the surface meteorological patterns that develop under different synoptic-scale categories over a suburban area with complex topography. Multivariate data analysis techniques were applied to a data set of surface meteorological elements. Three principal components were found, related to the thermodynamic state of the surface environment and to the two components of the wind speed. The variability of the surface flows was related to atmospheric circulation categories by applying Correspondence Analysis. Similar surface thermodynamic fields develop under the cyclonic categories, in contrast with the anticyclonic category. A strong, steady wind flow characterized by high shear values develops under the cyclonic Closed Low and the anticyclonic H–L categories, in contrast to the variable weak flow under the anticyclonic Open Anticyclone category.
Abstract:
Meteorological (met) station data is used as the basis for a number of influential studies into the impacts of the variability of renewable resources. Real turbine output data is often not easy to acquire, whereas meteorological wind data, supplied at a standardised height of 10 m, is widely available. This data can be extrapolated to a standard turbine hub height using the wind profile power law and used to simulate the hypothetical power output of a turbine. Utilising a number of met sites in this way, a model of future wind generation output can be developed. However, the accuracy of this extrapolation depends strongly on the choice of the wind shear exponent alpha. This paper investigates the accuracy of the simulated generation output against reality using a wind farm in North Rhins, Scotland and a nearby met station in West Freugh. The results show that while a single annual average value of alpha may be selected to accurately represent the long-term energy generation of a simulated wind farm, there are significant differences between simulation and reality on an hourly power generation basis. This has implications for understanding the impact of the variability of renewables on short timescales, particularly for system balancing and for the way that conventional generation may be asked to respond to a high level of variable renewable generation on the grid in the future.
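The extrapolation step this abstract hinges on is the wind profile power law, v(h) = v_ref · (h / h_ref)^alpha. A small sketch; the hub height and the default alpha of 0.14 are illustrative assumptions, not values from the North Rhins study:

```python
def extrapolate_wind_speed(v_ref, h_ref=10.0, h_hub=80.0, alpha=0.14):
    """Wind profile power law: v(h) = v_ref * (h / h_ref) ** alpha.

    v_ref : wind speed measured at met-station height h_ref (m)
    h_hub : assumed turbine hub height (m), illustrative here
    alpha : wind shear exponent -- the sensitive choice the paper
            discusses; 0.14 is a commonly quoted onshore default,
            not a site-specific value.
    """
    return v_ref * (h_hub / h_ref) ** alpha
```

The abstract's point is that a single annual-average alpha can match long-term energy totals while still mis-predicting individual hourly speeds, since the true shear profile varies with atmospheric stability through the day.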
Abstract:
Cross-bred cow adoption is an important and potent policy variable precipitating subsistence household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households' 'proximity' to market in terms of their level of deficiency in essential inputs. The problem is compounded by four factors: first, the number of cross-bred cows (count data) is an important, endogenous decision of the household; second, there is no standard multivariate generalization of the Poisson regression model; third, the milk sales data are censored (sales from non-participating households are, essentially, censored at zero); and fourth, an important simultaneity exists between the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume, and the decision to market the milk that is produced but not consumed within the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between the cross-bred-cow adoption, milk-output, and milk-sales equations. The latent-variable procedure proves tractable in the extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
Abstract:
An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses, and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate normal-linear model, Gibbs sampling and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5–1998:3) and three nonconsecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return (measured in metric tons of additional consumption per thousand dollars expended) was around a factor of 20.
Abstract:
What are the precise brain regions supporting the short-term retention of verbal information? A previous functional magnetic resonance imaging (fMRI) study suggested that they may be topographically variable across individuals, occurring, in most, in regions posterior to prefrontal cortex (PFC), and that detection of these regions may be best suited to a single-subject (SS) approach to fMRI analysis (Feredoes and Postle, 2007). In contrast, other studies using spatially normalized group-averaged (SNGA) analyses have localized storage-related activity to PFC. To evaluate the necessity of the regions identified by these two methods, we applied repetitive transcranial magnetic stimulation (rTMS) to SS- and SNGA-identified regions throughout the retention period of a delayed letter-recognition task. Results indicated that rTMS targeting SS analysis-identified regions of left perisylvian and sensorimotor cortex impaired performance, whereas rTMS targeting the SNGA-identified region of left caudal PFC had no effect on performance. Our results support the view that the short-term retention of verbal information can be supported by regions associated with acoustic, lexical, phonological, and speech-based representation of information. They also suggest that the brain bases of some cognitive functions may be better detected by SS than by SNGA approaches to fMRI data analysis.
Abstract:
The link between the Pacific/North American pattern (PNA) and the North Atlantic Oscillation (NAO) is investigated in reanalysis data (NCEP, ERA40) and in multi-century CGCM runs for present-day climate using three versions of the ECHAM model. PNA and NAO patterns and indices are determined via rotated principal component analysis on monthly mean 500 hPa geopotential height fields using the varimax criterion. On average, the multi-century CGCM simulations show a significant anti-correlation between PNA and NAO. Further, multi-decadal periods with significantly enhanced (high anti-correlation, active phase) or weakened (low correlation, inactive phase) coupling are found in all CGCMs. In the simulated active phases, the storm track activity near Newfoundland has a stronger link with PNA variability than during the inactive phases. On average, the reanalysis datasets show no significant anti-correlation between the PNA and NAO indices, but during the sub-period 1973–1994 a significant anti-correlation is detected, suggesting that the present climate could correspond to an inactive period as detected in the CGCMs. An analysis of possible physical mechanisms suggests that the link between the patterns is established by the baroclinic waves forming the North Atlantic storm track. The geopotential height anomalies associated with negative PNA phases induce increased advection of warm, moist air from the Gulf of Mexico and of cold air from Canada. Both types of advection increase baroclinicity over eastern North America and raise the low-level latent heat content of the warm air masses. Thus, growth conditions for eddies at the entrance of the North Atlantic storm track are enhanced. Considering the average temporal development during winter in the CGCMs, results show an enhanced Newfoundland storm track maximum in early winter for negative PNA, followed by a downstream enhancement of the Atlantic storm track in the subsequent months. In active (inactive) phases, this seasonal development is enhanced (suppressed). As the storm track over the central and eastern Atlantic is closely related to NAO variability, this development can be explained by the shift of the NAO index to more positive values.
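The pattern-identification step named above, rotated principal component analysis with the varimax criterion, amounts to applying a varimax rotation to the leading PCA loadings. A generic numpy sketch of the standard SVD-based varimax iteration (not the authors' code):

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a (variables x factors) loading matrix.

    Iteratively chooses an orthogonal rotation R maximizing the
    variance of the squared loadings within each column.
    """
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion, solved via SVD.
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L * (L**2).sum(axis=0) / p))
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):   # converged
            break
        var = new_var
    return loadings @ R
```

Because R is orthogonal, the rotation redistributes variance between components for interpretability (yielding cleanly separated PNA- and NAO-like patterns) without changing the total variance captured.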
Abstract:
Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
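One common way such a Gamma-type hierarchical prior on the error variances is handled in a Gibbs sampler follows the Geweke-style scale-mixture formulation: each error has variance sigma2 * lambda_i, with a chi-square prior on nu / lambda_i (equivalently, a Gamma prior on the precision), which yields a closed-form conditional draw for each lambda_i. A hedged numpy sketch with illustrative names, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_latent_variances(resid, sigma2, nu):
    """One Gibbs draw of the per-observation variance scales lambda_i.

    Model: e_i ~ N(0, sigma2 * lambda_i), prior nu / lambda_i ~ chi^2(nu).
    Full conditional: (nu + e_i**2 / sigma2) / lambda_i ~ chi^2(nu + 1),
    so lambda_i is drawn by dividing by a chi^2(nu + 1) variate.
    """
    return (nu + resid**2 / sigma2) / rng.chisquare(nu + 1.0,
                                                    size=resid.size)
```

Within the full MCMC scheme, this draw alternates with the draws for the structural and instrument-equation coefficients; observations with large residuals receive large lambda_i, which down-weights them in the subsequent coefficient draws.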
Abstract:
Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
Abstract:
Reliable evidence of trends in the illegal ivory trade is important for informing decision making for elephants, but it is difficult to obtain owing to the covert nature of the trade. The Elephant Trade Information System, a global database of reported seizures of illegal ivory, holds the only extensive information on the illicit trade available. However, inherent biases in seizure data make it difficult to infer trends: countries differ in their ability to make and report seizures, and these differences cannot be directly measured. We developed a new modelling framework to provide quantitative evidence on trends in the illegal ivory trade from seizure data. The framework uses Bayesian hierarchical latent variable models to reduce bias in seizure data by identifying proxy variables that describe the variability in seizure and reporting rates between countries and over time. The models produced bias-adjusted smoothed estimates of relative trends in illegal ivory activity for raw and worked ivory in three weight classes. Activity is represented by two indicators describing the number of illegal ivory transactions (the Transactions Index) and the total weight of illegal ivory transactions (the Weights Index) at global, regional or national levels. Globally, activity was found to be rapidly increasing and at its highest level in 16 years, more than doubling from 2007 to 2011 and tripling from 1998 to 2011. Over 70% of the Transactions Index comes from shipments of worked ivory weighing less than 10 kg, and the rapid increase since 2007 is mainly due to increased consumption in China. Over 70% of the Weights Index comes from shipments of raw ivory weighing at least 100 kg, mainly moving from Central and East Africa to Southeast and East Asia. The results tie together recent findings on trends in poaching rates, declining populations and consumption, and provide detailed evidence to inform international decision making on elephants.
Abstract:
The large pine weevil, Hylobius abietis, is a serious pest of reforestation in northern Europe. However, weevils developing in stumps of felled trees can be killed by entomopathogenic nematodes applied to the soil around the stumps, and this method of control has been used at an operational level in the UK and Ireland. We investigated the factors affecting the efficacy of entomopathogenic nematodes in the control of the large pine weevil across 10 years of field experiments, by means of a meta-analysis of published studies and previously unpublished data. We investigated two species with different foraging strategies: the 'ambusher' Steinernema carpocapsae, the species most often used at an operational level, and the 'cruiser' Heterorhabditis downesi. Efficacy was measured both by the percentage reduction in numbers of adults emerging relative to untreated controls and by the percentage parasitism of developing weevils in the stump. Both measures were significantly higher for H. downesi than for S. carpocapsae. General linear models were constructed for each nematode species separately, using substrate type (peat versus mineral soil) and tree species (pine versus spruce) as fixed factors, weevil abundance (from the mean of untreated stumps) as a covariate, and percentage reduction or percentage parasitism as the response variable. For both nematode species, the most parsimonious models showed that substrate type was generally, though not always, the most significant variable, whether replicates were at the site or the stump level, and that peaty soils significantly promote the efficacy of both species. Efficacy, in terms of percentage parasitism, was not density dependent.
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
Abstract:
Pasture-based ruminant production systems are common in certain areas of the world, but energy evaluation in grazing cattle is performed with equations developed, for the most part, with sheep or with cattle fed total mixed rations. The aim of the current study was to develop predictions of metabolisable energy (ME) concentrations in fresh-cut grass offered to non-pregnant, non-lactating cows at maintenance energy level, which may be more suitable for grazing cattle. Data were collected from three digestibility trials performed over consecutive grazing seasons. In order to cover a range of commercial conditions and of data availability in pasture-based systems, thirty-eight equations for the prediction of energy concentrations and ratios were developed. An internal validation was performed for all equations and also for existing predictions of grass ME. Prediction error for ME using nutrient digestibility was lowest when gross energy (GE) or organic matter digestibility was used as the sole predictor, while the addition of grass nutrient contents reduced the difference between predicted and actual values and explained more variation. Addition of N, GE and diethyl ether extract (EE) contents improved accuracy when digestible organic matter in DM was the primary predictor. When digestible energy was the primary explanatory variable, prediction error was relatively low, but addition of the water-soluble carbohydrate, EE and acid-detergent fibre contents of grass decreased prediction error further. Equations developed in the current study showed lower prediction errors than existing equations, and may thus allow for an improved prediction of ME in practice, which is critical for the sustainability of pasture-based systems.
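As an illustration of the kind of single-predictor equation described (ME regressed on GE digestibility), here is a least-squares sketch; the trial records are entirely hypothetical numbers made up for demonstration, not the study's data:

```python
import numpy as np

# Hypothetical digestibility-trial records: GE digestibility (proportion)
# and measured ME (MJ/kg DM).  Illustrative values only.
ge_dig = np.array([0.68, 0.72, 0.75, 0.78, 0.81, 0.84])
me     = np.array([10.2, 10.9, 11.4, 11.9, 12.4, 12.9])

# Single-predictor equation ME = a + b * GE_digestibility, fitted
# by ordinary least squares.
X = np.column_stack([np.ones_like(ge_dig), ge_dig])
coef, *_ = np.linalg.lstsq(X, me, rcond=None)

# Prediction error (RMSE) of the fitted equation on the same records,
# analogous to the internal validation the study reports.
pred = X @ coef
rmse = np.sqrt(np.mean((me - pred) ** 2))
```

Extending `X` with further columns (e.g. N, GE or EE contents) is the mechanical analogue of the multi-predictor equations the study compares against the single-predictor ones.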
Abstract:
Recently, in light of minimalist assumptions, some partial-UG-accessibility accounts of adult second language acquisition have drawn a distinction in the post-critical-period ability to acquire new features based on their LF-interpretability (i.e. interpretable vs. uninterpretable features) (HAWKINS, 2005; HAWKINS; HATTORI, 2006; TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007). The Interpretability Hypothesis (TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007) claims that only uninterpretable features are subject to post-critical-period failure and, therefore, cannot be acquired. Conversely, Full Access approaches claim that L2 learners have full access to UG's entire inventory of features, and that L1/L2 differences arise outside the narrow syntax. The phenomenon studied herein, adult acquisition of the Overt Pronoun Constraint (OPC) (MONTALBETTI, 1984) and of inflected infinitives in nonnative Portuguese, challenges the Interpretability Hypothesis insofar as the hypothesis makes the wrong predictions for what is observed. The present data demonstrate that advanced learners of L2 Portuguese acquire the OPC and the syntax and semantics of inflected infinitives with native-like accuracy. Since inflected infinitives require the acquisition of new uninterpretable φ-features, the present data provide evidence against Tsimpli and colleagues' Interpretability Hypothesis.