863 results for Random equivalent availability
Abstract:
This monthly report from the Iowa Department of Natural Resources is about the water quality management of Iowa's rivers, streams and lakes.
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
Abstract:
Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, use was made of the ESM (Experience Sampling Method) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision-making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights that involve area-specific estimates of bias and variance; and (b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
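For intuition, the composite form described above can be written as w times the direct estimate plus (1 - w) times the indirect one, with w chosen from estimated variance and squared bias. A minimal sketch in Python, with hypothetical inputs and a simplifying independence assumption, not the estimators evaluated in the paper:

```python
import numpy as np

def composite_estimate(theta_direct, var_direct, theta_indirect, sq_bias_indirect):
    """Combine a direct and an indirect small-area estimate.

    The weight minimizes the estimated MSE of w*direct + (1-w)*indirect,
    assuming the direct estimator is unbiased with variance var_direct and
    the indirect estimator's MSE is dominated by its squared bias -- a
    simplifying assumption made only for this illustration.
    """
    mse_indirect = sq_bias_indirect
    w = mse_indirect / (var_direct + mse_indirect)
    return w * theta_direct + (1.0 - w) * theta_indirect, w

# Hypothetical area-level inputs: a noisy direct estimate and a stable
# but possibly biased synthetic (indirect) estimate.
estimate, weight = composite_estimate(theta_direct=0.42, var_direct=0.010,
                                      theta_indirect=0.37, sq_bias_indirect=0.004)
print(f"composite estimate = {estimate:.3f}, weight on direct = {weight:.2f}")
```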
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or whether (iii) correlation between effects and regressors is allowed, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
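As a concrete special case of the constant-slope, variable-intercept model y_it = alpha_i + beta*x_it + e_it, the sketch below (not the paper's own framework) shows two estimation routes that use the same information on the intercepts, dummy-variable least squares and the within (demeaned) regression, returning the same slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 20, 30
group = np.repeat(np.arange(n_groups), n_per)
alpha = rng.normal(0.0, 2.0, n_groups)           # group-specific intercepts
x = rng.normal(0.0, 1.0, n_groups * n_per)
y = alpha[group] + 1.5 * x + rng.normal(0.0, 1.0, x.size)

# Route 1: least squares with group dummies (intercepts as parameters).
D = np.eye(n_groups)[group]                      # dummy matrix
beta_lsdv = np.linalg.lstsq(np.column_stack([x, D]), y, rcond=None)[0][0]

# Route 2: within (demeaned) regression, sweeping the intercepts out first.
x_dm = x - np.bincount(group, x)[group] / n_per
y_dm = y - np.bincount(group, y)[group] / n_per
beta_within = (x_dm @ y_dm) / (x_dm @ x_dm)

print(beta_lsdv, beta_within)   # identical up to floating-point error
```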
Abstract:
Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or outcome variable (disease)
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant
- Increasing sample size will help minimise the impact of measurement error in an outcome variable but will only make estimates more precisely wrong when the error is in an exposure variable
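These points can be verified with a short simulation; the sketch below uses synthetic data and a simple one-predictor OLS, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_err = 2000, 0.5, 1.0
x_true = rng.normal(0.0, 1.0, n)
y_true = beta * x_true + rng.normal(0.0, 1.0, n)

def slope_and_se(x, y):
    """OLS slope and its standard error for a single predictor."""
    x_c, y_c = x - x.mean(), y - y.mean()
    b = (x_c @ y_c) / (x_c @ x_c)
    resid = y_c - b * x_c
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x_c @ x_c))
    return b, se

# Error in the exposure: the slope is attenuated towards the null.
x_noisy = x_true + rng.normal(0.0, sigma_err, n)
print("error in exposure:", slope_and_se(x_noisy, y_true))

# Error in the outcome: the slope stays unbiased but its SE grows.
y_noisy = y_true + rng.normal(0.0, sigma_err, n)
print("error in outcome: ", slope_and_se(x_true, y_noisy))
```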
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes the predominant attenuation mechanism at seismic frequencies. As a consequence, centimeter-scale perturbations of the subsurface physical properties should be taken into account for seismic modeling whenever detailed and accurate responses of the target structures are desired. This is, however, computationally prohibitive since extremely small grid spacings would be necessary. A convenient way to circumvent this problem is to use an upscaling procedure to replace the heterogeneous porous media by equivalent visco-elastic solids. In this work, we solve Biot's equations of motion to perform numerical simulations of seismic wave propagation through porous media containing mesoscopic heterogeneities. We then use an upscaling procedure to replace the heterogeneous poro-elastic regions by homogeneous equivalent visco-elastic solids and repeat the simulations using visco-elastic equations of motion. We find that, despite the equivalent attenuation behavior of the heterogeneous poro-elastic medium and the equivalent visco-elastic solid, the seismograms may differ due to diverging boundary conditions at fluid-solid interfaces, where there exist additional options for the poro-elastic case. In particular, we observe that the seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an interesting result, which has potentially important implications for wave-equation-based algorithms in exploration geophysics involving fluid-solid interfaces, such as, for example, wave field decomposition.
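The upscaling step amounts to replacing each heterogeneous poro-elastic region by a visco-elastic solid whose complex modulus reproduces the same attenuation. As a purely illustrative sketch (a standard linear solid with made-up relaxation times, not the equivalent media computed in this work), the inverse quality factor implied by a complex modulus can be evaluated as follows:

```python
import numpy as np

def zener_modulus(omega, m_relaxed, tau_eps, tau_sig):
    """Complex modulus of a standard linear (Zener) solid."""
    return m_relaxed * (1 + 1j * omega * tau_eps) / (1 + 1j * omega * tau_sig)

def inverse_q(modulus):
    """Attenuation 1/Q from a complex modulus: Im(M)/Re(M)."""
    return modulus.imag / modulus.real

# Hypothetical relaxation times giving peak attenuation near 30 Hz.
f = np.logspace(0, 3, 200)                 # 1 Hz .. 1 kHz
omega = 2 * np.pi * f
m = zener_modulus(omega, m_relaxed=9e9, tau_eps=6.0e-3, tau_sig=4.5e-3)
print("peak 1/Q = %.4f at %.1f Hz" % (inverse_q(m).max(), f[np.argmax(inverse_q(m))]))
```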
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and on k-anonymity concepts. We can use both of them to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques in order to obtain an anonymized graph with a desired k-anonymity value. We want to analyze the complexity of these methods to generate anonymized graphs and the quality of the resulting graphs.
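For concreteness, one common instantiation of these ideas is k-degree anonymity (every degree value is shared by at least k nodes) combined with random edge perturbation. The sketch below, which assumes networkx and made-up parameters rather than any specific algorithm compared in the paper, reports the k value before and after one randomization pass:

```python
import random
from collections import Counter
import networkx as nx

def k_degree_anonymity(graph):
    """Largest k such that every degree value occurs in at least k nodes."""
    counts = Counter(dict(graph.degree()).values())
    return min(counts.values())

def random_perturbation(graph, n_swaps, seed=0):
    """Randomization-based anonymization: delete n_swaps random edges
    and insert the same number of random non-edges (illustrative only)."""
    rng = random.Random(seed)
    g = graph.copy()
    g.remove_edges_from(rng.sample(list(g.edges()), n_swaps))
    g.add_edges_from(rng.sample(list(nx.non_edges(g)), n_swaps))
    return g

g = nx.gnm_random_graph(200, 600, seed=1)
print("k before:", k_degree_anonymity(g))
g_anon = random_perturbation(g, n_swaps=60)
print("k after: ", k_degree_anonymity(g_anon))
```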
Abstract:
Arabidopsis thaliana (L.) Heynh. expressing the Crepis palaestina (L.) linoleic acid delta12-epoxygenase in its developing seeds typically accumulates low levels of vernolic acid (12,13-epoxy-octadec-cis-9-enoic acid) in comparison to levels found in seeds of the native C. palaestina. In order to determine some of the factors limiting the accumulation of this unusual fatty acid, we have examined the effects of increasing the availability of linoleic acid (9cis, 12cis-octadecadienoic acid), the substrate of the delta12-epoxygenase, on the quantity of epoxy fatty acids accumulating in transgenic A. thaliana. The addition of linoleic acid to liquid cultures of transgenic plants expressing the delta12-epoxygenase under the control of the cauliflower mosaic virus 35S promoter increased the amount of vernolic acid in vegetative tissues by 2.8-fold. In contrast, the addition to these cultures of linoelaidic acid (9trans, 12trans-octadecadienoic acid), which is not a substrate of the delta12-epoxygenase, resulted in a slight decrease in vernolic acid accumulation. Expression of the delta12-epoxygenase under the control of the napin promoter in the A. thaliana triple mutant fad3/fad7-1/fad9, which is deficient in the synthesis of tri-unsaturated fatty acids and has a 60% higher level of linoleic acid than the wild type, was found to increase the average vernolic acid content of the seeds by 55% compared to the expression of the delta12-epoxygenase in a wild-type background. Together, these results reveal that the availability of linoleic acid is an important factor affecting the synthesis of epoxy fatty acid in transgenic plants.
Abstract:
Trends in food availability in Switzerland were assessed using the Food and Agricultural Organization food balance sheets for the period 1961-2007. A relatively stable trend in the daily caloric supply was found: 3545 kcal/day in 1961 and 3465 kcal/day in 2007. Calories associated with carbohydrates decreased (slope±s.e.: -1.1±0.2 kcal/day/year), namely regarding cereals (-2.9±0.6 kcal/day/year) and fruit (-1.5±0.1 kcal/day/year), while the availability of sugars increased (1.2±0.5 kcal/day/year). In 1961, protein, fat, carbohydrates and alcohol represented 10.6, 33.5, 50.0 and 5.9% of total caloric supply, respectively; in 2007, the values were 10.8, 40.3, 43.7 and 5.2%. In 1961, palm, groundnut and sunflowerseed oil represented 3.4, 30.7 and 5.3% of total vegetable oils, respectively; in 2007, the values were 10.4, 3.7 and 31.6%. We conclude that between 1961 and 2007 total caloric availability remained relatively stable in Switzerland; the health effects of the increased and differing fat availability should be evaluated.
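The reported trends are ordinary least-squares slopes over calendar year together with their standard errors; the calculation can be sketched as follows, using illustrative yearly values rather than the FAO series itself:

```python
import numpy as np

def trend_slope(years, values):
    """Least-squares slope (per year) and its standard error."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(values, dtype=float)
    x_c = x - x.mean()
    slope = (x_c @ (y - y.mean())) / (x_c @ x_c)
    resid = (y - y.mean()) - slope * x_c
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x_c @ x_c))
    return slope, se

# Hypothetical per-capita supply (kcal/day) for a handful of years.
years = [1961, 1970, 1980, 1990, 2000, 2007]
kcal = [3545, 3580, 3550, 3500, 3480, 3465]
print("slope = %.1f ± %.1f kcal/day/year" % trend_slope(years, kcal))
```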
Abstract:
The current availability of five complete genomes of different primate species allows the analysis of genetic divergence over the last 40 million years of evolution. We hypothesized that the interspecies differences observed in susceptibility to HIV-1 would be influenced by the long-range selective pressures on host genes associated with HIV-1 pathogenesis. We established a list of human genes (n = 140) proposed to be involved in HIV-1 biology and pathogenesis and a control set of 100 random genes. We retrieved the orthologous genes from the genome of humans and of four nonhuman primates (Pan troglodytes, Pongo pygmaeus abeli, Macaca mulatta, and Callithrix jacchus) and analyzed the nucleotide substitution patterns of this data set using codon-based maximum likelihood procedures. In addition, we evaluated whether the candidate genes have been targets of recent positive selection in humans by analyzing HapMap Phase 2 single-nucleotide polymorphisms genotyped in a region centered on each candidate gene. A total of 1,064 sequences were used for the analyses. Similar median K(A)/K(S) values were estimated for the set of genes involved in HIV-1 pathogenesis and for control genes, 0.19 and 0.15, respectively. However, genes of the innate immunity had median values of 0.37 (P value = 0.0001, compared with control genes), and genes of intrinsic cellular defense had K(A)/K(S) values around or greater than 1.0 (P value = 0.0002). Detailed assessment allowed the identification of residues under positive selection in 13 proteins: AKT1, APOBEC3G, APOBEC3H, CD4, DEFB1, GML, IL4, IL8RA, L-SIGN/CLEC4M, PTPRC/CD45, Tetherin/BST2, TLR7, and TRIM5alpha. A number of those residues are relevant for HIV-1 biology. The set of 140 genes involved in HIV-1 pathogenesis did not show a significant enrichment in signals of recent positive selection in humans (intraspecies selection). However, we identified within or near these genes 24 polymorphisms showing strong signatures of recent positive selection. Interestingly, the DEFB1 gene presented signatures of both interspecies positive selection in primates and intraspecies recent positive selection in humans. The systematic assessment of long-acting selective pressures on primate genomes is a useful tool to extend our understanding of genetic variation influencing contemporary susceptibility to HIV-1.
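The K(A)/K(S) statistic used throughout is the ratio of nonsynonymous to synonymous substitution rates; in standard notation (not specific to this study):

```latex
\omega = \frac{K_A}{K_S}
       = \frac{\text{nonsynonymous substitutions per nonsynonymous site}}
              {\text{synonymous substitutions per synonymous site}},
\qquad
\omega \ll 1:\ \text{purifying selection},\quad
\omega \approx 1:\ \text{neutral evolution},\quad
\omega > 1:\ \text{positive selection.}
```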
Abstract:
Dispersed information on water retention and availability in soils may be compiled in databases to generate pedotransfer functions. The objectives of this study were: to generate pedotransfer functions to estimate soil water retention based on easily measurable soil properties; to evaluate the efficiency of existing pedotransfer functions for different geographical regions for the estimation of water retention in soils of Rio Grande do Sul (RS); and to estimate plant-available water capacity based on soil particle-size distribution. Two databases were set up for soil properties, including water retention: one based on literature data (725 entries) and the other with soil data from an irrigation scheduling and management system (239 entries). From the literature database, pedotransfer functions were generated, nine pedotransfer functions available in the literature were evaluated and the plant-available water capacity was calculated. The coefficient of determination of some pedotransfer functions ranged from 0.56 to 0.66. Pedotransfer functions generated based on soils from other regions were not appropriate for estimating the water retention for RS soils. The plant-available water content varied with soil texture classes, from 0.089 kg kg-1 for the sand class to 0.191 kg kg-1 for the silty clay class. These variations were more related to sand and silt than to clay content. The soils with a greater silt/clay ratio, which were less weathered and with a greater quantity of smectite clay minerals, had high water retention and plant-available water capacity.
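A pedotransfer function of the kind generated here is, in essence, a regression from easily measured properties (for example sand, silt, clay and organic matter contents) to a water-retention value. A minimal sketch with synthetic data and hypothetical coefficients, not the functions actually fitted in the study:

```python
import numpy as np

# Hypothetical training data: particle-size fractions (%) and organic
# matter (%) as predictors, water content at -1500 kPa (kg/kg) as target.
rng = np.random.default_rng(42)
n = 120
sand = rng.uniform(5, 70, n)
clay = rng.uniform(5, 90 - sand)        # keep sand + clay below 95 %
silt = 100 - sand - clay
om = rng.uniform(0.5, 5.0, n)
theta_1500 = 0.02 + 0.004 * clay + 0.001 * silt + 0.01 * om + rng.normal(0, 0.01, n)

# Fit a linear pedotransfer function by least squares.
X = np.column_stack([np.ones(n), sand, silt, clay, om])
coef, *_ = np.linalg.lstsq(X, theta_1500, rcond=None)

# Predict water retention for a new (hypothetical) soil sample.
new_soil = np.array([1.0, 40.0, 35.0, 25.0, 2.0])   # intercept, sand, silt, clay, OM
print("predicted theta at -1500 kPa: %.3f kg/kg" % (new_soil @ coef))
```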