Abstract:
The double-frequency jitter is one of the main problems in clock distribution networks. In previous works, some analytical and numerical aspects of this phenomenon were studied and results were obtained for one-way master-slave (OWMS) architectures. Here, an experimental apparatus is implemented, allowing the power of the double-frequency signal to be measured and the theoretical conjectures to be confirmed. (C) 2008 Elsevier B.V. All rights reserved.
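The power of a double-frequency component can be illustrated with a spectral estimate. The following is a rough, hypothetical sketch (unrelated to the paper's apparatus): it builds a toy signal and integrates the FFT power spectrum in a narrow band around twice a nominal frequency; all values are made up.

```python
import numpy as np

fs = 1.0e6          # sampling rate, Hz (assumed)
f0 = 10.0e3         # nominal clock frequency, Hz (assumed)
t = np.arange(0, 0.1, 1.0 / fs)
# toy signal: fundamental plus a small double-frequency term plus noise
x = (np.sin(2 * np.pi * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.01 * np.random.randn(t.size))

spec = np.fft.rfft(x * np.hanning(x.size))
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
psd = (np.abs(spec) ** 2) / (fs * x.size)

band = (freqs > 1.9 * f0) & (freqs < 2.1 * f0)   # narrow band around 2*f0
p_2f = np.trapz(psd[band], freqs[band])          # power near the double frequency
print(f"estimated double-frequency power: {p_2f:.3e}")
```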
Abstract:
For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential, Weibull and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood. We obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
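As a rough illustration, the sketch below evaluates the Kumaraswamy-G construction F(x) = 1 - [1 - G(x)^a]^b with a Weibull baseline G (shape c, scale lam); setting b = 1 recovers the exponentiated Weibull sub-model mentioned above. Parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

def weibull_cdf(x, c, lam):
    return 1.0 - np.exp(-(x / lam) ** c)

def weibull_pdf(x, c, lam):
    return (c / lam) * (x / lam) ** (c - 1) * np.exp(-(x / lam) ** c)

def kw_weibull_cdf(x, a, b, c, lam):
    # Kumaraswamy-G construction applied to a Weibull baseline
    return 1.0 - (1.0 - weibull_cdf(x, c, lam) ** a) ** b

def kw_weibull_pdf(x, a, b, c, lam):
    G, g = weibull_cdf(x, c, lam), weibull_pdf(x, c, lam)
    return a * b * g * G ** (a - 1) * (1.0 - G ** a) ** (b - 1)

# Sanity check: the density should integrate to ~1 on a fine grid.
x = np.linspace(1e-6, 50, 200_000)
print(np.trapz(kw_weibull_pdf(x, a=2.0, b=0.5, c=1.5, lam=3.0), x))
```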
Abstract:
Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
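To fix ideas, Taylor's power law states that the variance scales as a power of the mean, variance = a * mean^b, so a plain log-log regression gives rough estimates of a and b. The sketch below implements only that naive estimator, which is exactly the one the abstract notes is biased for sparse data; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_samples = 30, 20
true_means = rng.uniform(0.5, 20.0, size=n_sites)
counts = rng.poisson(true_means[:, None], size=(n_sites, n_samples))

m = counts.mean(axis=1)
v = counts.var(axis=1, ddof=1)
keep = (m > 0) & (v > 0)            # sparse sites break the log transform

# linear regression of log variance on log mean: slope = b, intercept = log(a)
b, log_a = np.polyfit(np.log(m[keep]), np.log(v[keep]), deg=1)
print(f"slope b ~ {b:.2f}, intercept log(a) ~ {log_a:.2f}  (Poisson truth: b = 1, a = 1)")
```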
Abstract:
Interval-censored survival data, in which the event of interest is not observed exactly but is only known to occur within some time interval, occur very frequently. In some situations, event times might be censored into different, possibly overlapping intervals of variable widths; however, in other situations, information is available for all units at the same observed visit time. In the latter cases, interval-censored data are termed grouped survival data. Here we present alternative approaches for analyzing interval-censored data. We illustrate these techniques using a survival data set involving mango tree lifetimes. This study is an example of grouped survival data.
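A minimal sketch of how an interval-censored likelihood is typically built, assuming a Weibull model: each observation contributes S(L) - S(R) for the interval (L, R] known to contain the event, with R = infinity for right-censored units. The simulated inspection scheme below is illustrative and unrelated to the mango-tree data.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_sf(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

def neg_loglik(params, left, right):
    shape, scale = np.exp(params)                 # optimise on the log scale
    p = weibull_sf(left, shape, scale) - weibull_sf(right, shape, scale)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

rng = np.random.default_rng(1)
times = rng.weibull(1.8, size=200) * 10.0         # true shape 1.8, scale 10
visits = np.arange(0, 25, 3.0)                    # common inspection times
left = visits[np.searchsorted(visits, times, side="right") - 1]
right = np.where(times > visits[-1], np.inf, left + 3.0)

fit = minimize(neg_loglik, x0=np.log([1.0, 5.0]), args=(left, right),
               method="Nelder-Mead")
print("estimated shape, scale:", np.exp(fit.x))
```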
Abstract:
This paper proposes a regression model based on the modified Weibull distribution, which can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, various simulations are performed for different parameter settings, sample sizes and censoring percentages, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model check based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
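For illustration only, the sketch below fits an ordinary Weibull regression (not the modified Weibull of the paper) to right-censored data by maximum likelihood, to show how the censored log-likelihood is assembled: uncensored observations contribute log f(t) and censored ones contribute log S(t). All names and simulated values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta, X):
    shape = np.exp(params[0])
    scale = np.exp(X @ params[1:])                # log-linear model for the scale
    z = (t / scale) ** shape
    log_f = np.log(shape) + (shape - 1) * np.log(t) - shape * np.log(scale) - z
    log_S = -z
    return -np.sum(delta * log_f + (1 - delta) * log_S)

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
true_beta, true_shape = np.array([1.0, 0.5]), 1.5
t_event = rng.weibull(true_shape, size=n) * np.exp(X @ true_beta)
c = rng.exponential(scale=8.0, size=n)                  # independent censoring times
t, delta = np.minimum(t_event, c), (t_event <= c).astype(float)

fit = minimize(neg_loglik, x0=np.zeros(3), args=(t, delta, X), method="BFGS")
print("shape:", np.exp(fit.x[0]), "beta:", fit.x[1:])
```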
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the local influence and the total local influence estimates are also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
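A bathtub-shaped or monotone hazard can be checked numerically as h(t) = f(t)/S(t). The short sketch below does this with scipy's three-parameter generalized gamma as a stand-in for the four-parameter extension studied in the paper; the parameter pairs are arbitrary.

```python
import numpy as np
from scipy.stats import gengamma

t = np.linspace(0.01, 5, 500)
for a, c in [(0.5, 0.8), (2.0, 1.5)]:        # illustrative parameter pairs
    dist = gengamma(a, c)
    h = dist.pdf(t) / dist.sf(t)             # hazard rate h(t) = f(t) / S(t)
    trend = "decreasing" if h[0] > h[-1] else "increasing"
    print(f"a={a}, c={c}: hazard roughly {trend} over (0, 5]")
```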
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inferences on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
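As a hedged illustration of the Bayesian route for overdispersed proportion data, the sketch below runs a small random-walk Metropolis sampler on a beta-binomial model, which stands in for the random-effect DGLM of the abstract; the priors, the proposal scale and the simulated data are all assumptions.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_post(theta, y, n):
    mu = 1.0 / (1.0 + np.exp(-theta[0]))          # mean proportion (logit scale)
    phi = np.exp(theta[1])                        # precision (log scale)
    a, b = mu * phi, (1.0 - mu) * phi
    ll = np.sum(gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
                + betaln(y + a, n - y + b) - betaln(a, b))
    # vague Normal(0, sd=2) priors on both transformed parameters
    return ll - 0.5 * theta[0] ** 2 / 4.0 - 0.5 * theta[1] ** 2 / 4.0

rng = np.random.default_rng(3)
n = np.full(40, 30)                               # 40 units, 30 trials each (made up)
y = rng.binomial(n, rng.beta(4, 6, size=40))      # simulated overdispersed proportions

theta = np.zeros(2)
chain, cur = [], log_post(theta, y, n)
for _ in range(20_000):
    prop = theta + 0.15 * rng.normal(size=2)      # random-walk proposal
    new = log_post(prop, y, n)
    if np.log(rng.uniform()) < new - cur:         # Metropolis accept/reject
        theta, cur = prop, new
    chain.append(theta.copy())

post = np.array(chain[5_000:])                    # drop burn-in
print("posterior mean of mu:", 1 / (1 + np.exp(-post[:, 0].mean())))
```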
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day(-1). For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day(-1)). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day(-1). When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options to estimate ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day(-1), were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day(-1)). (C) 2009 Elsevier B.V. All rights reserved.
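The FAO-56 Penman-Monteith equation and the Hargreaves alternative mentioned above can be written compactly. The sketch below follows the standard FAO-56 formulas for the saturation vapour pressure, the slope Delta and the psychrometric constant gamma; the example inputs are invented, not data from the Ontario stations.

```python
import numpy as np

def eto_fao_pm(t_mean, rn, u2, ea, elevation=0.0, g=0.0):
    """Daily ETo (mm/day). t_mean in degC, rn and g in MJ m-2 day-1, u2 in m/s, ea in kPa."""
    es = 0.6108 * np.exp(17.27 * t_mean / (t_mean + 237.3))      # saturation vapour pressure, kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                  # slope of vapour pressure curve
    p = 101.3 * ((293.0 - 0.0065 * elevation) / 293.0) ** 5.26   # atmospheric pressure, kPa
    gamma = 0.000665 * p                                         # psychrometric constant, kPa/degC
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

def eto_hargreaves(t_mean, t_max, t_min, ra):
    """Temperature-only alternative; ra is extraterrestrial radiation in MJ m-2 day-1."""
    return 0.0023 * 0.408 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

print(eto_fao_pm(t_mean=20.0, rn=13.0, u2=2.0, ea=1.4, elevation=200.0))
```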
Abstract:
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Parana (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
Abstract:
Introduction of conservation practices in degraded agricultural land will generally recuperate soil quality, especially by increasing soil organic matter. This aspect of soil organic C (SOC) dynamics under distinct cropping and management systems can be conveniently analyzed with ecosystem models such as the Century Model. In this study, Century was used to simulate SOC stocks in farm fields of the Ibiruba region of north central Rio Grande do Sul state in Southern Brazil. The region, where soils are predominantly Oxisols, was originally covered with subtropical woodlands and grasslands. SOC dynamics were simulated with a general scenario developed with historical data on soil management and cropping systems, beginning with the onset of agriculture in 1900. From 1993 to 2050, two contrasting scenarios based on no-tillage soil management were established: the status quo scenario, with crops and agricultural inputs as currently practiced in the region, and the high biomass scenario, with increased frequency of corn in the cropping system, resulting in about 80% higher biomass addition to soils. Century simulations were in close agreement with SOC stocks measured in 2005 in the Oxisols with finer-textured surface horizon originally under woodlands. However, simulations in the Oxisols with loamy surface horizon under woodlands and in the grassland soils were not as accurate. SOC stocks decreased from 44% to 50% in fields originally under woodland and from 20% to 27% in fields under grasslands with the introduction of intensive annual grain crops with intensive tillage and harrowing operations. The adoption of conservation practices in the 1980s led to a stabilization of SOC stocks followed by a partial recovery of native stocks. Simulations to 2050 indicate that maintaining the status quo would allow SOC stocks to recover from 81% to 86% of the native stocks under woodland and from 80% to 91% of the native stocks under grasslands. Adoption of the high biomass scenario would result in stocks from 75% to 95% of the original stocks under woodlands and from 89% to 102% in the grasslands by 2050. These simulation outcomes underline the importance of cropping systems yielding higher biomass to further increase SOC content in these Oxisols. This application of the Century Model could reproduce general trends of SOC loss and recovery in the Oxisols of the Ibiruba region. Additional calibration and validation should be conducted before extensive usage of Century as a support tool for soil carbon sequestration projects in this and other regions can be recommended. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Microbial community structure in saltmarsh soils is stratified by depth and availability of electron acceptors for respiration. However, the majority of the microbial species that are involved in the biogeochemical transformations of iron (Fe) and sulfur (S) in such environments are not known. Here we examined the structure of bacterial communities in a high saltmarsh soil profile and discuss their potential relationship with the geochemistry of Fe and S. Our data showed that the soil horizons Ag (oxic-suboxic), Bg (suboxic), Cr-1 (anoxic with a low concentration of pyrite Fe) and Cr-2 (anoxic with high concentrations of pyrite Fe) have distinct geochemical and microbiological characteristics. In general, total S concentration increased with depth and was correlated with the presence of pyrite Fe. Soluble + exchangeable Fe, pyrite Fe and acid volatile sulfide Fe concentrations also increased with depth, whereas ascorbate-extractable Fe concentrations decreased. The occurrence of reduced forms of Fe in horizon Ag and of oxidized Fe in horizon Cr-2 suggests that the typical redox zonation, common to several marine sediments, does not occur in the saltmarsh soil profile studied. Overall, the bacterial communities in horizons Ag and Cr-2 shared low levels of similarity with their adjacent horizons, Bg and Cr-1, respectively. The phylogenetic analyses of bacterial 16S rRNA gene sequences from clone libraries showed that the predominant phylotypes in horizon Ag were related to Alphaproteobacteria and Bacteroidetes. In contrast, the most abundant phylotypes in horizon Cr-2 were related to Deltaproteobacteria, Chloroflexi, Deferribacteres and Nitrospira. The high frequency of sequences with low levels of similarity to known bacterial species in horizons Ag and Cr-2 indicates that the bacterial communities in both horizons are dominated by novel bacterial species. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Mitochondrial DNA (mtDNA) analysis has proved useful for forensic identification, especially in cases where nuclear DNA is not available, such as with hair evidence. Heteroplasmy, the presence of more than one type of mtDNA in one individual, is a common situation often reported in the first and second mtDNA hypervariable regions (HV1/HV2), particularly in hair samples. However, there are no data on heteroplasmy frequency in the third mtDNA hypervariable region (HV3). To investigate possible heteroplasmy hotspots, HV3 from hair and blood samples of 100 individuals was sequenced and compared. No point heteroplasmy was observed, but length heteroplasmy was observed in both the C-stretch and the CA repeat. To determine which CA "alleles" were present in each tissue, PCR products were cloned and re-sequenced. However, no variation among CA alleles was observed. Regarding forensic practice, we conclude that point heteroplasmy in HV3 is not as frequent as in HV1/HV2.
Abstract:
The Brazilian Network of Food Data Systems (BRASILFOODS) has maintained the Brazilian Food Composition Database-USP (TBCA-USP) (http://www.fcf.usp.br/tabela) since 1998. Besides the constant compilation, analysis and update work on the database, the network tries to innovate through the introduction of food information that may contribute to decreasing the risk of non-transmissible chronic diseases, such as the profile of carbohydrates and flavonoids in foods. In 2008, individually analyzed carbohydrate data for 112 foods, and 41 data points related to the glycemic response produced by foods widely consumed in the country, were included in the TBCA-USP. A total of 773 data points on the different flavonoid subclasses of 197 Brazilian foods were compiled, and the quality of each value was evaluated according to the USDA's data quality evaluation system. In 2007, BRASILFOODS/USP and INFOODS/FAO organized the 7th International Food Data Conference "Food Composition and Biodiversity". This conference was a unique opportunity for interaction between renowned researchers and participants from several countries, and it allowed the discussion of aspects that may improve the food composition area. During the period, the LATINFOODS Regional Technical Compilation Committee and BRASILFOODS disseminated to Latin America the Form and Manual for Data Compilation, version 2009, delivered a Food Composition Data Compilation course and developed many activities related to data production and compilation. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Hepatocellular carcinoma (HCC) ranks among the top 10 cancers worldwide in prevalence and mortality. Butyric acid (BA), a histone deacetylase inhibitor (HDACi), has been proposed as an anticarcinogenic agent. However, its short half-life is a therapeutic limitation. This problem could be circumvented with tributyrin (TB), a proposed BA prodrug. To investigate TB effectiveness for chemoprevention, rats were treated with the compound during the initial phases of the "resistant hepatocyte" model of hepatocarcinogenesis, and cellular and molecular parameters were evaluated. TB inhibited (p < 0.05) the development of hepatic preneoplastic lesions (PNL), including persistent ones considered HCC progression sites. TB increased (p < 0.05) PNL remodeling, a process whereby they tend to disappear. TB did not inhibit cell proliferation in PNL, but induced (p < 0.05) apoptosis in remodeling ones. Compared to controls, rats treated with TB presented increased (p < 0.05) hepatic levels of BA, indicating its effectiveness as a prodrug. Molecular mechanisms of TB-induced hepatocarcinogenesis chemoprevention were investigated. TB increased (p < 0.05) hepatic nuclear histone H3K9 hyperacetylation, specifically in PNL, and p21 protein expression, which could be associated with inhibitory HDAC effects. Moreover, it reduced (p < 0.05) the frequency of persistent PNL with aberrant cytoplasmic p53 accumulation, an alteration associated with increased malignancy. Original data observed in our study support the effectiveness of TB as a prodrug of BA and as an HDACi in hepatocarcinogenesis chemoprevention. Besides histone acetylation and restored p21 expression, molecular mechanisms involved in TB anticarcinogenic actions could also be related to modulation of p53 pathways. (C) 2008 Wiley-Liss, Inc.