915 results for Big data, Spark, Hadoop


Relevance: 20.00%

Abstract:

In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped in k intervals so that ties are eliminated. Thus, the data modeling is performed by considering the discrete models of lifetime regression. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, denominated global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to those measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
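The interval-grouping step described above can be sketched numerically. Below is a minimal life-table hazard estimator; the quantile-based interval edges, interval count k = 5, and simulated lifetimes are illustrative assumptions, not the study's data or model.

```python
# Minimal sketch: group survival times into k intervals (eliminating ties)
# and estimate the discrete life-table hazard q_j = d_j / n_j per interval.
import numpy as np

def life_table_hazard(times, events, edges):
    """q_j = deaths observed in interval j / subjects at risk at its start."""
    hazard = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = np.sum(times >= lo)                            # n_j
        deaths = np.sum((times >= lo) & (times < hi) & events)   # d_j
        hazard.append(deaths / at_risk if at_risk else 0.0)
    return hazard

rng = np.random.default_rng(0)
t = rng.exponential(10.0, 200)                  # simulated lifetimes
e = rng.random(200) < 0.8                       # ~20% right-censoring
edges = np.quantile(t, np.linspace(0, 1, 6))    # k = 5 equal-count intervals
edges[-1] += 1e-9                               # include the maximum time
print(life_table_hazard(t, e, edges))
```

Grouping on quantiles guarantees the tie-free intervals the methodology requires, since each interval boundary is data-driven.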

Relevance: 20.00%

Abstract:

A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
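For the three-parameter generalized gamma baseline of this family (not the four-parameter extension itself), the raw moments have the well-known closed form E[X^r] = s^r Γ(a + r/c)/Γ(a); the parameter values below are illustrative.

```python
# Raw moments of the generalized gamma GG(a, c, scale=s), computed stably
# via log-gamma: E[X^r] = s**r * exp(lgamma(a + r/c) - lgamma(a)).
import math

def gengamma_moment(r, a, c, s=1.0):
    """r-th raw moment of GG(a, c, scale=s); requires a + r/c > 0."""
    return s ** r * math.exp(math.lgamma(a + r / c) - math.lgamma(a))

a, c = 2.0, 1.0                                # c = 1 reduces to ordinary gamma
mean = gengamma_moment(1, a, c)                # gamma(shape=2) has mean 2
var = gengamma_moment(2, a, c) - mean ** 2     # and variance 2
print(mean, var)
```

Setting c = 1 recovers the ordinary gamma distribution, which gives a quick sanity check on the formula.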

Relevance: 20.00%

Abstract:

Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
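A minimal sketch of the MCMC step for overdispersed proportion data, using a beta-binomial stand-in for the random-effect formulation; the flat priors, proposal scales, and toy counts are all assumptions for illustration, not the paper's model or data.

```python
# Random-walk Metropolis sampling of a beta-binomial posterior (mean mu,
# precision phi) for overdispersed binomial counts, using only the stdlib.
import math, random

successes = [18, 22, 9, 30, 14, 25]      # illustrative counts
trials    = [40, 40, 40, 40, 40, 40]

def log_post(mu, phi):
    """Beta-binomial log posterior with flat priors; mu in (0,1), phi > 0."""
    if not (0 < mu < 1 and phi > 0):
        return -math.inf
    a, b = mu * phi, (1 - mu) * phi
    lp = 0.0
    for y, n in zip(successes, trials):
        lp += (math.lgamma(a + y) + math.lgamma(b + n - y) - math.lgamma(a + b + n)
               - math.lgamma(a) - math.lgamma(b) + math.lgamma(a + b))
    return lp

random.seed(1)
mu, phi, chain = 0.5, 10.0, []
lp = log_post(mu, phi)
for _ in range(5000):
    mu_p, phi_p = mu + random.gauss(0, 0.05), phi + random.gauss(0, 2.0)
    lp_p = log_post(mu_p, phi_p)
    if math.log(random.random()) < lp_p - lp:       # accept/reject step
        mu, phi, lp = mu_p, phi_p, lp_p
    chain.append((mu, phi))
post_mu = sum(m for m, _ in chain[1000:]) / len(chain[1000:])
print(post_mu)
```

The beta-binomial marginalizes the random effect analytically, which keeps the sketch short; a full DGLM would instead sample the random effects jointly with the regression coefficients.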

Relevance: 20.00%

Abstract:

Tropical forests are characterized by diverse assemblages of plant and animal species compared to temperate forests. Corollary to this general rule is that most tree species, whether valued for timber or not, occur at low densities (<1 adult tree ha(-1)) or may be locally rare. In the Brazilian Amazon, many of the most highly valued timber species occur at extremely low densities yet are intensively harvested with little regard for impacts on population structures and dynamics. These include big-leaf mahogany (Swietenia macrophylla), ipe (Tabebuia serratifolia and Tabebuia impetiginosa), jatoba (Hymenaea courbaril), and freijo cinza (Cordia goeldiana). Brazilian forest regulations prohibit harvests of species that meet the legal definition of rare - fewer than three trees per 100 ha - but treat all species populations exceeding this density threshold equally. In this paper we simulate logging impacts on a group of timber species occurring at low densities that are widely distributed across eastern and southern Amazonia, based on field data collected at four research sites since 1997, asking: under current Brazilian forest legislation, what are the prospects for second harvests on 30-year cutting cycles given observed population structures, growth, and mortality rates? Ecologically 'rare' species constitute majorities in commercial species assemblages in all but one of the seven large-scale inventories we analyzed from sites spanning the Amazon (range 49-100% of total commercial species). Although densities of only six of 37 study species populations met the Brazilian legal definition of a rare species, timber stocks of five of the six timber species declined substantially at all sites between first and second harvests in simulations based on legally allowable harvest intensities. 
Reducing species-level harvest intensity by increasing minimum felling diameters or increasing seed tree retention levels improved prospects for second harvests of those populations with a relatively high proportion of submerchantable stems, but did not dramatically improve projections for populations with relatively flat diameter distributions. We argue that restrictions on logging very low-density timber tree populations, such as the current Brazilian standard, provide inadequate minimum protection for vulnerable species. Population declines, even if reduced-impact logging (RIL) is eventually adopted uniformly, can be anticipated for a large pool of high-value timber species unless harvest intensities are adapted to timber species population ecology, and silvicultural treatments are adopted to remedy poor natural stocking in logged stands. (C) 2008 Elsevier B.V. All rights reserved.
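The kind of 30-year recovery projection run in studies like this can be sketched as a diameter-class transition model. All growth rates, mortality rates, and densities below are illustrative assumptions, not the paper's field data.

```python
# Advance a diameter-class distribution through time with assumed annual
# diameter growth (which promotes a fraction of trees to the next class)
# and assumed annual mortality; then inspect the merchantable class.
def project_stand(counts, growth_cm_yr, mortality, years, width=10.0):
    """counts[i] = trees per 100 ha in the i-th 10-cm diameter class."""
    counts = list(counts)
    promote = growth_cm_yr / width            # fraction advancing one class/year
    for _ in range(years):
        nxt = [0.0] * len(counts)
        for i, n in enumerate(counts):
            n *= 1.0 - mortality              # annual mortality
            if i + 1 < len(counts):
                nxt[i] += n * (1.0 - promote)
                nxt[i + 1] += n * promote
            else:
                nxt[i] += n                   # top class keeps its survivors
        counts = nxt
    return counts

stand = [6.0, 4.0, 2.0, 1.0]                  # 20-29, 30-39, 40-49, 50+ cm
after_cut = stand[:3] + [stand[3] * 0.2]      # 80% harvest above a 50 cm limit
recovered = project_stand(after_cut, growth_cm_yr=0.15, mortality=0.03, years=30)
print(recovered)
```

With slow growth relative to class width, recruitment into the merchantable class cannot replace an 80% removal within one 30-year cycle, which is the qualitative pattern the simulations above report.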

Relevance: 20.00%

Abstract:

The sustainability of current harvest practices for high-value Meliaceae can be assessed by quantifying logging intensity and projecting growth and survival by post-logging populations over anticipated intervals between harvests. From 100%-area inventories of big-leaf mahogany (Swietenia macrophylla) covering 204 ha or more at eight logged and unlogged forest sites across southern Brazilian Amazonia, we report generally higher landscape-scale densities and smaller population-level mean diameters in eastern forests compared to western forests, where most commercial stocks survive. Density of trees >= 20 cm diameter varied by two orders of magnitude and peaked at 1.17 ha(-1). Size class frequency distributions appeared unimodal at two high-density sites, but were essentially amodal or flat elsewhere; diameter increment patterns indicate that populations were multi- or all-aged. At two high-density sites, conventional logging removed 93-95% of commercial trees (>= 45 cm diameter at the time of logging), illegally eliminated 31-47% of sub-merchantable trees, and targeted trees as small as 20 cm diameter. Projected recovery by commercial stems during 30 years after conventional logging represented 9.9-37.5% of initial densities and was highly dependent on initial logging intensity and size class frequency distributions of commercial trees. We simulated post-logging recovery over the same period at all sites according to the 2003 regulatory framework for mahogany in Brazil, which raised the minimum diameter cutting limit to 60 cm and requires retention during the first harvest of 20% of commercial-sized trees. Recovery during 30 years ranged from approximately 0 to 31% over 20% retention densities at seven of eight sites. At only one site where sub-merchantable trees dominated the population did the simulated density of harvestable stems after 30 years exceed initial commercial densities. 
These results indicate that 80% harvest intensity will not be sustainable over multiple cutting cycles for most populations without silvicultural interventions ensuring establishment and long-term growth of artificial regeneration to augment depleted natural stocks, including repeated tending of outplanted seedlings. Without improved harvest protocols for mahogany in Brazil as explored in this paper, future commercial supplies of this species as well as other high-value tropical timbers are endangered. Rapid changes in the timber industry and land-use in the Amazon are also significant challenges to sustainable management of mahogany. (C) 2007 Elsevier B.V. All rights reserved.
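The arithmetic of the 2003 retention rule can be made concrete. Under one plausible reading of the recovery metric (recovery beyond the retained density, expressed as a share of the initial commercial density), with all densities below being illustrative assumptions:

```python
# Toy arithmetic for the 2003 Brazilian mahogany rule: harvest 80% of trees
# at or above the 60 cm limit, retain 20% as seed trees, then express a
# projected 30-year recovery relative to the retention level.
initial_commercial = 0.50                 # trees/ha >= 60 cm before logging
retained = 0.20 * initial_commercial      # mandatory 20% retention
harvested = initial_commercial - retained # the 80% harvest intensity
density_after_30yr = 0.16                 # assumed projected density >= 60 cm
recovery_over_retention = (density_after_30yr - retained) / initial_commercial
print(f"harvested {harvested} trees/ha; "
      f"{100 * recovery_over_retention:.0f}% recovery over the retention level")
```

A 12% recovery over retention, as in this toy case, sits inside the "approximately 0 to 31%" range the simulations report, and illustrates why second-harvest yields fall far short of first-harvest yields.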

Relevance: 20.00%

Abstract:

Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered as a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day(-1). For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data. 
The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day(-1)). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day(-1). When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options to estimate ETo than the FAO PM method, since RMSEs from these methods, respectively 0.79 and 0.83 mm day(-1), were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day(-1)). (C) 2009 Elsevier B.V. All rights reserved.
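The temperature-only fallback tested above, the Hargreaves equation (FAO-56 Eq. 52), is simple to state: ETo = 0.0023 (Tmean + 17.8) (Tmax - Tmin)^0.5 Ra, with Ra converted from MJ m-2 day-1 to mm day-1 by the factor 0.408. The sketch below passes Ra in directly rather than computing it from latitude and day of year, and the example day is illustrative, not a station record.

```python
# Hargreaves reference evapotranspiration from daily temperatures alone.
import math

def eto_hargreaves(tmean, tmax, tmin, ra_mj):
    """ETo (mm/day) from daily temperatures (deg C) and extraterrestrial
    radiation Ra (MJ m-2 day-1); 0.408 converts Ra to mm/day equivalent."""
    return 0.0023 * 0.408 * ra_mj * (tmean + 17.8) * math.sqrt(tmax - tmin)

# illustrative mid-summer day
print(round(eto_hargreaves(tmean=22.0, tmax=28.0, tmin=16.0, ra_mj=41.0), 2))
```

Because the diurnal range (Tmax - Tmin) proxies for both radiation and humidity, the equation benefits from the local calibration the study applies.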

Relevance: 20.00%

Abstract:

This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Parana (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
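The final step of the modeling chain described above, turning a posterior predictive distribution of yields into a premium rate, can be sketched as expected indemnity divided by the guarantee. The yield draws, coverage level, and distribution below are simulated illustrations, not the Paraná data or the fitted hierarchical model.

```python
# Premium rate from posterior-predictive yield draws: the insurer's expected
# payout per unit of liability under a 70%-of-expected-yield guarantee.
import random

random.seed(7)
draws = [random.gauss(2500, 400) for _ in range(20000)]   # kg/ha, illustrative
coverage = 0.70
guarantee = coverage * (sum(draws) / len(draws))          # 70% of expected yield
indemnity = [max(guarantee - y, 0.0) for y in draws]      # shortfall payouts
premium_rate = (sum(indemnity) / len(indemnity)) / guarantee
print(round(premium_rate, 4))
```

The advantage of the Bayesian route is that `draws` would come from the joint posterior predictive, so parameter uncertainty and spatio-temporal correlation flow directly into the rate instead of being bolted on in a second stage.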

Relevance: 20.00%

Abstract:

Allele frequency distributions and population data for 12 Y-chromosomal short tandem repeats (STRs) included in the PowerPlex (R) Y Systems (Promega) were obtained for a sample of 200 healthy unrelated males living in São Paulo State (Southeast Brazil). A total of 192 haplotypes were identified, of which 184 were unique and 8 were found in 2 individuals each. The average gene diversity of the 12 Y-STRs was 0.6746 and the haplotype diversity was 0.9996. Pairwise analysis confirmed that our population is most similar to those of Italy, North Portugal and Spain, and most distant from that of Japan. (c) 2007 Elsevier Ireland Ltd. All rights reserved.
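The haplotype diversity quoted above follows Nei's unbiased formula D = n/(n-1) (1 - Σ p_i²); plugging in the reported counts (184 haplotypes seen once, 8 seen twice, n = 200) reproduces the 0.9996 figure:

```python
# Nei's unbiased haplotype diversity from haplotype counts.
def nei_diversity(counts):
    n = sum(counts)
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts))

counts = [1] * 184 + [2] * 8          # 192 distinct haplotypes, n = 200
print(round(nei_diversity(counts), 4))  # → 0.9996
```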

Relevance: 20.00%

Abstract:

The Brazilian Network of Food Data Systems (BRASILFOODS) has been maintaining the Brazilian Food Composition Database-USP (TBCA-USP) (http://www.fcf.usp.br/tabela) since 1998. Besides the constant compilation, analysis and update work on the database, the network tries to innovate through the introduction of food information that may contribute to decreasing the risk for non-transmissible chronic diseases, such as the profile of carbohydrates and flavonoids in foods. In 2008, data on individually analyzed carbohydrates in 112 foods, and 41 data points on the glycemic response produced by foods widely consumed in the country, were included in the TBCA-USP. In addition, 773 values for the different flavonoid subclasses of 197 Brazilian foods were compiled, and the quality of each value was evaluated according to the USDA's data quality evaluation system. In 2007, BRASILFOODS/USP and INFOODS/FAO organized the 7th International Food Data Conference "Food Composition and Biodiversity". This conference was a unique opportunity for interaction between renowned researchers and participants from several countries and it allowed the discussion of aspects that may improve the food composition area. During the period, the LATINFOODS Regional Technical Compilation Committee and BRASILFOODS disseminated to Latin America the Form and Manual for Data Compilation, version 2009, taught a Food Composition Data Compilation course and developed many activities related to data production and compilation. (C) 2010 Elsevier Inc. All rights reserved.

Relevance: 20.00%

Abstract:

This document records the process of migrating eprints.org data to a Fez repository. Fez is a Web-based digital repository and workflow management system based on Fedora (http://www.fedora.info/). At the time of migration, the University of Queensland Library was using EPrints 2.2.1 [pepper] for its ePrintsUQ repository. Once we began to develop Fez, we did not upgrade to later versions of eprints.org software since we knew we would be migrating data from ePrintsUQ to the Fez-based UQ eSpace. Since this document records our experiences of migration from an earlier version of eprints.org, anyone seeking to migrate eprints.org data into a Fez repository might encounter some small differences. Moving UQ publication data from an eprints.org repository into a Fez repository (hereafter called UQ eSpace, http://espace.uq.edu.au/) was part of a plan to integrate metadata (and, in some cases, full texts) about all UQ research outputs, including theses, images, multimedia and datasets, in a single repository. This tied in with the plan to identify and capture the research output of a single institution, the main task of the eScholarshipUQ testbed for the Australian Partnership for Sustainable Repositories project (http://www.apsr.edu.au/). The migration could not occur at UQ until the functionality in Fez was at least equal to that of the existing ePrintsUQ repository. Accordingly, as Fez development occurred throughout 2006, a list of eprints.org functionality not currently supported in Fez was created so that programming of such development could be planned for and implemented.

Relevance: 20.00%

Abstract:

The final-year project for Mechanical & Space Engineering students at UQ often involves the design and flight testing of an experiment. This report describes the design and use of a simple data logger that should be suitable for collecting data from the students' flight experiments. The exercise here was taken as far as the construction of a prototype device that is suitable for ground-based testing, say, the static firing of a hybrid rocket motor.

Relevance: 20.00%

Abstract:

A combination of deductive reasoning, clustering, and inductive learning is given as an example of a hybrid system for exploratory data analysis. Visualization is replaced by a dialogue with the data.

Relevance: 20.00%

Abstract:

This paper reports a comparative study of Australian and New Zealand leadership attributes, based on the GLOBE (Global Leadership and Organizational Behavior Effectiveness) program. Responses from 344 Australian managers and 184 New Zealand managers in three industries were analyzed using exploratory and confirmatory factor analysis. Results supported some of the etic leadership dimensions identified in the GLOBE study, but also found some emic dimensions of leadership for each country. An interesting finding of the study was that the New Zealand data fitted the Australian model, but not vice versa, suggesting asymmetric perceptions of leadership in the two countries.

Relevance: 20.00%

Abstract:

In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
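The selection bias described above is easy to demonstrate on pure-noise data: choosing the "best" genes with all samples and then cross-validating gives an optimistically low error, while re-selecting inside each training fold (external cross-validation) does not. The nearest-centroid classifier, sample sizes, and simulated data below are illustrative choices, not the paper's datasets.

```python
# Internal vs external feature selection under 10-fold cross-validation,
# on data with NO real signal (labels independent of features).
import numpy as np

rng = np.random.default_rng(0)
n, p, n_keep, k = 40, 2000, 10, 10
X = rng.normal(size=(n, p))                  # "expression" data: pure noise
y = np.array([0, 1] * (n // 2))

def top_features(Xtr, ytr, m):
    """Indices of the m features with the largest class-mean difference."""
    diff = np.abs(Xtr[ytr == 0].mean(0) - Xtr[ytr == 1].mean(0))
    return np.argsort(diff)[-m:]

def nearest_centroid_error(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return np.mean(pred.astype(int) != yte)

folds = np.array_split(rng.permutation(n), k)
internal_sel = top_features(X, y, n_keep)    # biased: selection sees ALL samples

err_internal, err_external = [], []
for f in folds:
    tr = np.setdiff1d(np.arange(n), f)
    err_internal.append(nearest_centroid_error(X[np.ix_(tr, internal_sel)], y[tr],
                                               X[np.ix_(f, internal_sel)], y[f]))
    sel = top_features(X[tr], y[tr], n_keep)  # unbiased: training fold only
    err_external.append(nearest_centroid_error(X[np.ix_(tr, sel)], y[tr],
                                               X[np.ix_(f, sel)], y[f]))
print(np.mean(err_internal), np.mean(err_external))
```

Since the labels carry no information, the honest error is about 50%; only the externally cross-validated estimate approaches it, which is exactly the point the abstract makes about near-zero published error rates.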