158 results for Wikis (Computer science)
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
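The pipeline the abstract describes (maximize the likelihood, derive the information matrix, invert it for the asymptotic covariance) can be illustrated on a one-parameter exponential model, where every step is available in closed form. This is a hedged sketch of the general recipe, not the paper's two-parameter distribution:

```python
import numpy as np

# Illustrative sketch (not the paper's distribution): for an exponential
# model with rate lam, the log-likelihood of a sample t_1..t_n is
#   l(lam) = n*log(lam) - lam*sum(t),
# the MLE is lam_hat = n/sum(t), the observed information is
#   I(lam_hat) = n/lam_hat**2,
# and the asymptotic variance of lam_hat is 1/I(lam_hat).
def exponential_mle_and_asymptotic_var(t):
    t = np.asarray(t, dtype=float)
    n = t.size
    lam_hat = n / t.sum()          # maximum likelihood estimate
    info = n / lam_hat**2          # observed (= expected) information
    return lam_hat, 1.0 / info     # estimate and its asymptotic variance

lam_hat, avar = exponential_mle_and_asymptotic_var([0.5, 1.0, 1.5, 2.0])
# lam_hat = 4/5 = 0.8, avar = lam_hat**2/4 = 0.16
```

For a two-parameter model the same steps apply with the Hessian of the log-likelihood in place of the scalar second derivative, inverted to give the asymptotic covariance matrix.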
Abstract:
This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on unsupervised learning methods. The prediction method rests on the assumption that several classes of computational resource use patterns exist and can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments which show that, in some special environments, reliable resource use patterns may not always be detected. Copyright (C) 2009 John Wiley & Sons, Ltd.
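The core idea, matching an observed usage trace against learned classes of use patterns and reading the prediction off the matched class, can be sketched as follows. The prototypes, threshold and hourly resolution here are hypothetical illustrations, not the paper's actual learned patterns:

```python
import numpy as np

# Hedged sketch of the idea behind use-pattern analysis (UPA): hypothetical
# hourly CPU-usage prototypes (cluster centroids that would be learned
# offline, e.g. by clustering historical traces) are matched against the
# observed prefix of today's trace; the best-matching prototype's remaining
# hours yield the availability prediction.
prototypes = {
    "office_hours": np.array([0.1] * 8 + [0.9] * 10 + [0.1] * 6),  # busy 8h-18h
    "always_idle":  np.array([0.1] * 24),
}

def predict_availability(observed_prefix, threshold=0.5):
    k = len(observed_prefix)
    # nearest prototype by squared distance over the observed hours
    best = min(prototypes, key=lambda name: float(
        np.sum((prototypes[name][:k] - observed_prefix) ** 2)))
    # predicted available hours: remaining hours with usage below threshold
    return best, int(np.sum(prototypes[best][k:] < threshold))

# a machine busy all morning matches the office-hours pattern;
# only the evening hours (18h-24h) are predicted idle
label, free_hours = predict_availability(np.array([0.1] * 8 + [0.85] * 4))
# label == "office_hours", free_hours == 6
```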
Abstract:
This work discusses a 4D lung reconstruction method from unsynchronized MR sequential images. The lung, unlike the heart, has no muscles of its own, making it impossible to observe its real movements directly. Visualization of the lung in motion is a current research topic in medicine. CT (Computerized Tomography) can obtain spatio-temporal images of the heart by synchronizing with electrocardiographic waves, but the FOV of the heart is small compared to that of the lung, and the lung's movement is not periodic and is susceptible to variations in the degree of respiration. Compared to CT, MR (Magnetic Resonance) imaging involves longer acquisition times, and it is not possible to obtain instantaneous 3D images of the lung: for each slice, only one temporal sequence of 2D images can be obtained. However, MR-based methods are preferable because they do not involve radiation. In this paper, an animated B-Rep solid model of the lung is created from unsynchronized MR images. The 3D animation represents the lung's motion associated with one selected sequence of MR images. The proposed method can be divided into two parts. First, the lung's silhouettes moving in time are extracted by detecting the presence of a respiratory pattern in 2D spatio-temporal MR images. This approach enables us to determine the lung's silhouette for every frame, even on frames with obscure edges. The extracted silhouettes are unsynchronized sagittal and coronal silhouettes. Using our algorithm it is possible to reconstruct a 3D lung starting from a silhouette of either type (coronal or sagittal) selected from any instant in time. A wire-frame model of the lung is created by composing coronal and sagittal planar silhouettes representing cross-sections. The silhouette composition is severely underconstrained: many wire-frame models can be created from the observed sequences of silhouettes in time. Finally, a B-Rep solid model is created using a meshing algorithm.
Using the B-Rep solid model, the volumes of the right and left lungs over time were calculated, and several characteristics of the real 3D right and left lungs could be recognized in the shaded model. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This paper proposes and describes an architecture that allows both the engineer and the programmer to define and quantify which peripherals of a microcontroller are important to a particular project. Each application requires different types of peripherals. In this study, we verified the possibility of emulating the behavior of peripherals on specific CPUs. These CPUs contain RAM into which code written specifically for them, representing the behavior of a target peripheral, is loaded and executed. We believe the proposed architecture will provide greater flexibility in the use of microcontrollers, since these "dedicated hardware components" do not perform a single fixed function but are hardware capable of adapting itself to the needs of each project. This research was based on a comparative study of four current microcontrollers. Preliminary tests using VHDL and FPGAs were performed.
Abstract:
An important topic in genomic sequence analysis is the identification of protein coding regions. In this context, several coding DNA model-independent methods based on the occurrence of specific patterns of nucleotides at coding regions have been proposed. Nonetheless, these methods have not been completely suitable due to their dependence on an empirically predefined window length required for a local analysis of a DNA region. We introduce a method based on a modified Gabor-wavelet transform (MGWT) for the identification of protein coding regions. This novel transform is tuned to analyze periodic signal components and presents the advantage of being independent of the window length. We compared the performance of the MGWT with other methods by using eukaryote data sets. The results show that MGWT outperforms all assessed model-independent methods with respect to identification accuracy. These results indicate that the source of at least part of the identification errors produced by the previous methods is the fixed working scale. The new method not only avoids this source of errors but also makes a tool available for detailed exploration of the nucleotide occurrence.
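The signal the abstract alludes to is the well-known period-3 component of protein coding regions. A hedged sketch of the classical fixed-window measure that window-free methods such as the MGWT aim to improve: project the four binary nucleotide indicator sequences onto a complex carrier at frequency 1/3 (the sequences below are synthetic illustrations):

```python
import numpy as np

# Classical period-3 spectral content of a DNA window: coding regions tend
# to show a peak at frequency 1/3 in the binary indicator sequences of the
# four nucleotides. This is the fixed-window baseline, not the MGWT itself.
def period3_content(seq):
    n = np.arange(len(seq))
    carrier = np.exp(-2j * np.pi * n / 3)   # probe at frequency 1/3
    total = 0.0
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        total += abs(np.dot(u, carrier)) ** 2
    return total

coding_like = "ATG" * 30   # perfectly periodic codon-like pattern
homopolymer = "A" * 90     # no period-3 structure
# period3_content(coding_like) is large; period3_content(homopolymer) is ~0
```

The dependence of this measure on the chosen window length is exactly the limitation that motivates a wavelet-style transform tuned to the 1/3 frequency.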
Abstract:
A bathtub-shaped failure rate function is very useful in survival analysis and reliability studies, but the well-known lifetime distributions do not have this property. For the first time, we propose a location-scale regression model based on the logarithm of an extended Weibull distribution which is able to accommodate bathtub-shaped failure rate functions. We use the method of maximum likelihood to estimate the model parameters, and some inferential procedures are presented. We reanalyze a real data set under the new model and the log-modified Weibull regression model. We perform a model check based on martingale-type residuals and generated envelopes, and use the AIC and BIC statistics to select appropriate models. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A four-parameter generalization of the Weibull distribution capable of modeling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone as well as non-monotone failure rates, which are quite common in lifetime and reliability problems. The new distribution has a number of well-known lifetime distributions as special sub-models, such as the Weibull, extreme value, exponentiated Weibull, generalized Rayleigh and modified Weibull distributions, among others. We derive two infinite sum representations for its moments. The density of the order statistics is obtained. The method of maximum likelihood is used for estimating the model parameters, and the observed information matrix is obtained. Two applications are presented to illustrate the proposed distribution. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper proposes a regression model based on the modified Weibull distribution, which can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model check based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
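The bathtub behaviour that several of these abstracts rely on can be made concrete. Under one common parameterization of the modified Weibull distribution (a hedged sketch; the parameter names are illustrative, and the papers above may use a different parameterization) the hazard decreases early, passes through a minimum, and then increases:

```python
import numpy as np

# One common parameterization of the modified Weibull distribution has
# cumulative hazard H(t) = a * t**b * exp(lam * t), so the hazard is
#   h(t) = a * t**(b-1) * (b + lam*t) * exp(lam*t),
# which is bathtub-shaped when 0 < b < 1 and lam > 0.
def hazard(t, a=1.0, b=0.5, lam=1.0):
    return a * t**(b - 1.0) * (b + lam * t) * np.exp(lam * t)

# early decrease ("infant mortality"), later increase ("wear-out"):
# hazard(0.01) > hazard(0.1) and hazard(1.0) > hazard(0.1)
```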
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then, the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and some ways to perform global influence analysis, are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart. (C) 2010 Elsevier B.V. All rights reserved.
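The zero-inflation mechanism itself is simple to state: a hedged sketch with fixed scalar parameters (in the regression setting described above, both the inflation probability and the negative binomial mean would be linked to covariates):

```python
from math import comb

# Zero-inflated negative binomial pmf: with probability pi an observation
# is a structural zero; otherwise it comes from NB(r, p). Parameters here
# are fixed scalars purely for illustration.
def nb_pmf(k, r, p):
    # NB pmf (number of failures before the r-th success), integer r
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def zinb_pmf(k, pi, r, p):
    base = nb_pmf(k, r, p)
    return pi + (1 - pi) * base if k == 0 else (1 - pi) * base

# the inflation term adds extra mass at zero on top of NB's own zeros:
# zinb_pmf(0, 0.3, 2, 0.5) = 0.3 + 0.7 * 0.25 = 0.475
```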
Abstract:
In this study, regression models for grouped survival data are evaluated in which the effect of the censoring time is included in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables: the times are grouped into k intervals so that ties are eliminated, and the data are modeled with discrete lifetime regression models. The model parameters are estimated by the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, we use diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence; the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that this generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performances of the estimates from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
Abstract:
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime distributions as special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
Abstract:
Tuberculosis (TB) is the primary cause of mortality among infectious diseases. Mycobacterium tuberculosis thymidine monophosphate kinase (TMPKmt) is essential to DNA replication, so this enzyme represents a promising target for developing new drugs against TB. In the present study, the receptor-independent (RI) 4D-QSAR method has been used to develop QSAR models and corresponding 3D-pharmacophores for a set of 81 thymidine analogues, and two corresponding subsets, reported as inhibitors of TMPKmt. The resulting optimized models are not only statistically significant, with r(2) ranging from 0.83 to 0.92 and q(2) from 0.78 to 0.88, but are also robustly predictive based on test set predictions. The most and the least potent inhibitors, in their respective postulated active conformations derived from each of the models, were docked in the active site of the TMPKmt crystal structure. There is a solid consistency between the 3D-pharmacophore sites defined by the QSAR models and the interactions with the binding site residues. Moreover, the QSAR models provide insights regarding a probable mechanism of action of the analogues.
Abstract:
Thymidine monophosphate kinase (TMPK) has emerged as an attractive target for developing inhibitors of Mycobacterium tuberculosis growth. In this study the receptor-independent (RI) 4D-QSAR formalism has been used to develop QSAR models and corresponding 3D-pharmacophores for a set of 5'-thiourea-substituted alpha-thymidine inhibitors. Models were developed for the entire training set and for a subset of the training set consisting of the most potent inhibitors. The optimized (RI) 4D-QSAR models are statistically significant (r(2) = 0.90, q(2) = 0.83 for the entire set; r(2) = 0.86, q(2) = 0.80 for the high-potency subset) and also possess good predictivity based on test set predictions. The most and least potent inhibitors, in their respective postulated active conformations derived from the models, were docked in the active site of the TMPK crystallographic structure. There is a solid consistency between the 3D-pharmacophore sites defined by the QSAR models and the interactions with binding site residues. The models identify new regions of the inhibitors that contain pharmacophore sites, such as the sugar-pyrimidine ring structure and the region of the 5'-arylthiourea moiety. These regions of the ligands can be further explored and possibly exploited to identify new and perhaps better antituberculosis inhibitors of TMPKmt. Furthermore, the 3D-pharmacophores defined by these models can be used as a starting point for future receptor-dependent antituberculosis drug design, as well as to elucidate candidate sites for substituent addition to optimize the ADMET properties of analog inhibitors.
Abstract:
Histamine is an important biogenic amine which acts on a family of four G-protein-coupled receptors (GPCRs), namely the H(1) to H(4) (H(1)R - H(4)R) receptors. The actions of histamine at H(4)R are related to immunological and inflammatory processes, particularly in the pathophysiology of asthma, and H(4)R ligands with antagonistic properties could be useful as anti-inflammatory agents. In this work, molecular modeling and QSAR studies of a set of 30 compounds, indole and benzimidazole derivatives acting as H(4)R antagonists, were performed. The QSAR models were built and optimized using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The best QSAR model, constructed with the training set (N = 25), presented the following statistical measures: r(2) = 0.76, q(2) = 0.62, LOF = 0.15 and LSE = 0.07, and was validated using the LNO and y-randomization techniques. Four of the five compounds in the test set were well predicted by the selected QSAR model, which presented an external prediction power of 80%. These findings can be quite useful in the design of new anti-H(4) compounds with improved biological response.
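The regression step common to the QSAR studies above can be sketched generically. This is a hedged illustration: ordinary least squares stands in for the partial least squares used in the paper, and the descriptors and activities below are synthetic, not the 30 H(4)R ligands:

```python
import numpy as np

# Hedged sketch of the regression step of a QSAR workflow: molecular
# descriptor values X predict a biological response y, and the fit is
# summarized by r^2 on the training set. Synthetic data, fixed seed.
rng = np.random.default_rng(0)
X = rng.normal(size=(25, 3))                 # 25 "compounds", 3 descriptors
true_coef = np.array([1.5, -2.0, 0.5])
y = X @ true_coef + 0.05 * rng.normal(size=25)

Xd = np.column_stack([np.ones(25), X])       # add an intercept column
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ coef
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
# r2 is close to 1 on this nearly noise-free synthetic set
```

The q(2) statistics quoted in these abstracts are the cross-validated analogue of r2, computed by leaving compounds out of the fit and predicting them back.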