934 results for Computer Science (all)
Abstract:
Birnbaum and Saunders (1969a) introduced a probability distribution which is commonly used in reliability studies. For the first time, based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed for fatigue life modeling. Various properties of the new model, including expansions for the moments, the moment generating function, the mean deviations, and the density function of the order statistics and their moments, are derived. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets. (C) 2010 Elsevier B.V. All rights reserved.
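For context, a minimal sketch of the construction involved (our notation, not an expression taken from the paper): the beta-G family applied to the Birnbaum-Saunders CDF G gives the beta-Birnbaum-Saunders density

    G(t; \alpha, \beta) = \Phi\left( \frac{1}{\alpha} \left[ \sqrt{t/\beta} - \sqrt{\beta/t} \right] \right), \quad t > 0,
    f(t; a, b, \alpha, \beta) = \frac{g(t; \alpha, \beta)}{B(a, b)} \, G(t)^{a-1} \, \{1 - G(t)\}^{b-1},

where g is the Birnbaum-Saunders density, B(a, b) is the beta function and a, b > 0 are the extra shape parameters; a = b = 1 recovers the original distribution.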
Abstract:
This paper provides general matrix formulas for computing the score function, the (expected and observed) Fisher information and the A matrices (required for the assessment of local influence) for a quite general model which includes the one proposed by Russo et al. (2009). Additionally, we also present an expression for the generalized leverage on fixed and random effects. The matrix formulation has notational advantages, since despite the complexity of the postulated model, all general formulas are compact, clear and have nice forms. (C) 2010 Elsevier B.V. All rights reserved.
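For reference, the generic objects these matrix formulas specialize (standard definitions, not the paper's model-specific expressions): for a log-likelihood \ell(\theta),

    U(\theta) = \frac{\partial \ell(\theta)}{\partial \theta}, \qquad
    J(\theta) = -\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^\top}, \qquad
    K(\theta) = E\{ J(\theta) \},

where U is the score function and J and K are the observed and expected Fisher information matrices; the paper's contribution is compact closed matrix forms of these quantities (and of the local-influence A matrices) for its model class.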
Abstract:
The main purpose of this work is to study the behaviour of Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic in testing simple hypotheses in a new class of regression models proposed here. The proposed class of regression models considers Dirichlet distributed observations, and the parameters that index the Dirichlet distributions are related to covariates and unknown regression coefficients. This class is useful for modelling data consisting of multivariate positive observations summing to one and generalizes the beta regression model described in Vasconcellos and Cribari-Neto [Vasconcellos, K.L.P., Cribari-Neto, F., 2005. Improved maximum likelihood estimation in a new class of beta regression models. Brazilian Journal of Probability and Statistics 19, 13-31]. We show that, for our model, Skovgaard's adjusted likelihood ratio statistic has a simple compact form that can be easily implemented in standard statistical software. The adjusted statistic is approximately chi-squared distributed with a high degree of accuracy. Some numerical simulations show that the modified test is more reliable in finite samples than the usual likelihood ratio procedure. An empirical application is also presented and discussed. (C) 2009 Elsevier B.V. All rights reserved.
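For reference, one common way of writing Skovgaard's adjustment (the general form from Skovgaard (2001), not the model-specific expression derived in the paper): with w = 2\{\ell(\hat\theta) - \ell(\tilde\theta)\} the usual likelihood ratio statistic,

    w^* = w \left( 1 - \frac{\log \xi}{w} \right)^2,

where \xi is a model-dependent correction factor built from observed and expected information quantities evaluated at the restricted and unrestricted estimates; w^* keeps the same first-order chi-squared limit while improving its finite-sample accuracy.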
Abstract:
We introduce, for the first time, a new class of Birnbaum-Saunders nonlinear regression models potentially useful in lifetime data analysis. The class generalizes the regression model described by Rieck and Nedelman [Rieck, J.R., Nedelman, J.R., 1991. A log-linear model for the Birnbaum-Saunders distribution. Technometrics 33, 51-60]. We discuss maximum-likelihood estimation for the parameters of the model, and derive closed-form expressions for the second-order biases of these estimates. Our formulae are easily computed as ordinary linear regressions and are then used to define bias-corrected maximum-likelihood estimates. Some simulation results show that the bias correction scheme yields nearly unbiased estimates without increasing the mean squared errors. Two empirical applications are analysed and discussed. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.
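The correction is of the usual Cox-Snell type (a generic sketch; the paper supplies the closed-form bias for this model class): if B(\theta) = E(\hat\theta) - \theta = O(n^{-1}) denotes the second-order bias of the maximum-likelihood estimate \hat\theta, the bias-corrected estimate is

    \check\theta = \hat\theta - \hat{B}(\hat\theta),

whose remaining bias is of order O(n^{-2}).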
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distribution is a class of asymmetric, thick-tailed distributions which includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative for the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal distribution, the skew-t distribution, the skew-slash distribution and the skew-contaminated normal distribution. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
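To make the class concrete, here is a minimal Python sketch (illustrative only, not the authors' code) that samples from two of its members via the standard stochastic representations: the skew-normal, and the skew-t obtained as a skew-normal/independent mixture with a Gamma mixing variable.

    import numpy as np

    rng = np.random.default_rng(0)

    def rskew_normal(n, loc=0.0, scale=1.0, shape=0.0):
        # Azzalini's stochastic representation of the skew-normal.
        delta = shape / np.sqrt(1.0 + shape**2)
        x0 = np.abs(rng.standard_normal(n))            # half-normal component
        x1 = rng.standard_normal(n)
        return loc + scale * (delta * x0 + np.sqrt(1.0 - delta**2) * x1)

    def rskew_t(n, loc=0.0, scale=1.0, shape=0.0, df=4.0):
        # Skew-normal/independent mixture: Y = loc + U**(-1/2) * (scale * Z),
        # with U ~ Gamma(df/2, rate=df/2), which yields the skew-t.
        u = rng.gamma(df / 2.0, 2.0 / df, size=n)
        z = rskew_normal(n, 0.0, 1.0, shape)
        return loc + scale * z / np.sqrt(u)

Other members of the class follow by changing the mixing variable U (e.g. a beta mixing variable for the skew-slash).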
Abstract:
Molecular orbital calculations were carried out on a set of 28 non-imidazole H3 antihistamine compounds using the Hartree-Fock method in order to investigate possible relationships between electronic structural properties and binding affinity for H3 receptors (pKi). It was observed that the frontier effective-for-reaction molecular orbital (FERMO) energies were better correlated with pKi values than the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies. Exploratory data analysis through hierarchical cluster analysis (HCA) and principal component analysis (PCA) showed a separation of the compounds into two sets, one grouping the molecules with high pKi values, the other gathering low-pKi compounds. This separation was obtained with the use of the following descriptors: FERMO energies (εFERMO), charges derived from the electrostatic potential on the nitrogen atom (N1), electronic density indices for the FERMO on the N1 atom (Σ(FERMO) c_i²), and electrophilicity (ω). These electronic descriptors were used to construct a quantitative structure-activity relationship (QSAR) model through the partial least-squares (PLS) method with three principal components. This model generated Q² = 0.88 and R² = 0.927, values obtained from a training set of 23 molecules and an external validation set of 5 molecules, respectively. After analysis of the PLS regression equation and of the values of the selected electronic descriptors, it is suggested that high values of the FERMO energies and of Σ(FERMO) c_i², together with low electrophilicity values and pronounced negative charges on N1, appear as desirable properties for the conception of new molecules which might have high binding affinity. (C) 2010 Elsevier Inc. All rights reserved.
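A minimal scikit-learn sketch of the modelling step (hypothetical placeholder data; the variable names are ours, not the authors'), fitting a three-component PLS model and computing a training R² and a leave-one-out Q²:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.random((23, 4))   # 23 training molecules x 4 electronic descriptors
    y = rng.random(23)        # placeholder pKi values

    pls = PLSRegression(n_components=3).fit(X, y)
    r2 = pls.score(X, y)      # R^2 on the training set

    # Leave-one-out Q^2 = 1 - PRESS / total sum of squares
    y_cv = cross_val_predict(pls, X, y, cv=len(y)).ravel()
    q2 = 1.0 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()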
Abstract:
In this work, two different docking programs were used, AutoDock and FlexX, which employ different types of scoring functions and search methods. The docking poses of all quinone compounds studied fell in the same region of trypanothione reductase (TR), a hydrophobic pocket near the Phe396, Pro398 and Leu399 amino acid residues. The compounds studied display a higher affinity for TR than for glutathione reductase (GR), since only two out of the 28 quinone compounds presented more favorable docking energy in the site of the human enzyme. The interaction of the quinone compounds with the TR enzyme is in agreement with other studies, which reported binding sites different from the one formed by cysteines 52 and 58. To verify the results obtained by docking, we carried out molecular dynamics simulations with the compounds that presented the highest and lowest docking energies. The results showed that the root mean square deviation (RMSD) between the initial and final poses was very small. In addition, the hydrogen bond pattern was conserved along the simulation. In the parasite enzyme, the amino acid residues Leu399, Met400 and Lys402 are replaced in the human enzyme by Met406, Tyr407 and Ala409, respectively. Given that Leu399 is an amino acid of the Z site, this difference could be explored to design selective inhibitors of TR.
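For concreteness, a minimal numpy sketch of the RMSD comparison described above (illustrative; it assumes the two poses are already superimposed in the same coordinate frame):

    import numpy as np

    def rmsd(pose_a, pose_b):
        # Root mean square deviation between two (N, 3) coordinate arrays.
        pose_a, pose_b = np.asarray(pose_a), np.asarray(pose_b)
        return np.sqrt(((pose_a - pose_b) ** 2).sum(axis=1).mean())

    # e.g. rmsd(initial_pose, final_pose) for coordinates taken from the trajectory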
Abstract:
In e-Science experiments, it is vital to record the experimental process for later use, such as interpreting results, verifying that the correct process took place or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture for a system that provides the functionality to record, store and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such architectures would ideally support. In this paper, we present use cases for a provenance architecture from current experiments in biology, chemistry, physics and computer science, and analyse the use cases to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
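As a toy illustration of what recording process documentation can look like at the data level (a hypothetical sketch under our own naming, not the architecture proposed in the paper):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        # One assertion of process documentation: which actor ran which
        # activity, on which inputs, producing which outputs, and when.
        actor: str
        activity: str
        inputs: list
        outputs: list
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    store = []  # stand-in for a provenance store
    store.append(ProvenanceRecord(
        actor="alignment-service",
        activity="sequence-alignment",
        inputs=["seq-123.fasta"],
        outputs=["aln-456.xml"]))

Tracing where a result came from then amounts to walking the input/output links backwards through the store.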
Abstract:
The subject of partnership has been studied all over the world. Gaining momentum with international joint ventures in the telecommunications, information technology, automotive and aerospace industries, among others, the innovative element became the formation of strategic alliances with potential competitors, whether in exploring new market niches or in research and development of new services and products. Taking this exchange or synergy between partners as a given, this paper presents and analyses data from a field study (selective survey) conducted with professionals involved in the day-to-day partnership process of leading companies in the telecommunications sector (a state-owned company) and in the network and IT services sector (a private company). The discussion is structured in the following parts: the characterization and segmentation of the sample used (characterization of the respondents); the descriptive analysis and evaluation of the variables representing inhibiting and facilitating factors; the differing perceptions of these elements depending on professional experience and degree of involvement with the partnership processes; the inhibiting and facilitating elements identified as most relevant; and the results obtained from the use of factor analysis.
Abstract:
This study aimed to determine to what extent the decentralization process adopted by the Department of Technical Police of Bahia (DPT-BA) was efficient in meeting the demand for Computer Forensics examinations generated by the Regional Technical Police Coordination Offices (CRPTs) in the interior of the state. The DPT-BA was restructured according to the principles of administrative decentralization, following the progressive current. With decentralization, it took on the commitment to coordinate actions to give autonomy to the units in the interior of the state, creating minimal structures in all the spheres involved, with broad capacity for mutual coordination and with service delivery oriented toward a high-performance public organization model. In addressing the relationship between decentralization and efficiency in meeting the demand for forensic examinations from the interior of the state of Bahia, the study, owing to instrumental limitations, remained restricted to the field of Computer Forensics examinations, which expressively reflects and illustrates the scenario found in the other forensic areas. The theoretical approaches to decentralization were identified first, highlighting the distinct dimensions of the concept, followed by those concerning Computer Forensics. Documentary research was carried out at the Afrânio Peixoto Institute of Criminalistics (Icap), along with field research through semi-structured interviews with judges assigned to the criminal courts of the judicial districts related to the research scenario and with forensic experts from the Regional Coordination Offices, the CRPTs and Icap's Computer Forensics Coordination Office. Correlating the turnaround times that meet the definition of efficiency given by the interviewed judges, the end clients of the forensic work, with the actual times obtained through the documentary research, the data revealed a high degree of inefficiency, slowness and unfulfilled demand, as well as discrepant realities between the capital and the interior. The analysis of the interviews with the forensic experts revealed a scenario of widespread dissatisfaction and demotivation, with an almost absolute centralization of decision-making power, demonstrating that the decentralization process as practiced served, paradoxically, as a tool for enabling and camouflaging centralization.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This article describes a methodological approach to conditional reasoning in online asynchronous learning environments such as Virtual-U VGroups, developed by SFU, BC, Canada, consistent with the notion of meaning implication: if part of a meaning C is embedded in B and part of a meaning B is embedded in A, then A implies C in terms of meaning [Piaget 91]. A new transcript analysis technique was developed to assess the flows of conditional meaning implications and to identify the occurrence of hypotheses and the connections among them in two human science graduate mixed-mode online courses offered by SFU in the spring/summer session of 1997. Flows of conditional meaning implications were confronted with the Virtual-U VGroups threads, and the results of the two courses were compared. Findings suggest that Virtual-U VGroups is a knowledge-building environment, although its tree-like threads should be transformed into neuronal-like threads. Findings also suggest that formulating hypotheses together triggers a collaborative problem-solving process that scaffolds knowledge-building in asynchronous learning environments: a pedagogical technique and a built-in tool for formulating hypotheses together are proposed. © Springer Pub. Co.
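A toy Python rendering of the quoted meaning-implication rule, modelling "part of a meaning is embedded in" as set inclusion of semantic features (a deliberate simplification of Piaget's notion, not the transcript analysis technique itself):

    # Each message's "meaning" is modelled as a set of semantic features.
    meaning = {
        "A": {"learning", "dialogue", "hypothesis"},
        "B": {"dialogue", "hypothesis"},
        "C": {"hypothesis"},
    }

    def implies(x, y):
        # x implies y in terms of meaning if y's meaning is embedded in x's.
        return meaning[y] <= meaning[x]

    assert implies("A", "B") and implies("B", "C")
    assert implies("A", "C")  # the embedding chain makes A imply C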
Abstract:
We have recently proposed an extension to Petri nets that can directly deal with all aspects of embedded digital systems. This extension is meant to be used as the internal model of our co-design environment. After analysing relevant related work and giving a short introduction to our extension as background material, we describe the details of the timing model used in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
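For flavour, a minimal Python sketch of the timing notion that Merlin's model attaches to transitions (an illustration of the time model only, not of our Petri net extension): each transition carries a static interval [earliest, latest], measured from the instant it becomes enabled, within which it may and must fire.

    from dataclasses import dataclass

    @dataclass
    class TimedTransition:
        # Merlin-style transition: may fire once `earliest` time units have
        # elapsed since enabling, and must fire before `latest` have passed.
        name: str
        earliest: float
        latest: float

        def can_fire(self, elapsed):
            return self.earliest <= elapsed <= self.latest

    t = TimedTransition("mem_read", earliest=2.0, latest=5.0)
    assert not t.can_fire(1.0) and t.can_fire(3.0)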